Merge branch 'open-webui:main' into rocm-compose

This commit is contained in:
Justin Hayes 2024-04-16 16:00:17 -04:00 committed by GitHub
commit 863602e074
6 changed files with 329 additions and 8 deletions


@ -20,6 +20,7 @@ title: "📋 FAQ"
- [Q: I updated/restarted/installed some new software and now my WebUI isn't working anymore!](#q-i-updatedrestartedinstalled-some-new-software-and-now-my-webui-isnt-working-anymore)
- [Q: I updated/restarted and now my login isn't working anymore, I had to create a new account and all my chats are gone.](#q-i-updatedrestarted-and-now-my-login-isnt-working-anymore-i-had-to-create-a-new-account-and-all-my-chats-are-gone)
- [Q: I tried to login and couldn't, made a new account and now I'm being told my account needs to be activated by an admin.](#q-i-tried-to-login-and-couldnt-made-a-new-account-and-now-im-being-told-my-account-needs-to-be-activated-by-an-admin)
- [Q: Why does the WebUI fail to start with an SSL error?](#q-why-does-the-webui-fail-to-start-with-an-ssl-error)
#### **Q: Why am I asked to sign up? Where is my data being sent?**
@ -88,4 +89,12 @@ Everything you need to run Open WebUI, including your data, remains within your
**A:** This situation occurs when you forget the password for the initial admin account created during the first setup. The first account is automatically designated as the admin account. Creating a new account without access to the admin account will result in the need for admin activation. Avoiding the loss of the initial admin account credentials is crucial for seamless access and management of Open WebUI. See the [Resetting the Admin Password](getting-started/troubleshooting#reset-admin-password) guide for instructions on recovering the admin account.
#### **Q: Why does the WebUI fail to start with an SSL error?**
**A:** The SSL error you're encountering when starting the WebUI is most likely caused by an inability to reach [huggingface.co](https://huggingface.co/) to download required models. To resolve this issue, you can set up a Hugging Face mirror, such as [hf-mirror.com](https://hf-mirror.com/), and specify it as the endpoint when starting the Docker container. Use the `-e HF_ENDPOINT=https://hf-mirror.com/` parameter to define the mirror address in the Docker run command. For example, you can modify the Docker run command as follows:
```bash
docker run -d -p 3000:8080 -e HF_ENDPOINT=https://hf-mirror.com/ --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
#### If you have any further questions or concerns, please reach out via our [GitHub Issues page](https://github.com/open-webui/open-webui/issues) or our [Discord channel](https://discord.gg/5rJgQTnV4s) for more help and information.


@ -0,0 +1,33 @@
# Environment Variable Configuration
## App/Backend ##
Here is a list of supported environment variables used by `backend/config.py` intended to provide Open WebUI startup configurability. See also the [logging environment variables](/getting-started/logging#appbackend).
| Environment Variable | App/Backend |
| --------------------------------- | --------------------------------------------------------------------------- |
| `CUSTOM_NAME` | Sets `WEBUI_NAME` but polls _api.openwebui.com_ for metadata |
| `DEFAULT_MODELS` | Set a default Language Model, default: `None` |
| `ENABLE_SIGNUP` | Toggle user account creation, default: `"True"` |
| `ENV` | Environment setting, default: `"dev"` |
| `K8S_FLAG` | Support Kubernetes style Ollama hostname `.svc.cluster.local` |
| `MODEL_FILTER_ENABLED` | Toggle Language Model filtering, default: `"False"` |
| `MODEL_FILTER_LIST` | Set Language Model filter list |
| `OLLAMA_API_BASE_URL` | Deprecated, see `OLLAMA_BASE_URL` |
| `OLLAMA_BASE_URL` | Configure Ollama backend URL, default: `"http://localhost:11434"` |
| `OLLAMA_BASE_URLS` | Configure load balanced Ollama backend hosts, see `OLLAMA_BASE_URL` |
| `OPENAI_API_KEY` | Set OpenAI API key |
| `OPENAI_API_KEYS`                 | Support multiple OpenAI API keys                                              |
| `OPENAI_API_BASE_URL`             | Configure OpenAI base API URL                                                 |
| `OPENAI_API_BASE_URLS`            | Support load-balanced OpenAI base API URLs                                    |
| `RAG_EMBEDDING_MODEL` | Configure a Sentence-Transformer model, default: `"all-MiniLM-L6-v2"` |
| `RAG_EMBEDDING_MODEL_AUTO_UPDATE` | Toggle automatic update of the Sentence-Transformer model, default: `False` |
| `USE_CUDA_DOCKER` | Build docker image with NVIDIA CUDA support, default: `False` |
| `USE_OLLAMA_DOCKER` | Build Docker image with bundled Ollama instance, default: `"false"` |
| `USER_PERMISSIONS_CHAT_DELETION` | Toggle user permission to delete chats, default: `"True"` |
| `WEBHOOK_URL` | Set webhook for integration with Slack/Microsoft Teams |
| `WEBUI_AUTH_TRUSTED_EMAIL_HEADER` | Define trusted request header for authentication |
| `WEBUI_NAME` | Main WebUI name, default: `"Open WebUI"` |
| `WEBUI_SECRET_KEY` | Override randomly generated string used for JSON Web Token |
| `WEBUI_VERSION` | Override WebUI version, default: `"v1.0.0-alpha.100"` |
| `WHISPER_MODEL_AUTO_UPDATE` | Toggle automatic update of the Whisper model, default: `False` |
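For illustration, here's a minimal sketch combining several of these variables in a single `docker run` invocation (the values shown are placeholders, not recommendations):

```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -e WEBUI_NAME='My WebUI' \
  -e ENABLE_SIGNUP=False \
  -v open-webui:/app/backend/data --name open-webui \
  --restart always ghcr.io/open-webui/open-webui:main
```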


@ -198,7 +198,13 @@ For more details on networking in Docker and addressing common connectivity issu
<details>
<summary>Rootless (Podman) local-only Open WebUI with Systemd service and auto-update</summary>
:::note
Consult the Docker documentation because much of the configuration and syntax is interchangeable with [Podman](https://github.com/containers/podman). See also [rootless_tutorial](https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md). This example requires the [slirp4netns](https://github.com/rootless-containers/slirp4netns) network backend to facilitate server listen and Ollama communication over localhost only.
:::
:::warning
Rootless container execution with Podman (and Docker/ContainerD) does **not** support [AppArmor confinement](https://github.com/containers/podman/pull/19303). This may increase the attack surface due to the [user namespace requirement](https://rootlesscontaine.rs/caveats). Exercise caution and render judgment (in contrast to the root daemon) based on your threat model.
:::
1. Pull the latest image:
```bash
@ -206,12 +212,22 @@ For more details on networking in Docker and addressing common connectivity issu
```
2. Create a new container using desired configuration:
:::note
`-p 127.0.0.1:3000:8080` ensures that we listen only on localhost, `--network slirp4netns:allow_host_loopback=true` permits the container to access Ollama when it also listens strictly on localhost. `--add-host=ollama.local:10.0.2.2 --env 'OLLAMA_BASE_URL=http://ollama.local:11434'` adds a hosts record to the container and configures open-webui to use the friendly hostname. `10.0.2.2` is the default slirp4netns address used for localhost mapping. `--env 'ANONYMIZED_TELEMETRY=False'` isn't necessary since Chroma telemetry has been disabled in the code but is included as an example.
:::
```bash
podman create -p 127.0.0.1:3000:8080 --network slirp4netns:allow_host_loopback=true --add-host=ollama.local:10.0.2.2 --env 'OLLAMA_BASE_URL=http://ollama.local:11434' --env 'ANONYMIZED_TELEMETRY=False' -v open-webui:/app/backend/data --label io.containers.autoupdate=registry --name open-webui ghcr.io/open-webui/open-webui:main
```
:::note
[Podman 5.0](https://www.redhat.com/en/blog/podman-50-unveiled) has updated the default rootless network backend to the more performant [pasta](https://passt.top/passt/about/). While `slirp4netns:allow_host_loopback=true` still achieves the same local-only intention, it's now recommended to use a simple TCP forward instead, like `--network=pasta:-T,11434 --add-host=ollama.local:127.0.0.1`. Full example:
:::
```bash
podman create -p 127.0.0.1:3000:8080 --network=pasta:-T,11434 --add-host=ollama.local:127.0.0.1 --env 'OLLAMA_BASE_URL=http://ollama.local:11434' --env 'ANONYMIZED_TELEMETRY=False' -v open-webui:/app/backend/data --label io.containers.autoupdate=registry --name open-webui ghcr.io/open-webui/open-webui:main
```
3. Prepare for systemd user service:
```bash
mkdir -p ~/.config/systemd/user/
@ -241,6 +257,22 @@ For more details on networking in Docker and addressing common connectivity issu
podman auto-update --dry-run
```
:::tip
This process is compatible with Windows 11 WSL deployments when using Ollama within the WSL environment or using the Ollama Windows Preview. When using the native Ollama Windows Preview version, one additional step is required: enable [mirrored networking mode](https://learn.microsoft.com/en-us/windows/wsl/networking#mirrored-mode-networking).
:::
### Enabling Windows 11 mirrored networking
1. Populate `%UserProfile%\.wslconfig` with:
```
[wsl2]
networkingMode=mirrored
```
2. Restart WSL:
```
wsl --shutdown
```
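After the restart, you can sanity-check from inside WSL that the Windows-side Ollama is reachable over localhost (assuming Ollama's default port; the status string comes from Ollama's root endpoint):

```bash
curl http://localhost:11434
# Should print: Ollama is running
```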
</details>
### Alternative Installation Methods


@ -18,6 +18,12 @@ If you're experiencing connection issues, it's often due to the WebUI docker c
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
If you're experiencing connection issues caused by an SSL error from huggingface.co, first check whether the Hugging Face server is down. If it is, you can set `HF_ENDPOINT` to `https://hf-mirror.com/` in the `docker run` command:
```bash
docker run -d -p 3000:8080 -e HF_ENDPOINT=https://hf-mirror.com/ --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
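As a quick sanity check (any HTTP client will do), you can verify the mirror is reachable from your host before recreating the container:

```bash
curl -sI https://hf-mirror.com/ | head -n 1
# An HTTP 2xx/3xx status line means the mirror is reachable
```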
### General Connection Errors
**Ensure Ollama Version is Up-to-Date**: Always start by checking that you have the latest version of Ollama. Visit [Ollama's official site](https://ollama.com/) for the latest updates.
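For example, you can check both the installed CLI version and the running server's version (assuming the default local port; `/api/version` is part of the Ollama API):

```bash
ollama -v                                # prints the client version
curl http://localhost:11434/api/version  # asks the running server for its version
```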

docs/tutorial/apache.md (new file)

@ -0,0 +1,209 @@
# Hosting UI and Models separately
:::note
If you plan to expose this to the wide area network, consider implementing security like a [network firewall](https://github.com/chr0mag/geoipsets), [web application firewall](https://github.com/owasp-modsecurity/ModSecurity), and [threat intelligence](https://github.com/crowdsecurity/crowdsec).
Additionally, it's strongly recommended to enable HSTS within your **HTTPS** configuration, for example with `Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"`, and to add a redirect of some kind to your **HTTPS URL** within your **HTTP** configuration. For free SSL certificates, [Let's Encrypt](https://letsencrypt.org/) is a good option, coupled with [Certbot](https://github.com/certbot/certbot) for management.
:::
Sometimes it's beneficial to host Ollama separately from the UI, while retaining the RAG and RBAC features shared across users:
# Open WebUI Configuration
## UI Configuration
For the UI configuration, you can set up the Apache VirtualHost as follows:
```
# Assuming you have a website hosting this UI at "server.com"
<VirtualHost 192.168.1.100:80>
    ServerName server.com
    DocumentRoot /home/server/public_html

    ProxyPass / http://server.com:3000/ nocanon
    ProxyPassReverse / http://server.com:3000/
</VirtualHost>
```
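The proxy directives above rely on a few Apache modules; if they aren't enabled yet, something like the following should cover this tutorial (a sketch; adjust to your distribution):

```bash
sudo a2enmod proxy proxy_http ssl headers rewrite
sudo apachectl configtest  # verify the configuration parses cleanly
```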
Enable the site before requesting SSL:
:::warning
Use of the `nocanon` option may [affect the security of your backend](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypass). It's recommended to enable this only if required by your configuration.
_Normally, mod_proxy will canonicalise ProxyPassed URLs. But this may be incompatible with some backends, particularly those that make use of PATH_INFO. The optional nocanon keyword suppresses this and passes the URL path "raw" to the backend. Note that this keyword may affect the security of your backend, as it removes the normal limited protection against URL-based attacks provided by the proxy._
:::
`a2ensite server.com.conf` # this will enable the site. a2ensite is short for "Apache 2 Enable Site"
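Then reload Apache so the newly enabled site takes effect:

```bash
sudo systemctl reload apache2
```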
```
# For SSL
<VirtualHost 192.168.1.100:443>
    ServerName server.com
    DocumentRoot /home/server/public_html

    ProxyPass / http://server.com:3000/ nocanon
    ProxyPassReverse / http://server.com:3000/

    SSLEngine on
    SSLCertificateFile /etc/ssl/virtualmin/170514456861234/ssl.cert
    SSLCertificateKeyFile /etc/ssl/virtualmin/170514456861234/ssl.key
    SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1

    SSLProxyEngine on
    SSLCACertificateFile /etc/ssl/virtualmin/170514456865864/ssl.ca
</VirtualHost>
```
I'm using Virtualmin here for my SSL clusters, but you can also use Certbot directly, or your preferred SSL method. To use SSL:
### Prerequisites
Run the following commands:
`sudo snap install certbot --classic`
`sudo apt install python3-certbot-apache` (this will install the Apache plugin).
Navigate to the apache sites-available directory:
`cd /etc/apache2/sites-available/`
Create `server.com.conf` if it does not already exist, containing the above `<VirtualHost>` configuration (modify as necessary to match your setup). Use the one without SSL:
Once it's created, run `certbot --apache -d server.com`; this will request and install an SSL certificate for you, as well as create `server.com.le-ssl.conf`.
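You can also verify that automatic certificate renewal will work later with a standard Certbot dry run:

```bash
sudo certbot renew --dry-run
```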
# Configuring Ollama Server
On your latest installation of Ollama, make sure you have set up your API server per the official Ollama reference:
[Ollama FAQ](https://github.com/jmorganca/ollama/blob/main/docs/faq.md)
### TL;DR
The guide doesn't match the current service file on Linux, so we will address that here:
Unless you're compiling Ollama from source, installing with the standard script `curl https://ollama.com/install.sh | sh` creates a file called `ollama.service` in `/etc/systemd/system`. You can use nano to edit the file:
```
sudo nano /etc/systemd/system/ollama.service
```
Add the following line under `[Service]`. Note that systemd unit files don't support inline comments after a directive, so keep any notes on their own lines:
```
Environment="OLLAMA_HOST=0.0.0.0:11434"
```
This line is mandatory. You can also bind to a specific interface and port, e.g. `192.168.254.109:DIFFERENT_PORT`.
For instance:
```
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
# Mandatory: bind address; can also be a specific IP:PORT, e.g. 192.168.254.109:11434
Environment="OLLAMA_HOST=0.0.0.0:11434"
# Optional: allowed browser origins
Environment="OLLAMA_ORIGINS=http://192.168.254.106:11434,https://models.server.city"
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/s>

[Install]
WantedBy=default.target
```
Save the file by pressing CTRL+S, then exit nano with CTRL+X.
After a reboot, or after running `sudo systemctl daemon-reload && sudo systemctl restart ollama`, the Ollama server will be listening on the IP:PORT you specified, in this case `0.0.0.0:11434` (i.e. reachable at your machine's local IP, such as `192.168.254.106:11434`). Make sure that your router is correctly configured to serve pages from that local IP by forwarding port 11434 to your local server.
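You can confirm the listener from another machine on your network (the address below is from the example above; Ollama's root endpoint replies with a short status string):

```bash
curl http://192.168.254.106:11434
# Should print: Ollama is running
```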
# Ollama Model Configuration
## For the Ollama model configuration, use the following Apache VirtualHost setup:
Navigate to the apache sites-available directory:
`cd /etc/apache2/sites-available/`
`nano models.server.city.conf` # match this with your ollama server domain
Add the following VirtualHost containing this example (modify as needed):
```
# Assuming you have a website hosting this UI at "models.server.city"
<IfModule mod_ssl.c>
    <VirtualHost 192.168.254.109:443>
        DocumentRoot "/var/www/html/"
        ServerName models.server.city
        <Directory "/var/www/html/">
            Options None
            Require all granted
        </Directory>

        ProxyRequests Off
        ProxyPreserveHost On
        ProxyAddHeaders On
        SSLProxyEngine on

        # Adjust the port below if Ollama listens elsewhere (e.g. 11434)
        ProxyPass / http://server.city:1000/ nocanon
        ProxyPassReverse / http://server.city:1000/

        SSLCertificateFile /etc/letsencrypt/live/models.server.city/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/models.server.city/privkey.pem
        Include /etc/letsencrypt/options-ssl-apache.conf
    </VirtualHost>
</IfModule>
```
You may need to enable the site first (if you haven't done so yet) before you can request SSL:
`a2ensite models.server.city.conf`
#### For the SSL part of the Ollama server
Run the following commands:
Navigate to the Apache sites-available directory:
`cd /etc/apache2/sites-available/`
`certbot --apache -d models.server.city` # match your Ollama server domain
```
<VirtualHost 192.168.254.109:80>
    DocumentRoot "/var/www/html/"
    ServerName models.server.city
    <Directory "/var/www/html/">
        Options None
        Require all granted
    </Directory>

    ProxyRequests Off
    ProxyPreserveHost On
    ProxyAddHeaders On
    SSLProxyEngine on

    # Adjust the port below if Ollama listens elsewhere (e.g. 11434)
    ProxyPass / http://server.city:1000/ nocanon
    ProxyPassReverse / http://server.city:1000/

    RewriteEngine on
    RewriteCond %{SERVER_NAME} =models.server.city
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
```
Don't forget to restart/reload Apache with `systemctl reload apache2`
Open your site at https://server.com!
**Congratulations**, your _**OpenAI-like, ChatGPT-style UI**_ is now serving AI with RAG, RBAC, and multimodal features! Download Ollama models if you haven't done so yet!
If you encounter any misconfiguration or errors, please file an issue or engage in our discussions. There are a lot of friendly developers here to assist you.
Let's make this UI much more user-friendly for everyone!
Thanks for making Open WebUI your UI choice for AI!
This doc is made by **Bob Reyes**, your **Open-WebUI** fan from the Philippines.


@ -9,7 +9,7 @@ Open WebUI now supports image generation through two backends: **AUTOMATIC1111**
## AUTOMATIC1111
Open WebUI supports image generation through the **AUTOMATIC1111** [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API). Here are the steps to get started:
### Initial Setup
@ -18,20 +18,52 @@ Open WebUI supports image generation through the **AUTOMATIC1111** [API](https:/
```
./webui.sh --api --listen
```
For Docker installations of Open WebUI, use the `--listen` flag to allow connections outside of localhost.
3. For a Docker installation of Open WebUI with the environment variables preset, use the following command:
```
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e AUTOMATIC1111_BASE_URL=http://host.docker.internal:7860/ -e IMAGE_GENERATION_ENABLED=True -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
### Configuring Open WebUI
1. In Open WebUI, navigate to **Settings > Images**.
2. In the API URL field, enter the address where AUTOMATIC1111's API is accessible:
```
http://<your_automatic1111_address>:7860/
```
If you're running a Docker installation of Open WebUI and AUTOMATIC1111 on the same host, use `http://host.docker.internal:7860/` as your address.
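To verify the API is reachable from the machine running Open WebUI, you can query one of AUTOMATIC1111's REST endpoints, which lists the available model checkpoints (substitute your own address):

```bash
curl http://<your_automatic1111_address>:7860/sdapi/v1/sd-models
```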
## ComfyUI
ComfyUI provides an alternative interface for managing and interacting with image generation models. Learn more or download it from its [GitHub page](https://github.com/comfyanonymous/ComfyUI). Below are the setup instructions to get ComfyUI running alongside your other tools.
### Initial Setup
1. Download and extract the ComfyUI software package from [GitHub](https://github.com/comfyanonymous/ComfyUI) to your desired directory.
2. To start ComfyUI, run the following command:
```
python main.py
```
For systems with low VRAM, launch ComfyUI with additional flags to reduce memory usage:
```
python main.py --lowvram
```
3. For a Docker installation of Open WebUI with the environment variables preset, use the following command:
```
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e COMFYUI_BASE_URL=http://host.docker.internal:7860/ -e IMAGE_GENERATION_ENABLED=True -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
### Configuring Open WebUI
1. In Open WebUI, navigate to **Settings > Images**.
2. In the API URL field, enter the address where ComfyUI's API is accessible:
```
http://<your_comfyui_address>:7860/
```
Set the environment variable `COMFYUI_BASE_URL` to this address to ensure proper integration.
## OpenAI DALL·E
Open WebUI also supports image generation through the **OpenAI DALL·E APIs**. This option includes a selector for choosing between DALL·E 2 and DALL·E 3, each supporting different image sizes.
### Initial Setup
@ -51,4 +83,4 @@ Open WebUI also supports image generation through the **OpenAI DALL·E APIs**. T
![Image Generation Tutorial](/img/tutorial_image_generation.png)
1. First, use a text generation model to write a prompt for image generation.
2. After the response has finished, you can click the Picture icon to generate an image.