From c6053476cacc80479f3a648d28c90bee7c12e48a Mon Sep 17 00:00:00 2001
From: "Timothy J. Baek"
Date: Wed, 8 May 2024 11:02:26 -0700
Subject: [PATCH 1/3] Update dependabot.yml

---
 .github/dependabot.yml | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/.github/dependabot.yml b/.github/dependabot.yml
index d78b4b24a..5ff86b802 100644
--- a/.github/dependabot.yml
+++ b/.github/dependabot.yml
@@ -3,8 +3,7 @@ updates:
   - package-ecosystem: pip
     directory: '/backend'
     schedule:
-      interval: daily
-      time: '13:00'
+      interval: weekly
   - package-ecosystem: 'github-actions'
     directory: '/'
     schedule:

From 0db71e97bb378e2b6f98a6dfd3086ff9f08e32a5 Mon Sep 17 00:00:00 2001
From: "Timothy J. Baek"
Date: Wed, 8 May 2024 11:28:42 -0700
Subject: [PATCH 2/3] Update README.md

---
 README.md | 62 +++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 49 insertions(+), 13 deletions(-)

diff --git a/README.md b/README.md
index 96bc0eb44..f77fe773f 100644
--- a/README.md
+++ b/README.md
@@ -120,22 +120,62 @@ Don't forget to explore our sibling project, [Open WebUI Community](https://open
 > [!TIP]
 > If you wish to utilize Open WebUI with Ollama included or CUDA acceleration, we recommend utilizing our official images tagged with either `:cuda` or `:ollama`. To enable CUDA, you must install the [Nvidia CUDA container toolkit](https://docs.nvidia.com/dgx/nvidia-container-runtime-upgrade/) on your Linux/WSL system.
 
-**If Ollama is on your computer**, use this command:
+### Installation with Default Configuration
 
-```bash
-docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
-```
+- **If Ollama is on your computer**, use this command:
 
-**If Ollama is on a Different Server**, use this command:
+  ```bash
+  docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
+  ```
 
-To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:
+- **If Ollama is on a Different Server**, use this command:
 
-```bash
-docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
-```
+  To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:
+
+  ```bash
+  docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
+  ```
+
+  - **To run Open WebUI with Nvidia GPU support**, use this command:
+
+    ```bash
+    docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
+    ```
+
+### Installation for OpenAI API Usage Only
+
+- **If you're only using OpenAI API**, use this command:
+
+  ```bash
+  docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
+  ```
+
+### Installing Open WebUI with Bundled Ollama Support
+
+This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:
+
+- **With GPU Support**:
+  Utilize GPU resources by running the following command:
+
+  ```bash
+  docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
+  ```
+
+- **For CPU Only**:
+  If you're not using a GPU, use this command instead:
+
+  ```bash
+  docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
+  ```
+
+Both commands facilitate a built-in, hassle-free installation of both Open WebUI and Ollama, ensuring that you can get everything up and running swiftly.
+
+After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
 
+### Other Installation Methods
+
+We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/) or join our [Discord community](https://discord.gg/5rJgQTnV4s) for comprehensive guidance.
+
 #### Open WebUI: Server Connection Error
 
 If you're experiencing connection issues, it’s often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container. Use the `--network=host` flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: `http://localhost:8080`.
@@ -146,10 +186,6 @@ If you're experiencing connection issues, it’s often due to the WebUI docker c
 
 **Example Docker Command**:
 
 ```bash
 docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
 ```
 
-### Other Installation Methods
-
-We offer various installation alternatives, including non-Docker methods, Docker Compose, Kustomize, and Helm. Visit our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/) or join our [Discord community](https://discord.gg/5rJgQTnV4s) for comprehensive guidance.
-
 ### Troubleshooting
 
 Encountering connection issues? Our [Open WebUI Documentation](https://docs.openwebui.com/troubleshooting/) has got you covered. For further assistance and to join our vibrant community, visit the [Open WebUI Discord](https://discord.gg/5rJgQTnV4s).

From 1b2b05d2ea119280da1e8d0501545b480fedd962 Mon Sep 17 00:00:00 2001
From: "Timothy J. Baek"
Date: Wed, 8 May 2024 11:29:42 -0700
Subject: [PATCH 3/3] Update README.md

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index f77fe773f..a40018f0a 100644
--- a/README.md
+++ b/README.md
@@ -176,6 +176,10 @@ After installation, you can access Open WebUI at [http://localhost:3000](http://
 We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/) or join our [Discord community](https://discord.gg/5rJgQTnV4s) for comprehensive guidance.
 
+### Troubleshooting
+
+Encountering connection issues? Our [Open WebUI Documentation](https://docs.openwebui.com/troubleshooting/) has got you covered. For further assistance and to join our vibrant community, visit the [Open WebUI Discord](https://discord.gg/5rJgQTnV4s).
+
 #### Open WebUI: Server Connection Error
 
 If you're experiencing connection issues, it’s often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container. Use the `--network=host` flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: `http://localhost:8080`.
 
@@ -186,10 +190,6 @@ If you're experiencing connection issues, it’s often due to the WebUI docker c
 
 **Example Docker Command**:
 
 ```bash
 docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
 ```
 
-### Troubleshooting
-
-Encountering connection issues? Our [Open WebUI Documentation](https://docs.openwebui.com/troubleshooting/) has got you covered. For further assistance and to join our vibrant community, visit the [Open WebUI Discord](https://discord.gg/5rJgQTnV4s).
-
 ### Keeping Your Docker Installation Up-to-Date
 
 In case you want to update your local Docker installation to the latest version, you can do it with [Watchtower](https://containrrr.dev/watchtower/):
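
The Watchtower command itself falls just outside the context lines of the final hunk. For reference, a typical one-shot update looks like the sketch below; it assumes the container is named `open-webui`, as in the `docker run` commands above:

```bash
# Sketch: have Watchtower pull the latest image and recreate the container once, then exit.
# Watchtower needs access to the Docker socket in order to manage containers.
docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --run-once open-webui
```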
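The "Other Installation Methods" section added by these patches also mentions Docker Compose. As a rough, unofficial sketch (not the project's published compose file), a minimal `docker-compose.yaml` mirroring the flags from the `docker run` commands above might look like this:

```yaml
# Hypothetical minimal compose file; mirrors the docker run flags used above.
# See the official Open WebUI documentation for the supported Compose setup.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                           # host port 3000 -> container port 8080
    environment:
      - OLLAMA_BASE_URL=https://example.com   # point at your Ollama server
    volumes:
      - open-webui:/app/backend/data          # persist application data
    restart: always

volumes:
  open-webui:
```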