From 62ec2651ba3cbb442dd7141efd19f4984f385202 Mon Sep 17 00:00:00 2001
From: Jannik S <69747628+jannikstdl@users.noreply.github.com>
Date: Wed, 10 Apr 2024 10:14:54 +0200
Subject: [PATCH 1/2] README.md Dockersection formatting and wording fix

---
 README.md | 32 ++++++++++++++------------------
 1 file changed, 14 insertions(+), 18 deletions(-)

diff --git a/README.md b/README.md
index 3c0093e71..12e6df9d3 100644
--- a/README.md
+++ b/README.md
@@ -92,68 +92,64 @@ Don't forget to explore our sibling project, [Open WebUI Community](https://open
 > [!NOTE]
 > Please note that for certain Docker environments, additional configurations might be needed. If you encounter any connection issues, our detailed guide on [Open WebUI Documentation](https://docs.openwebui.com/) is ready to assist you.
 
-### Quick Start with Docker 🐳
+### Quick Start with Docker (3 ways) 🐳
 
 > [!IMPORTANT]
 > When using Docker to install Open WebUI, make sure to include the `-v open-webui:/app/backend/data` in your Docker command. This step is crucial as it ensures your database is properly mounted and prevents any loss of data.
 
-- **If Ollama is on your computer**, use this command:
+1. **If Ollama is on your computer**, use this command:
 
   ```bash
   docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
   ```
 
-- **If Ollama is on a Different Server**, use this command:
+2. **If Ollama is on a Different Server**, use this command:
 
-- To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:
+   To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:
 
   ```bash
   docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
   ```
 
-- After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
+   After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
 
-- **If you want to customize your build with additional args**, use this commands:
+3. **If you want to customize your build with additional ARGS**, use this commands:
 
-  > [!NOTE]
-  > If you only want to use Open WebUI with Ollama included or CUDA acelleration it's recomented to use our official images with the tags :cuda or :with-ollama
-  > If you want a combination of both or more customisation options like a different embedding model and/or CUDA version you need to build the image yourself following the instructions below.
+> [!NOTE]
+> If you only want to use Open WebUI with Ollama included or CUDA acelleration it's recomented to use our official images with the tags :cuda or :ollama
+> If you want a combination of both or more customisation options like a different embedding model and/or CUDA version you need to build the image yourself following the instructions below.
 
-  **For the build:**
+  - **For the build:**
 
   ```bash
   docker build -t open-webui
   ```
 
-  Optional build ARGS (use them in the docker build command below if needed):
+  - **Optional build ARGS (use them in the docker build command below if needed):**
 
-  e.g.
+  e.g.
 
   ```bash
   --build-arg="USE_EMBEDDING_MODEL=intfloat/multilingual-e5-large"
   ```
   For "intfloat/multilingual-e5-large" custom embedding model (default is all-MiniLM-L6-v2), only works with [sentence transforer models](https://huggingface.co/models?library=sentence-transformers). Current [Leaderbord](https://huggingface.co/spaces/mteb/leaderboard) of embedding models.
 
   ```bash
   --build-arg="USE_OLLAMA=true"
   ```
   For including ollama in the image.
 
   ```bash
   --build-arg="USE_CUDA=true"
   ```
   To use CUDA exeleration for the embedding and whisper models.
 
-  > [!NOTE]
-  > You need to install the [Nvidia CUDA container toolkit](https://docs.nvidia.com/dgx/nvidia-container-runtime-upgrade/) on your machine to be able to set CUDA as the Docker engine. Only works with Linux - use WSL for Windows!
+> [!NOTE]
+> You need to install the [Nvidia CUDA container toolkit](https://docs.nvidia.com/dgx/nvidia-container-runtime-upgrade/) on your machine to be able to set CUDA as the Docker engine. Only works with Linux - use WSL for Windows!
 
   ```bash
   --build-arg="USE_CUDA_VER=cu117"
   ```
   For CUDA 11 (default is CUDA 12)
 
   **To run the image:**

From 27f01b0bc8e8051d59c7a11ec08d4bc8fc33c3bb Mon Sep 17 00:00:00 2001
From: Jannik S <69747628+jannikstdl@users.noreply.github.com>
Date: Wed, 10 Apr 2024 10:20:52 +0200
Subject: [PATCH 2/2] Update README.md

---
 README.md | 63 ++++---------------------------------------------------
 1 file changed, 4 insertions(+), 59 deletions(-)

diff --git a/README.md b/README.md
index 12e6df9d3..386b00f58 100644
--- a/README.md
+++ b/README.md
@@ -92,18 +92,18 @@ Don't forget to explore our sibling project, [Open WebUI Community](https://open
 > [!NOTE]
 > Please note that for certain Docker environments, additional configurations might be needed. If you encounter any connection issues, our detailed guide on [Open WebUI Documentation](https://docs.openwebui.com/) is ready to assist you.
 
-### Quick Start with Docker (3 ways) 🐳
+### Quick Start with Docker 🐳
 
 > [!IMPORTANT]
 > When using Docker to install Open WebUI, make sure to include the `-v open-webui:/app/backend/data` in your Docker command. This step is crucial as it ensures your database is properly mounted and prevents any loss of data.
 
-1. **If Ollama is on your computer**, use this command:
+   **If Ollama is on your computer**, use this command:
 
   ```bash
   docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
   ```
 
-2. **If Ollama is on a Different Server**, use this command:
+   **If Ollama is on a Different Server**, use this command:
 
   To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:
 
@@ -112,62 +112,7 @@ Don't forget to explore our sibling project, [Open WebUI Community](https://open
   ```
 
   After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
-
-3. **If you want to customize your build with additional ARGS**, use this commands:
-
-> [!NOTE]
-> If you only want to use Open WebUI with Ollama included or CUDA acelleration it's recomented to use our official images with the tags :cuda or :ollama
-> If you want a combination of both or more customisation options like a different embedding model and/or CUDA version you need to build the image yourself following the instructions below.
-
-  - **For the build:**
-
-  ```bash
-  docker build -t open-webui
-  ```
-
-  - **Optional build ARGS (use them in the docker build command below if needed):**
-
-  e.g.
-
-  ```bash
-  --build-arg="USE_EMBEDDING_MODEL=intfloat/multilingual-e5-large"
-  ```
-  For "intfloat/multilingual-e5-large" custom embedding model (default is all-MiniLM-L6-v2), only works with [sentence transforer models](https://huggingface.co/models?library=sentence-transformers). Current [Leaderbord](https://huggingface.co/spaces/mteb/leaderboard) of embedding models.
-
-  ```bash
-  --build-arg="USE_OLLAMA=true"
-  ```
-  For including ollama in the image.
-
-  ```bash
-  --build-arg="USE_CUDA=true"
-  ```
-  To use CUDA exeleration for the embedding and whisper models.
-
-> [!NOTE]
-> You need to install the [Nvidia CUDA container toolkit](https://docs.nvidia.com/dgx/nvidia-container-runtime-upgrade/) on your machine to be able to set CUDA as the Docker engine. Only works with Linux - use WSL for Windows!
-
-  ```bash
-  --build-arg="USE_CUDA_VER=cu117"
-  ```
-  For CUDA 11 (default is CUDA 12)
-
-  **To run the image:**
-
-  - **If you DID NOT use the USE_CUDA=true build ARG**, use this command:
-
-  ```bash
-  docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
-  ```
-
-  - **If you DID use the USE_CUDA=true build ARG**, use this command:
-
-  ```bash
-  docker run --gpus all -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
-  ```
-
-  After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
-
+
 #### Open WebUI: Server Connection Error
 
 If you're experiencing connection issues, it's often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container . Use the `--network=host` flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: `http://localhost:8080`.
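For reference, the build-customization flow that patch 2 removes from the README can be sketched as a single shell session. This is an illustrative sketch, not the project's documented procedure: the image tag `open-webui-custom` and the trailing `.` build context are assumptions added here (the README snippet's own `docker build -t open-webui` line omits a build context, which `docker build` requires).

```shell
# Sketch: build a customized Open WebUI image from a checkout of the repository.
# The "." build context is required by docker build; the README snippet omits it.
docker build \
  --build-arg="USE_EMBEDDING_MODEL=intfloat/multilingual-e5-large" \
  -t open-webui-custom .   # "open-webui-custom" is an illustrative tag

# Run the custom image; add --gpus all only if it was built with USE_CUDA=true.
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  open-webui-custom
```

The same `-v open-webui:/app/backend/data` volume mount applies here as in the quick-start commands, so the database survives container rebuilds.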