Mirror of https://github.com/open-webui/open-webui, synced 2024-11-25 13:29:53 +00:00
Merge pull request #1489 from jannikstdl/patch-1
README.md Docker section formatting and wording fix
commit 546efe0d7b

README.md
@@ -97,81 +97,22 @@ Don't forget to explore our sibling project, [Open WebUI Community](https://open
> [!IMPORTANT]
> When using Docker to install Open WebUI, make sure to include the `-v open-webui:/app/backend/data` in your Docker command. This step is crucial as it ensures your database is properly mounted and prevents any loss of data.

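If you want to confirm that the named volume exists and see where Docker stores its data, one way to check (a sketch using standard Docker CLI commands, not part of the original instructions) is:

```bash
# List volumes and inspect the one holding the Open WebUI database
docker volume ls
docker volume inspect open-webui
```
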
- **If Ollama is on your computer**, use this command:

```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

- **If Ollama is on a Different Server**, use this command:

To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:

```bash
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

- After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄

- **If you want to customize your build with additional args**, use these commands:

> [!NOTE]
> If you only want to use Open WebUI with Ollama included or with CUDA acceleration, it's recommended to use our official images with the `:cuda` or `:with-ollama` tags.
> If you want a combination of both, or more customisation options such as a different embedding model and/or CUDA version, you need to build the image yourself following the instructions below.

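As a rough sketch of what using those prebuilt images looks like (the `:cuda` and `:with-ollama` tag names are taken from the note above; verify the exact tags on the registry before relying on them):

```bash
# Run the prebuilt CUDA-enabled image instead of building your own
# (tag name as given in the note above; confirm it exists on ghcr.io)
docker run --gpus all -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
```
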
**For the build:**

```bash
docker build -t open-webui .
```

Optional build ARGS (use them in the `docker build` command if needed; a combined example follows the list below):

e.g.

```bash
--build-arg="USE_EMBEDDING_MODEL=intfloat/multilingual-e5-large"
```

Sets "intfloat/multilingual-e5-large" as a custom embedding model (the default is all-MiniLM-L6-v2). This only works with [sentence transformer models](https://huggingface.co/models?library=sentence-transformers); see the current [leaderboard](https://huggingface.co/spaces/mteb/leaderboard) of embedding models.

```bash
--build-arg="USE_OLLAMA=true"
```

Includes Ollama in the image.

```bash
--build-arg="USE_CUDA=true"
```

Enables CUDA acceleration for the embedding and Whisper models.

> [!NOTE]
> You need to install the [Nvidia CUDA container toolkit](https://docs.nvidia.com/dgx/nvidia-container-runtime-upgrade/) on your machine to be able to use CUDA with the Docker engine. This only works on Linux - use WSL on Windows!

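On a Debian/Ubuntu host, installing and enabling the toolkit roughly looks like this (a sketch that assumes the NVIDIA apt repository is already configured; follow the linked NVIDIA documentation for your distribution):

```bash
# Install the NVIDIA container toolkit (assumes the NVIDIA apt repo is set up)
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
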
```bash
--build-arg="USE_CUDA_VER=cu117"
```

For CUDA 11 (the default is CUDA 12).

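Putting it together, a full build command that combines several of the optional ARGS above could look like this (a sketch; include only the args you actually need):

```bash
# Example: build with Ollama bundled, CUDA acceleration, and a custom embedding model
docker build \
  --build-arg="USE_OLLAMA=true" \
  --build-arg="USE_CUDA=true" \
  --build-arg="USE_EMBEDDING_MODEL=intfloat/multilingual-e5-large" \
  -t open-webui .
```
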
**To run the image:**

- **If you DID NOT use the USE_CUDA=true build ARG**, use this command:

```bash
docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --restart always open-webui
```

- **If you DID use the USE_CUDA=true build ARG**, use this command:

```bash
docker run --gpus all -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --restart always open-webui
```

- After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄

#### Open WebUI: Server Connection Error

If you're experiencing connection issues, it's often because the WebUI Docker container cannot reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) from inside the container. Use the `--network=host` flag in your Docker command to resolve this. Note that the port changes from 3000 to 8080, so the link becomes `http://localhost:8080`.

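For example, a run command using host networking might look like this (a sketch based on the flag described above; with `--network=host` the `-p` port mapping has no effect, and pointing `OLLAMA_BASE_URL` at 127.0.0.1 is an assumption about where your Ollama server listens):

```bash
# Host networking lets the container reach Ollama on 127.0.0.1:11434 directly;
# the WebUI is then served on port 8080 of the host
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```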