Merge pull request #55 from justinh-rahb/patch-1

Timothy Jaeryang Baek 2024-05-01 12:44:19 -07:00 committed by GitHub
commit 39b40db647
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
2 changed files with 12 additions and 41 deletions


@@ -108,7 +108,13 @@ When using Docker to install Open WebUI, make sure to include the `-v open-webui
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
- **To run Open WebUI with Nvidia GPU support**, use this command:
```bash
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
```
### Installing Ollama
- **With GPU Support**, use this command:
@@ -122,44 +128,9 @@ When using Docker to install Open WebUI, make sure to include the `-v open-webui
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
```
After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
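Once the container is up, a quick reachability check from the host can confirm the UI is serving (this assumes the default `-p 3000:8080` mapping used above):

```shell
# Expect an HTTP status line such as "HTTP/1.1 200 OK" once startup finishes
curl -sI http://localhost:3000 | head -n 1
```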
### GPU Support
#### Nvidia CUDA
To run Ollama with Nvidia GPU support, utilize the Nvidia-docker tool for GPU access, and set the appropriate environment variables for CUDA support:
```bash
docker run -d -p 3000:8080 \
--gpus all \
--add-host=host.docker.internal:host-gateway \
--volume open-webui:/app/backend/data \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:main
```
#### AMD ROCm
To run Ollama with AMD GPU support, set the `HSA_OVERRIDE_GFX_VERSION` environment variable and ensure the Docker container can access the GPU:
```bash
docker run -d -p 3000:8080 \
-e HSA_OVERRIDE_GFX_VERSION=11.0.0 \
--device /dev/kfd \
--device /dev/dri \
--group-add video \
--add-host=host.docker.internal:host-gateway \
--volume open-webui:/app/backend/data \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:main
```
Replace `HSA_OVERRIDE_GFX_VERSION=11.0.0` with the version appropriate for your AMD GPU model as described in the earlier sections. This command ensures compatibility and optimal performance with AMD GPUs.
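If you are unsure which value to use, the gfx target that ROCm reports for your GPU can be split into the override's major.minor.step form. A minimal sketch, assuming ROCm's `rocminfo` is installed on the host (`gfx1100` below is only an example value):

```shell
# On the host, find the GPU's gfx target (requires ROCm; uncomment to run):
# rocminfo | grep -o -m1 'gfx[0-9a-f]*'

# Split a gfx target such as gfx1100 into the HSA override's form.
gfx=gfx1100                # example value; substitute your rocminfo output
v=${gfx#gfx}               # digits only, e.g. 1100
major=${v%??}              # everything but the last two digits -> 11
rest=${v#"$major"}         # the last two digits -> 00
echo "HSA_OVERRIDE_GFX_VERSION=${major}.${rest%?}.${rest#?}"   # -> 11.0.0
```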
#### Open WebUI: Server Connection Error
Encountering connection issues between the Open WebUI Docker container and the Ollama server? This problem often arises because distro-packaged versions of Docker—like those from the Ubuntu repository—do not support the `host.docker.internal` alias for reaching the host directly. Inside a container, referring to `localhost` or `127.0.0.1` typically points back to the container itself, not the host machine.
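Two common workarounds follow from that explanation; as a sketch (the `host-gateway` special value requires Docker 20.10 or newer, and 11434 is Ollama's default port):

```shell
# Option 1: map host.docker.internal to the host's gateway IP explicitly
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main

# Option 2: share the host's network namespace, so 127.0.0.1 inside the
# container is the host itself (-p is ignored with --network=host; the UI
# is then served on the container's own port, 8080)
docker run -d --network=host \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```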
@@ -183,7 +154,7 @@ For more details on networking in Docker and addressing common connectivity issu
docker compose up -d --build
```
- **For Nvidia GPU Support:** Use an additional Docker Compose file:
```bash
docker compose -f docker-compose.yaml -f docker-compose.gpu.yaml up -d --build
```


@@ -20,7 +20,7 @@ Open WebUI supports image generation through the **AUTOMATIC1111** [API](https:/
```
3. For Docker installation of WebUI with the environment variables preset, use the following command:
```
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e AUTOMATIC1111_BASE_URL=http://host.docker.internal:7860/ -e ENABLE_IMAGE_GENERATION=True -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
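Before pointing the container at AUTOMATIC1111, it can help to confirm the API is actually enabled and reachable from the host (this assumes AUTOMATIC1111 was launched with its `--api` flag; `/sdapi/v1/sd-models` is its model-listing endpoint):

```shell
# Expect a JSON array of installed checkpoints; an error page usually means
# the WebUI was started without --api
curl -sf http://127.0.0.1:7860/sdapi/v1/sd-models | head -c 200
```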
### Configuring Open WebUI
@@ -49,7 +49,7 @@ ComfyUI provides an alternative interface for managing and interacting with imag
```
3. For Docker installation of WebUI with the environment variables preset, use the following command:
```
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e COMFYUI_BASE_URL=http://host.docker.internal:7860/ -e ENABLE_IMAGE_GENERATION=True -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
### Configuring Open WebUI