Mirror of https://github.com/open-webui/open-webui
Commit 9f2b846ea7

.github/dependabot.yml
```diff
@@ -3,8 +3,7 @@ updates:
   - package-ecosystem: pip
     directory: '/backend'
     schedule:
-      interval: daily
-      time: '13:00'
+      interval: weekly
   - package-ecosystem: 'github-actions'
     directory: '/'
     schedule:
```
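For reference, a minimal sketch of what `.github/dependabot.yml` could look like after this change, reconstructed from the hunk above. The `version` key and the interval of the `github-actions` entry fall outside the diff, so they are assumptions here:

```yaml
# Sketch only: reconstructed from the diff above, not the complete file.
version: 2                               # assumed; required by the Dependabot config format
updates:
  - package-ecosystem: pip
    directory: '/backend'
    schedule:
      interval: weekly                   # changed from daily; the fixed time: '13:00' was dropped
  - package-ecosystem: 'github-actions'
    directory: '/'
    schedule:
      interval: weekly                   # assumed; this entry's interval is outside the hunk
```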

README.md

````diff
@@ -120,13 +120,15 @@ Don't forget to explore our sibling project, [Open WebUI Community](https://openwebui.com/)

 > [!TIP]
 > If you wish to utilize Open WebUI with Ollama included or CUDA acceleration, we recommend utilizing our official images tagged with either `:cuda` or `:ollama`. To enable CUDA, you must install the [Nvidia CUDA container toolkit](https://docs.nvidia.com/dgx/nvidia-container-runtime-upgrade/) on your Linux/WSL system.

-**If Ollama is on your computer**, use this command:
+### Installation with Default Configuration
+
+- **If Ollama is on your computer**, use this command:

 ```bash
 docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
 ```

-**If Ollama is on a Different Server**, use this command:
+- **If Ollama is on a Different Server**, use this command:

 To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:
````
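Not part of the diff, but a quick way to confirm that the container started by either command above is serving the UI, assuming the default `-p 3000:8080` mapping and that `curl` is available on the host:

```bash
# Hypothetical sanity check, not taken from the README.
docker ps --filter name=open-webui    # the container should be listed as running
curl -I http://localhost:3000         # expect an HTTP response once Open WebUI is up
```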
````diff
@@ -134,8 +136,50 @@ To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:
 docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
 ```

+- **To run Open WebUI with Nvidia GPU support**, use this command:
+
+  ```bash
+  docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
+  ```
+
+### Installation for OpenAI API Usage Only
+
+- **If you're only using OpenAI API**, use this command:
+
+  ```bash
+  docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
+  ```
+
+### Installing Open WebUI with Bundled Ollama Support
+
+This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:
+
+- **With GPU Support**:
+  Utilize GPU resources by running the following command:
+
+  ```bash
+  docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
+  ```
+
+- **For CPU Only**:
+  If you're not using a GPU, use this command instead:
+
+  ```bash
+  docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
+  ```
+
+Both commands facilitate a built-in, hassle-free installation of both Open WebUI and Ollama, ensuring that you can get everything up and running swiftly.
+
 After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄

+### Other Installation Methods
+
+We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/) or join our [Discord community](https://discord.gg/5rJgQTnV4s) for comprehensive guidance.
+
+### Troubleshooting
+
+Encountering connection issues? Our [Open WebUI Documentation](https://docs.openwebui.com/troubleshooting/) has got you covered. For further assistance and to join our vibrant community, visit the [Open WebUI Discord](https://discord.gg/5rJgQTnV4s).
+
 #### Open WebUI: Server Connection Error

 If you're experiencing connection issues, it’s often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container. Use the `--network=host` flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: `http://localhost:8080`.
````
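The added "Other Installation Methods" section mentions Docker Compose among the alternatives. As an illustration only, and not the compose file shipped with the repository, a compose service equivalent to the basic `docker run` command above might look like this:

```yaml
# Illustrative sketch mirroring the basic docker run command above;
# not the repository's own docker-compose.yaml.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data
    restart: always

volumes:
  open-webui:
```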
````diff
@@ -146,14 +190,6 @@ If you're experiencing connection issues, it’s often due to the WebUI docker container not being able to reach the Ollama server
 docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
 ```

-### Other Installation Methods
-
-We offer various installation alternatives, including non-Docker methods, Docker Compose, Kustomize, and Helm. Visit our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/) or join our [Discord community](https://discord.gg/5rJgQTnV4s) for comprehensive guidance.
-
-### Troubleshooting
-
-Encountering connection issues? Our [Open WebUI Documentation](https://docs.openwebui.com/troubleshooting/) has got you covered. For further assistance and to join our vibrant community, visit the [Open WebUI Discord](https://discord.gg/5rJgQTnV4s).
-
 ### Keeping Your Docker Installation Up-to-Date

 In case you want to update your local Docker installation to the latest version, you can do it with [Watchtower](https://containrrr.dev/watchtower/):
````
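The Watchtower step referenced in that last context line is not expanded in this excerpt. A typical one-off invocation, assuming the container is named `open-webui` as in the commands above, would be something like:

```bash
# Sketch of a one-off update via Watchtower; adjust the container name if yours differs.
# Mounting the Docker socket lets Watchtower pull the new image and recreate the container.
docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
```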