Open WebUI (Formerly Ollama WebUI) 👋


Imagine having a powerhouse of AI-driven conversations at your fingertips. Open WebUI is a self-hosted Web UI that brings the power of language models to your desktop. With its modular architecture, extensibility, and user-friendly interface, Open WebUI is a great platform for anyone looking to unlock the full potential of language models. It is capable of operating entirely offline and supports various LLM runners, including Ollama and OpenAI-compatible APIs. For more information, be sure to check out our Open WebUI Documentation.

Open WebUI Demo

Key Features of Open WebUI

  • 📚 Local RAG Integration: Dive into the future of chat interactions with Retrieval Augmented Generation (RAG) support, which seamlessly integrates document interactions into your chat experience. You can load documents directly into the chat or add files to your document library, then access them effortlessly using the # command in the prompt. This feature is in its alpha phase, so occasional issues may arise as we actively refine and enhance it for optimal performance and reliability.

  • 🔍 RAG Embedding Support: Change the RAG embedding model directly in document settings, enhancing document processing. This feature supports Ollama and OpenAI models. Take control of your document interactions.

  • 🌐 Web Browsing Capability: Seamlessly integrate websites into your chat experience using the # command followed by the URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions. Surf the web within your chat.

  • 🤖 Multiple Model Support: Seamlessly switch between different chat models for diverse interactions. Explore multiple perspectives in a single chat.

  • 🧩 Model Builder: Easily create Ollama models via the Web UI. Create and add custom characters/agents, customize chat elements, and import models effortlessly through Open WebUI Community integration. Design your ideal chat model.

  • 👥 '@' Model Integration: Harness the collective intelligence of multiple models in a single chat: seamlessly switch to any accessible local or external model mid-conversation using the @ command to specify the model by name. Unlock the power of multiple models.

  • 🎨 Image Generation Integration: Seamlessly incorporate image generation capabilities using options such as AUTOMATIC1111 API or ComfyUI (local), and OpenAI's DALL-E (external), enriching your chat experience with dynamic visual content. Bring your chats to life with images.

  • 🤝 OpenAI API Integration: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models. Customize the OpenAI API URL to link with LMStudio, GroqCloud, Mistral, OpenRouter, and more. Tap into the power of OpenAI.

  • 🔄 Multi-Modal Support: Seamlessly engage with models that support multimodal interactions, including images (e.g., LLaVA). Experience the future of chat interactions.

  • ⚙️ Fine-Tuned Control with Advanced Parameters: Gain a deeper level of control by adjusting parameters such as temperature, context length, and seed, and define your system prompts to tailor the conversation to your specific preferences and needs. Tailor your conversations to your needs.

  • 🌐🌍 Multilingual Support: Experience Open WebUI in your preferred language with our internationalization (i18n) support. Join us in expanding our supported languages! We're actively seeking contributors! Chat in your native tongue.

  • ↕️ Bi-Directional Chat Support: Easily switch between left-to-right and right-to-left chat directions to support languages written in either direction. Accommodate diverse language preferences.

  • 🌟 Continuous Updates: We are committed to improving Open WebUI with regular updates, fixes, and new features. Enjoy the latest innovations in chat technology.

Want to learn more about Open WebUI's features? Check out our Open WebUI documentation for a comprehensive overview!

🔗 Also Check Out Open WebUI Community!

Don't forget to explore our sibling project, Open WebUI Community, where you can discover, download, and explore customized Modelfiles. Open WebUI Community offers a wide range of exciting possibilities for enhancing your chat interactions with Open WebUI! 🚀

How to Install 🚀

Note

Please note that certain Docker environments might need additional configuration. If you encounter any connection issues, our detailed Open WebUI Documentation is ready to assist you.

Quick Start with Docker 🐳

Warning

When using Docker to install Open WebUI, make sure to include the -v open-webui:/app/backend/data flag in your Docker command. This step is crucial, as it ensures your database is properly mounted and prevents any loss of data.
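
To double-check that the volume exists, or to take a backup of it, standard Docker commands are enough; the tar-based backup below is a generic Docker pattern, not an Open WebUI-specific tool:

docker volume inspect open-webui
docker run --rm -v open-webui:/data -v "$(pwd)":/backup alpine tar czf /backup/open-webui-data.tar.gz -C /data .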

Tip

If you wish to use Open WebUI with Ollama included or with CUDA acceleration, we recommend our official images tagged :ollama or :cuda respectively. To enable CUDA, you must install the Nvidia CUDA container toolkit on your Linux/WSL system.
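
As a rough sketch, on a Debian/Ubuntu system the toolkit setup usually looks like the following; the package repository setup step is omitted here and package names can vary, so check Nvidia's official install guide for your distribution:

sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker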

Installation with Default Configuration

  • If Ollama is on your computer, use this command:

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    
  • If Ollama is on a different server, use this command:

    To connect to Ollama on another server, change the OLLAMA_BASE_URL to the server's URL:

    docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    
  • To run Open WebUI with Nvidia GPU support, use this command:

    docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
    

Installation for OpenAI API Usage Only

  • If you're only using OpenAI API, use this command:

    docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    
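If your provider is OpenAI-compatible but hosted elsewhere (LMStudio, GroqCloud, Mistral, OpenRouter, etc.), you can also point the connection at its base URL. The command below is only a sketch: it assumes the OPENAI_API_BASE_URL environment variable and uses Mistral's endpoint purely as an example, so verify both against the documentation for your provider:

docker run -d -p 3000:8080 -e OPENAI_API_BASE_URL=https://api.mistral.ai/v1 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main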

Installing Open WebUI with Bundled Ollama Support

This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:

  • With GPU Support: Utilize GPU resources by running the following command:

    docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
    
  • For CPU Only: If you're not using a GPU, use this command instead:

    docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
    

Both commands provide a hassle-free installation of both Open WebUI and Ollama, ensuring that you can get everything up and running swiftly.

After installation, you can access Open WebUI at http://localhost:3000. Enjoy! 😄
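
If the page doesn't come up, a quick sanity check with standard Docker commands (nothing Open WebUI-specific) will show whether the container is running and what it logged on startup:

docker ps --filter name=open-webui
docker logs open-webui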

Other Installation Methods

We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our Open WebUI Documentation or join our Discord community for comprehensive guidance.
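
As a minimal Docker Compose sketch mirroring the default docker run command above (the docker-compose.yaml shipped in this repository is the authoritative version):

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data
    restart: always

volumes:
  open-webui: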

Troubleshooting

Encountering connection issues? Our Open WebUI Documentation has got you covered. For further assistance and to join our vibrant community, visit the Open WebUI Discord.

Open WebUI: Server Connection Error

If you're experiencing connection issues, it's often because the WebUI Docker container cannot reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) from inside the container. Use the --network=host flag in your Docker command to resolve this. Note that the port changes from 3000 to 8080, so the link becomes: http://localhost:8080.

Example Docker Command:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Keeping Your Docker Installation Up-to-Date

To update your local Docker installation to the latest version, you can use Watchtower:

docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui

In the last part of the command, replace open-webui with your container name if it is different.
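
If you prefer updating manually rather than via Watchtower, the usual pull-and-recreate flow works, and your chat data survives in the open-webui volume:

docker pull ghcr.io/open-webui/open-webui:main
docker stop open-webui
docker rm open-webui
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main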

Moving from Ollama WebUI to Open WebUI

Check our Migration Guide available in our Open WebUI Documentation.

What's Next? 🌟

Discover upcoming features on our roadmap in the Open WebUI Documentation.

Supporters

A big shoutout to our amazing supporters who are helping to make this project possible! 🙏

Platinum Sponsors 🤍

  • We're looking for Sponsors!

Acknowledgments

Special thanks to Prof. Lawrence Kim and Prof. Nick Vincent for their invaluable support and guidance in shaping this project into a research endeavor. Grateful for your mentorship throughout the journey! 🙌

License 📜

This project is licensed under the MIT License - see the LICENSE file for details. 📄

Support 💬

If you have any questions, suggestions, or need assistance, please open an issue or join our Open WebUI Discord community to connect with us! 🤝

Star History

Star History Chart

Created by Timothy J. Baek - Let's make Open WebUI even more amazing together! 💪