Merge pull request #266 from matthewhand/getting-started

Overhaul of Getting Started Guide and Documentation Structure
This commit is contained in:
Timothy Jaeryang Baek 2024-11-05 15:26:03 -08:00 committed by GitHub
commit 828dd13875
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
34 changed files with 1309 additions and 1007 deletions

View File

@ -1,55 +1,55 @@
---
name: Deploy site to Pages

on:
  # Runs on pushes targeting the default branch
  push:
    branches: ["main"]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version-file: ".node-version"
          cache: npm
      - name: Install dependencies
        run: npm ci
      - name: Build
        run: npm run build
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: ./build

  # Deployment job
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4

View File

@ -0,0 +1,201 @@
---
sidebar_position: 5
title: "🛠️ Development Guide"
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import { TopBanners } from "@site/src/components/TopBanners";
<TopBanners />
# 🛠️ Development Setup Guide
Welcome to the **Open WebUI Development Setup Guide!** Whether you're a novice or an experienced developer, this guide will help you set up a **local development environment** for both the frontend and backend components. Let's dive in! 🚀
## System Requirements
- **Operating System**: Linux (or WSL on Windows) or macOS
- **Python Version**: Python 3.11+
- **Node.js Version**: 20.10+
## Development Methods
<Tabs groupId="dev-setup">
<TabItem value="local" label="Local Setup">
### 🐧 Local Development Setup
1. **Clone the Repository**:
```bash
git clone https://github.com/open-webui/open-webui.git
cd open-webui
```
2. **Frontend Setup**:
- Create a `.env` file:
```bash
cp -RPp .env.example .env
```
- Install dependencies:
```bash
npm install
```
- Start the frontend server:
```bash
npm run dev
```
🌐 Available at: [http://localhost:5173](http://localhost:5173).
3. **Backend Setup**:
- Navigate to the backend:
```bash
cd backend
```
- Use **Conda** for environment setup:
```bash
conda create --name open-webui python=3.11
conda activate open-webui
```
- Install dependencies:
```bash
pip install -r requirements.txt -U
```
- Start the backend:
```bash
sh dev.sh
```
📄 API docs available at: [http://localhost:8080/docs](http://localhost:8080/docs).
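   Once `dev.sh` is running, a quick way to confirm the backend is serving requests is to query it from another terminal (the `/health` route here is an assumption based on the FastAPI app; the interactive reference at `/docs` is authoritative):
   ```bash
   # Quick sanity check that the backend is responding (route assumed; adjust if your build differs)
   curl -s http://localhost:8080/health
   ```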
</TabItem>
<TabItem value="docker" label="Docker Setup">
### 🐳 Docker-Based Development Setup
1. **Create the Docker Compose File**:
```yaml
name: open-webui-dev
services:
frontend:
build:
context: .
target: build
command: ["npm", "run", "dev"]
depends_on:
- backend
ports:
- "3000:5173"
extra_hosts:
- host.docker.internal:host-gateway
volumes:
- ./src:/app/src
backend:
build:
context: .
target: base
command: ["bash", "dev.sh"]
env_file: ".env"
environment:
- ENV=dev
- WEBUI_AUTH=False
ports:
- "8080:8080"
extra_hosts:
- host.docker.internal:host-gateway
volumes:
- ./backend:/app/backend
- data:/app/backend/data
volumes:
data: {}
```
2. **Start the Development Containers**:
```bash
docker compose -f compose-dev.yaml up --watch
```
3. **Stop the Containers**:
```bash
docker compose -f compose-dev.yaml down
```
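While the containers are up, standard Compose commands help you follow output from either service:
```bash
# Tail backend logs (service names match the compose file above)
docker compose -f compose-dev.yaml logs -f backend
# Or tail the frontend dev server
docker compose -f compose-dev.yaml logs -f frontend
```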
</TabItem>
<TabItem value="conda" label="Optional Conda Setup">
### Conda Environment Setup
If you prefer using **Conda** for isolation:
1. **Create and Activate the Environment**:
```bash
conda create --name open-webui-dev python=3.11
conda activate open-webui-dev
```
2. **Install Dependencies**:
```bash
pip install -r requirements.txt
```
3. **Run the Servers**:
- Frontend:
```bash
npm run dev
```
- Backend:
```bash
sh dev.sh
```
</TabItem>
<TabItem value="troubleshooting" label="Troubleshooting">
## 🐛 Troubleshooting
### **FATAL ERROR: Reached Heap Limit**
If you encounter memory-related errors during the build, increase the **Node.js heap size**:
1. **Modify Dockerfile**:
```dockerfile
ENV NODE_OPTIONS=--max-old-space-size=4096
```
2. **Allocate at least 4 GB of RAM** to Node.js.
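Outside Docker, the same limit can be raised for a single local build without editing any files:
```bash
# Give the Node.js build up to 4 GB of heap for this invocation only
NODE_OPTIONS=--max-old-space-size=4096 npm run build
```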
---
### **Other Issues**
- **Port Conflicts**:
Ensure that no other processes are using **ports 8080 or 5173**.
- **Hot Reload Not Working**:
Verify that **watch mode** is enabled for both frontend and backend.
</TabItem>
</Tabs>
## Contributing to Open WebUI
### Local Workflow
1. **Commit Changes Regularly** to track progress.
2. **Sync with the Main Branch** to avoid conflicts:
```bash
git pull origin main
```
3. **Run Tests Before Pushing**:
```bash
npm run test
```
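A typical feature-branch flow for a contribution might look like this (the branch name is illustrative):
```bash
git checkout -b feat/my-change        # illustrative branch name
git add -A && git commit -m "Describe the change"
git pull --rebase origin main         # sync with main before pushing
git push -u origin feat/my-change     # then open a pull request on GitHub
```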
Happy coding! 🎉

View File

@ -19,7 +19,7 @@ Last updated: v0.3.20
The following environment variables are used by `backend/config.py` to provide Open WebUI startup
configuration. Please note that some variables may have different default values depending on
whether you're running Open WebUI directly or via Docker. For more information on logging
environment variables, see our [logging documentation](./logging#appbackend).
### General

View File

@ -0,0 +1,27 @@
---
sidebar_position: 6
title: "🔒HTTPS Encryption"
---
## Overview
While HTTPS encryption is **not required** to operate Open WebUI in most cases, certain features—such as **Voice Calls**—will be blocked by modern web browsers unless HTTPS is enabled. If you do not plan to use these features, you can skip this section.
## Importance of HTTPS
For deployments at high risk of traffic interception, such as those hosted on the internet, it is recommended to implement HTTPS encryption. This ensures that the username/password signup and authentication process remains secure, protecting sensitive user data from potential threats.
## Choosing Your HTTPS Solution
The choice of HTTPS encryption solution is up to the user and should align with the existing infrastructure. Here are some common scenarios:
- **AWS Environments**: Utilizing an AWS Elastic Load Balancer is often a practical choice for managing HTTPS.
- **Docker Container Environments**: Popular solutions include Nginx, Traefik, and Caddy (see the one-line Caddy sketch after this list).
- **Cloudflare**: Offers easy HTTPS setup with minimal server-side configuration, suitable for a wide range of applications.
- **Ngrok**: Provides a quick way to set up HTTPS for local development environments, particularly useful for testing and demos.
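As a minimal sketch of the Caddy option above, a single command can terminate TLS in front of a locally running Open WebUI container (the domain is a placeholder; Caddy provisions certificates automatically when ports 80/443 are reachable):
```bash
# Assumes Caddy is installed; replace example.com with your domain
caddy reverse-proxy --from example.com --to localhost:3000
```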
## Further Guidance
For detailed instructions and community-submitted tutorials on actual HTTPS encryption deployments, please refer to the [Deployment Tutorials](../../tutorials/deployment/).
This documentation provides a starting point for understanding the options available for enabling HTTPS encryption in your environment.

View File

@ -0,0 +1,43 @@
---
sidebar_position: 4
title: "📚 Advanced Topics"
---
# 📚 Advanced Topics
Explore deeper concepts and advanced configurations of Open WebUI to enhance your setup.
---
## 🔧 Environment Configuration
Understand how to set environment variables to customize your Open WebUI setup.
[Environment Configuration Guide](./env-configuration)
---
## 📊 Logging and Monitoring
Learn how to monitor, log, and troubleshoot your system effectively.
[Logging and Monitoring Guide](./logging)
---
## 🛠️ Development Guide
Dive into the development process and learn how to contribute to Open WebUI.
[Development Guide](./development)
---
## 🔒 HTTPS Encryption
Ensure secure communication by implementing HTTPS encryption in your deployment.
[HTTPS Encryption Guide](./https-encryption)
---
## 🔗 API Endpoints
Get essential information for API integration and automation using our models.
[API Endpoints Guide](./api-endpoints)
---
Looking for installation instructions? Head over to our [Quick Start Guide](../quick-start).
Need to explore core features? Check out [Using OpenWebUI](../using-openwebui).

View File

@ -1,5 +1,5 @@
---
sidebar_position: 5
title: "📜 Open WebUI Logging"
---

View File

@ -1,199 +0,0 @@
---
sidebar_position: 6
title: "🛠️ Development Guide"
---
import { TopBanners } from "@site/src/components/TopBanners";
<TopBanners />
Welcome to the Open WebUI Development Setup Guide! 🌟 Whether you're a novice or a veteran in the software development world, this guide is designed to assist you in establishing a functional local development environment for both the frontend and backend components of Open WebUI. Let's get started and set up your development environment swiftly! 🚀
## System Requirements
Before diving into the setup, make sure your system meets the following requirements:
- **Operating System**: Linux (WSL) or macOS (Instructions provided here specifically cater to these operating systems)
- **Python Version**: Python 3.11
## 🐧 Linux/macOS Setup Guide
This section provides a step-by-step process to get your development environment ready on Linux (WSL) or macOS platforms.
### 📡 Cloning the Repository
First, you'll need to clone the Open WebUI repository and switch to the directory:
```sh
git clone https://github.com/open-webui/open-webui.git
cd open-webui
```
### 🖥️ Frontend Server Setup
To set up the frontend server, follow these instructions:
1. **Environment Configuration**:
Duplicate the environment configuration file:
```sh
cp -RPp .env.example .env
```
2. **Install Dependencies**:
Run the following commands to install necessary dependencies:
```sh
npm install
```
3. **Launch the Server**:
Start the server with:
```sh
npm run dev
```
🌐 The frontend server will be available at: http://localhost:5173. Please note that for the frontend server to function correctly, the backend server should be running concurrently.
### 🖥️ Backend Server Setup
Setting up the backend server involves a few more steps; Python 3.11 is required for Open WebUI:
1. **Change Directory**:
Open a new terminal window and navigate to the backend directory:
```sh
cd open-webui/backend
```
2. **Python Environment Setup** (Using Conda Recommended):
- Create and activate a Conda environment with Python 3.11:
```sh
conda create --name open-webui python=3.11
conda activate open-webui
```
3. **Install Backend Dependencies**:
Install all the required Python libraries:
```sh
pip install -r requirements.txt -U
```
4. **Start the Backend Application**:
Launch the backend application with:
```sh
sh dev.sh
```
📄 Access the backend API documentation at: http://localhost:8080/docs. The backend supports hot reloading, making your development process smoother by automatically reflecting changes.
That's it! You now have both the frontend and backend servers running. Explore the API documentation and start developing features for Open WebUI. Happy coding! 🎉
## 🐳 Running in a Docker Container
For those who prefer using Docker, here's how you can set things up:
1. **Initialize Configuration:**
Assuming you have already cloned the repository and created a `.env` file, create a new file named `compose-dev.yaml`. This configuration uses Docker Compose to ease the development setup.
```yaml
name: open-webui-dev
services:
frontend:
build:
context: .
target: build
command: ["npm", "run", "dev"]
depends_on:
- backend
extra_hosts:
- host.docker.internal:host-gateway
ports:
- "3000:5173"
develop:
watch:
path: ./src
action: sync
backend:
build:
context: .
target: base
command: ["bash", "dev.sh"]
env_file: ".env"
environment:
- ENV=dev
- WEBUI_AUTH=False
volumes:
- data:/app/backend/data
extra_hosts:
- host.docker.internal:host-gateway
ports:
- "8080:8080"
restart: always
develop:
watch:
path: ./backend
action: sync
volumes:
data: {}
```
2. **Start Development Containers:**
```sh
docker compose -f compose-dev.yaml up --watch
```
This command will start the frontend and backend servers in hot reload mode. Changes in your source files will trigger an automatic refresh. The web app will be available at http://localhost:3000 and Backend API docs at http://localhost:8080/docs.
3. **Stopping the Containers:**
To stop the containers, you can use:
```sh
docker compose -f compose-dev.yaml down
```
### 🔄 Integration with Pipelines
If your development involves [Pipelines](https://docs.openwebui.com/pipelines/), you can enhance your Docker setup:
```yaml
services:
pipelines:
ports:
- "9099:9099"
volumes:
- ./pipelines:/app/pipelines
extra_hosts:
- host.docker.internal:host-gateway
restart: always
```
This setup involves mounting the `pipelines` directory to ensure any changes reflect immediately, maintaining high development agility.
:::note
This configuration uses volume bind-mounts. Learn more about how they differ from named volumes [here](https://docs.docker.com/storage/bind-mounts/).
:::
## 🐛 Troubleshooting
### FATAL ERROR: Reached heap limit
When you encounter a memory-related error during the Docker build process—especially while executing `npm run build`—it typically indicates that the JavaScript heap has exceeded its memory limit. One effective solution is to increase the memory allocated to Node.js by adjusting the `NODE_OPTIONS` environment variable. This allows you to set a higher maximum heap size, which can help prevent out-of-memory errors during the build process. If you encounter this issue, try to allocate at least 4 GB of RAM, or higher if you have enough RAM.
You can increase the memory allocated to Node.js by adding the following line just before `npm run build` in the `Dockerfile`.
```docker title=/Dockerfile
ENV NODE_OPTIONS=--max-old-space-size=4096
```
---
Through these setup steps, both new and experienced contributors can seamlessly integrate into the development workflow of Open WebUI. Happy coding! 🎉

View File

@ -0,0 +1,27 @@
---
sidebar_position: 3
title: "🚀Getting Started"
---
# Getting Started with Open WebUI
Welcome to the **Open WebUI Documentation Hub!** Below is a list of essential guides and resources to help you get started, manage, and develop with Open WebUI.
---
## ⏱️ Quick Start
Get up and running quickly with our [Quick Start Guide](./quick-start).
---
## 📚 Using OpenWebUI
Learn the basics and explore key concepts in our [Using OpenWebUI Guide](./using-openwebui).
---
## 🛠️ Advanced Topics
Take a deeper dive into configurations and development tips in our [Advanced Topics Guide](./advanced-topics).
---
Happy exploring! 🎉 If you have questions, join our [community](https://discord.gg/5rJgQTnV4s) or raise an issue on [GitHub](https://github.com/open-webui/open-webui).

View File

@ -1,585 +0,0 @@
---
sidebar_position: 3
title: "🚀 Getting Started"
---
import { TopBanners } from "@site/src/components/TopBanners";
<TopBanners />
## How to Install 🚀
:::info **Important Note on User Roles and Privacy:**
- **Admin Creation:** The first account created on Open WebUI gains **Administrator privileges**, controlling user management and system settings.
- **User Registrations:** Subsequent sign-ups start with **Pending** status, requiring Administrator approval for access.
- **Privacy and Data Security:** **All your data**, including login details, is **locally stored** on your device. Open WebUI ensures **strict confidentiality** and **no external requests** for enhanced privacy and security.
:::
## Quick Start with Docker 🐳 (Recommended)
:::tip
#### Disabling Login for Single User
If you want to disable login for a single-user setup, set [`WEBUI_AUTH`](/getting-started/env-configuration) to `False`. This will bypass the login page.
:::

:::warning
You cannot switch between single-user mode and multi-account mode after this change.
:::
:::danger
When using Docker to install Open WebUI, make sure to include the `-v open-webui:/app/backend/data` flag in your Docker command. This step is crucial, as it ensures your database is properly mounted and prevents any loss of data.
:::
<details>
<summary>Before You Begin</summary>
#### Installing Docker
#### For Windows and Mac Users:
- Download Docker Desktop from [Docker's official website](https://www.docker.com/products/docker-desktop).
- Follow the installation instructions provided on the website. After installation, open Docker Desktop to ensure it's running properly.
#### For Ubuntu Users:
1. **Open your terminal.**
2. **Set up Docker's apt repository:**
- Update your package index:
```bash
sudo apt-get update
```
- Install packages to allow apt to use a repository over HTTPS:
```bash
sudo apt-get install ca-certificates curl
```
- Create a directory for the Docker apt keyring:
```bash
sudo install -m 0755 -d /etc/apt/keyrings
```
- Add Docker's official GPG key:
```bash
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
```
- Add the Docker repository to Apt sources:
```bash
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
:::note
If you're using an Ubuntu derivative distro, such as Linux Mint, you might need to use `UBUNTU_CODENAME` instead of `VERSION_CODENAME`.
:::
3. **Install Docker Engine:**
- Update your package index again:
```bash
sudo apt-get update
```
- Install Docker Engine, CLI, and containerd:
```bash
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
```
4. **Verify the Docker installation:**
- Use the following command to run a test image:
```bash
sudo docker run hello-world
```
This command downloads a test image and runs it in a container. If successful, it prints an informational message confirming that Docker is installed and working correctly.
#### Other Linux Distributions:
- For other Linux distributions, please refer to the [official Docker documentation](https://docs.docker.com/engine/install/) for installation instructions specific to your distro.
#### Ensure You Have the Latest Version of Ollama:
- Download the latest version from [https://ollama.com/](https://ollama.com/).
#### Verify Ollama Installation:
- After installing Ollama, verify its functionality by accessing [http://127.0.0.1:11434/](http://127.0.0.1:11434/) in your web browser. Note that the port number might be different based on your installation.
</details>
<details>
<summary>Data Storage in Docker</summary>
This tutorial uses [Docker named volumes](https://docs.docker.com/storage/volumes/) to guarantee the **persistence of your data**. This might make it difficult to know exactly where your data is stored on your machine if this is your first time using Docker. Alternatively, you can replace the volume name with an absolute path on your host machine to link your container data to a folder on your computer using a [bind mount](https://docs.docker.com/storage/bind-mounts/).
**Example**: change `-v open-webui:/app/backend/data` to `-v /path/to/folder:/app/backend/data`
Ensure you have the proper access rights to the folder on your host machine.
Visit the [Docker documentation](https://docs.docker.com/storage/) to understand more about volumes and bind mounts.
</details>
### Installation with Default Configuration
- **If Ollama is on your computer**, use this command:
```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
- **If Ollama is on a Different Server**, use this command:
To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:
```bash
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
- **To run Open WebUI with Nvidia GPU support**, use this command:
```bash
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
```
This will result in a faster Bundled Ollama, faster Speech-To-Text and faster RAG embeddings if using SentenceTransformers.
### Installation for OpenAI API Usage Only
- **If you're only using OpenAI API**, use this command:
```bash
docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
### Installing Open WebUI with Bundled Ollama Support
This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:
- **With GPU Support**:
Utilize GPU resources by running the following command:
```bash
docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
```
- **For CPU Only**:
If you're not using a GPU, use this command instead:
```bash
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
```
Both commands facilitate a built-in, hassle-free installation of both Open WebUI and Ollama, ensuring that you can get everything up and running swiftly.
After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
## Manual Installation
### Installation with `pip` (Beta)
For users who prefer to use Python's package manager `pip`, Open WebUI offers a pip-based installation method. Python 3.11 is required for this method.
1. **Install Open WebUI**:
Open your terminal and run the following command:
```bash
pip install open-webui
```
2. **Start Open WebUI**:
Once installed, start the server using:
```bash
open-webui serve
```
This method installs all necessary dependencies and starts Open WebUI, allowing for a simple and efficient setup. After installation, you can access Open WebUI at [http://localhost:8080](http://localhost:8080). Enjoy! 😄
### Install from Open WebUI GitHub Repo
:::info
Open WebUI consists of two primary components: the frontend and the backend (which serves as a reverse proxy, handling static frontend files, and additional features). Both need to be running concurrently for the development environment.
:::
#### Requirements 📦
- 🐰 [Node.js](https://nodejs.org/en) >= 20.10
- 🐍 [Python](https://python.org) >= 3.11
#### Build and Install 🛠️
Run the following commands to install:
For Linux/macOS:
```sh
git clone https://github.com/open-webui/open-webui.git
cd open-webui/
# Copying required .env file
cp -RPp .env.example .env
# Building Frontend Using Node
npm install
npm run build
cd ./backend
# Optional: To install using Conda as your development environment, follow these instructions:
# Create and activate a Conda environment
conda create --name open-webui-env python=3.11
conda activate open-webui-env
# Install dependencies
pip install -r requirements.txt -U
# Start the application
bash start.sh
```
For Windows:
```powershell
git clone https://github.com/open-webui/open-webui.git
cd open-webui
copy .env.example .env
npm install
npm run build
cd .\backend
# Optional: To install using Conda as your development environment, follow these instructions:
# Create and activate a Conda environment
conda create --name open-webui-env python=3.11
conda activate open-webui-env
pip install -r requirements.txt -U
start.bat
```
You should have Open WebUI up and running at http://localhost:8080/. Enjoy! 😄
## Docker Compose
#### Using Docker Compose
- If you don't have Ollama yet, use Docker Compose for easy installation. Run this command:
```bash
docker compose up -d --build
```
- **For Nvidia GPU Support:** Use an additional Docker Compose file:
```bash
docker compose -f docker-compose.yaml -f docker-compose.gpu.yaml up -d --build
```
- **For AMD GPU Support:** Some AMD GPUs require setting an environment variable for proper functionality:
```bash
HSA_OVERRIDE_GFX_VERSION=11.0.0 docker compose -f docker-compose.yaml -f docker-compose.amdgpu.yaml up -d --build
```
<details>
<summary>AMD GPU Support with HSA_OVERRIDE_GFX_VERSION</summary>
For AMD GPU users encountering compatibility issues, setting the `HSA_OVERRIDE_GFX_VERSION` environment variable is crucial. This variable instructs the ROCm platform to emulate a specific GPU architecture, ensuring compatibility with various AMD GPUs not officially supported. Depending on your GPU model, adjust the `HSA_OVERRIDE_GFX_VERSION` as follows:
- **For RDNA1 & RDNA2 GPUs** (e.g., RX 6700, RX 680M): Use `HSA_OVERRIDE_GFX_VERSION=10.3.0`.
- **For RDNA3 GPUs**: Set `HSA_OVERRIDE_GFX_VERSION=11.0.0`.
- **For older GCN (Graphics Core Next) GPUs**: The version to use varies. GCN 4th gen and earlier might require different settings, such as `ROC_ENABLE_PRE_VEGA=1` for GCN4, or `HSA_OVERRIDE_GFX_VERSION=9.0.0` for Vega (GCN5.0) emulation.
Set `HSA_OVERRIDE_GFX_VERSION` to the value appropriate for your GPU model, following the guidelines above. For a detailed list of compatible versions and more in-depth instructions, refer to the [ROCm documentation](https://rocm.docs.amd.com) and the [openSUSE Wiki on AMD GPGPU](https://en.opensuse.org/SDB:AMD_GPGPU).
Example command for RDNA1 & RDNA2 GPUs:
```bash
HSA_OVERRIDE_GFX_VERSION=10.3.0 docker compose -f docker-compose.yaml -f docker-compose.amdgpu.yaml up -d --build
```
</details>
- **To Expose Ollama API:** Use another Docker Compose file:
```bash
docker compose -f docker-compose.yaml -f docker-compose.api.yaml up -d --build
```
#### Using `run-compose.sh` Script (Linux or Docker-Enabled WSL2 on Windows)
- Give execute permission to the script:
```bash
chmod +x run-compose.sh
```
- For CPU-only container:
```bash
./run-compose.sh
```
- For GPU support (read the note about GPU compatibility):
```bash
./run-compose.sh --enable-gpu
```
- To build the latest local version, add `--build`:
```bash
./run-compose.sh --enable-gpu --build
```
## Docker Swarm
This installation method requires knowledge of Docker Swarm, as it utilizes a stack file to deploy three separate containers as services in a Docker Swarm.
It includes isolated containers of ChromaDB, Ollama, and OpenWebUI.
Additionally, there are pre-filled [Environment Variables](/getting-started/env-configuration) to further illustrate the setup.
Choose the appropriate command based on your hardware setup:
- **Before Starting**:
Directories for your volumes need to be created on the host, or you can specify a custom location or volume.
The current example utilizes an isolated dir `data`, which is within the same dir as the `docker-stack.yaml`.
- **For example**:
```bash
mkdir -p data/open-webui data/chromadb data/ollama
```
- **With GPU Support**:
#### Docker-stack.yaml
```yaml
version: '3.9'
services:
openWebUI:
image: ghcr.io/open-webui/open-webui:main
depends_on:
- chromadb
- ollama
volumes:
- ./data/open-webui:/app/backend/data
environment:
DATA_DIR: /app/backend/data
OLLAMA_BASE_URLS: http://ollama:11434
CHROMA_HTTP_PORT: 8000
CHROMA_HTTP_HOST: chromadb
CHROMA_TENANT: default_tenant
VECTOR_DB: chroma
WEBUI_NAME: Awesome ChatBot
CORS_ALLOW_ORIGIN: "*" # This is the current Default, will need to change before going live
RAG_EMBEDDING_ENGINE: ollama
RAG_EMBEDDING_MODEL: nomic-embed-text-v1.5
RAG_EMBEDDING_MODEL_TRUST_REMOTE_CODE: "True"
ports:
- target: 8080
published: 8080
mode: overlay
deploy:
replicas: 1
restart_policy:
condition: any
delay: 5s
max_attempts: 3
chromadb:
hostname: chromadb
image: chromadb/chroma:0.5.15
volumes:
- ./data/chromadb:/chroma/chroma
environment:
- IS_PERSISTENT=TRUE
- ALLOW_RESET=TRUE
- PERSIST_DIRECTORY=/chroma/chroma
ports:
- target: 8000
published: 8000
mode: overlay
deploy:
replicas: 1
restart_policy:
condition: any
delay: 5s
max_attempts: 3
healthcheck:
test: ["CMD-SHELL", "curl localhost:8000/api/v1/heartbeat || exit 1"]
interval: 10s
retries: 2
start_period: 5s
timeout: 10s
ollama:
image: ollama/ollama:latest
hostname: ollama
ports:
- target: 11434
published: 11434
mode: overlay
deploy:
resources:
reservations:
generic_resources:
- discrete_resource_spec:
kind: "NVIDIA-GPU"
value: 0
replicas: 1
restart_policy:
condition: any
delay: 5s
max_attempts: 3
volumes:
- ./data/ollama:/root/.ollama
```
- **Additional Requirements**:
  1. Ensure CUDA is enabled; follow your OS and GPU vendor's instructions.
  2. Enable Docker GPU support; see the [Nvidia Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) guide.
  3. Follow the [guide on configuring Docker Swarm to work with your GPUs](https://gist.github.com/tomlankhorst/33da3c4b9edbde5c83fc1244f010815c#configuring-docker-to-work-with-your-gpus)
  - Ensure the _GPU Resource_ is enabled in `/etc/nvidia-container-runtime/config.toml` and enable GPU resource advertising by uncommenting the `swarm-resource = "DOCKER_RESOURCE_GPU"` line. The Docker daemon must be restarted after updating these files on each node.
- **With CPU Support**:
Modify the Ollama Service within `docker-stack.yaml` and remove the lines for `generic_resources:`
```yaml
ollama:
image: ollama/ollama:latest
hostname: ollama
ports:
- target: 11434
published: 11434
mode: overlay
deploy:
replicas: 1
restart_policy:
condition: any
delay: 5s
max_attempts: 3
volumes:
- ./data/ollama:/root/.ollama
```
- **Deploy Docker Stack**:
```bash
docker stack deploy -c docker-stack.yaml -d super-awesome-ai
```
## Installing with Podman
<details>
<summary>Rootless (Podman) local-only Open WebUI with Systemd service and auto-update</summary>
:::note
Consult the Docker documentation because much of the configuration and syntax is interchangeable with [Podman](https://github.com/containers/podman). See also [rootless_tutorial](https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md). This example requires the [slirp4netns](https://github.com/rootless-containers/slirp4netns) network backend to facilitate server listen and Ollama communication over localhost only.
:::
:::warning
Rootless container execution with Podman (and Docker/ContainerD) does **not** support [AppArmor confinement](https://github.com/containers/podman/pull/19303). This may increase the attack surface due to the [requirement of user namespaces](https://rootlesscontaine.rs/caveats). Exercise caution and render judgment (in contrast to the root daemon) based on your threat model.
:::
1. Pull the latest image:
```bash
podman pull ghcr.io/open-webui/open-webui:main
```
2. Create a new container using desired configuration:
:::note
`-p 127.0.0.1:3000:8080` ensures that we listen only on localhost, `--network slirp4netns:allow_host_loopback=true` permits the container to access Ollama when it also listens strictly on localhost. `--add-host=ollama.local:10.0.2.2 --env 'OLLAMA_BASE_URL=http://ollama.local:11434'` adds a hosts record to the container and configures open-webui to use the friendly hostname. `10.0.2.2` is the default slirp4netns address used for localhost mapping. `--env 'ANONYMIZED_TELEMETRY=False'` isn't necessary since Chroma telemetry has been disabled in the code but is included as an example.
:::
```bash
podman create -p 127.0.0.1:3000:8080 --network slirp4netns:allow_host_loopback=true --add-host=ollama.local:10.0.2.2 --env 'OLLAMA_BASE_URL=http://ollama.local:11434' --env 'ANONYMIZED_TELEMETRY=False' -v open-webui:/app/backend/data --label io.containers.autoupdate=registry --name open-webui ghcr.io/open-webui/open-webui:main
```
:::note
[Podman 5.0](https://www.redhat.com/en/blog/podman-50-unveiled) has updated the default rootless network backend to use the more performant [pasta](https://passt.top/passt/about/). While `slirp4netns:allow_host_loopback=true` still achieves the same local-only intention, it's now recommended to use a simple TCP forward instead, like: `--network=pasta:-T,11434 --add-host=ollama.local:127.0.0.1`. Full example:
:::
```bash
podman create -p 127.0.0.1:3000:8080 --network=pasta:-T,11434 --add-host=ollama.local:127.0.0.1 --env 'OLLAMA_BASE_URL=http://ollama.local:11434' --env 'ANONYMIZED_TELEMETRY=False' -v open-webui:/app/backend/data --label io.containers.autoupdate=registry --name open-webui ghcr.io/open-webui/open-webui:main
```
3. Prepare for systemd user service:
```bash
mkdir -p ~/.config/systemd/user/
```
4. Generate user service with Podman:
```bash
podman generate systemd --new open-webui > ~/.config/systemd/user/open-webui.service
```
5. Reload systemd configuration:
```bash
systemctl --user daemon-reload
```
6. Enable and validate new service:
```bash
systemctl --user enable open-webui.service
systemctl --user start open-webui.service
systemctl --user status open-webui.service
```
7. Enable and validate Podman auto-update:
```bash
systemctl --user enable podman-auto-update.timer
systemctl --user enable podman-auto-update.service
systemctl --user status podman-auto-update.timer
```
Dry run with the following command (omit `--dry-run` to force an update):
```bash
podman auto-update --dry-run
```
:::tip
This process is compatible with Windows 11 WSL deployments when using Ollama within the WSL environment or using the Ollama Windows Preview. When using the native Ollama Windows Preview version, one additional step is required: enable [mirrored networking mode](https://learn.microsoft.com/en-us/windows/wsl/networking#mirrored-mode-networking).
:::
### Enabling Windows 11 mirrored networking
1. Populate `%UserProfile%\.wslconfig` with:
```
[wsl2]
networkingMode=mirrored
```
2. Restart WSL:
```
wsl --shutdown
```
</details>
### Alternative Installation Methods
For other ways to install, like using Kustomize or Helm, check out [INSTALLATION](/getting-started/installation). Join our [Open WebUI Discord community](https://discord.gg/5rJgQTnV4s) for more help and information.
### Updating your Docker Installation
For detailed instructions on manually updating your local Docker installation of Open WebUI, including steps for those not using Watchtower and updates via Docker Compose, please refer to our dedicated guide: [UPDATING](/getting-started/updating).
For a quick update with Watchtower, use the command below. Remember to replace `open-webui` with your actual container name if it differs.
```bash
docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
```
:::info
After updating Open WebUI, you might need to refresh your browser cache to see the changes.
:::

View File

@ -1,40 +0,0 @@
---
sidebar_position: 1
title: "🔧 Alternative Installation"
---
### Installing Both Ollama and Open WebUI Using Kustomize
For a CPU-only Pod:
```bash
kubectl apply -k ./kubernetes/manifest/base
```
For a GPU-enabled Pod:
```bash
kubectl apply -k ./kubernetes/manifest/gpu
```
### Installing Both Ollama and Open WebUI Using Helm
:::info
The Helm installation method has been migrated to the new GitHub repository. Please refer to
the latest installation instructions at [https://github.com/open-webui/helm-charts](https://github.com/open-webui/helm-charts).
:::
Confirm that Helm is installed in your execution environment.
For installation instructions, visit [https://helm.sh/docs/intro/install/](https://helm.sh/docs/intro/install/).
```bash
helm repo add open-webui https://helm.openwebui.com/
helm repo update
kubectl create namespace open-webui
helm upgrade --install open-webui open-webui/open-webui --namespace open-webui
```
For additional customization options, refer to the [kubernetes/helm/values.yaml](https://github.com/open-webui/helm-charts/tree/main/charts/open-webui) file.

View File

@ -0,0 +1,155 @@
---
sidebar_position: 2
title: "⏱️ Quick Start"
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import { TopBanners } from "@site/src/components/TopBanners";
import DockerCompose from './tab-docker/DockerCompose.md';
import Podman from './tab-docker/Podman.md';
import ManualDocker from './tab-docker/ManualDocker.md';
import DockerSwarm from './tab-docker/DockerSwarm.md';
import DataStorage from './tab-docker/DataStorage.md';
import DockerUpdating from './tab-docker/DockerUpdating.md';
import Helm from './tab-kubernetes/Helm.md';
import Kustomize from './tab-kubernetes/Kustomize.md';
import Venv from './tab-python/Venv.md';
import CondaUnix from './tab-python/CondaUnix.md';
import CondaWindows from './tab-python/CondaWindows.md';
<TopBanners />
## How to Install ⏱️
:::info **Important Note on User Roles and Privacy:**
- **Admin Creation:** The first account created on Open WebUI gains **Administrator privileges**, controlling user management and system settings.
- **User Registrations:** Subsequent sign-ups start with **Pending** status, requiring Administrator approval for access.
- **Privacy and Data Security:** **All your data**, including login details, is **locally stored** on your device. Open WebUI ensures **strict confidentiality** and **no external requests** for enhanced privacy and security.
:::
Choose your preferred installation method below:
- **Docker:** Recommended for most users due to ease of setup and flexibility.
- **Kubernetes:** Ideal for enterprise deployments that require scaling and orchestration.
- **Python:** Suitable for low-resource environments or those wanting a manual setup.
<Tabs>
<TabItem value="docker" label="Docker">
<Tabs>
<TabItem value="docker-compose" label="Docker Compose">
<DockerCompose />
<DataStorage />
<DockerUpdating />
</TabItem>
<TabItem value="podman" label="Podman">
<Podman />
</TabItem>
<TabItem value="manual-docker" label="Manual Docker">
<ManualDocker />
<DataStorage />
<DockerUpdating />
</TabItem>
<TabItem value="swarm" label="Docker Swarm">
<DockerSwarm />
</TabItem>
</Tabs>
</TabItem>
<TabItem value="kubernetes" label="Kubernetes">
<Tabs>
<TabItem value="helm" label="Helm">
<Helm />
</TabItem>
<TabItem value="kustomize" label="Kustomize">
<Kustomize />
</TabItem>
</Tabs>
</TabItem>
<TabItem value="python" label="Python">
<Tabs>
<TabItem value="venv" label="Venv">
<Venv />
</TabItem>
<TabItem value="conda" label="Conda">
<h3>Choose Your Platform</h3>
<Tabs groupId="platform-conda">
<TabItem value="unix-conda" label="Linux/macOS">
<CondaUnix />
</TabItem>
<TabItem value="windows-conda" label="Windows">
<CondaWindows />
</TabItem>
</Tabs>
</TabItem>
<TabItem value="development" label="Development">
<h3>Development Setup</h3>
<p>
For developers who want to contribute, check the Development Guide in <a href="../advanced-topics">Advanced Topics</a>.
</p>
</TabItem>
</Tabs>
</TabItem>
<TabItem value="third-party" label="Third Party">
<Tabs>
<TabItem value="pinokio-computer" label="Pinokio.computer">
### Pinokio.computer Installation
For installation via Pinokio.computer, please visit their website:
[https://pinokio.computer/](https://pinokio.computer/)
Support for this installation method is provided through their website.
</TabItem>
</Tabs>
### Additional Third-Party Integrations
*(Add information about third-party integrations as they become available.)*
</TabItem>
</Tabs>
## Next Steps
After installing, visit:
- [http://localhost:3000](http://localhost:3000) to access OpenWebUI.
- or [http://localhost:8080/](http://localhost:8080/) when using a Python deployment.
You are now ready to start **[Using OpenWebUI](../using-openwebui/index.mdx)**!
## Join the Community
Need help? Have questions? Join our community:
- [Open WebUI Discord](https://discord.gg/5rJgQTnV4s)
- [GitHub Issues](https://github.com/open-webui/open-webui/issues)
Stay updated with the latest features, troubleshooting tips, and announcements!
## Conclusion
Thank you for choosing Open WebUI! We are committed to providing a powerful, privacy-focused interface for your LLM needs. If you encounter any issues, refer to the [Troubleshooting Guide](../../troubleshooting/index.mdx).
Happy exploring! 🎉

View File

@ -0,0 +1,12 @@
## Data Storage and Bind Mounts
This project uses [Docker named volumes](https://docs.docker.com/storage/volumes/) to **persist data**. If needed, replace the volume name with a host directory:
**Example**:
```bash
-v /path/to/folder:/app/backend/data
```
Ensure the host folder has the correct permissions.
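For context, a complete `docker run` command using such a bind mount might look like this (the host path is a placeholder):
```bash
docker run -d -p 3000:8080 \
  -v /path/to/folder:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```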

View File

@ -0,0 +1,54 @@
# Docker Compose Setup
Using Docker Compose simplifies the management of multi-container Docker applications.
If you don't have Docker installed, check out our [Docker installation tutorial](../../../tutorials/integrations/docker-install.md).
Docker Compose requires an additional package, `docker-compose-v2`.
**Warning:** Older Docker Compose tutorials may reference version 1 syntax, which uses commands like `docker-compose build`. Ensure you use version 2 syntax, which uses commands like `docker compose build` (note the space instead of a hyphen).
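You can check which syntax your installation supports with:
```bash
docker compose version   # succeeds on v2; if only `docker-compose` (with a hyphen) works, you are on v1
```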
## Example `docker-compose.yml`
Here is an example configuration file for setting up Open WebUI with Docker Compose:
```yaml
version: '3'
services:
openwebui:
image: ghcr.io/open-webui/open-webui:main
ports:
- "3000:8080"
volumes:
- open-webui:/app/backend/data
volumes:
open-webui:
```
## Starting the Services
To start your services, run the following command:
```bash
docker compose up -d
```
## Helper Script
A useful helper script called `run-compose.sh` is included with the codebase. This script assists in choosing which Docker Compose files to include in your deployment, streamlining the setup process.
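For example, to bring up a GPU-enabled stack built from your local checkout (these flags appear elsewhere in these docs):
```bash
chmod +x run-compose.sh
./run-compose.sh --enable-gpu --build
```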
---
**Note:** For Nvidia GPU support, add the following to your service definition in the `docker-compose.yml` file:
```yaml
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
```
This setup ensures that your application can leverage GPU resources when available.

View File

@ -0,0 +1,141 @@
## Docker Swarm
This installation method requires knowledge of Docker Swarm, as it utilizes a stack file to deploy three separate containers as services in a Docker Swarm.
It includes isolated containers of ChromaDB, Ollama, and OpenWebUI.
Additionally, there are pre-filled [Environment Variables](../advanced-topics/env-configuration) to further illustrate the setup.
Choose the appropriate command based on your hardware setup:
- **Before Starting**:
Directories for your volumes need to be created on the host, or you can specify a custom location or volume.
The current example utilizes an isolated dir `data`, which is within the same dir as the `docker-stack.yaml`.
- **For example**:
```bash
mkdir -p data/open-webui data/chromadb data/ollama
```
- **With GPU Support**:
#### Docker-stack.yaml
```yaml
version: '3.9'
services:
openWebUI:
image: ghcr.io/open-webui/open-webui:main
depends_on:
- chromadb
- ollama
volumes:
- ./data/open-webui:/app/backend/data
environment:
DATA_DIR: /app/backend/data
OLLAMA_BASE_URLS: http://ollama:11434
CHROMA_HTTP_PORT: 8000
CHROMA_HTTP_HOST: chromadb
CHROMA_TENANT: default_tenant
VECTOR_DB: chroma
WEBUI_NAME: Awesome ChatBot
CORS_ALLOW_ORIGIN: "*" # This is the current Default, will need to change before going live
RAG_EMBEDDING_ENGINE: ollama
RAG_EMBEDDING_MODEL: nomic-embed-text-v1.5
RAG_EMBEDDING_MODEL_TRUST_REMOTE_CODE: "True"
ports:
- target: 8080
published: 8080
mode: overlay
deploy:
replicas: 1
restart_policy:
condition: any
delay: 5s
max_attempts: 3
chromadb:
hostname: chromadb
image: chromadb/chroma:0.5.15
volumes:
- ./data/chromadb:/chroma/chroma
environment:
- IS_PERSISTENT=TRUE
- ALLOW_RESET=TRUE
- PERSIST_DIRECTORY=/chroma/chroma
ports:
- target: 8000
published: 8000
mode: overlay
deploy:
replicas: 1
restart_policy:
condition: any
delay: 5s
max_attempts: 3
healthcheck:
test: ["CMD-SHELL", "curl localhost:8000/api/v1/heartbeat || exit 1"]
interval: 10s
retries: 2
start_period: 5s
timeout: 10s
ollama:
image: ollama/ollama:latest
hostname: ollama
ports:
- target: 11434
published: 11434
mode: overlay
deploy:
resources:
reservations:
generic_resources:
- discrete_resource_spec:
kind: "NVIDIA-GPU"
value: 0
replicas: 1
restart_policy:
condition: any
delay: 5s
max_attempts: 3
volumes:
- ./data/ollama:/root/.ollama
```
- **Additional Requirements**:
  1. Ensure CUDA is enabled; follow your OS and GPU vendor's instructions.
  2. Enable Docker GPU support; see the [Nvidia Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) guide.
  3. Follow the [guide on configuring Docker Swarm to work with your GPUs](https://gist.github.com/tomlankhorst/33da3c4b9edbde5c83fc1244f010815c#configuring-docker-to-work-with-your-gpus)
  - Ensure the _GPU Resource_ is enabled in `/etc/nvidia-container-runtime/config.toml` and enable GPU resource advertising by uncommenting the `swarm-resource = "DOCKER_RESOURCE_GPU"` line. The Docker daemon must be restarted after updating these files on each node.
- **With CPU Support**:
Modify the Ollama Service within `docker-stack.yaml` and remove the lines for `generic_resources:`
```yaml
ollama:
image: ollama/ollama:latest
hostname: ollama
ports:
- target: 11434
published: 11434
mode: overlay
deploy:
replicas: 1
restart_policy:
condition: any
delay: 5s
max_attempts: 3
volumes:
- ./data/ollama:/root/.ollama
```
- **Deploy Docker Stack**:
```bash
docker stack deploy -c docker-stack.yaml -d super-awesome-ai
```
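Once deployed, you can verify the stack with standard Swarm commands:
```bash
docker stack services super-awesome-ai             # each service should report REPLICAS 1/1
docker service logs super-awesome-ai_openWebUI -f  # follow the Open WebUI service logs
```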

View File

@ -0,0 +1,42 @@
# Docker Compose Setup
Using Docker Compose simplifies the management of multi-container Docker applications.
## Example `docker-compose.yml`
```yaml
version: '3'
services:
openwebui:
image: ghcr.io/open-webui/open-webui:main
ports:
- "3000:8080"
volumes:
- open-webui:/app/backend/data
volumes:
open-webui:
```
## Starting the Services
To start your services, run:
```bash
docker compose up -d
```
---
**Note:** For Nvidia GPU support, add the following to your service definition:
```yaml
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
```

View File

@ -0,0 +1,24 @@
# Manual Docker Setup
If you prefer to set up Docker manually, follow these steps.
## Step 1: Pull the Open WebUI Image
```bash
docker pull ghcr.io/open-webui/open-webui:main
```
## Step 2: Run the Container
```bash
docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
```
**Note:** For Nvidia GPU support, add `--gpus all` to the `docker run` command.
## Access the WebUI
After the container is running, access Open WebUI at:
[http://localhost:3000](http://localhost:3000)

View File

@ -0,0 +1,28 @@
# Using Podman
Podman is a daemonless container engine for developing, managing, and running OCI Containers.
## Basic Commands
- **Run a Container:**
```bash
podman run -d --name openwebui -p 3000:8080 ghcr.io/open-webui/open-webui:main
```
- **List Running Containers:**
```bash
podman ps
```
## Networking with Podman
If networking issues arise, you may need to adjust your network settings:
```bash
--network=slirp4netns:allow_host_loopback=true
```
Refer to the Podman [documentation](https://podman.io/) for advanced configurations.
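If you want the container supervised by systemd as a user service, mirroring the rootless Podman flow described elsewhere in these docs:
```bash
mkdir -p ~/.config/systemd/user/
podman generate systemd --new openwebui > ~/.config/systemd/user/open-webui.service
systemctl --user daemon-reload
systemctl --user enable --now open-webui.service
```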

View File

@ -0,0 +1,34 @@
# Helm Setup for Kubernetes
Helm helps you manage Kubernetes applications.
## Prerequisites
- Kubernetes cluster is set up.
- Helm is installed.
## Steps
1. **Add Open WebUI Helm Repository:**
```bash
helm repo add open-webui https://open-webui.github.io/helm-charts
helm repo update
```
2. **Install Open WebUI Chart:**
```bash
helm install openwebui open-webui/open-webui
```
3. **Verify Installation:**
```bash
kubectl get pods
```
## Access the WebUI
Set up port forwarding or load balancing to access Open WebUI from outside the cluster.
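A minimal sketch of port forwarding for local access (the service name and port depend on your release, so list services first to confirm):
```bash
kubectl get svc                               # find the Open WebUI service name and port
kubectl port-forward svc/open-webui 3000:80   # assumed names/ports; adjust to your cluster
```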

View File

@ -0,0 +1,35 @@
# Kustomize Setup for Kubernetes
Kustomize allows you to customize Kubernetes YAML configurations.
## Prerequisites
- Kubernetes cluster is set up.
- Kustomize is installed.
## Steps
1. **Clone the Open WebUI Manifests:**
```bash
git clone https://github.com/open-webui/k8s-manifests.git
cd k8s-manifests
```
2. **Apply the Manifests:**
```bash
kubectl apply -k .
```
3. **Verify Installation:**
```bash
kubectl get pods
```
## Access the WebUI
Set up port forwarding or load balancing to access Open WebUI from outside the cluster.

View File

@ -0,0 +1,27 @@
# Install with Conda
1. **Create a Conda Environment:**
```bash
conda create -n open-webui python=3.11
```
2. **Activate the Environment:**
```bash
conda activate open-webui
```
3. **Install Open WebUI:**
```bash
pip install open-webui
```
4. **Start the Server:**
```bash
open-webui serve
```

View File

@ -0,0 +1,27 @@
# Install with Conda
1. **Create a Conda Environment:**
```bash
conda create -n open-webui python=3.11
```
2. **Activate the Environment:**
```bash
conda activate open-webui
```
3. **Install Open WebUI:**
```bash
pip install open-webui
```
4. **Start the Server:**
```bash
open-webui serve
```

View File

@ -0,0 +1,38 @@
# Using Virtual Environments
Create isolated Python environments using `venv`.
## Steps
1. **Create a Virtual Environment:**
```bash
python3 -m venv venv
```
2. **Activate the Virtual Environment:**
- On Linux/macOS:
```bash
source venv/bin/activate
```
- On Windows:
```bash
venv\Scripts\activate
```
3. **Install Open WebUI:**
```bash
pip install open-webui
```
4. **Start the Server:**
```bash
open-webui serve
```
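By default the server listens on port 8080. The CLI accepts host and port overrides; the exact flags below are assumptions, so check `open-webui serve --help` if they differ:
```bash
# Bind to all interfaces on an alternate port (flags assumed; verify with --help)
open-webui serve --host 0.0.0.0 --port 9090
```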

View File

@ -1,123 +0,0 @@
---
sidebar_position: 2
title: "🔄 Updating Open WebUI"
---
## Updating your Docker Installation
Keeping your Open WebUI Docker installation up-to-date ensures you have the latest features and security updates. You can update your installation manually or use [Watchtower](https://containrrr.dev/watchtower/) for automatic updates.
### Manual Update
Follow these steps to manually update your Open WebUI:
1. **Pull the Latest Docker Image**:
```bash
docker pull ghcr.io/open-webui/open-webui:main
```
2. **Stop and Remove the Existing Container**:
- This step ensures that you can create a new container from the updated image.
```bash
docker stop open-webui
docker rm open-webui
```
3. **Create a New Container with the Updated Image**:
- Use the same `docker run` command you used initially to create the container, ensuring all your configurations remain the same.
```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
This process updates your Open WebUI container to the latest version while preserving your data stored in Docker volumes.
### Updating with Watchtower
For those who prefer automated updates, Watchtower can monitor your Open WebUI container and automatically update it to the latest version. You have two options with Watchtower: running it once for an immediate update, or deploying it persistently to automate future updates.
#### Running Watchtower Once
To update your container immediately without keeping Watchtower running continuously, use the following command. Replace `open-webui` with your container name if it differs.
```bash
docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
```
#### Deploying Watchtower Persistently
If you prefer Watchtower to continuously monitor and update your container whenever a new version is available, you can run Watchtower as a persistent service. This method ensures your Open WebUI always stays up to date without any manual intervention. Use the command below to deploy Watchtower in this manner:
```bash
docker run -d --name watchtower --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower open-webui
```
Remember to replace `open-webui` with the name of your container if you have named it differently. This configuration allows you to benefit from the latest improvements and security patches with minimal downtime and manual effort.
### Updating Docker Compose Installation
If you installed Open WebUI using Docker Compose, follow these steps to update:
1. **Pull the Latest Images**:
- This command fetches the latest versions of the images specified in your `docker-compose.yml` files.
```bash
docker compose pull
```
2. **Recreate the Containers with the Latest Images**:
- This command recreates the containers based on the newly pulled images, ensuring your installation is up-to-date. No build step is required for updates.
```bash
docker compose up -d
```
This method ensures your Docker Compose-based installation of Open WebUI (and any associated services, like Ollama) is updated efficiently and without the need for manual container management.
## Updating Your Direct Install
For those who have installed Open WebUI directly without using Docker, updates are just as important to ensure access to the latest features and security patches. Remember, direct installations are not officially supported, and you might need to troubleshoot on your own. Here's how to update your installation:
### Pull the Latest Changes
Navigate to your Open WebUI project directory and pull the latest changes from the repository:
```sh
cd path/to/open-webui/
git pull origin main
```
Replace `path/to/open-webui/` with the actual path to your Open WebUI installation.
### Update Dependencies
After pulling the latest changes, update your project dependencies. This step ensures that both frontend and backend dependencies are up to date.
- **For Node.js (Frontend):**
```sh
npm install
npm run build
```
- **For Python (Backend):**
```sh
cd backend
pip install -r requirements.txt -U
```
### Restart the Backend Server
To apply the updates, you need to restart the backend server. If you have a running instance, stop it first and then start it again using the provided script.
```sh
bash start.sh
```
This command should be run from within the `backend` directory of your Open WebUI project.
:::info
Direct installations require more manual effort to update compared to Docker-based installations. If you frequently need updates and want to streamline the process, consider transitioning to a Docker-based setup for easier management.
:::
By following these steps, you can update your direct installation of Open WebUI, ensuring you're running the latest version with all its benefits. Remember to back up any critical data or custom configurations before starting the update process to prevent any unintended loss.

View File

@ -0,0 +1,36 @@
---
sidebar_position: 3
title: "🧑‍💻 Using OpenWebUI"
---
# Using OpenWebUI
Explore the essential concepts and features of Open WebUI, including models, knowledge, prompts, pipes, actions, and more.
---
## 📥 Troubleshooting Ollama
Many users wish to make use of their existing Ollama instance, but encounter common issues.
If this is you, then check out the [Troubleshooting Ollama guide](./troubleshooting-ollama.mdx).
---
## 📚 Terminology
Understand key components: models, prompts, knowledge, functions, pipes, and actions.
[Read the Terminology Guide](./terminology.mdx)
## 🌐 Additional Resources and Integrations
Find community tools, integrations, and official resources.
[Additional Resources Guide](./resources)
---
## 📖 Community Tutorials
If you like the documentation you are reading right now, then check out this tutorial on [Configuring RAG with OpenWebUI Documentation](../../tutorials/tips/rag-tutorial.md).
Then go on to explore other community-submitted tutorials to enhance your OpenWebUI experience.
[Explore Community Tutorials](/category/-tutorials)
---
Stay tuned for more updates as we continue to expand these sections!

View File

@ -0,0 +1,37 @@
---
sidebar_position: 4
title: "🌐 Additional Resources"
---
# 🌐 Additional Resources
Explore more resources, community tools, and integration options to make the most out of Open WebUI.
---
## 🔥 Open WebUI Website
Visit [Open WebUI](https://openwebui.com/) for official documentation, tools, and resources:
- **Leaderboard**: Check out the latest high-ranking models, tools, and integrations.
- **Featured Models and Tools**: Discover models and tools created by community members.
- **New Integrations**: Find newly released integrations, plugins, and models to expand your setup.
---
## 🌍 Community Platforms
Connect with the Open WebUI community for support, tips, and discussions.
- **Discord**: Join our community on Discord to chat with other users, ask questions, and stay updated.
[Join the Discord Server](https://discord.com/invite/5rJgQTnV4s)
- **Reddit**: Follow the Open WebUI subreddit for announcements, discussions, and user-submitted content.
[Visit Reddit Community](https://www.reddit.com/r/OpenWebUI/)
---
## 📖 Tutorials and User Guides
Explore community-created tutorials to enhance your Open WebUI experience:
- [Explore Community Tutorials](/category/-tutorials)
- Learn how to configure RAG and advanced integrations with the [RAG Configuration Guide](../../tutorials/tips/rag-tutorial.md).
---
Stay connected and make the most out of Open WebUI through these community resources and integrations!

View File

@ -0,0 +1,43 @@
### 🐳 Ollama Inside Docker
If **Ollama is deployed inside Docker** (e.g., using Docker Compose or Kubernetes), the service will be available at:
- **Inside the container**: `http://127.0.0.1:11434`
- **From the host**: `http://localhost:11435` (if exposed via host network — see the example below)
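The host port mapping above depends on how the Ollama container was started; a sketch that publishes the container's API (port `11434`) on host port `11435`:

```bash
# Publish Ollama's API on host port 11435 and persist models in a named volume
docker run -d -v ollama:/root/.ollama -p 11435:11434 --name ollama ollama/ollama
```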
#### Step 1: Check Available Models
- Inside the container:
```bash
docker exec -it openwebui curl http://ollama:11434/v1/models
```
- From the host (if exposed):
```bash
curl http://localhost:11435/v1/models
```
Either command lists all available models and confirms that Ollama is running.
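If you have `jq` installed, you can reduce the output to just the model names — a sketch run from the host (assuming the port is exposed and the response follows the OpenAI-compatible list shape, `data[].id`):

```bash
curl -s http://localhost:11435/v1/models | jq -r '.data[].id'
```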
#### Step 2: Download Llama 3.2
Run the following command:
```bash
docker exec -it ollama ollama pull llama3.2
```
**Tip:** You can download other models from Hugging Face by specifying the appropriate URL. For example, to download a higher-quality **8-bit version of Llama 3.2**:
```bash
docker exec -it ollama ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```
#### Step 3: Access the WebUI
Once everything is set up, access the WebUI at:
[http://localhost:3000](http://localhost:3000)
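If you haven't started Open WebUI yet, one possible `docker run` invocation is sketched below — it assumes the Ollama container is named `ollama` and that both containers share a Docker network so the hostname resolves:

```bash
# Run Open WebUI on host port 3000, pointed at the Ollama container
docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://ollama:11434 \
  -v open-webui:/app/backend/data \
  --name openwebui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```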

View File

@ -0,0 +1,50 @@
### 🛠️ Bring Your Own Ollama (BYO Ollama)
If Ollama is running on the **host machine** or another server on your network, follow these steps.
#### Step 1: Check Available Models
- If Ollama is **local**, run:
```bash
curl http://localhost:11434/v1/models
```
- If Ollama is **remote**, use:
```bash
curl http://<remote-ip>:11434/v1/models
```
Either command confirms that Ollama is reachable and lists the models it has installed.
#### Step 2: Set the OLLAMA_BASE_URL
If Ollama is running **remotely** or on the host, set the following environment variable in the environment where Open WebUI runs:
```bash
export OLLAMA_BASE_URL=http://<remote-ip>:11434
```
This ensures Open WebUI can reach the remote Ollama instance. (Note: `OLLAMA_BASE_URL` is read by Open WebUI; the `ollama` CLI itself uses `OLLAMA_HOST` instead.)
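Bear in mind that `export` only affects processes launched from the same shell. If Open WebUI itself runs in Docker, pass the variable to the container with `-e` instead — a sketch, keeping `<remote-ip>` as a placeholder:

```bash
# Point a Dockerized Open WebUI at the remote Ollama instance
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://<remote-ip>:11434 \
  -v open-webui:/app/backend/data --name openwebui \
  ghcr.io/open-webui/open-webui:main
```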
#### Step 3: Download Llama 3.2
From your local or remote machine, run:
```bash
ollama pull llama3.2
```
**Tip:** Use this command to download the 8-bit version from Hugging Face:
```bash
ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```
#### Step 4: Access the WebUI
You can now access the WebUI at:
[http://localhost:3000](http://localhost:3000)

View File

@ -0,0 +1,27 @@
---
sidebar_position: 3
title: "📖 OpenWebUI Terminology"
---
# 📖 Open WebUI Terminology
Enhance your understanding of Open WebUI with key concepts and components to improve your usage and configuration.
---
## Explore the Workspace
Begin by exploring the [Workspace](../../tutorials/features/workspace) to discover essential concepts such as Modelfiles, Knowledge, Prompts, Tools, and Functions.
---
## Interact with the Playground
Visit the Playground to directly engage with a Large Language Model. Here, you can experiment with different `System Prompts` to modify the model's behavior and persona.
---
## Personalize in Settings
Access the Settings to personalize your experience. Customize features like Memory, adjust Voice settings for both TTS (Text-to-Speech) and STT (Speech-to-Text), and toggle between Dark/Light mode for optimal viewing.
---
This terminology guide will help you navigate and configure Open WebUI effectively!

View File

@ -0,0 +1,82 @@
---
sidebar_position: 2
title: "🤖Troubleshooting Ollama"
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
# Troubleshooting Ollama
Explore how to download, load, and use models with Ollama, both via **Docker** and **Remote** setups.
---
<Tabs groupId="ollama-setup">
<TabItem value="docker-ollama" label="Ollama Inside Docker">
## 🐳 Ollama Inside Docker
If **Ollama is deployed inside Docker** (e.g., using Docker Compose or Kubernetes), the service will be available at:
- **Inside the container**: `http://127.0.0.1:11434`
- **From the host**: `http://localhost:11435` (if exposed via host network)
### Step 1: Check Available Models
Inside the container:
```bash
docker exec -it openwebui curl http://ollama:11434/v1/models
```
From the host (if exposed):
```bash
curl http://localhost:11435/v1/models
```
### Step 2: Download Llama 3.2
```bash
docker exec -it ollama ollama pull llama3.2
```
You can also download a higher-quality version (8-bit) from Hugging Face:
```bash
docker exec -it ollama ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```
</TabItem>
<TabItem value="byo-ollama" label="BYO Ollama (External Ollama)">
## 🛠️ Bring Your Own Ollama (BYO Ollama)
If Ollama is running on the **host machine** or another server on your network, follow these steps.
### Step 1: Check Available Models
Local:
```bash
curl http://localhost:11434/v1/models
```
Remote:
```bash
curl http://<remote-ip>:11434/v1/models
```
### Step 2: Set the OLLAMA_BASE_URL
```bash
export OLLAMA_BASE_URL=http://<remote-ip>:11434
```
### Step 3: Download Llama 3.2
```bash
ollama pull llama3.2
```
Or download the 8-bit version from Hugging Face:
```bash
ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```
</TabItem>
</Tabs>
---
You now have everything you need to download and run models with **Ollama**. Happy exploring!

View File

@ -43,7 +43,7 @@ import { SponsorList } from "@site/src/components/SponsorList";
#### Disabling Login for Single User
If you want to disable login for a single-user setup, set [`WEBUI_AUTH`](/getting-started/env-configuration) to `False`. This will bypass the login page.
If you want to disable login for a single-user setup, set [`WEBUI_AUTH`](./getting-started/advanced-topics/env-configuration) to `False`. This will bypass the login page.
:::warning
You cannot switch between single-user mode and multi-account mode after this change.
@ -161,7 +161,7 @@ If you're facing various issues like "Open WebUI: Server Connection Error", see
## Updating
Check out our full [updating guide](/getting-started/updating).
Check out how to update Docker in the [Quick Start guide](./getting-started/quick-start).
In case you want to update your local Docker installation to the latest version, you can do it with [Watchtower](https://containrrr.dev/watchtower/):

View File

@ -14,7 +14,7 @@ Open WebUI allows you to integrate directly into your web browser. This tutorial
Before you begin, ensure that:
- You have Chrome or another supported browser installed.
- The `WEBUI_URL` environment variable is set correctly, either using Docker environment variables or in the `.env` file as specified in the [Getting Started](/getting-started/env-configuration) guide.
- The `WEBUI_URL` environment variable is set correctly, either using Docker environment variables or in the `.env` file as specified in the [Getting Started](/getting-started/advanced-topics/env-configuration) guide.
### Step 1: Set the WEBUI_URL Environment Variable

View File

@ -0,0 +1,59 @@
# Installing Docker
## For Windows and Mac Users
- Download Docker Desktop from [Docker's official website](https://www.docker.com/products/docker-desktop).
- Follow the installation instructions on the website.
- After installation, **open Docker Desktop** to ensure it's running properly.
---
## For Ubuntu Users
1. **Open your terminal.**
2. **Set up Docker's apt repository:**
```bash
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
:::note
If using an **Ubuntu derivative** (e.g., Linux Mint), use `UBUNTU_CODENAME` instead of `VERSION_CODENAME`.
:::
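For example, on Linux Mint the repository line from the previous step would become:

```bash
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$UBUNTU_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```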
3. **Install Docker Engine:**
```bash
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
```
4. **Verify Docker Installation:**
```bash
sudo docker run hello-world
```
---
## For Other Linux Distributions
For other Linux distributions, refer to the [official Docker documentation](https://docs.docker.com/engine/install/).
---
## Install and Verify Ollama
1. **Download Ollama** from [https://ollama.com/](https://ollama.com/).
2. **Verify Ollama Installation:**
- Open a browser and navigate to:
[http://127.0.0.1:11434/](http://127.0.0.1:11434/).
- Note: The port may vary based on your installation.
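You can also verify from the command line; Ollama's root endpoint answers with a short status string:

```bash
curl http://127.0.0.1:11434/
# Expected response: "Ollama is running"
```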

View File

@ -4,7 +4,7 @@ title: "Local LLM Setup with IPEX-LLM on Intel GPU"
---
:::note
This guide is verified with Open WebUI setup through [Manual Installation](/getting-started/index.mdx#manual-installation).
This guide is verified with Open WebUI setup through [Manual Installation](/getting-started/index.md).
:::
# Local LLM Setup with IPEX-LLM on Intel GPU