Update links to advanced-topics/env-configuration and finalize pending link updates

Matthew Hand 2024-11-05 22:08:15 +00:00
parent 54c621f503
commit 4d0e58ea79
14 changed files with 43 additions and 1775 deletions


@ -1,202 +0,0 @@
---
sidebar_position: 400
title: "🔗 API Endpoints"
---
This guide explains how to interact with the API endpoints effectively for seamless integration and automation with our models. Note that this is an experimental setup and may undergo future updates and enhancements.
## Authentication
To ensure secure access to the API, authentication is required 🛡️. You can authenticate your API requests using the Bearer Token mechanism. Obtain your API key from **Settings > Account** in Open WebUI, or alternatively use a JWT (JSON Web Token).
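Both credentials are sent the same way, as a Bearer token in the `Authorization` header. As a quick sanity check that your credential works, you can list the available models (a minimal sketch; the key below is a placeholder):
```bash
# Works with either an API key or a JWT in the same header.
export OPEN_WEBUI_KEY="YOUR_API_KEY"
curl -s -H "Authorization: Bearer $OPEN_WEBUI_KEY" http://localhost:3000/api/models
```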
## Notable API Endpoints
### 📜 Retrieve All Models
- **Endpoint**: `GET /api/models`
- **Description**: Fetches all models created or added via Open WebUI.
- **Example**:
```bash
curl -H "Authorization: Bearer YOUR_API_KEY" http://localhost:3000/api/models
```
### 💬 Chat Completions
- **Endpoint**: `POST /api/chat/completions`
- **Description**: Serves as an OpenAI-compatible chat completion endpoint for models on Open WebUI, including Ollama models, OpenAI models, and Open WebUI Function models.
- **Example**:
```bash
curl -X POST http://localhost:3000/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.1",
        "messages": [
          {
            "role": "user",
            "content": "Why is the sky blue?"
          }
        ]
      }'
```
### 🧩 Retrieval Augmented Generation (RAG)
The Retrieval Augmented Generation (RAG) feature lets you enhance responses by incorporating data from external sources. The sections below cover managing files and knowledge collections via the API and using them effectively in chat completions.
#### Uploading Files
To utilize external data in RAG responses, you first need to upload the files. The content of the uploaded file is automatically extracted and stored in a vector database.
- **Endpoint**: `POST /api/v1/files/`
- **Curl Example**:
```bash
curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Accept: application/json" \
  -F "file=@/path/to/your/file" http://localhost:3000/api/v1/files/
```
- **Python Example**:
```python
import requests

def upload_file(token, file_path):
    url = 'http://localhost:3000/api/v1/files/'
    headers = {
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json'
    }
    # Use a context manager so the file handle is closed after the request.
    with open(file_path, 'rb') as f:
        response = requests.post(url, headers=headers, files={'file': f})
    return response.json()
```
#### Adding Files to Knowledge Collections
After uploading, you can group files into a knowledge collection or reference them individually in chats.
- **Endpoint**: `POST /api/v1/knowledge/{id}/file/add`
- **Curl Example**:
```bash
curl -X POST http://localhost:3000/api/v1/knowledge/{knowledge_id}/file/add \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"file_id": "your-file-id-here"}'
```
- **Python Example**:
```python
import requests

def add_file_to_knowledge(token, knowledge_id, file_id):
    url = f'http://localhost:3000/api/v1/knowledge/{knowledge_id}/file/add'
    headers = {
        'Authorization': f'Bearer {token}',
        'Content-Type': 'application/json'
    }
    # Reference the previously uploaded file by its ID.
    data = {'file_id': file_id}
    response = requests.post(url, headers=headers, json=data)
    return response.json()
```
#### Using Files and Collections in Chat Completions
You can reference individual files or entire collections in your RAG queries for enriched responses.
##### Using an Individual File in Chat Completions
This method is beneficial when you want to focus the chat model's response on the content of a specific file.
- **Endpoint**: `POST /api/chat/completions`
- **Curl Example**:
```bash
curl -X POST http://localhost:3000/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4-turbo",
        "messages": [
          {"role": "user", "content": "Explain the concepts in this document."}
        ],
        "files": [
          {"type": "file", "id": "your-file-id-here"}
        ]
      }'
```
- **Python Example**:
```python
import requests

def chat_with_file(token, model, query, file_id):
    url = 'http://localhost:3000/api/chat/completions'
    headers = {
        'Authorization': f'Bearer {token}',
        'Content-Type': 'application/json'
    }
    # Attach the file so the model grounds its answer in the file's content.
    payload = {
        'model': model,
        'messages': [{'role': 'user', 'content': query}],
        'files': [{'type': 'file', 'id': file_id}]
    }
    response = requests.post(url, headers=headers, json=payload)
    return response.json()
```
##### Using a Knowledge Collection in Chat Completions
Use a knowledge collection when the query may benefit from broader context spanning multiple documents.
- **Endpoint**: `POST /api/chat/completions`
- **Curl Example**:
```bash
curl -X POST http://localhost:3000/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4-turbo",
        "messages": [
          {"role": "user", "content": "Provide insights on the historical perspectives covered in the collection."}
        ],
        "files": [
          {"type": "collection", "id": "your-collection-id-here"}
        ]
      }'
```
- **Python Example**:
```python
import requests

def chat_with_collection(token, model, query, collection_id):
    url = 'http://localhost:3000/api/chat/completions'
    headers = {
        'Authorization': f'Bearer {token}',
        'Content-Type': 'application/json'
    }
    # Attach the whole collection so the model can draw on multiple documents.
    payload = {
        'model': model,
        'messages': [{'role': 'user', 'content': query}],
        'files': [{'type': 'collection', 'id': collection_id}]
    }
    response = requests.post(url, headers=headers, json=payload)
    return response.json()
```
These methods let you ground chat completions in external knowledge via the Open WebUI API, whether from individual uploaded files or curated knowledge collections, so you can tailor the integration to your specific needs.
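To tie the pieces together, here is a sketch of the full flow from the shell (it assumes `jq` is installed and that the upload response includes an `id` field; IDs and paths are placeholders):
```bash
TOKEN="YOUR_API_KEY"
BASE="http://localhost:3000"
KNOWLEDGE_ID="your-knowledge-id-here"

# 1. Upload a document; capture the new file's ID from the response.
FILE_ID=$(curl -s -X POST "$BASE/api/v1/files/" \
  -H "Authorization: Bearer $TOKEN" -H "Accept: application/json" \
  -F "file=@/path/to/your/file" | jq -r '.id')

# 2. Attach the uploaded file to an existing knowledge collection.
curl -s -X POST "$BASE/api/v1/knowledge/$KNOWLEDGE_ID/file/add" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d "{\"file_id\": \"$FILE_ID\"}"

# 3. Ask a question with the collection as context.
curl -s -X POST "$BASE/api/chat/completions" \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d "{
        \"model\": \"llama3.1\",
        \"messages\": [{\"role\": \"user\", \"content\": \"Summarize the uploaded document.\"}],
        \"files\": [{\"type\": \"collection\", \"id\": \"$KNOWLEDGE_ID\"}]
      }"
```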
## Advantages of Using Open WebUI as a Unified LLM Provider
Open WebUI offers several key benefits, making it a valuable tool for developers and businesses alike:
- **Unified Interface**: Simplify your interactions with different LLMs through a single, integrated platform.
- **Ease of Implementation**: Get started quickly, with comprehensive documentation and community support.
## Swagger Documentation Links
Access detailed API documentation for different services provided by Open WebUI:
| Application | Documentation Path |
|-------------|-------------------------|
| Main | `/docs` |
| WebUI | `/api/v1/docs` |
| Ollama | `/ollama/docs` |
| OpenAI | `/openai/docs` |
| Images | `/images/api/v1/docs` |
| Audio | `/audio/api/v1/docs` |
| RAG | `/retrieval/api/v1/docs`|
Each documentation portal offers interactive examples, schema descriptions, and testing capabilities to enhance your understanding and ease of use.
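Since these portals are Swagger/OpenAPI-based, each typically also exposes a machine-readable schema (e.g. `openapi.json`) alongside its docs path, which is handy for generating API clients. A sketch, assuming docs are enabled in your deployment (for example with `ENV=dev`) and `jq` is installed:
```bash
# List the paths exposed by the main API's OpenAPI schema.
curl -s http://localhost:3000/openapi.json | jq '.paths | keys'
```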
By following these guidelines, you can swiftly integrate and begin utilizing the Open WebUI API. Should you encounter any issues or have questions, feel free to reach out through our Discord Community or consult the FAQs. Happy coding! 🌟

File diff suppressed because it is too large


@ -1,199 +0,0 @@
---
sidebar_position: 6
title: "🛠️ Development Guide"
---
import { TopBanners } from "@site/src/components/TopBanners";
<TopBanners />
Welcome to the Open WebUI Development Setup Guide! 🌟 Whether you're a novice or a veteran in the software development world, this guide is designed to assist you in establishing a functional local development environment for both the frontend and backend components of Open WebUI. Let's get started and set up your development environment swiftly! 🚀
## System Requirements
Before diving into the setup, make sure your system meets the following requirements:
- **Operating System**: Linux (WSL) or macOS (Instructions provided here specifically cater to these operating systems)
- **Python Version**: Python 3.11
## 🐧 Linux/macOS Setup Guide
This section provides a step-by-step process to get your development environment ready on Linux (WSL) or macOS platforms.
### 📡 Cloning the Repository
First, you'll need to clone the Open WebUI repository and switch to the directory:
```sh
git clone https://github.com/open-webui/open-webui.git
cd open-webui
```
### 🖥️ Frontend Server Setup
To set up the frontend server, follow these instructions:
1. **Environment Configuration**:
Duplicate the environment configuration file:
```sh
cp -RPp .env.example .env
```
2. **Install Dependencies**:
Run the following commands to install necessary dependencies:
```sh
npm install
```
3. **Launch the Server**:
Start the server with:
```sh
npm run dev
```
🌐 The frontend server will be available at: http://localhost:5173. Please note that for the frontend server to function correctly, the backend server should be running concurrently.
### 🖥️ Backend Server Setup
Setting up the backend server involves a few more steps; Python 3.11 is required for Open WebUI:
1. **Change Directory**:
Open a new terminal window and navigate to the backend directory:
```sh
cd open-webui/backend
```
2. **Python Environment Setup** (Using Conda Recommended):
- Create and activate a Conda environment with Python 3.11:
```sh
conda create --name open-webui python=3.11
conda activate open-webui
```
3. **Install Backend Dependencies**:
Install all the required Python libraries:
```sh
pip install -r requirements.txt -U
```
4. **Start the Backend Application**:
Launch the backend application with:
```sh
sh dev.sh
```
📄 Access the backend API documentation at: http://localhost:8080/docs. The backend supports hot reloading, making your development process smoother by automatically reflecting changes.
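To quickly confirm the backend is reachable, you can query its health endpoint (a small sketch; it assumes the default port and the standard `/health` route):
```bash
curl -s http://localhost:8080/health
# A healthy instance returns a small JSON status payload.
```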
That's it! You now have both the frontend and backend servers running. Explore the API documentation and start developing features for Open WebUI. Happy coding! 🎉
## 🐳 Running in a Docker Container
For those who prefer using Docker, here's how you can set things up:
1. **Initialize Configuration:**
Assuming you have already cloned the repository and created a `.env` file, create a new file named `compose-dev.yaml`. This configuration uses Docker Compose to ease the development setup.
```yaml
name: open-webui-dev

services:
  frontend:
    build:
      context: .
      target: build
    command: ["npm", "run", "dev"]
    depends_on:
      - backend
    extra_hosts:
      - host.docker.internal:host-gateway
    ports:
      - "3000:5173"
    develop:
      watch:
        - path: ./src
          action: sync
  backend:
    build:
      context: .
      target: base
    command: ["bash", "dev.sh"]
    env_file: ".env"
    environment:
      - ENV=dev
      - WEBUI_AUTH=False
    volumes:
      - data:/app/backend/data
    extra_hosts:
      - host.docker.internal:host-gateway
    ports:
      - "8080:8080"
    restart: always
    develop:
      watch:
        - path: ./backend
          action: sync

volumes:
  data: {}
```
2. **Start Development Containers:**
```sh
docker compose -f compose-dev.yaml up --watch
```
This command starts the frontend and backend servers in hot reload mode. Changes in your source files will trigger an automatic refresh. The web app will be available at http://localhost:3000 and the backend API docs at http://localhost:8080/docs.
3. **Stopping the Containers:**
To stop the containers, you can use:
```sh
docker compose -f compose-dev.yaml down
```
### 🔄 Integration with Pipelines
If your development involves [Pipelines](https://docs.openwebui.com/pipelines/), you can enhance your Docker setup:
```yaml
services:
  pipelines:
    ports:
      - "9099:9099"
    volumes:
      - ./pipelines:/app/pipelines
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: always
```
This setup involves mounting the `pipelines` directory to ensure any changes reflect immediately, maintaining high development agility.
:::note
This configuration uses volume bind-mounts. Learn more about how they differ from named volumes [here](https://docs.docker.com/storage/bind-mounts/).
:::
## 🐛 Troubleshooting
### FATAL ERROR: Reached heap limit
When you encounter a memory-related error during the Docker build process, especially while executing `npm run build`, it typically indicates that the JavaScript heap has exceeded its memory limit. One effective solution is to increase the memory allocated to Node.js by adjusting the `NODE_OPTIONS` environment variable, which sets a higher maximum heap size and can prevent out-of-memory errors during the build. If you encounter this issue, allocate at least 4 GB of RAM, or more if your system has it to spare.
You can increase the memory allocated to Node.js by adding the following line just before `npm run build` in the `Dockerfile`.
```docker title=/Dockerfile
ENV NODE_OPTIONS=--max-old-space-size=4096
```
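The same variable also works outside Docker; for a one-off local build, you can set it inline (standard Node.js behavior, not specific to Open WebUI):
```bash
# Raise the Node.js heap limit to 4 GB for this build only.
NODE_OPTIONS=--max-old-space-size=4096 npm run build
```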
---
Through these setup steps, both new and experienced contributors can seamlessly integrate into the development workflow of Open WebUI. Happy coding! 🎉


@ -1,40 +0,0 @@
---
sidebar_position: 1
title: "🔧 Alternative Installation"
---
### Installing Both Ollama and Open WebUI Using Kustomize
For a CPU-only Pod:
```bash
kubectl apply -k ./kubernetes/manifest/base
```
For a GPU-enabled Pod:
```bash
kubectl apply -k ./kubernetes/manifest/gpu
```
### Installing Both Ollama and Open WebUI Using Helm
:::info
The Helm installation method has been migrated to the new GitHub repository. Please refer to
the latest installation instructions at [https://github.com/open-webui/helm-charts](https://github.com/open-webui/helm-charts).
:::
Confirm that Helm is installed in your execution environment.
For installation instructions, visit [https://helm.sh/docs/intro/install/](https://helm.sh/docs/intro/install/).
```bash
helm repo add open-webui https://helm.openwebui.com/
helm repo update
kubectl create namespace open-webui
helm upgrade --install open-webui open-webui/open-webui --namespace open-webui
```
For additional customization options, refer to the [kubernetes/helm/values.yaml](https://github.com/open-webui/helm-charts/tree/main/charts/open-webui) file.
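For example, to apply your own overrides, pass a custom values file to the install command (a sketch; `my-values.yaml` is a placeholder, and the keys inside it should be checked against the chart's `values.yaml`):
```bash
helm upgrade --install open-webui open-webui/open-webui \
  --namespace open-webui \
  -f my-values.yaml
```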


@ -1,63 +0,0 @@
---
sidebar_position: 3
title: "📜 Open WebUI Logging"
---
## Browser Client Logging ##
Client logging generally occurs via [JavaScript](https://developer.mozilla.org/en-US/docs/Web/API/console/log_static) `console.log()` and can be accessed using the built-in browser-specific developer tools:
* Blink
* [Chrome/Chromium](https://developer.chrome.com/docs/devtools/)
* [Edge](https://learn.microsoft.com/en-us/microsoft-edge/devtools-guide-chromium/overview)
* Gecko
* [Firefox](https://firefox-source-docs.mozilla.org/devtools-user/)
* WebKit
* [Safari](https://developer.apple.com/safari/tools/)
## Application Server/Backend Logging ##
Logging is an ongoing work in progress, but some level of control is available using environment variables. [Python Logging](https://docs.python.org/3/howto/logging.html) `log()` and `print()` statements send information to the console. The default level is `INFO`. Ideally, sensitive data will only be exposed with the `DEBUG` level.
### Logging Levels ###
The following [logging levels](https://docs.python.org/3/howto/logging.html#logging-levels) are supported:
| Level | Numeric value |
| ---------- | ------------- |
| `CRITICAL` | 50 |
| `ERROR` | 40 |
| `WARNING` | 30 |
| `INFO` | 20 |
| `DEBUG` | 10 |
| `NOTSET` | 0 |
### Global ###
The default global log level of `INFO` can be overridden with the `GLOBAL_LOG_LEVEL` environment variable. When set, this executes a [basicConfig](https://docs.python.org/3/library/logging.html#logging.basicConfig) statement with the `force` argument set to *True* within `config.py`. This results in reconfiguration of all attached loggers:
> _If this keyword argument is specified as true, any existing handlers attached to the root logger are removed and closed, before carrying out the configuration as specified by the other arguments._
The stream uses standard output (`sys.stdout`). In addition to all Open WebUI `log()` statements, this also affects any imported Python modules that use the Python Logging `basicConfig` mechanism, including [urllib](https://docs.python.org/3/library/urllib.html).
For example, to set `DEBUG` logging level as a Docker parameter use:
```bash
--env GLOBAL_LOG_LEVEL="DEBUG"
```
### App/Backend ###
Some granularity is possible using any combination of the following variables. Note that `basicConfig` `force` isn't presently used, so these settings may only affect Open WebUI logging and not third-party modules.
| Environment Variable | App/Backend |
| -------------------- | ----------------------------------------------------------------- |
| `AUDIO_LOG_LEVEL` | Audio transcription using faster-whisper, TTS etc. |
| `COMFYUI_LOG_LEVEL` | ComfyUI integration handling |
| `CONFIG_LOG_LEVEL` | Configuration handling |
| `DB_LOG_LEVEL` | Internal Peewee Database |
| `IMAGES_LOG_LEVEL` | AUTOMATIC1111 stable diffusion image generation |
| `LITELLM_LOG_LEVEL` | LiteLLM proxy |
| `MAIN_LOG_LEVEL` | Main (root) execution |
| `MODELS_LOG_LEVEL` | LLM model interaction, authentication, etc. |
| `OLLAMA_LOG_LEVEL` | Ollama backend interaction |
| `OPENAI_LOG_LEVEL` | OpenAI interaction |
| `RAG_LOG_LEVEL` | Retrieval-Augmented Generation using Chroma/Sentence-Transformers |
| `WEBHOOK_LOG_LEVEL` | Authentication webhook extended logging |
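For example, to keep the global level at `INFO` while debugging only the Ollama and RAG code paths, you might combine flags like this (a sketch reusing the Docker parameter style shown above):
```bash
docker run -d -p 3000:8080 \
  --env GLOBAL_LOG_LEVEL="INFO" \
  --env OLLAMA_LOG_LEVEL="DEBUG" \
  --env RAG_LOG_LEVEL="DEBUG" \
  -v open-webui:/app/backend/data --name open-webui \
  ghcr.io/open-webui/open-webui:main
```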


@ -1,123 +0,0 @@
---
sidebar_position: 2
title: "🔄 Updating Open WebUI"
---
## Updating your Docker Installation
Keeping your Open WebUI Docker installation up-to-date ensures you have the latest features and security updates. You can update your installation manually or use [Watchtower](https://containrrr.dev/watchtower/) for automatic updates.
### Manual Update
Follow these steps to manually update your Open WebUI:
1. **Pull the Latest Docker Image**:
```bash
docker pull ghcr.io/open-webui/open-webui:main
```
2. **Stop and Remove the Existing Container**:
- This step ensures that you can create a new container from the updated image.
```bash
docker stop open-webui
docker rm open-webui
```
3. **Create a New Container with the Updated Image**:
- Use the same `docker run` command you used initially to create the container, ensuring all your configurations remain the same.
```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
This process updates your Open WebUI container to the latest version while preserving your data stored in Docker volumes.
### Updating with Watchtower
For those who prefer automated updates, Watchtower can monitor your Open WebUI container and automatically update it to the latest version. You have two options with Watchtower: running it once for an immediate update, or deploying it persistently to automate future updates.
#### Running Watchtower Once
To update your container immediately without keeping Watchtower running continuously, use the following command. Replace `open-webui` with your container name if it differs.
```bash
docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui
```
#### Deploying Watchtower Persistently
If you prefer Watchtower to continuously monitor and update your container whenever a new version is available, you can run Watchtower as a persistent service. This method ensures your Open WebUI always stays up to date without any manual intervention. Use the command below to deploy Watchtower in this manner:
```bash
docker run -d --name watchtower --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower open-webui
```
Remember to replace `open-webui` with the name of your container if you have named it differently. This configuration allows you to benefit from the latest improvements and security patches with minimal downtime and manual effort.
### Updating Docker Compose Installation
If you installed Open WebUI using Docker Compose, follow these steps to update:
1. **Pull the Latest Images**:
- This command fetches the latest versions of the images specified in your `docker-compose.yml` files.
```bash
docker compose pull
```
2. **Recreate the Containers with the Latest Images**:
- This command recreates the containers based on the newly pulled images, ensuring your installation is up-to-date. No build step is required for updates.
```bash
docker compose up -d
```
This method ensures your Docker Compose-based installation of Open WebUI (and any associated services, like Ollama) is updated efficiently and without the need for manual container management.
## Updating Your Direct Install
For those who have installed Open WebUI directly without using Docker, updates are just as important to ensure access to the latest features and security patches. Remember, direct installations are not officially supported, and you might need to troubleshoot on your own. Here's how to update your installation:
### Pull the Latest Changes
Navigate to your Open WebUI project directory and pull the latest changes from the repository:
```sh
cd path/to/open-webui/
git pull origin main
```
Replace `path/to/open-webui/` with the actual path to your Open WebUI installation.
### Update Dependencies
After pulling the latest changes, update your project dependencies. This step ensures that both frontend and backend dependencies are up to date.
- **For Node.js (Frontend):**
```sh
npm install
npm run build
```
- **For Python (Backend):**
```sh
cd backend
pip install -r requirements.txt -U
```
### Restart the Backend Server
To apply the updates, you need to restart the backend server. If you have a running instance, stop it first and then start it again using the provided script.
```sh
bash start.sh
```
This command should be run from within the `backend` directory of your Open WebUI project.
:::info
Direct installations require more manual effort to update compared to Docker-based installations. If you frequently need updates and want to streamline the process, consider transitioning to a Docker-based setup for easier management.
:::
By following these steps, you can update your direct installation of Open WebUI, ensuring you're running the latest version with all its benefits. Remember to back up any critical data or custom configurations before starting the update process to prevent any unintended loss.


@ -11,13 +11,17 @@ Explore the essential concepts and features of Open WebUI, including models, kno
## 📥 Ollama Models
Learn how to download, load, and use models effectively.
-[Check out Ollama Models](./OllamaModels.mdx)
+[Check out Ollama Models](./ollama-models.mdx)
---
## 📚 Terminology
Understand key components: models, prompts, knowledge, functions, pipes, and actions.
-[Read the Terminology Guide](./Terminology.mdx)
+[Read the Terminology Guide](./terminology.mdx)
## 🌐 Additional Resources and Integrations
Find community tools, integrations, and official resources.
[Additional Resources Guide](./resources)
---


@ -0,0 +1,37 @@
---
sidebar_position: 400
title: "🌐 Additional Resources and Integrations"
---
# 🌐 Additional Resources and Integrations
Explore more resources, community tools, and integration options to make the most out of Open WebUI.
---
## 🔥 Open WebUI Website
Visit [Open WebUI](https://openwebui.com/) for official documentation, tools, and resources:
- **Leaderboard**: Check out the latest high-ranking models, tools, and integrations.
- **Featured Models and Tools**: Discover models and tools created by community members.
- **New Integrations**: Find newly released integrations, plugins, and models to expand your setup.
---
## 🌍 Community Platforms
Connect with the Open WebUI community for support, tips, and discussions.
- **Discord**: Join our community on Discord to chat with other users, ask questions, and stay updated.
[Join the Discord Server](https://discord.com/invite/5rJgQTnV4s)
- **Reddit**: Follow the Open WebUI subreddit for announcements, discussions, and user-submitted content.
[Visit Reddit Community](https://www.reddit.com/r/OpenWebUI/)
---
## 📖 Tutorials and User Guides
Explore community-created tutorials to enhance your Open WebUI experience:
- [Explore Community Tutorials](/category/-tutorials)
- Learn how to configure RAG and advanced integrations with the [RAG Configuration Guide](../../tutorials/tips/rag-tutorial.md).
---
Stay connected and make the most out of Open WebUI through these community resources and integrations!