Merge branch 'open-webui:main' into main

Commit dae2edf463 by Jeff Weisman, 2025-02-09 10:19:44 -05:00, committed via GitHub.
3 changed files with 149 additions and 6 deletions


@ -1070,6 +1070,12 @@ When enabling `GOOGLE_DRIVE_INTEGRATION`, ensure that you have configured `GOOGL
- Default: `default`
- Description: Specifies the database to connect to within a Milvus instance
#### `MILVUS_TOKEN`
- Type: `str`
- Default: `None`
- Description: Specifies the connection token for Milvus, optional.
### OpenSearch


@ -0,0 +1,132 @@
---
sidebar_position: 4100
title: "🦊 Firefox AI Chatbot Sidebar"
---
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
:::
# Integrating Open WebUI as a Local AI Chatbot Browser Assistant in Mozilla Firefox
Table of Contents
=================
1. [Prerequisites](#prerequisites)
2. [Enabling AI Chatbot in Firefox](#enabling-ai-chatbot-in-firefox)
3. [Configuring about:config Settings](#configuring-aboutconfig-settings)
* [browser.ml.chat.enabled](#browsermlchatenabled)
* [browser.ml.chat.hideLocalhost](#browsermlchathidelocalhost)
   * [browser.ml.chat.prompts.#](#browsermlchatprompts)
* [browser.ml.chat.provider](#browsermlchatprovider)
4. [URL Parameters for Open WebUI](#url-parameters-for-open-webui)
* [Models and Model Selection](#models-and-model-selection)
* [YouTube Transcription](#youtube-transcription)
* [Web Search](#web-search)
* [Tool Selection](#tool-selection)
* [Call Overlay](#call-overlay)
* [Initial Query Prompt](#initial-query-prompt)
* [Temporary Chat Sessions](#temporary-chat-sessions)
5. [Additional about:config Settings](#additional-aboutconfig-settings)
6. [Accessing the AI Chatbot Sidebar](#accessing-the-ai-chatbot-sidebar)
## Prerequisites
Before integrating Open WebUI as an AI chatbot browser assistant in Mozilla Firefox, ensure you have:
* The URL of a running Open WebUI instance (localhost or a domain)
* The Firefox browser installed
## Enabling AI Chatbot in Firefox
1. Click the hamburger button (the three horizontal lines at the top right corner, just below the `X` button)
2. Open the Firefox settings
3. Click on the `Firefox Labs` section
4. Toggle on `AI Chatbot`
Alternatively, you can enable AI Chatbot through the `about:config` page (described in the next section).
## Configuring about:config Settings
1. Type `about:config` in the Firefox address bar
2. Click `Accept the Risk and Continue`
### browser.ml.chat.enabled
Search for `browser.ml.chat.enabled` and toggle it to `true` if it is not already enabled through Firefox Labs.
### browser.ml.chat.hideLocalhost
Search for `browser.ml.chat.hideLocalhost` and toggle it to `false` so that localhost providers (such as a local Open WebUI instance) are allowed.
### browser.ml.chat.prompts.#
To add custom prompts, follow these steps:
1. Search for `browser.ml.chat.prompts.#` (replace `#` with a number, e.g., `0`, `1`, `2`, etc.)
2. Click the `+` button to add a new prompt
3. Enter the prompt label, value, and ID (e.g., `{"id":"My Prompt", "value": "This is my custom prompt.", "label": "My Prompt"}`)
4. Repeat the process to add more prompts as desired
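For example, two custom prompt entries might look like this in `about:config` (the IDs, labels, and prompt text below are hypothetical placeholders):

```
browser.ml.chat.prompts.0 = {"id":"Summarize", "value": "Summarize the selected text.", "label": "Summarize"}
browser.ml.chat.prompts.1 = {"id":"Translate", "value": "Translate the selected text to English.", "label": "Translate"}
```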
### browser.ml.chat.provider
1. Search for `browser.ml.chat.provider`
2. Enter your Open WebUI instance URL, including any optional parameters (e.g., `https://my-open-webui-instance.com/?model=browser-productivity-assistant&temporary-chat=true&tools=jina_web_scrape`)
## URL Parameters for Open WebUI
The following URL parameters can be used to customize your Open WebUI instance:
### Models and Model Selection
* `models`: Specify multiple models (comma-separated list) for the chat session (e.g., `/?models=model1,model2`)
* `model`: Specify a single model for the chat session (e.g., `/?model=model1`)
### YouTube Transcription
* `youtube`: Provide a YouTube video ID to transcribe the video in the chat (e.g., `/?youtube=VIDEO_ID`)
### Web Search
* `web-search`: Enable web search functionality by setting this parameter to `true` (e.g., `/?web-search=true`)
### Tool Selection
* `tools` or `tool-ids`: Specify a comma-separated list of tool IDs to activate in the chat (e.g., `/?tools=tool1,tool2` or `/?tool-ids=tool1,tool2`)
### Call Overlay
* `call`: Enable a video or call overlay in the chat interface by setting this parameter to `true` (e.g., `/?call=true`)
### Initial Query Prompt
* `q`: Set an initial query or prompt for the chat (e.g., `/?q=Hello%20there`)
### Temporary Chat Sessions
* `temporary-chat`: Mark the chat as a temporary session by setting this parameter to `true` (e.g., `/?temporary-chat=true`)
See the [Open WebUI URL parameters documentation](https://docs.openwebui.com/features/chat-features/url-params) for more info on URL parameters and how to use them.
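Several of these parameters can be combined into a single provider URL for `browser.ml.chat.provider`. A minimal sketch of assembling one (the instance URL and parameter values below are placeholders, not real endpoints):

```bash
# Assemble an Open WebUI provider URL from individual parameters.
# BASE and the parameter values are hypothetical -- substitute your own.
BASE="https://my-open-webui-instance.com"
QUERY="model=browser-productivity-assistant&web-search=true&temporary-chat=true"
PROVIDER_URL="${BASE}/?${QUERY}"
echo "$PROVIDER_URL"
```

Paste the resulting URL into the `browser.ml.chat.provider` preference.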
## Additional about:config Settings
The following `about:config` settings can be adjusted for further customization:
* `browser.ml.chat.shortcuts`: Enable custom shortcuts for the AI chatbot sidebar
* `browser.ml.chat.shortcuts.custom`: Enable custom shortcut keys for the AI chatbot sidebar
* `browser.ml.chat.shortcuts.longPress`: Set the long press delay for shortcut keys
* `browser.ml.chat.sidebar`: Enable the AI chatbot sidebar
* `browser.ml.checkForMemory`: Check for available memory before loading models
* `browser.ml.defaultModelMemoryUsage`: Set the default memory usage for models
* `browser.ml.enable`: Enable the machine learning features in Firefox
* `browser.ml.logLevel`: Set the log level for machine learning features
* `browser.ml.maximumMemoryPressure`: Set the maximum memory pressure threshold
* `browser.ml.minimumPhysicalMemory`: Set the minimum physical memory required
* `browser.ml.modelCacheMaxSize`: Set the maximum size of the model cache
* `browser.ml.modelCacheTimeout`: Set the timeout for model cache
* `browser.ml.modelHubRootUrl`: Set the root URL for the model hub
* `browser.ml.modelHubUrlTemplate`: Set the URL template for the model hub
* `browser.ml.queueWaitInterval`: Set the interval for queue wait
* `browser.ml.queueWaitTimeout`: Set the timeout for queue wait
## Accessing the AI Chatbot Sidebar
To access the AI chatbot sidebar, use one of the following methods:
* Press `CTRL+B` to open the bookmarks sidebar and switch to AI Chatbot
* Press `CTRL+Alt+X` to open the AI chatbot sidebar directly


@ -13,9 +13,12 @@ This tutorial is a community contribution and is not supported by the Open WebUI
[Kokoro-FastAPI](https://github.com/remsky/Kokoro-FastAPI) is a dockerized FastAPI wrapper for the [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) text-to-speech model that implements the OpenAI API endpoint specification. It offers high-performance text-to-speech with impressive generation speeds:
- Small local model (under ~300 MB on disk; up to 5 GB of additional storage may be needed for CUDA drivers, etc.)
- 100x+ real-time speed via HF A100
- 35-50x+ real-time speed via 4060Ti
- 5x+ real-time speed via M3 Pro CPU
- Low latency (sub-1s with GPU), customizable via chunking parameters
## Key Features
@ -23,18 +26,20 @@ This tutorial is a community contribution and is not supported by the Open WebUI
- NVIDIA GPU accelerated or CPU ONNX inference
- Streaming support with variable chunking
- Multiple audio format support (`.mp3`, `.wav`, `.opus`, `.flac`, `.aac`, `.pcm`)
- Integrated web interface on localhost:8880/web (or additional container in repo for Gradio)
- Phoneme endpoints for conversion and generation
## Voices
- af
- af_bella
- af_irulan
- af_nicole
- af_sarah
- af_sky
- am_adam
- am_michael
- am_gurney
- bf_emma
- bf_isabella
- bm_george
@ -49,23 +54,22 @@ This tutorial is a community contribution and is not supported by the Open WebUI
- Docker installed on your system
- Open WebUI running
- For GPU support: NVIDIA GPU with CUDA 12.3
- For CPU-only: No special requirements
## ⚡️ Quick start
### You can choose between GPU or CPU versions
### GPU Version (Requires NVIDIA GPU with CUDA 12.3)
```bash
docker run -d -p 8880:8880 -p 7860:7860 remsky/kokoro-fastapi-gpu:latest
```
### CPU Version (ONNX optimized inference)
```bash
docker run -d -p 8880:8880 -p 7860:7860 remsky/kokoro-fastapi-cpu:latest
```
## Setting up Open WebUI to use `Kokoro-FastAPI`
@ -78,7 +82,7 @@ To use Kokoro-FastAPI with Open WebUI, follow these steps:
- API Base URL: `http://localhost:8880/v1`
- API Key: `not-needed`
- TTS Model: `kokoro`
- TTS Voice: `af_bella` (mappings of the existing OpenAI voice names are also accepted for compatibility)
:::info
The default API key is the string `not-needed`. You do not have to change that value if you do not need the added security.
@ -89,6 +93,7 @@ The default API key is the string `not-needed`. You do not have to change that v
```bash
git clone https://github.com/remsky/Kokoro-FastAPI.git
cd Kokoro-FastAPI
cd docker/cpu # or docker/gpu
docker compose up --build
```