mirror of https://github.com/open-webui/docs (synced 2025-05-19 18:58:41 +00:00)

Merge pull request #214 from silentoplayz/silentoplayz-patch-1

Some small adjustments

Commit bc3269f4d9
@@ -74,7 +74,10 @@ When using Docker to install Open WebUI, make sure to include the `-v open-webui
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
Note: If you're using an Ubuntu derivative distro, such as Linux Mint, you might need to use `UBUNTU_CODENAME` instead of `VERSION_CODENAME`.
:::note
If you're using an Ubuntu derivative distro, such as Linux Mint, you might need to use `UBUNTU_CODENAME` instead of `VERSION_CODENAME`.
:::
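As a hedged illustration of that substitution (the keyring path and repository URL below are assumed to match Docker's standard Ubuntu instructions used earlier in this guide), the repository line can fall back to `UBUNTU_CODENAME` automatically:

```bash
# Uses UBUNTU_CODENAME when it is set (e.g. on Linux Mint),
# otherwise falls back to VERSION_CODENAME as on stock Ubuntu.
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```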
3. **Install Docker Engine:**
@@ -194,7 +197,7 @@ For users who prefer to use Python's package manager `pip`, Open WebUI offers a
This method installs all necessary dependencies and starts Open WebUI, allowing for a simple and efficient setup. After installation, you can access Open WebUI at [http://localhost:8080](http://localhost:8080). Enjoy! 😄
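For reference, a minimal sketch of that flow (assuming a supported Python environment; the package name and `serve` subcommand follow the PyPI distribution):

```bash
# Keep Open WebUI isolated in its own virtual environment
python3 -m venv venv
source venv/bin/activate

# Install from PyPI and start the server on port 8080
pip install open-webui
open-webui serve
```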
### Install from Open WebUI Github Repo
### Install from Open WebUI GitHub Repo
:::info
Open WebUI consists of two primary components: the frontend and the backend (which serves as a reverse proxy, handling static frontend files, and additional features). Both need to be running concurrently for the development environment.
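A rough sketch of running both pieces during development (script and file names are assumed from the repository layout and may change; check the repository's README for the authoritative steps):

```bash
# Terminal 1: frontend dev server with hot reload
git clone https://github.com/open-webui/open-webui.git
cd open-webui
cp -RPp .env.example .env   # copy the example environment file
npm install
npm run dev

# Terminal 2: backend API (dev.sh is assumed to start it with auto-reload)
cd open-webui/backend
pip install -r requirements.txt -U
sh dev.sh
```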
@@ -431,7 +434,5 @@ docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/wa
In the last part of the command, replace `open-webui` with your container name if it is different.
:::info
After updating Open WebUI, you might need to refresh your browser cache to see the changes.
:::
@@ -2,13 +2,13 @@
### Installing Both Ollama and Open WebUI Using Kustomize
For cpu-only pod
For a CPU-only Pod:
```bash
kubectl apply -f ./kubernetes/manifest/base
```
For gpu-enabled pod
For a GPU-enabled Pod:
```bash
kubectl apply -k ./kubernetes/manifest
@@ -18,13 +18,13 @@ kubectl apply -k ./kubernetes/manifest
:::info
The helm install method has been migrated to the new github repo,
and the latest installation method is referred to. [https://github.com/open-webui/helm-charts](https://github.com/open-webui/helm-charts)
The Helm installation method has been migrated to the new GitHub repository. Please refer to
the latest installation instructions at [https://github.com/open-webui/helm-charts](https://github.com/open-webui/helm-charts).
:::
Confirm that'Helm 'has been deployed on your execution environment.
For more installation instructions, please refer to [https://helm.sh/docs/intro/install/](https://helm.sh/docs/intro/install/)
Confirm that Helm has been deployed on your execution environment.
For installation instructions, visit [https://helm.sh/docs/intro/install/](https://helm.sh/docs/intro/install/).
```bash
helm repo add open-webui https://helm.openwebui.com/
@@ -34,4 +34,4 @@ kubectl create namespace open-webui
helm upgrade --install open-webui open-webui/open-webui --namespace open-webui
```
Check the [kubernetes/helm/values.yaml](https://github.com/open-webui/helm-charts/tree/main/charts/open-webui) file to know more values are available for customization
For additional customization options, refer to the [kubernetes/helm/values.yaml](https://github.com/open-webui/helm-charts/tree/main/charts/open-webui) file.
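As a hedged example, custom values can be supplied at install or upgrade time; the keys below (`ollama.enabled`, `persistence.size`) are illustrative only and should be checked against the chart's values.yaml:

```bash
# Write an overrides file (keys shown are examples; verify them in values.yaml)
cat > my-values.yaml <<'EOF'
ollama:
  enabled: true
persistence:
  enabled: true
  size: 5Gi
EOF

# Apply the overrides on top of the chart defaults
helm upgrade --install open-webui open-webui/open-webui \
  --namespace open-webui \
  -f my-values.yaml
```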
@@ -130,7 +130,6 @@ services:
- "9099:9099"
volumes:
- ./pipelines:/app/pipelines
- ./blueprints:/app/blueprints
extra_hosts:
- "host.docker.internal:host-gateway"
restart: always
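For running the Pipelines service as a single container without Compose, a roughly equivalent `docker run` invocation might look like the following sketch (the image name and mounted path are assumptions; adjust them to your deployment):

```bash
docker run -d -p 9099:9099 \
  --add-host=host.docker.internal:host-gateway \
  -v "$(pwd)/pipelines:/app/pipelines" \
  --name pipelines --restart always \
  ghcr.io/open-webui/pipelines:main
```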
@@ -23,7 +23,7 @@ Open WebUI supports image generation through the **AUTOMATIC1111** [API](https:/
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e AUTOMATIC1111_BASE_URL=http://host.docker.internal:7860/ -e ENABLE_IMAGE_GENERATION=True -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
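For Open WebUI to reach AUTOMATIC1111 at `http://host.docker.internal:7860/`, the web UI has to be started with its API enabled and listening on the network. A minimal sketch, assuming a local checkout of the AUTOMATIC1111 web UI:

```bash
cd stable-diffusion-webui
# --api exposes the REST API; --listen binds to all interfaces so Docker containers can reach it
./webui.sh --api --listen
```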
### Configuring Open WebUI
### Setting Up Open WebUI with AUTOMATIC1111
1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
2. Set the `Image Generation Engine` field to `Default (Automatic1111)`.
@@ -93,9 +93,12 @@ To integrate ComfyUI into Open WebUI, follow these steps:
3. Return to Open WebUI and click the **Click here to upload a workflow.json file** button.
4. Select the `workflow_api.json` file to import the exported workflow from ComfyUI into Open WebUI.
5. After importing the workflow, you must map the `ComfyUI Workflow Nodes` according to the imported workflow node IDs.
:::info
You may need to adjust an `Input Key` or two within Open WebUI's `ComfyUI Workflow Nodes` section to match a node within your workflow.
For example, `seed` may need to be renamed to `noise_seed` to match a node ID within your imported workflow.
:::
tip
Some workflows, such as ones that use any of the Flux models, may utilize multiple nodes IDs that is necessary to fill in for their their node entry fields within Open WebUI. If a node entry field requires multiple IDs, the node IDs should be comma separated (e.g. `1` or `1, 2`).
:::tip
Some workflows, such as ones that use any of the Flux models, may utilize multiple node IDs that need to be filled in for their node entry fields within Open WebUI. If a node entry field requires multiple IDs, the node IDs should be comma-separated (e.g. `1` or `1, 2`).
:::
6. Click `Save` to apply the settings and enjoy image generation with ComfyUI integrated into Open WebUI!
@@ -53,6 +53,6 @@ Once setup is complete, your Langfuse dashboard should start recording every API

## Note
:::note
Ensure that all configurations are correctly set, and environment variables are properly passed to avoid integration issues.
:::
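As a hypothetical example of passing credentials to the Pipelines container via environment variables (the variable names below follow the Langfuse SDK conventions; your Langfuse filter pipeline may instead expect these values as valves configured in the admin UI):

```bash
docker run -d -p 9099:9099 \
  -e LANGFUSE_SECRET_KEY="sk-lf-..." \
  -e LANGFUSE_PUBLIC_KEY="pk-lf-..." \
  -e LANGFUSE_HOST="https://cloud.langfuse.com" \
  --add-host=host.docker.internal:host-gateway \
  --name pipelines --restart always \
  ghcr.io/open-webui/pipelines:main
```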
@@ -7,17 +7,19 @@ title: "Retrieval Augmented Generation (RAG)"
Retrieval Augmented Generation (RAG) is a cutting-edge technology that enhances the conversational capabilities of chatbots by incorporating context from diverse sources. It works by retrieving relevant information from a wide range of sources such as local and remote documents, web content, and even multimedia sources like YouTube videos. The retrieved text is then combined with a predefined RAG template and prefixed to the user's prompt, providing a more informed and contextually relevant response.
One of the key advantages of RAG is its ability to access and integrate information from a variety of sources, making it an ideal solution for complex conversational scenarios. For instance, when a user asks a question related to a specific document or webpage, RAG can retrieve and incorporate the relevant information from that source into the chat response. RAG can also retrieve and incorporate information from multimedia sources like YouTube videos. By analyzing the transcripts or captions of these videos, RAG can extract relevant information and incorporate it into the chat response.
One of the key advantages of RAG is its ability to access and integrate information from a variety of sources, making it an ideal solution for complex conversational scenarios. For instance, when a user asks a question related to a specific document or web page, RAG can retrieve and incorporate the relevant information from that source into the chat response. RAG can also retrieve and incorporate information from multimedia sources like YouTube videos. By analyzing the transcripts or captions of these videos, RAG can extract relevant information and incorporate it into the chat response.
## Local and Remote RAG Integration
Local documents must first be uploaded via the Documents section of the Workspace area to access them using the `#` symbol before a query. Click on the formatted URL in the that appears above the chatbox. Once selected, a document icon appears above `Send a message`, indicating successful retrieval.
Local documents must first be uploaded via the Documents section of the Workspace area to access them using the `#` symbol before a query. Click on the formatted URL in the box that appears above the chat box. Once selected, a document icon appears above `Send a message`, indicating successful retrieval.
## Web Search for RAG
For web content integration, start a query in a chat with `#`, followed by the target URL. Click on the formatted URL in the box that appears above the chatbox. Once selected, a document icon appears above `Send a message`, indicating successful retrieval. Open WebUI fetches and parses information from the URL if it can.
For web content integration, start a query in a chat with `#`, followed by the target URL. Click on the formatted URL in the box that appears above the chat box. Once selected, a document icon appears above `Send a message`, indicating successful retrieval. Open WebUI fetches and parses information from the URL if it can.
> **Tip:** Webpages often contain extraneous information such as navigation and footer. For better results, link to a raw or reader-friendly version of the page.
:::tip
Web pages often contain extraneous information such as navigation and footer. For better results, link to a raw or reader-friendly version of the page.
:::
## RAG Template Customization
@@ -146,15 +146,13 @@ Launch your updated stack with:
docker compose -f docker-compose.yaml -f docker-compose.searxng.yaml up -d
```
### 3. Alternative: Docker Run
You can run SearXNG directly using `docker run`:
Alternatively, you can run SearXNG directly using `docker run`:
```bash
docker run -d --name searxng -p 8080:8080 -v ./searxng:/etc/searxng --restart always searxng/searxng:latest
```
### 4. GUI Configuration
### 3. GUI Configuration
1. Navigate to: `Admin Panel` -> `Settings` -> `Web Search`
2. Toggle `Enable Web Search`
@@ -165,7 +163,7 @@ docker run -d --name searxng -p 8080:8080 -v ./searxng:/etc/searxng --restart al

### 5. Using Web Search in a Chat
### 4. Using Web Search in a Chat
To access Web Search, click on the + next to the message input field.
@@ -222,10 +220,6 @@ Search the web ;-)

## Serper API
## Serpstack API
## Brave API
### Docker Compose Setup
@@ -243,3 +237,20 @@ services:
RAG_WEB_SEARCH_CONCURRENT_REQUESTS: 10
```
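The same settings can also be passed with `docker run` instead of Compose. In this sketch, only `RAG_WEB_SEARCH_CONCURRENT_REQUESTS` is taken from the snippet above; the remaining variable names are assumed from Open WebUI's web search configuration, and the API key is a placeholder:

```bash
docker run -d -p 3000:8080 \
  -e ENABLE_RAG_WEB_SEARCH=True \
  -e RAG_WEB_SEARCH_ENGINE="brave" \
  -e BRAVE_SEARCH_API_KEY="your-brave-api-key" \
  -e RAG_WEB_SEARCH_RESULT_COUNT=3 \
  -e RAG_WEB_SEARCH_CONCURRENT_REQUESTS=10 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```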
## Serpstack API
Coming Soon
## Serper API
Coming Soon
## Serply API
Coming Soon
## DuckDuckGo API
Coming Soon
## Tavily API
Coming Soon
## Jina API
Coming Soon
@@ -11,8 +11,8 @@ Open WebUI allows you to filter specific models for use in your instance. This f

1. Go to **Admin Panel > Admin Settings**.
2. In the **Manage Models** section, you can enable or disable the feature, and add or remove models from the whitelist.
1. Go to **Admin Panel > Settings > Users**.
2. In the **Manage Models** section, you can enable or disable the model whitelisting feature, and add or remove models from the whitelist.
3. Click **Save** to apply your changes.
## Filtering via Environment Variables
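As a rough sketch of what this can look like at container start (the variable names below are assumed from Open WebUI's configuration options, and the model IDs are placeholders; adjust both to your deployment):

```bash
# Enable the whitelist and allow only the listed model IDs (semicolon-separated)
docker run -d -p 3000:8080 \
  -e ENABLE_MODEL_FILTER=True \
  -e MODEL_FILTER_LIST="llama3:latest;gpt-4o" \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```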