New Apache Tika & Artifacts Docs Pages

Silentoplayz 2024-12-21 12:47:50 -05:00
parent 1c83f95c88
commit 0bdca8cede
35 changed files with 418 additions and 100 deletions


@ -1,67 +1,67 @@
---
sidebar_position: 4
title: "⚙️ Chat Parameters"
---
Within Open WebUI, there are three levels to setting a **System Prompt** and **Advanced Parameters**: per-chat basis, per-model basis, and per-account basis. This hierarchical system allows for flexibility while maintaining structured administration and control.
## System Prompt and Advanced Parameters Hierarchy Chart
| **Level** | **Definition** | **Modification Permissions** | **Override Capabilities** |
| --- | --- | --- | --- |
| **Per-Chat** | System prompt and advanced parameters for a specific chat instance | Users can modify, but cannot override model-specific settings | Restricted from overriding model-specific settings |
| **Per-Model** | Default system prompt and advanced parameters for a specific model | Administrators can set, Users cannot modify | Admin-specific settings take precedence, User settings can be overridden |
| **Per-Account** | Default system prompt and advanced parameters for a specific user account | Users can set, but may be overridden by model-specific settings | User settings can be overridden by model-specific settings |
### 1. **Per-chat basis:**
- **Description**: The per-chat basis setting refers to the system prompt and advanced parameters configured for a specific chat instance. These settings are only applicable to the current conversation and do not affect future chats.
- **How to set**: Users can modify the system prompt and advanced parameters for a specific chat instance within the right-hand sidebar's **Chat Controls** section in Open WebUI.
- **Override capabilities**: Users are restricted from overriding the **System Prompt** or specific **Advanced Parameters** already set by an administrator on a per-model basis (**#2**). This ensures consistency and adherence to model-specific settings.
<details>
<summary>Example Use Case</summary>
:::tip **Per-chat basis**:
Suppose a user wants to set a custom system prompt for a specific conversation. They can do so by accessing the **Chat Controls** section and modifying the **System Prompt** field. These changes will only apply to the current chat session.
:::
</details>
### 2. **Per-model basis:**
- **Description**: The per-model basis setting refers to the default system prompt and advanced parameters configured for a specific model. These settings are applicable to all chat instances using that model.
- **How to set**: Administrators can set the default system prompt and advanced parameters for a specific model within the **Models** section of the **Workspace** in Open WebUI.
- **Override capabilities**: **User** accounts are restricted from modifying the **System Prompt** or specific **Advanced Parameters** on a per-model basis (**#3**). This restriction prevents users from inappropriately altering default settings.
- **Context length preservation:** When an Admin manually sets a model's **System Prompt** or specific **Advanced Parameters** in the **Workspace** section, those settings cannot be overridden or adjusted on a per-account basis by a **User** account within the **General** settings or **Chat Controls** section. This ensures consistency and prevents excessive reloading of the model whenever a user's context length setting changes.
- **Model precedence:** If a model's **System Prompt** or specific **Advanced Parameters** value is pre-set in the Workspace section by an Admin, any context length changes made by a **User** account in the **General** settings or **Chat Controls** section will be disregarded, maintaining the pre-configured value for that model. Be advised that parameters left untouched by an **Admin** account can still be manually adjusted by a **User** account on a per-account or per-chat basis.
<details>
<summary>Example Use Case</summary>
:::tip **Per-model basis**:
Suppose an administrator wants to set a default system prompt for a specific model. They can do so by accessing the **Models** section and modifying the **System Prompt** field for the corresponding model. Any chat instances using this model will automatically use the model's system prompt and advanced parameters.
:::
</details>
### 3. **Per-account basis:**
- **Description**: The per-account basis setting refers to the default system prompt and advanced parameters configured for a specific user account. These user-level settings serve as a fallback when no per-chat or per-model settings are defined.
- **How to set**: Users can set their own system prompt and advanced parameters for their account within the **General** section of the **Settings** menu in Open WebUI.
- **Override capabilities**: Users have the ability to set their own system prompt on their account, but they must be aware that such parameters can still be overridden if an administrator has already set the **System Prompt** or specific **Advanced Parameters** on a per-model basis for the particular model being used.
<details>
<summary>Example Use Case</summary>
:::tip **Per-account basis**:
Suppose a user wants to set their own system prompt for their account. They can do so by accessing the **Settings** menu and modifying the **System Prompt** field.
:::
</details>
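The precedence described above can be sketched as a simple resolution function. This is a hypothetical illustration of the hierarchy, not Open WebUI's actual code:

```python
def resolve_setting(per_model=None, per_chat=None, per_account=None):
    """Resolve a single setting (e.g., the System Prompt) per the hierarchy:
    an Admin's per-model value always wins; otherwise a per-chat value
    applies, falling back to the user's per-account default."""
    if per_model is not None:    # set by an Admin in the Workspace's Models section
        return per_model
    if per_chat is not None:     # set by the user in Chat Controls
        return per_chat
    return per_account           # the user's default from Settings > General


# An admin-set model prompt overrides both of the user's choices:
print(resolve_setting(per_model="You are a support bot.",
                      per_chat="Be terse.",
                      per_account="Be friendly."))
```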
## **Optimize System Prompt Settings for Maximum Flexibility**
:::tip **Bonus Tips**
**This tip applies for both administrators and user accounts. To achieve maximum flexibility with your system prompts, we recommend considering the following setup:**
- Assign the primary System Prompt you want to use (**i.e., to give an LLM a defining character**) in your **General** settings' **System Prompt** field. This sets it at the per-account level and lets it act as the system prompt across all your LLMs without requiring adjustments within a model from the **Workspace** section.
- For your secondary System Prompt (**i.e., to give an LLM a task to perform**), choose whether to place it in the **System Prompt** field within the **Chat Controls** sidebar (on a per-chat basis) or, for Admins, in the **Models** section of the **Workspace** (on a per-model basis). This allows your account-level system prompt to work in conjunction with either the per-chat system prompt provided by **Chat Controls** or the per-model system prompt provided by **Models**.
- As an administrator, you should assign your LLM parameters on a per-model basis using the **Models** section for optimal flexibility. For both of these secondary System Prompts, be sure to set them in a manner that maximizes flexibility and minimizes the adjustments needed across different per-account or per-chat instances. It is essential for both your Admin account and all User accounts to understand the priority order by which system prompts within **Chat Controls** and the **Models** section will be applied to the **LLM**.
:::


@ -1,5 +1,5 @@
---
sidebar_position: 3
sidebar_position: 4
title: "🗨️ Chat Sharing"
---


@ -0,0 +1,6 @@
---
sidebar_position: 3
title: "🎛️ Chat Controls"
---
COMING SOON!


@ -0,0 +1,6 @@
---
sidebar_position: 5
title: "📤 Downloading & Exporting Chats"
---
COMING SOON!


@ -0,0 +1,6 @@
---
sidebar_position: 5
title: "📥 Importing Chats"
---
COMING SOON!


@ -1,6 +1,6 @@
---
sidebar_position: 1
title: "😏 Chat Overview"
title: "😏 Chat Features"
---
COMING SOON!


@ -1,7 +0,0 @@
---
sidebar_position: 2
title: "🏺 Artifacts"
---
Test


@ -0,0 +1,103 @@
---
sidebar_position: 1
title: "🏺 Artifacts"
---
# What are Artifacts and how do I use them in Open WebUI?
Artifacts in Open WebUI are an innovative feature inspired by Claude.AI's Artifacts, allowing you to interact with significant and standalone content generated by an LLM within a chat. They enable you to view, modify, build upon, or reference substantial pieces of content separately from the main conversation, making it easier to work with complex outputs and ensuring that you can revisit important information later.
## When does Open WebUI use Artifacts?
Open WebUI creates an Artifact when the generated content meets specific criteria tailored to our platform:
1. **Renderable**: To be displayed as an Artifact, the content must be in a format that Open WebUI supports for rendering. This includes:
* Single-page HTML websites
* Scalable Vector Graphics (SVG) images
* Complete webpages, which include HTML, JavaScript, and CSS all in the same Artifact. Note that HTML is required when generating a complete webpage.
* ThreeJS visualizations, as well as visualizations built with other JavaScript libraries such as D3.js.
Other content types like Documents (Markdown or Plain Text), Code snippets, and React components are not rendered as Artifacts by Open WebUI.
## How does Open WebUI's model create Artifacts?
To use artifacts in Open WebUI, a model must provide content that triggers the rendering process to create an artifact. This involves generating valid HTML, SVG code, or other supported formats for artifact rendering. When the generated content meets the criteria mentioned above, Open WebUI will display it as an interactive Artifact.
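For instance, a reply consisting of a complete, valid SVG document, such as the minimal hypothetical example below, would meet the rendering criteria and appear in the Artifacts window:

```html
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="120">
  <rect width="200" height="120" fill="#1e293b"/>
  <circle cx="100" cy="50" r="30" fill="#38bdf8"/>
  <text x="100" y="105" text-anchor="middle" fill="#ffffff" font-size="12">Hello, Artifacts</text>
</svg>
```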
## How do I use Artifacts in Open WebUI?
When Open WebUI creates an Artifact, you'll see the content displayed in a dedicated Artifacts window to the right side of the main chat. Here's how to interact with Artifacts:
* **Editing and iterating**: Ask an LLM within the chat to edit or iterate on the content, and these updates will be displayed directly in the Artifact window. Each edit creates a new version, and you can switch between versions to track changes using the version selector at the bottom left of the Artifact.
* **Updates**: Open WebUI may update an existing Artifact based on your messages. The Artifact window will display the latest content.
* **Actions**: Access additional actions for the Artifact, such as copying the content or opening the artifact in full screen, located in the lower right corner of the Artifact.
## Editing Artifacts
1. **Targeted Updates**: Describe what you want changed and where. For example:
* "Change the color of the bar in the chart from blue to red."
* "Update the title of the SVG image to 'New Title'."
2. **Full Rewrites**: Request major changes affecting most of the content for substantial restructuring or multiple section updates. For example:
* "Rewrite this single-page HTML website to be a landing page instead."
* "Redesign this SVG so that it's animated using ThreeJS."
**Best Practices**:
* Be specific about which part of the Artifact you want to change.
* Reference unique identifying text around your desired change for targeted updates.
* Consider whether a small update or full rewrite is more appropriate for your needs.
## Use Cases
Artifacts in Open WebUI enable various teams to create high-quality work products quickly and efficiently. Here are some examples tailored to our platform:
* **Designers**:
* Create interactive SVG graphics for UI/UX design.
* Design single-page HTML websites or landing pages.
* **Developers**: Create simple HTML prototypes or generate SVG icons for projects.
* **Marketers**:
* Design campaign landing pages with performance metrics.
* Create SVG graphics for ad creatives or social media posts.
## Examples of Projects you can create with Open WebUI's Artifacts
Artifacts in Open WebUI can also power complete projects. Here are some example projects tailored to our platform, showcasing the versatility of Artifacts and inspiring you to explore their potential:
1. **Interactive Visualizations**
* Components used: ThreeJS, D3.js, and SVG
* Benefits: Create immersive data stories with interactive visualizations. Open WebUI's Artifacts enable you to switch between versions, making it easier to test different visualization approaches and track changes.
Example Project: Build an interactive line chart using ThreeJS to visualize stock prices over time. Update the chart's colors and scales in separate versions to compare different visualization strategies.
2. **Single-Page Web Applications**
* Components used: HTML, CSS, and JavaScript
* Benefits: Develop single-page web applications directly within Open WebUI. Edit and iterate on the content using targeted updates and full rewrites.
Example Project: Design a to-do list app with a user interface built using HTML and CSS. Use JavaScript to add interactive functionality. Update the app's layout and UI/UX using targeted updates and full rewrites.
3. **Animated SVG Graphics**
* Components used: SVG and ThreeJS
* Benefits: Create engaging animated SVG graphics for marketing campaigns, social media, or web design. Open WebUI's Artifacts enable you to edit and iterate on the graphics in a single window.
Example Project: Design an animated SVG logo for a company brand. Use ThreeJS to add animation effects and Open WebUI's targeted updates to refine the logo's colors and design.
4. **Webpage Prototypes**
* Components used: HTML, CSS, and JavaScript
* Benefits: Build and test webpage prototypes directly within Open WebUI. Switch between versions to compare different design approaches and refine the prototype.
Example Project: Develop a prototype for a new e-commerce website using HTML, CSS, and JavaScript. Use Open WebUI's targeted updates to refine the navigation, layout, and UI/UX.
5. **Interactive Storytelling**
* Components used: HTML, CSS, and JavaScript
* Benefits: Create interactive stories with scrolling effects, animations, and other interactive elements. Open WebUI's Artifacts enable you to refine the story and test different versions.
Example Project: Build an interactive story about a company's history, using scrolling effects and animations to engage the reader. Use targeted updates to refine the story's narrative and Open WebUI's version selector to test different versions.


@ -0,0 +1,6 @@
---
sidebar_position: 5
title: "🐍 Code Execution"
---
COMING SOON!


@ -1,5 +1,5 @@
---
sidebar_position: 17
sidebar_position: 3
title: "🌊 MermaidJS Rendering"
---


@ -1,5 +1,5 @@
---
sidebar_position: 16
sidebar_position: 2
title: "🐍 Python Code Execution"
---


@ -1,5 +1,5 @@
---
sidebar_position: 4
sidebar_position: 6
title: "📝 Evaluation"
---


@ -1,5 +1,5 @@
---
sidebar_position: 14
sidebar_position: 16
title: "📎 JWT Expiration"
---


@ -1,5 +1,5 @@
---
sidebar_position: 15
sidebar_position: 8
title: "🧠 Memory (Experimental)"
---


@ -1,5 +1,5 @@
---
sidebar_position: 4
sidebar_position: 19
title: "🔐 OAuth"
---


@ -1,5 +1,5 @@
---
sidebar_position: 6
sidebar_position: 14
title: "⚖️ Ollama Load Balancing"
---


@ -1,5 +1,5 @@
---
sidebar_position: 9
sidebar_position: 15
title: "🖇 OpenAI Connections"
---


@ -1,5 +1,5 @@
---
sidebar_position: 2
sidebar_position: 3
title: "🛝 Playground (Beta)"
---


@ -1,5 +1,5 @@
---
sidebar_position: 1
sidebar_position: 2
title: "🛠️ Tools & Functions"
---


@ -1,5 +1,5 @@
---
sidebar_position: 8
sidebar_position: 11
title: "🔎 Retrieval Augmented Generation (RAG)"
---


@ -1,5 +1,5 @@
---
sidebar_position: 11
sidebar_position: 19
title: "🔒 SSO: Federated Authentication Support"
---


@ -1,5 +1,5 @@
---
sidebar_position: 4
sidebar_position: 7
title: "📝 Task Model"
---


@ -1,5 +1,5 @@
---
sidebar_position: 3
sidebar_position: 4
title: "👨‍👧‍👦 User Groups"
---


@ -1,5 +1,5 @@
---
sidebar_position: 21
sidebar_position: 10
title: "📹 Video Call"
---


@ -1,5 +1,5 @@
---
sidebar_position: 20
sidebar_position: 9
title: "🎙️ Hands-Free Voice Call"
---


@ -1,6 +1,6 @@
---
sidebar_position: 12
title: "🪝 Webhook for New Sign Ups"
sidebar_position: 17
title: "🪝 Webhook Integrations"
---
Overview


@ -1,5 +1,5 @@
---
sidebar_position: 10
sidebar_position: 12
title: "📝 Model Whitelisting"
---


@ -1,6 +1,6 @@
---
sidebar_position: 0
title: "🚧 Server Connection Error"
title: "🚧 Server Connectivity Issues"
---
We're here to help you get everything set up and running smoothly. Below, you'll find step-by-step instructions tailored for different scenarios to solve common connection issues with Ollama and external servers like Hugging Face.


@ -0,0 +1,186 @@
---
sidebar_position: 4000
title: "🪶 Apache Tika Extraction"
---
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
:::
## 🪶 Apache Tika Extraction
This documentation provides a step-by-step guide to integrating Apache Tika with Open WebUI. Apache Tika is a content analysis toolkit that can be used to detect and extract metadata and text content from over a thousand different file types. All of these file types can be parsed through a single interface, making Tika useful for search engine indexing, content analysis, translation, and much more.
Prerequisites
------------
* Open WebUI instance
* Docker installed on your system
* Docker network set up for Open WebUI
Integration Steps
----------------
### Step 1: Create a Docker Compose File or Run the Docker Command for Apache Tika
You have two options to run Apache Tika:
**Option 1: Using Docker Compose**
Create a new file named `docker-compose.yml` in the same directory as your Open WebUI instance. Add the following configuration to the file:
```yml
services:
  tika:
    image: apache/tika:latest-full
    container_name: tika
    ports:
      - "9998:9998"
    restart: unless-stopped
```
Run the Docker Compose file using the following command:
```bash
docker-compose up -d
```
**Option 2: Using Docker Run Command**
Alternatively, you can run Apache Tika using the following Docker command:
```bash
docker run -d --name tika \
  -p 9998:9998 \
  --restart unless-stopped \
  apache/tika:latest-full
```
Note that if you choose to use the Docker run command, you'll need to specify the `--network` flag if you want to run the container in the same network as your Open WebUI instance.
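For example, assuming your Open WebUI container runs on a Docker network named `open-webui` (substitute your actual network name), the command might look like:

```bash
docker run -d --name tika \
  --network open-webui \
  -p 9998:9998 \
  --restart unless-stopped \
  apache/tika:latest-full
```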
### Step 2: Configure Open WebUI to Use Apache Tika
To use Apache Tika as the content extraction engine in Open WebUI, follow these steps:
* Log in to your Open WebUI instance.
* Navigate to the `Admin Panel` settings menu.
* Click on `Settings`.
* Click on the `Documents` tab.
* Change the `Default` content extraction engine dropdown to `Tika`.
* Update the content extraction engine URL to `http://tika:9998`.
* Save the changes.
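Alternatively, the same configuration can typically be supplied as environment variables on the Open WebUI container. The variable names below are assumed from Open WebUI's environment configuration docs; verify them against your version:

```bash
# Assumed variable names — check Open WebUI's environment configuration reference
CONTENT_EXTRACTION_ENGINE=tika
TIKA_SERVER_URL=http://tika:9998
```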
Verifying Apache Tika in Docker
-------------------------------
To verify that Apache Tika is working correctly in a Docker environment, you can follow these steps:
### 1. Start the Apache Tika Docker Container
First, ensure that the Apache Tika Docker container is running. You can start it using the following command:
```bash
docker run -p 9998:9998 apache/tika
```
This command starts the Apache Tika container and maps port 9998 from the container to port 9998 on your local machine.
### 2. Verify the Server is Running
You can verify that the Apache Tika server is running by sending a GET request:
```bash
curl -X GET http://localhost:9998/tika
```
This command should return a response similar to:
```
This is Tika Server. Please PUT
```
### 3. Verify the Integration
Alternatively, you can also try sending a file for analysis to test the integration. You can test Apache Tika by sending a file for analysis using the `curl` command:
```bash
curl -T test.txt http://localhost:9998/tika
```
Replace `test.txt` with the path to a text file on your local machine.
Apache Tika will respond with the detected metadata and content type of the file.
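Beyond plain-text extraction via `/tika`, the Tika server also exposes a `/meta` endpoint that returns a file's metadata. A minimal, stdlib-only sketch (assuming a Tika server listening on `localhost:9998`):

```python
import json
import urllib.request

TIKA_BASE = "http://localhost:9998"  # assumed local Tika server


def build_meta_request(base=TIKA_BASE):
    """Return the URL and headers for a /meta call that asks Tika for JSON."""
    return f"{base}/meta", {"Accept": "application/json"}


def fetch_metadata(file_path, base=TIKA_BASE):
    """PUT the file's raw bytes to /meta and parse the JSON metadata."""
    url, headers = build_meta_request(base)
    with open(file_path, "rb") as f:
        req = urllib.request.Request(url, data=f.read(), headers=headers, method="PUT")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())
```

With the container running, `fetch_metadata("test.txt")` returns a dictionary of metadata fields such as `Content-Type`.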
### Using a Script to Verify Apache Tika
If you want to automate the verification process, this script sends a file to Apache Tika and checks the response for the expected metadata. If the metadata is present, the script will output a success message along with the file's metadata; otherwise, it will output an error message and the response from Apache Tika.
```python
import requests

def verify_tika(file_path, tika_url):
    try:
        # Send the file to Apache Tika as the raw request body (mirroring `curl -T`)
        with open(file_path, "rb") as f:
            response = requests.put(tika_url, data=f)
        if response.status_code == 200:
            print("Apache Tika successfully analyzed the file.")
            print("Response from Apache Tika:")
            print(response.text)
        else:
            print("Error analyzing the file:")
            print(f"Status code: {response.status_code}")
            print(f"Response from Apache Tika: {response.text}")
    except Exception as e:
        print(f"An error occurred: {e}")

if __name__ == "__main__":
    file_path = "test.txt"  # Replace with the path to your file
    tika_url = "http://localhost:9998/tika"
    verify_tika(file_path, tika_url)
```
Instructions to run the script:
### Prerequisites
* Python 3.x must be installed on your system
* `requests` library must be installed (you can install it using pip: `pip install requests`)
* Apache Tika Docker container must be running (use `docker run -p 9998:9998 apache/tika` command)
* Replace `"test.txt"` with the path to the file you want to send to Apache Tika
### Running the Script
1. Save the script as `verify_tika.py` (e.g., using a text editor like Notepad or Sublime Text)
2. Open a terminal or command prompt
3. Navigate to the directory where you saved the script (using the `cd` command)
4. Run the script using the following command: `python verify_tika.py`
5. The script will output a message indicating whether Apache Tika is working correctly
Note: If you encounter any issues, ensure that the Apache Tika container is running correctly and that the file is being sent to the correct URL.
### Conclusion
By following these steps, you can verify that Apache Tika is working correctly in a Docker environment. You can test the setup by sending a file for analysis, verifying that the server is running with a GET request, or using a script to automate the process.
Troubleshooting
--------------
* Make sure the Apache Tika service is running and accessible from the Open WebUI instance.
* Check the Docker logs for any errors or issues related to the Apache Tika service.
* Verify that the content extraction engine URL is correctly configured in Open WebUI.
Benefits of Integration
----------------------
Integrating Apache Tika with Open WebUI provides several benefits, including:
* **Improved Metadata Extraction**: Apache Tika's advanced metadata extraction capabilities can help you extract accurate and relevant data from your files.
* **Support for Multiple File Formats**: Apache Tika supports a wide range of file formats, making it an ideal solution for organizations that work with diverse file types.
* **Enhanced Content Analysis**: Apache Tika's advanced content analysis capabilities can help you extract valuable insights from your files.
Conclusion
----------
Integrating Apache Tika with Open WebUI is a straightforward process that can improve the metadata extraction capabilities of your Open WebUI instance. By following the steps outlined in this documentation, you can easily set up Apache Tika as the content extraction engine for Open WebUI.


@ -3,7 +3,11 @@ sidebar_position: 6
title: "🎨 Image Generation"
---
# Image Generation
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
:::
# 🎨 Image Generation
Open WebUI supports image generation through three backends: **AUTOMATIC1111**, **ComfyUI**, and **OpenAI DALL·E**. This guide will help you set up and use any of these options.


@ -1,9 +1,13 @@
---
sidebar_position: 30
title: "🔒 Redis Websocket Support"
title: "🔗 Redis Websocket Support"
---
# 🔒 Redis Websocket Support
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
:::
# 🔗 Redis Websocket Support
## Overview


@ -3,7 +3,11 @@ sidebar_position: 5
title: "🌐 Web Search"
---
## Overview
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
:::
## 🌐 Web Search
This guide provides instructions on how to set up web search capabilities in Open WebUI using various search engines.