diff --git a/docs/enterprise.mdx b/docs/enterprise.mdx
index 9276c6d..9462317 100644
--- a/docs/enterprise.mdx
+++ b/docs/enterprise.mdx
@@ -28,9 +28,37 @@ In the rapidly advancing AI landscape, staying ahead isn't just a competitive ad
## **Let's Talk**
+
+📧 **sales@openwebui.com** – Send us your **team size**, and let's explore how we can work together! Support available in **English & Korean (한국어), with more languages coming soon!**
+
Transform the way your organization leverages AI. **Contact our enterprise team today** for customized pricing, expert consulting, and tailored deployment strategies.
-📧 **sales@openwebui.com** – Support available in **English & Korean (한국어), with more languages coming soon!**
+
+
+:::tip
+
+We are **currently focused on partnering with teams of 100+ seats** to provide the **dedicated attention, expertise, and tailored solutions** needed for guaranteed success.
+
+If your team is **close to 50 seats** and you're looking for advanced features, **reach out** – we may still be able to help.
+
+For **smaller teams**, we're launching a **self-serve licensing option** by the **end of Q2**, bringing **customization and select enterprise features** to companies of all sizes. **Stay tuned!**
+
+:::
+
+
+:::info
+
+## ⚠️ Partnership Guidelines for Agencies
+
+We **carefully select** our partners to maintain the **highest standards** and provide **the best experience** to our community.
+
+If you are a **consulting agency**, **AI services provider**, or **reseller**, please **do not** contact our enterprise sales directly. Instead, **fill out our partnership interest form**:
+
+👉 **[Apply Here](https://forms.gle/SemdgxjFXpHmdCby6)**
+
+We evaluate all applications to ensure alignment with our **mission, vision, and values**, selecting only those partners best suited for our ecosystem.
+:::
+
---
@@ -82,11 +110,6 @@ Open WebUI's enterprise solutions provide mission-critical businesses with **a
✅ **Operational AI Consulting** – On-demand **architecture, optimization, and deployment consulting**.
✅ **Strategic AI Roadmap Planning** – Work with our experts to **define your AI transformation strategy**.
-### 🔄 **Lifecycle & Ecosystem Benefits**
-✅ **Multi-Tenant & Enterprise-Scale Deployments** – Support for **large-scale organizations**, distributed teams, and divisions.
-✅ **Access to Private Beta & Enterprise-Only Features** – Stay ahead with access to upcoming, high-priority capabilities.
-✅ **Software Bill of Materials (SBOM) & Security Transparency** – Enterprise customers receive **full security reports and compliance packages**.
-
---
## **Keep Open WebUI Thriving: Support Continuous Innovation**
diff --git a/docs/features/chat-features/chat-params.md b/docs/features/chat-features/chat-params.md
index f419c39..f5605eb 100644
--- a/docs/features/chat-features/chat-params.md
+++ b/docs/features/chat-features/chat-params.md
@@ -10,8 +10,8 @@ Within Open WebUI, there are three levels to setting a **System Prompt** and **A
| **Level** | **Definition** | **Modification Permissions** | **Override Capabilities** |
| --- | --- | --- | --- |
| **Per-Chat** | System prompt and advanced parameters for a specific chat instance | Users can modify, but cannot override model-specific settings | Restricted from overriding model-specific settings |
-| **Per-Model** | Default system prompt and advanced parameters for a specific model | Administrators can set, Users cannot modify | Admin-specific settings take precedence, User settings can be overridden |
| **Per-Account** | Default system prompt and advanced parameters for a specific user account | Users can set, but may be overridden by model-specific settings | User settings can be overridden by model-specific settings |
+| **Per-Model** | Default system prompt and advanced parameters for a specific model | Administrators can set, Users cannot modify | Admin-specific settings take precedence, User settings can be overridden |
### 1. **Per-chat basis:**
@@ -26,7 +26,20 @@ Suppose a user wants to set a custom system prompt for a specific conversation.
:::
-### 2. **Per-model basis:**
+### 2. **Per-account basis:**
+
+- **Description**: The per-account basis setting refers to the default system prompt and advanced parameters configured for a specific user account. Any user-specific changes can serve as a fallback in situations where lower-level settings aren't defined.
+- **How to set**: Users can set their own system prompt and advanced parameters for their account within the **General** section of the **Settings** menu in Open WebUI.
+- **Override capabilities**: Users have the ability to set their own system prompt on their account, but they must be aware that such parameters can still be overridden if an administrator has already set the **System Prompt** or specific **Advanced Parameters** on a per-model basis for the particular model being used.
+
+
+Example Use Case
+:::tip **Per-account basis**:
+Suppose a user wants to set their own system prompt for their account. They can do so by accessing the **Settings** menu and modifying the **System Prompt** field.
+:::
+
+
+### 3. **Per-model basis:**
- **Description**: The per-model basis setting refers to the default system prompt and advanced parameters configured for a specific model. These settings are applicable to all chat instances using that model.
- **How to set**: Administrators can set the default system prompt and advanced parameters for a specific model within the **Models** section of the **Workspace** in Open WebUI.
@@ -41,18 +54,6 @@ Suppose an administrator wants to set a default system prompt for a specific mod
:::
-### 3. **Per-account basis:**
-
-- **Description**: The per-account basis setting refers to the default system prompt and advanced parameters configured for a specific user account. Any user-specific changes can serve as a fallback in situations where lower-level settings aren't defined.
-- **How to set**: Users can set their own system prompt and advanced parameters for their account within the **General** section of the **Settings** menu in Open WebUI.
-- **Override capabilities**: Users have the ability to set their own system prompt on their account, but they must be aware that such parameters can still be overridden if an administrator has already set the **System Prompt** or specific **Advanced Parameters** on a per-model basis for the particular model being used.
-
-
-Example Use Case
-:::tip **Per-account basis**:
-Suppose a user wants to set their own system prompt for their account. They can do so by accessing the **Settings** menu and modifying the **System Prompt** field.
-:::
-
## **Optimize System Prompt Settings for Maximum Flexibility**
diff --git a/docs/features/index.mdx b/docs/features/index.mdx
index 79ec2be..74fee00 100644
--- a/docs/features/index.mdx
+++ b/docs/features/index.mdx
@@ -108,7 +108,7 @@ import { TopBanners } from "@site/src/components/TopBanners";
- 🎨 **Flexible Text Input Options**: Switch between rich text input and legacy text area input for chat, catering to user preferences and providing a choice between advanced formatting and simpler text input.
-- 📋 Effortless Code Sharing : Streamline the sharing and collaboration process with convenient code copying options, including a floating copy button in code blocks and click-to-copy functionality from code spans, saving time and reducing frustration.
+- 📋 **Effortless Code Sharing**: Streamline the sharing and collaboration process with convenient code copying options, including a floating copy button in code blocks and click-to-copy functionality from code spans, saving time and reducing frustration.
- 🎨 **Interactive Artifacts**: Render web content and SVGs directly in the interface, supporting quick iterations and live changes for enhanced creativity and productivity.
@@ -196,7 +196,7 @@ import { TopBanners } from "@site/src/components/TopBanners";
- 👥 **'@' Model Integration**: By seamlessly switching to any accessible local or external model during conversations, users can harness the collective intelligence of multiple models in a single chat. This can be done by using the `@` command to specify the model by name within a chat.
-- 🏷️ Conversation Tagging : Effortlessly categorize and locate tagged chats for quick reference and streamlined data collection using our efficient 'tag:' query system, allowing you to manage, search, and organize your conversations without cluttering the interface.
+- 🏷️ **Conversation Tagging**: Effortlessly categorize and locate tagged chats for quick reference and streamlined data collection using our efficient 'tag:' query system, allowing you to manage, search, and organize your conversations without cluttering the interface.
- 🧠 **Auto-Tagging**: Conversations can optionally be automatically tagged for improved organization, mirroring the efficiency of auto-generated titles.
@@ -280,7 +280,7 @@ import { TopBanners } from "@site/src/components/TopBanners";
- 🎯 **Topic-Based Rankings**: Discover more accurate rankings with our experimental topic-based re-ranking system, which adjusts leaderboard standings based on tag similarity in feedback.
-- 📁 Unified and Collaborative Workspace : Access and manage all your model files, prompts, documents, tools, and functions in one convenient location, while also enabling multiple users to collaborate and contribute to models, knowledge, prompts, or tools, streamlining your workflow and enhancing teamwork.
+- 📁 **Unified and Collaborative Workspace**: Access and manage all your model files, prompts, documents, tools, and functions in one convenient location, while also enabling multiple users to collaborate and contribute to models, knowledge, prompts, or tools, streamlining your workflow and enhancing teamwork.
---
diff --git a/docs/features/plugin/tools/index.mdx b/docs/features/plugin/tools/index.mdx
index 3b63016..4087bf8 100644
--- a/docs/features/plugin/tools/index.mdx
+++ b/docs/features/plugin/tools/index.mdx
@@ -304,7 +304,7 @@ async def test_function(
"source": title,
}
],
- "source": {"name": "Title of the content"", "url": "http://link-to-citation"},
+ "source": {"name": "Title of the content", "url": "http://link-to-citation"},
},
}
)
diff --git a/docs/features/rag.md b/docs/features/rag.md
index d836469..a435c96 100644
--- a/docs/features/rag.md
+++ b/docs/features/rag.md
@@ -48,3 +48,18 @@ A variety of parsers extract content from local and remote documents. For more,
## Google Drive Integration
When paired with a Google Cloud project that has the Google Picker API and Google Drive API enabled, this feature allows users to access their Drive files directly from the chat interface and upload documents, slides, sheets, and more as context for your chat. It can be enabled in the `Admin Panel` > `Settings` > `Documents` menu. You must set the [`GOOGLE_DRIVE_API_KEY` and `GOOGLE_DRIVE_CLIENT_ID`](https://github.com/open-webui/docs/blob/main/docs/getting-started/env-configuration.md) environment variables to use it.
+
+### Detailed Instructions
+1. Create an OAuth 2.0 client and configure both the Authorized JavaScript origins and the Authorized redirect URI to the URL (including the port, if any) you use to access your Open-WebUI instance.
+1. Make a note of the Client ID associated with that OAuth client.
+1. Make sure that you enable both the Google Drive API and the Google Picker API for your project.
+1. Set your app (project) to Testing and add your Google Drive email to the user list.
+1. Set the permission scope to include everything those APIs have to offer. Because the app is in Testing mode, Google requires no verification for the app to access the data of the limited test users.
+1. Go to the Google Picker API page and click the create credentials button.
+1. Create an API key; under Application restrictions, choose Websites, then add your Open-WebUI instance's URL, matching the Authorized JavaScript origins and Authorized redirect URI settings from step 1.
+1. Set up API restrictions on the API key so that it only has access to the Google Drive API and the Google Picker API.
+1. Set the environment variable `GOOGLE_DRIVE_CLIENT_ID` to the Client ID of the OAuth client from step 2.
+1. Set the environment variable `GOOGLE_DRIVE_API_KEY` to the API key value set up in step 7 (NOT the OAuth client secret from step 2).
+1. Set `GOOGLE_REDIRECT_URI` to your Open-WebUI instance's URL (including the port, if any).
+1. Relaunch your Open-WebUI instance with those three environment variables.
+1. Finally, make sure Google Drive is enabled under `Admin Panel` > `Settings` > `Documents` > `Google Drive`.
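+
+As a sketch only, the three variables from the steps above might appear in a Docker Compose file like this (all values are placeholders you must replace with your own):
+
+```yaml
+services:
+  open-webui:
+    environment:
+      # Client ID of the OAuth client from step 2 (placeholder value)
+      - GOOGLE_DRIVE_CLIENT_ID=1234567890-abcdefg.apps.googleusercontent.com
+      # API key created in step 7 (placeholder value)
+      - GOOGLE_DRIVE_API_KEY=AIzaSy-example-picker-key
+      # Your Open-WebUI instance URL, including the port if any
+      - GOOGLE_REDIRECT_URI=https://chat.example.com:3000
+```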
diff --git a/docs/getting-started/advanced-topics/logging.md b/docs/getting-started/advanced-topics/logging.md
index 796d16e..ac6550c 100644
--- a/docs/getting-started/advanced-topics/logging.md
+++ b/docs/getting-started/advanced-topics/logging.md
@@ -45,6 +45,12 @@ For example, to set `DEBUG` logging level as a Docker parameter use:
--env GLOBAL_LOG_LEVEL="DEBUG"
```
+Or, for Docker Compose, put this in the `environment` section of the `docker-compose.yml` file (note the absence of quotation marks):
+```
+environment:
+ - GLOBAL_LOG_LEVEL=DEBUG
+```
+
### App/Backend ###
Some level of granularity is possible using any of the following combination of variables. Note that `basicConfig` `force` isn't presently used so these statements may only affect Open-WebUI logging and not 3rd party modules.
diff --git a/docs/getting-started/env-configuration.md b/docs/getting-started/env-configuration.md
index 0b3fdcc..b808316 100644
--- a/docs/getting-started/env-configuration.md
+++ b/docs/getting-started/env-configuration.md
@@ -242,10 +242,15 @@ allowing the client to wait indefinitely.
:::
+#### `AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST`
+
+- Type: `int`
+- Description: Sets the timeout in seconds for fetching the model list. This can be useful in cases where network latency requires a longer timeout duration to successfully retrieve the model list.
+
#### `AIOHTTP_CLIENT_TIMEOUT_OPENAI_MODEL_LIST`
- Type: `int`
-- Description: Sets the timeout in seconds for fetching the OpenAI model list. This can be useful in cases where network latency requires a longer timeout duration to successfully retrieve the model list.
+- Description: Sets the timeout in seconds for fetching the model list. This can be useful in cases where network latency requires a longer timeout duration to successfully retrieve the model list.
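+
+For example, to allow up to 30 seconds (the value is illustrative) for a slow model-list fetch, the timeout can be set as a Docker parameter like any other environment variable:
+
+```
+--env AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=30
+```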
### Directories
@@ -575,7 +580,7 @@ The value of `API_KEY_ALLOWED_ENDPOINTS` should be a comma-separated list of end
- type: `bool`
- Default: `False`
-- Description: Forwards user information (name, id, email, and role) as X-headers to OpenAI API.
+- Description: Forwards user information (name, id, email, and role) as X-headers to OpenAI API and Ollama API.
If enabled, the following headers are forwarded:
- `X-OpenWebUI-User-Name`
- `X-OpenWebUI-User-Id`
diff --git a/docs/pipelines/tutorials.md b/docs/pipelines/tutorials.md
index b6d6e05..7144eb7 100644
--- a/docs/pipelines/tutorials.md
+++ b/docs/pipelines/tutorials.md
@@ -23,3 +23,7 @@ with us, as we'd love to feature it here!
[Demo and Code Review for Text-To-SQL with Open-WebUI](https://www.youtube.com/watch?v=iLVyEgxGbg4) (YouTube video by Jordan Nanos)
- A hands-on demonstration and code review on utilizing text-to-SQL tools powered by Open WebUI.
+
+[Deploying custom Document RAG pipeline with Open-WebUI](https://github.com/Sebulba46/document-RAG-pipeline) (GitHub guide by Sebulba46)
+
+- A step-by-step guide to deploying Open-WebUI and pipelines containers and creating your own document RAG with a local LLM API.
diff --git a/docs/team.mdx b/docs/team.mdx
index 52cc4b2..8ab76e5 100644
--- a/docs/team.mdx
+++ b/docs/team.mdx
@@ -7,9 +7,9 @@ import { TopBanners } from "@site/src/components/TopBanners";
-## 🌟 Meet Our Team!
+## 🌟 Meet Our Development Team!
-Our team is led by the dedicated creator and founder, [Tim J. Baek](https://github.com/tjbck). Although Tim is currently the only official member of the team, we are incredibly fortunate to have a community of **[amazing contributors](https://github.com/open-webui/open-webui/graphs/contributors)** who find this project valuable and actively participate in its continued success.
+Our team is led by the dedicated creator and founder, [Tim J. Baek](https://github.com/tjbck). Although Tim is currently the only official full-time member of the development team, we are incredibly fortunate to have a community of **[amazing contributors](https://github.com/open-webui/open-webui/graphs/contributors)** who find this project valuable and actively participate in its continued success.
### 🤝 Our Contributors
@@ -20,11 +20,18 @@ Our team is led by the dedicated creator and founder, [Tim J. Baek](https://gith
/>
-### Important Note:
+## 🏛️ Governance
-To keep things smooth and organized, please do not contact or `@` mention anyone other than the official maintainer. If you have any questions or need assistance, `@tjbck` is your go-to person. You can also reach out via our official email, hello@openwebui.com. 📨
+Open WebUI is centrally managed and operated by Open WebUI, Inc. Our governance model is straightforward and intentional: we do not operate on [a committee-based governance system or a community-driven voting process](https://www.reddit.com/r/OpenWebUI/comments/1ijkh6m/comment/mbf0yhm/). Strategic and operational decisions are led openly and transparently by our founder, Tim J. Baek, ensuring a clear, unified, long-term vision.
-Your understanding and cooperation are appreciated! 💪
+Our project is specifically designed and structured to remain sustainable and independent for **decades** to come, thanks largely to an intentional focus on remaining extremely lean, strategic, and capital-efficient. We aren't pursuing short-term milestones or temporary trends; we're carefully building something lasting and meaningful.
+
+Beyond our open-source contributors, Open WebUI, Inc. has an incredible global team working behind the scenes across multiple domains, including technology, operations, strategy, finance, legal, marketing, communications, partnerships, and community management. While Tim leads the vision, execution is supported by a growing network of talented individuals helping to ensure the long-term success of the project. Our team spans various expertise areas, ensuring that Open WebUI, Inc. thrives not just in software development but also in operational excellence, financial sustainability, legal compliance, brand awareness, and effective collaboration with partners.
+
+We greatly appreciate enthusiasm and thoughtful suggestions from our community. At the same time, **we're not looking for unsolicited governance recommendations or guidance on how to operate**; we know exactly how we want to run our project (just as, for example, you wouldn't tell OpenAI how to run theirs). Open WebUI maintains strong, opinionated leadership because that's precisely what we believe is necessary to build something truly great, fast-moving, and purposeful.
+
+If our leadership and governance style align with your views, we're thrilled to have your continued support and contributions. However, if you fundamentally disagree with our direction, **one of the key benefits of our open-source license is the freedom to fork the project and implement your preferred approach.**
+
+Thank you for respecting our perspective and for your continued support and contributions. We're excited to keep building with the community around the vision we've established together!
-Let's keep building something awesome together! 🚀
diff --git a/docs/tutorials/https-haproxy.md b/docs/tutorials/https-haproxy.md
new file mode 100644
index 0000000..d32c45f
--- /dev/null
+++ b/docs/tutorials/https-haproxy.md
@@ -0,0 +1,169 @@
+---
+sidebar_position: 201
+title: "🔒 HTTPS using HAProxy"
+---
+
+:::warning
+This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
+:::
+
+# HAProxy Configuration for Open WebUI
+
+HAProxy (High Availability Proxy) is a specialized load-balancing and reverse proxy solution that is highly configurable and designed to handle large numbers of connections with a relatively low resource footprint. For more information, please see: https://www.haproxy.org/
+
+## Install HAProxy and Let's Encrypt
+
+First, install HAProxy and Let's Encrypt's certbot:
+### Redhat derivatives
+
+```
+sudo dnf install haproxy certbot openssl -y
+```
+
+### Debian derivatives
+
+```
+sudo apt install haproxy certbot openssl -y
+```
+
+## HAProxy Configuration Basics
+
+HAProxy's configuration is by default stored in ```/etc/haproxy/haproxy.cfg```. This file contains all the configuration directives that determine how HAProxy will operate.
+
+The base configuration for HAProxy to work with Open WebUI is pretty simple.
+
+```
+#---------------------------------------------------------------------
+# Global settings
+#---------------------------------------------------------------------
+global
+ # to have these messages end up in /var/log/haproxy.log you will
+ # need to:
+ #
+ # 1) configure syslog to accept network log events. This is done
+ # by adding the '-r' option to the SYSLOGD_OPTIONS in
+ # /etc/sysconfig/syslog
+ #
+ # 2) configure local2 events to go to the /var/log/haproxy.log
+ # file. A line like the following can be added to
+ # /etc/sysconfig/syslog
+ #
+ # local2.* /var/log/haproxy.log
+ #
+ log 127.0.0.1 local2
+
+ chroot /var/lib/haproxy
+ pidfile /var/run/haproxy.pid
+ maxconn 4000
+ user haproxy
+ group haproxy
+ daemon
+
+ #adjust the dh-param if too low
+ tune.ssl.default-dh-param 2048
+#---------------------------------------------------------------------
+# common defaults that all the 'listen' and 'backend' sections will
+# use if not designated in their block
+#---------------------------------------------------------------------
+defaults
+ mode http
+ log global
+ option httplog
+ option dontlognull
+ option http-server-close
+ option forwardfor #except 127.0.0.0/8
+ option redispatch
+ retries 3
+ timeout http-request 300s
+ timeout queue 2m
+ timeout connect 120s
+ timeout client 10m
+ timeout server 10m
+ timeout http-keep-alive 120s
+ timeout check 10s
+ maxconn 3000
+
+#http
+frontend web
+ #Non-SSL
+ bind 0.0.0.0:80
+ #SSL/TLS
+ bind 0.0.0.0:443 ssl crt /path/to/ssl/folder/
+
+ #Let's Encrypt SSL
+ acl letsencrypt-acl path_beg /.well-known/acme-challenge/
+ use_backend letsencrypt-backend if letsencrypt-acl
+
+ #Subdomain method
+ acl chat-acl hdr(host) -i subdomain.domain.tld
+ #Path Method
+ acl chat-acl path_beg /owui/
+ use_backend owui_chat if chat-acl
+
+#Pass SSL Requests to Lets Encrypt
+backend letsencrypt-backend
+ server letsencrypt 127.0.0.1:8688
+
+#OWUI Chat
+backend owui_chat
+ # add X-FORWARDED-FOR
+ option forwardfor
+ # add X-CLIENT-IP
+ http-request add-header X-CLIENT-IP %[src]
+ http-request set-header X-Forwarded-Proto https if { ssl_fc }
+ server chat :3000
+```
+
+You will see that we have ACL records (routers) for both Open WebUI and Let's Encrypt. To use WebSocket with OWUI, you need to have SSL/TLS configured, and the easiest way to do that is to use Let's Encrypt.
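+
+Since WebSocket connections are long-lived, you may also want a `timeout tunnel` setting in the `defaults` section (the value below is illustrative) so that upgraded connections are governed by a dedicated timeout rather than the shorter client/server timeouts:
+
+```
+defaults
+    # applies to upgraded (e.g. WebSocket) connections
+    timeout tunnel 1h
+```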
+
+You can use either the subdomain method or the path method for routing traffic to Open WebUI. The subdomain method requires a dedicated subdomain (e.g., chat.yourdomain.com), while the path method allows you to access Open WebUI through a specific path on your domain (e.g., yourdomain.com/owui/). Choose the method that best suits your needs and update the configuration accordingly.
+
+:::info
+You will need to expose ports 80 and 443 to your HAProxy server. These ports are required for Let's Encrypt to validate your domain and for HTTPS traffic. You will also need to ensure your DNS records are properly configured to point to your HAProxy server. If you are running HAProxy at home, you will need to use port forwarding in your router to forward ports 80 and 443 to your HAProxy server.
+:::
+
+## Issuing SSL Certificates with Let's Encrypt
+
+Before starting HAProxy, you will want to generate a self-signed certificate to use as a placeholder until Let's Encrypt issues a proper one. Here's how to generate a self-signed certificate:
+
+```
+openssl req -x509 -newkey rsa:2048 -keyout /tmp/haproxy.key -out /tmp/haproxy.crt -days 3650 -nodes -subj "/CN=localhost"
+```
+
+Then combine the key and certificate into a PEM file that HAProxy can use:
+
+```
+cat /tmp/haproxy.crt /tmp/haproxy.key > /etc/haproxy/certs/haproxy.pem
+```
+
+:::info
+Make sure you update the HAProxy configuration based on your needs and configuration.
+:::
+
+Once you have your HAProxy configuration set up, you can use certbot to obtain and manage your SSL certificates. Certbot will handle the validation process with Let's Encrypt and automatically update your certificates when they are close to expiring (assuming you use the certbot auto-renewal service).
+
+You can validate the HAProxy configuration by running `haproxy -c -f /etc/haproxy/haproxy.cfg`. If there are no errors, you can start HAProxy with `systemctl start haproxy` and verify it's running with `systemctl status haproxy`.
+
+To ensure HAProxy starts with the system, run `systemctl enable haproxy`.
+
+When you have HAProxy configured, you can use Let's Encrypt to issue your valid SSL certificate.
+First, you will need to register with Let's Encrypt. You should only need to do this one time:
+
+`certbot register --agree-tos --email your@email.com --non-interactive`
+
+Then you can request your certificate:
+
+```
+certbot certonly -n --standalone --preferred-challenges http --http-01-port 8688 -d yourdomain.com
+```
+
+Once the certificate is issued, you will need to merge the certificate and private key files into a single PEM file that HAProxy can use.
+
+```
+cat /etc/letsencrypt/live/{domain}/fullchain.pem /etc/letsencrypt/live/{domain}/privkey.pem > /etc/haproxy/certs/{domain}.pem
+chmod 600 /etc/haproxy/certs/{domain}.pem
+chown haproxy:haproxy /etc/haproxy/certs/{domain}.pem
+```
+You can then restart HAProxy to apply the new certificate:
+`systemctl restart haproxy`
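+
+Certbot can also rebuild the combined PEM automatically on each renewal via a deploy hook. The following is a sketch only (the file name and domain handling are illustrative, not official tooling); place it in `/etc/letsencrypt/renewal-hooks/deploy/` and make it executable:
+
+```
+#!/bin/sh
+# Sketch of a certbot deploy hook: rebuild the HAProxy PEM and reload.
+# certbot exports RENEWED_LINEAGE, e.g. /etc/letsencrypt/live/yourdomain.com
+set -e
+DOMAIN="$(basename "$RENEWED_LINEAGE")"
+cat "$RENEWED_LINEAGE/fullchain.pem" "$RENEWED_LINEAGE/privkey.pem" \
+    > "/etc/haproxy/certs/${DOMAIN}.pem"
+chmod 600 "/etc/haproxy/certs/${DOMAIN}.pem"
+chown haproxy:haproxy "/etc/haproxy/certs/${DOMAIN}.pem"
+systemctl reload haproxy
+```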
+
+## HAProxy Manager (Easy Deployment Option)
+
+If you would like something to manage your HAProxy configuration and Let's Encrypt SSLs automatically, I have written a simple Python script and created a Docker container that you can use to create and manage your HAProxy config and the Let's Encrypt certificate lifecycle.
+
+https://github.com/shadowdao/haproxy-manager
+
+:::warning
+Please do not expose port 8000 publicly if you use the script or container!
+:::
\ No newline at end of file
diff --git a/docs/tutorials/integrations/apachetika.md b/docs/tutorials/integrations/apachetika.md
index c54baa8..59ccdd6 100644
--- a/docs/tutorials/integrations/apachetika.md
+++ b/docs/tutorials/integrations/apachetika.md
@@ -52,7 +52,7 @@ Alternatively, you can run Apache Tika using the following Docker command:
```bash
docker run -d --name tika \
-p 9998:9998 \
- -restart unless-stopped \
+ --restart unless-stopped \
apache/tika:latest-full
```
diff --git a/docs/tutorials/integrations/continue-dev.md b/docs/tutorials/integrations/continue-dev.md
index 7f7afa0..6af7c1d 100644
--- a/docs/tutorials/integrations/continue-dev.md
+++ b/docs/tutorials/integrations/continue-dev.md
@@ -74,6 +74,14 @@ Make sure you pull the model into your ollama instance/s beforehand.
"useLegacyCompletionsEndpoint": false,
"apiBase": "http://YOUROPENWEBUI/ollama/v1",
"apiKey": "sk-YOUR-API-KEY"
+ },
+ {
+ "title": "Model ABC from pipeline",
+ "provider": "openai",
+ "model": "PIPELINE_MODEL_ID",
+ "useLegacyCompletionsEndpoint": false,
+ "apiBase": "http://YOUROPENWEBUI/api",
+ "apiKey": "sk-YOUR-API-KEY"
}
],
"customCommands": [
diff --git a/docs/tutorials/integrations/firefox-sidebar.md b/docs/tutorials/integrations/firefox-sidebar.md
index 9def27f..eaa136b 100644
--- a/docs/tutorials/integrations/firefox-sidebar.md
+++ b/docs/tutorials/integrations/firefox-sidebar.md
@@ -11,26 +11,6 @@ This tutorial is a community contribution and is not supported by the Open WebUI
# Integrating Open WebUI as a Local AI Chatbot Browser Assistant in Mozilla Firefox
-Table of Contents
-=================
-1. [Prerequisites](#prerequisites)
-2. [Enabling AI Chatbot in Firefox](#enabling-ai-chatbot-in-firefox)
-3. [Configuring about:config Settings](#configuring-aboutconfig-settings)
- * [browser.ml.chat.enabled](#browsermlchatenabled)
- * [browser.ml.chat.hideLocalhost](#browsermlchathidelocalhost)
- * [browser.ml.chat.prompts.#](#browsermlchatsprompts)
- * [browser.ml.chat.provider](#browsermlchatprovider)
-4. [URL Parameters for Open WebUI](#url-parameters-for-open-webui)
- * [Models and Model Selection](#models-and-model-selection)
- * [YouTube Transcription](#youtube-transcription)
- * [Web Search](#web-search)
- * [Tool Selection](#tool-selection)
- * [Call Overlay](#call-overlay)
- * [Initial Query Prompt](#initial-query-prompt)
- * [Temporary Chat Sessions](#temporary-chat-sessions)
-5. [Additional about:config Settings](#additional-aboutconfig-settings)
-6. [Accessing the AI Chatbot Sidebar](#accessing-the-ai-chatbot-sidebar)
-
## Prerequisites
Before integrating Open WebUI as an AI chatbot browser assistant in Mozilla Firefox, ensure you have:
diff --git a/docs/tutorials/maintenance/_category_.json b/docs/tutorials/maintenance/_category_.json
new file mode 100644
index 0000000..0f1c0c8
--- /dev/null
+++ b/docs/tutorials/maintenance/_category_.json
@@ -0,0 +1,7 @@
+{
+  "label": "🛠️ Maintenance",
+ "position": 5,
+ "link": {
+ "type": "generated-index"
+ }
+}
diff --git a/docs/tutorials/maintenance/backups.md b/docs/tutorials/maintenance/backups.md
new file mode 100644
index 0000000..ccc8ca7
--- /dev/null
+++ b/docs/tutorials/maintenance/backups.md
@@ -0,0 +1,391 @@
+---
+sidebar_position: 1000
+title: "💾 Backups"
+---
+
+:::warning
+This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
+:::
+
+# Backing Up Your Instance
+
+Nobody likes losing data!
+
+If you're self-hosting Open WebUI, you may wish to institute some kind of formal backup plan to ensure that you retain a second and third copy of parts of your configuration.
+
+This guide offers some basic recommendations for how users might go about doing that.
+
+It assumes that you have installed Open WebUI via Docker (or intend to do so).
+
+## Ensuring data persistence
+
+Firstly, before deploying your stack with Docker, ensure that your Docker Compose uses a persistent data store. If you're using the Docker Compose [from the GitHub repository](https://github.com/open-webui/open-webui/blob/main/docker-compose.yaml), that's already taken care of, but it's easy to cook up your own variations and forget to verify this.
+
+Docker containers are ephemeral and data must be persisted to ensure its survival on the host filesystem.
+
+## Using Docker volumes
+
+If you're using the Docker Compose from the project repository, you will be deploying Open WebUI using Docker volumes.
+
+For Ollama and Open WebUI, the mounts are:
+
+```yaml
+ollama:
+ volumes:
+ - ollama:/root/.ollama
+```
+
+```yaml
+open-webui:
+ volumes:
+ - open-webui:/app/backend/data
+```
+
+To find the actual bind path on the host, run:
+
+`docker volume inspect ollama`
+
+and
+
+`docker volume inspect open-webui`
+
+## Using direct host binds
+
+Some users deploy Open WebUI with direct (fixed) binds to the host filesystem, like this:
+
+```yaml
+services:
+ ollama:
+ container_name: ollama
+ image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}
+ volumes:
+ - /opt/ollama:/root/.ollama
+ open-webui:
+ container_name: open-webui
+ image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main}
+ volumes:
+ - /opt/open-webui:/app/backend/data
+```
+
+If this is how you've deployed your instance, you'll want to note these host paths (here, `/opt/ollama` and `/opt/open-webui`).
+
+## Scripting A Backup Job
+
+However your instance is provisioned, it's worth inspecting the app's data store on your server to understand what data you'll be backing up. You should see something like this:
+
+```
+βββ audit.log
+βββ cache/
+βββ uploads/
+βββ vector_db/
+βββ webui.db
+```
+
+## Files in persistent data store
+
+| File/Directory | Description |
+|---|---|
+| `audit.log` | Log file for auditing events. |
+| `cache/` | Directory for storing cached data. |
+| `uploads/` | Directory for storing user-uploaded files. |
+| `vector_db/` | Directory containing the ChromaDB vector database. |
+| `webui.db` | SQLite database for persistent storage of other instance data. |
+
+# File Level Backup Approaches
+
+The first way to back up the application data is to take a file-level approach, ensuring that the persistent Open WebUI data is copied somewhere safe.
+
+There's an almost infinite number of ways in which technical services can be backed up, but incremental sync tools remain popular favorites. The examples below use `rclone`, an rsync-like tool that talks to cloud storage providers directly.
+
+Users could target the entire `data` directory to back up all the instance data at once, or create more selective jobs targeting individual components, optionally giving the backup targets more descriptive names.
+
+A model backup job could look like this:
+
+```bash
+#!/bin/bash
+
+# Configuration
+SOURCE_DIR="."                     # Current directory (where the file structure resides)
+B2_BUCKET="b2:OpenWebUI-backups"   # Your rclone remote and Backblaze B2 bucket (remote:bucket)
+# Ensure rclone is configured with your B2 credentials (rclone config);
+# rclone addresses remotes by name, so no profile flag is needed
+
+# Define source and destination directories
+SOURCE_UPLOADS="$SOURCE_DIR/uploads"
+SOURCE_VECTORDB="$SOURCE_DIR/vector_db"
+SOURCE_WEBUI_DB="$SOURCE_DIR/webui.db"
+
+DEST_UPLOADS="$B2_BUCKET/user_uploads"
+DEST_CHROMADB="$B2_BUCKET/ChromaDB"
+DEST_MAIN_DB="$B2_BUCKET/main_database"
+
+# Exclude cache and audit.log (an array keeps the quoting intact)
+EXCLUDE_ARGS=(
+    --exclude "cache/**"
+    --exclude "audit.log"
+)
+
+# Function to perform rclone sync with error checking
+rclone_sync() {
+    local SOURCE="$1"
+    local DEST="$2"
+    echo "Syncing '$SOURCE' to '$DEST'..."
+    if ! rclone sync "$SOURCE" "$DEST" "${EXCLUDE_ARGS[@]}" --progress --transfers=32 --checkers=16; then
+        echo "Error: rclone sync failed for '$SOURCE' to '$DEST'"
+        exit 1
+    fi
+}
+
+# Perform rclone sync for each directory/file
+rclone_sync "$SOURCE_UPLOADS" "$DEST_UPLOADS"
+rclone_sync "$SOURCE_VECTORDB" "$DEST_CHROMADB"
+rclone_sync "$SOURCE_WEBUI_DB" "$DEST_MAIN_DB"
+
+echo "Backup completed successfully."
+exit 0
+```
+
+## Rsync Job With Container Interruption
+
+To maintain data integrity, it's generally recommended to run database backups against a cold filesystem. The default job above can be modified slightly to bring the stack down before running the backup script and bring it back up afterwards.
+
+The downside of this approach, of course, is that it entails instance downtime. Consider running the job at times you won't be using the instance, or taking "soft" dailies (against the running data) and more robust weeklies (against cold data).
+
+```bash
+#!/bin/bash
+
+# Configuration
+COMPOSE_FILE="docker-compose.yml"  # Path to your docker-compose.yml file
+B2_BUCKET="b2:OpenWebUI-backups"   # Your rclone remote and Backblaze B2 bucket (remote:bucket)
+SOURCE_DIR="."                     # Current directory (where the file structure resides)
+# Ensure rclone is configured with your B2 credentials (rclone config)
+
+# Define source and destination directories
+SOURCE_UPLOADS="$SOURCE_DIR/uploads"
+SOURCE_VECTORDB="$SOURCE_DIR/vector_db"
+SOURCE_WEBUI_DB="$SOURCE_DIR/webui.db"
+
+DEST_UPLOADS="$B2_BUCKET/user_uploads"
+DEST_CHROMADB="$B2_BUCKET/ChromaDB"
+DEST_MAIN_DB="$B2_BUCKET/main_database"
+
+# Exclude cache and audit.log (an array keeps the quoting intact)
+EXCLUDE_ARGS=(
+    --exclude "cache/**"
+    --exclude "audit.log"
+)
+
+# Function to perform rclone sync with error checking
+rclone_sync() {
+    local SOURCE="$1"
+    local DEST="$2"
+    echo "Syncing '$SOURCE' to '$DEST'..."
+    if ! rclone sync "$SOURCE" "$DEST" "${EXCLUDE_ARGS[@]}" --progress --transfers=32 --checkers=16; then
+        echo "Error: rclone sync failed for '$SOURCE' to '$DEST'"
+        exit 1
+    fi
+}
+
+# 1. Stop the Docker Compose environment
+echo "Stopping Docker Compose environment..."
+docker-compose -f "$COMPOSE_FILE" down
+
+# 2. Ensure the stack is restarted even if a sync fails mid-run
+trap 'echo "Starting Docker Compose environment..."; docker-compose -f "$COMPOSE_FILE" up -d' EXIT
+
+# 3. Perform the backup
+echo "Starting backup..."
+rclone_sync "$SOURCE_UPLOADS" "$DEST_UPLOADS"
+rclone_sync "$SOURCE_VECTORDB" "$DEST_CHROMADB"
+rclone_sync "$SOURCE_WEBUI_DB" "$DEST_MAIN_DB"
+
+echo "Backup completed successfully."
+exit 0
+```
+
+## Model Backup Script Using SQLite & ChromaDB Backup Functions To B2 Remote
+
+```bash
+#!/bin/bash
+#
+# Backup script to back up ChromaDB and SQLite to Backblaze B2 bucket
+# openwebuiweeklies, maintaining 3 weekly snapshots.
+# Snapshots are independent and fully restorable.
+# Uses ChromaDB and SQLite native backup mechanisms.
+# Excludes audit.log, cache, and uploads directories.
+#
+# Ensure rclone is installed and configured correctly.
+# Install rclone: https://rclone.org/install/
+# Configure rclone: https://rclone.org/b2/
+
+# Source directory (containing ChromaDB and SQLite data)
+SOURCE="/var/lib/open-webui/data"
+
+# B2 bucket name and remote name
+B2_REMOTE="openwebuiweeklies"
+B2_BUCKET="b2:$B2_REMOTE"
+
+# Timestamp for the backup directory
+TIMESTAMP=$(date +%Y-%m-%d)
+
+# Backup directory name
+BACKUP_DIR="open-webui-backup-$TIMESTAMP"
+
+# Full path to the backup directory in the B2 bucket
+DESTINATION="$B2_BUCKET/$BACKUP_DIR"
+
+# Number of weekly snapshots to keep
+NUM_SNAPSHOTS=3
+
+# Exclude filters (applied *after* database backups)
+EXCLUDE_FILTERS="--exclude audit.log --exclude cache/** --exclude uploads/** --exclude vector_db/**"
+
+# ChromaDB Backup Settings (Adjust as needed)
+CHROMADB_DATA_DIR="$SOURCE/vector_db" # Path to ChromaDB data directory
+CHROMADB_BACKUP_FILE="$SOURCE/chromadb_backup.tar.gz" # Archive file for ChromaDB backup
+
+# SQLite Backup Settings (Adjust as needed)
+SQLITE_DB_FILE="$SOURCE/webui.db" # Path to the SQLite database file
+SQLITE_BACKUP_FILE="$SOURCE/webui.db.backup" # Temporary file for SQLite backup
+
+# Function to backup ChromaDB
+backup_chromadb() {
+ echo "Backing up ChromaDB..."
+
+ # Create a tar archive of the vector_db directory
+ tar -czvf "$CHROMADB_BACKUP_FILE" -C "$SOURCE" vector_db
+
+ echo "ChromaDB backup complete."
+}
+
+# Function to backup SQLite
+backup_sqlite() {
+    echo "Backing up SQLite database..."
+    # Backup the SQLite database using the .backup command; the backup
+    # file is written directly into $SOURCE, so no move is needed
+    sqlite3 "$SQLITE_DB_FILE" ".backup '$SQLITE_BACKUP_FILE'"
+
+    echo "SQLite backup complete."
+}
+
+# Perform database backups
+backup_chromadb
+backup_sqlite
+
+# Perform the backup with exclusions
+rclone copy "$SOURCE" "$DESTINATION" $EXCLUDE_FILTERS --progress
+
+# Remove old backups, keeping the most recent NUM_SNAPSHOTS
+# (find cannot see into the remote bucket, so list it with rclone)
+rclone lsf "$B2_BUCKET" --dirs-only | grep '^open-webui-backup-' | sort -r | tail -n +$((NUM_SNAPSHOTS + 1)) | while read -r dir; do
+    rclone purge "$B2_BUCKET/$dir"
+done
+
+echo "Backup completed to $DESTINATION"
+```
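However you produce it, it's worth confirming that a copy of `webui.db` is a valid, internally consistent SQLite database before trusting it. Here's a minimal self-contained sketch: it builds a throwaway database rather than touching a real backup, so in practice you would point the `PRAGMA integrity_check` at your actual backup file.

```bash
#!/bin/bash
# Create a throwaway SQLite database to stand in for a webui.db backup
DB=$(mktemp)
sqlite3 "$DB" "CREATE TABLE demo(x INTEGER); INSERT INTO demo VALUES (1);"

# PRAGMA integrity_check prints "ok" when the database file is healthy
RESULT=$(sqlite3 "$DB" "PRAGMA integrity_check;")
echo "$RESULT"   # ok

rm -f "$DB"
```

Any output other than `ok` means the backup copy should not be relied upon.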
+
+---
+
+## Point In Time Snapshots
+
+In addition to taking backups, users may also wish to create point-in-time snapshots, which could be stored locally (on the server), remotely, or both.
+
+```bash
+#!/bin/bash
+
+# Configuration
+SOURCE_DIR="." # Directory to snapshot (current directory)
+SNAPSHOT_DIR="/snapshots" # Directory to store snapshots
+TIMESTAMP=$(date +%Y%m%d%H%M%S) # Generate timestamp
+
+# Create the snapshot directory if it doesn't exist
+mkdir -p "$SNAPSHOT_DIR"
+
+# Create the snapshot name
+SNAPSHOT_NAME="snapshot_$TIMESTAMP"
+SNAPSHOT_PATH="$SNAPSHOT_DIR/$SNAPSHOT_NAME"
+
+# Use the most recent existing snapshot (if any) as the hard-link base,
+# so unchanged files are hard-linked rather than copied again; on the
+# first run there is no previous snapshot, so --link-dest is skipped
+LATEST=$(ls -t "$SNAPSHOT_DIR" 2>/dev/null | head -n 1)
+LINK_DEST=()
+if [ -n "$LATEST" ]; then
+    LINK_DEST=(--link-dest="$SNAPSHOT_DIR/$LATEST")
+fi
+
+# Perform the rsync snapshot and check that it succeeded
+echo "Creating snapshot: $SNAPSHOT_PATH"
+if rsync -av --delete "${LINK_DEST[@]}" "$SOURCE_DIR/" "$SNAPSHOT_PATH"; then
+    echo "Snapshot created successfully."
+else
+    echo "Error: Snapshot creation failed."
+    exit 1
+fi
+
+exit 0
+```
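Snapshots accumulate indefinitely, so you'll eventually want to prune old ones. The following sketch demonstrates the pruning logic against a throwaway directory of fake snapshots; adapt `KEEP` and the snapshot directory to your own setup before pointing it at real data.

```bash
#!/bin/bash
# Keep only the KEEP most recent snapshot_* directories, deleting the rest.
KEEP=3
DEMO_DIR=$(mktemp -d)

# Create five fake snapshots; zero-padded timestamps sort lexicographically
for ts in 20240101 20240102 20240103 20240104 20240105; do
    mkdir "$DEMO_DIR/snapshot_$ts"
done

# Sort newest first and delete everything after the first KEEP entries
ls -1d "$DEMO_DIR"/snapshot_* | sort -r | tail -n +$((KEEP + 1)) | while read -r old; do
    rm -rf "$old"
done

REMAINING=$(ls -1 "$DEMO_DIR" | wc -l | tr -d ' ')
echo "$REMAINING snapshots remain"   # 3 snapshots remain
rm -rf "$DEMO_DIR"
```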
+
+## Crontab For Scheduling
+
+Once you've added your backup script and provisioned your backup storage, you'll want to test it to make sure that it runs as expected. Logging is highly advisable.
+
+Schedule your script(s) with cron according to your desired run frequency.
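For example, assuming your backup script lives at `/usr/local/bin/openwebui-backup.sh` (a placeholder path), a crontab entry for a daily run at 02:30 with logging might look like this:

```
# m h dom mon dow  command
30 2 * * * /usr/local/bin/openwebui-backup.sh >> /var/log/openwebui-backup.log 2>&1
```

Edit your crontab with `crontab -e`; the redirection appends both stdout and stderr to the log file so failed runs can be diagnosed later.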
+
+# Commercial Utilities
+
+In addition to scripting your own backup jobs, you can find commercial offerings which generally work by installing agents on your server that abstract away the complexities of running backups. These are beyond the scope of this guide, but they provide convenient solutions.
+
+---
+
+# Host Level Backups
+
+Your Open WebUI instance might be provisioned on a host (physical or virtualised) which you control.
+
+Host-level backups involve creating snapshots or backups of the entire VM rather than of the running applications.
+
+Some may wish to leverage them as their primary or only protection, while others may wish to layer them in as additional data protection.
+
+# How Many Backups Do I Need?
+
+The number of backups you wish to keep depends on your personal level of risk tolerance. However, remember that it's best practice to *not* consider the application itself to be a backup copy (even if it lives in the cloud!). That means that if you've provisioned your instance on a VPS, it's still a reasonable recommendation to keep two (independent) backup copies.
+
+An example backup plan that would cover the needs of many home users:
+
+## Model backup plan 1 (primary + 2 copies)
+
+| Frequency | Target | Technology | Description |
+|---|---|---|---|
+| Daily Incremental | Cloud Storage (S3/B2) | rsync | Daily incremental backup pushed to a cloud storage bucket (S3 or B2). |
+| Weekly Incremental | On-site Storage (Home NAS) | rsync | Weekly incremental backup pulled from the server to on-site storage (e.g., a home NAS). |
+
+## Model backup plan 2 (primary + 3 copies)
+
+This backup plan is a little more complicated but also more comprehensive: it involves daily pushes to two cloud storage providers for additional redundancy.
+
+| Frequency | Target | Technology | Description |
+|---|---|---|---|
+| Daily Incremental | Cloud Storage (S3) | rsync | Daily incremental backup pushed to an S3 cloud storage bucket. |
+| Daily Incremental | Cloud Storage (B2) | rsync | Daily incremental backup pushed to a Backblaze B2 cloud storage bucket. |
+| Weekly Incremental | On-site Storage (Home NAS) | rsync | Weekly incremental backup pulled from the server to on-site storage (e.g., a home NAS). |
+
+# Additional Topics
+
+In the interest of keeping this guide reasonably concise, the following subjects were omitted, but they may be worth your consideration depending upon how much time you have to dedicate to setting up and maintaining a data protection plan for your instance:
+
+| Topic | Description |
+|---|---|
+| SQLite Built-in Backup | Consider using SQLite's `.backup` command for a consistent database backup solution. |
+| Encryption | Modify backup scripts to incorporate encryption at rest for enhanced security. |
+| Disaster Recovery and Testing | Develop a disaster recovery plan and regularly test the backup and restore process. |
+| Alternative Backup Tools | Explore other command-line backup tools like `borgbackup` or `restic` for advanced features. |
+| Email Notifications and Webhooks | Implement email notifications or webhooks to monitor backup success or failure. |
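As a sketch of the encryption item above (assuming OpenSSL is available; the passphrase and file names are placeholders), a backup file can be encrypted at rest with a passphrase-derived key, and the round trip verified, like this:

```bash
#!/bin/bash
# Encrypt a file with AES-256 (key derived from a passphrase via PBKDF2),
# then decrypt it again to confirm the round trip is lossless.
PLAIN=$(mktemp)
echo "backup contents" > "$PLAIN"

openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:change-me -in "$PLAIN" -out "$PLAIN.enc"
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:change-me -in "$PLAIN.enc" -out "$PLAIN.dec"

RESTORED=$(cat "$PLAIN.dec")
echo "$RESTORED"   # backup contents
rm -f "$PLAIN" "$PLAIN.enc" "$PLAIN.dec"
```

In a real job you would pipe the backup archive through the same `openssl enc` command and store only the `.enc` file remotely, keeping the passphrase somewhere safe and separate.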
\ No newline at end of file
diff --git a/docs/tutorials/speech-to-text/_category_.json b/docs/tutorials/speech-to-text/_category_.json
new file mode 100644
index 0000000..38926e7
--- /dev/null
+++ b/docs/tutorials/speech-to-text/_category_.json
@@ -0,0 +1,7 @@
+{
+  "label": "🎤 Speech To Text",
+ "position": 5,
+ "link": {
+ "type": "generated-index"
+ }
+}
diff --git a/docs/tutorials/speech-to-text/env-variables.md b/docs/tutorials/speech-to-text/env-variables.md
new file mode 100644
index 0000000..e20be96
--- /dev/null
+++ b/docs/tutorials/speech-to-text/env-variables.md
@@ -0,0 +1,25 @@
+---
+sidebar_position: 2
+title: "Environment Variables"
+---
+
+
+# Environment Variables List
+
+
+:::info
+For a complete list of all Open WebUI environment variables, see the [Environment Variable Configuration](/getting-started/env-configuration) page.
+:::
+
+The following is a summary of the environment variables for speech to text (STT).
+
+# Environment Variables For Speech To Text (STT)
+
+| Variable | Description |
+|----------|-------------|
+| `WHISPER_MODEL` | Sets the Whisper model to use for local Speech-to-Text |
+| `WHISPER_MODEL_DIR` | Specifies the directory to store Whisper model files |
+| `AUDIO_STT_ENGINE` | Specifies the Speech-to-Text engine to use (empty for local Whisper, or `openai`) |
+| `AUDIO_STT_MODEL` | Specifies the Speech-to-Text model for OpenAI-compatible endpoints |
+| `AUDIO_STT_OPENAI_API_BASE_URL` | Sets the OpenAI-compatible base URL for Speech-to-Text |
+| `AUDIO_STT_OPENAI_API_KEY` | Sets the OpenAI API key for Speech-to-Text |
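For example, here's a sketch of wiring these variables into a Docker Compose service; the model, URL, and key values are placeholders, and any OpenAI-compatible endpoint can be substituted:

```yaml
services:
  open-webui:
    environment:
      - AUDIO_STT_ENGINE=openai
      - AUDIO_STT_MODEL=whisper-1
      - AUDIO_STT_OPENAI_API_BASE_URL=https://api.openai.com/v1
      - AUDIO_STT_OPENAI_API_KEY=sk-your-key-here
```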
\ No newline at end of file
diff --git a/docs/tutorials/speech-to-text/stt-config.md b/docs/tutorials/speech-to-text/stt-config.md
new file mode 100644
index 0000000..73fe834
--- /dev/null
+++ b/docs/tutorials/speech-to-text/stt-config.md
@@ -0,0 +1,62 @@
+---
+sidebar_position: 1
+title: "🗨️ Configuration"
+---
+
+Open WebUI supports local (Whisper), browser-based (Web API), and remote (cloud) speech to text.
+
+
+
+
+
+## Cloud / Remote Speech To Text Providers
+
+The following cloud speech-to-text providers are currently supported. API keys can be configured as environment variables (OpenAI) or in the admin settings page (both providers).
+
+| Service | API Key Required |
+| ------- | ---------------- |
+| OpenAI | ✅ |
+| Deepgram | ✅ |
+
+The Web API option provides STT through your browser's built-in speech recognition engine and requires no API key.
+
+## Configuring Your STT Provider
+
+To configure a speech to text provider:
+
+- Navigate to the admin settings
+- Choose Audio
+- Provide an API key and choose a model from the dropdown
+
+
+
+## User-Level Settings
+
+In addition to the instance settings provisioned in the admin panel, there are also a couple of user-level settings that can provide additional functionality.
+
+* **STT Settings:** Contains settings related to Speech-to-Text functionality.
+* **Speech-to-Text Engine:** Determines the engine used for speech recognition (Default or Web API).
+
+
+
+
+## Using STT
+
+Speech to text provides a highly efficient way of "writing" prompts with your voice, and it performs robustly on both desktop and mobile devices.
+
+To use STT, simply click on the microphone icon:
+
+
+
+A live audio waveform will indicate successful voice capture:
+
+
+
+## STT Mode Operation
+
+Once your recording has begun, you can:
+
+- Click on the tick icon to save the recording (if auto-send after completion is enabled, it will be sent for completion immediately; otherwise you can send it manually)
+- Click on the 'x' icon to abort the recording (for example, to start a fresh one) and leave the recording interface
+
+
diff --git a/docs/tutorials/text-to-speech/openai-edge-tts-integration.md b/docs/tutorials/text-to-speech/openai-edge-tts-integration.md
index effb4d7..0608b50 100644
--- a/docs/tutorials/text-to-speech/openai-edge-tts-integration.md
+++ b/docs/tutorials/text-to-speech/openai-edge-tts-integration.md
@@ -127,7 +127,7 @@ The server will start running at `http://localhost:5050`.
#### 6. Test the API
-You can now interact with the API at `http://localhost:5050/v1/audio/speech` and other available endpoints. See the [Usage](#usage) section for request examples.
+You can now interact with the API at `http://localhost:5050/v1/audio/speech` and other available endpoints. See the Usage section for request examples.
diff --git a/docs/tutorials/tips/sqlite-database.md b/docs/tutorials/tips/sqlite-database.md
index cea41a6..d029fae 100644
--- a/docs/tutorials/tips/sqlite-database.md
+++ b/docs/tutorials/tips/sqlite-database.md
@@ -48,7 +48,7 @@ docker exec -it open-webui /bin/sh
## Table Overview
-Here is a complete list of tables in Open-WebUI's SQLite database. The tables are listed alphabetically and numbered for convinience.
+Here is a complete list of tables in Open-WebUI's SQLite database. The tables are listed alphabetically and numbered for convenience.
| **No.** | **Table Name** | **Description** |
| ------- | ---------------- | ------------------------------------------------------------ |
diff --git a/static/images/tutorials/stt/endstt.png b/static/images/tutorials/stt/endstt.png
new file mode 100644
index 0000000..6fd73da
Binary files /dev/null and b/static/images/tutorials/stt/endstt.png differ
diff --git a/static/images/tutorials/stt/image.png b/static/images/tutorials/stt/image.png
new file mode 100644
index 0000000..6fee0e5
Binary files /dev/null and b/static/images/tutorials/stt/image.png differ
diff --git a/static/images/tutorials/stt/stt-config.png b/static/images/tutorials/stt/stt-config.png
new file mode 100644
index 0000000..b578f20
Binary files /dev/null and b/static/images/tutorials/stt/stt-config.png differ
diff --git a/static/images/tutorials/stt/stt-in-progress.png b/static/images/tutorials/stt/stt-in-progress.png
new file mode 100644
index 0000000..6ce6e01
Binary files /dev/null and b/static/images/tutorials/stt/stt-in-progress.png differ
diff --git a/static/images/tutorials/stt/stt-operation.png b/static/images/tutorials/stt/stt-operation.png
new file mode 100644
index 0000000..4b3d1f5
Binary files /dev/null and b/static/images/tutorials/stt/stt-operation.png differ
diff --git a/static/images/tutorials/stt/stt-providers.png b/static/images/tutorials/stt/stt-providers.png
new file mode 100644
index 0000000..ed8927c
Binary files /dev/null and b/static/images/tutorials/stt/stt-providers.png differ
diff --git a/static/images/tutorials/stt/user-settings.png b/static/images/tutorials/stt/user-settings.png
new file mode 100644
index 0000000..224c04e
Binary files /dev/null and b/static/images/tutorials/stt/user-settings.png differ