Merge pull request #486 from silentoplayz/main

Update Logging, HTTPS Encryption, and Monitoring documentation pages
Tim Jaeryang Baek 2025-04-10 06:56:49 -07:00 committed by GitHub
commit 66cdeecea0
3 changed files with 269 additions and 137 deletions

---
sidebar_position: 6
title: "🔒HTTPS Encryption"
title: "🔒 Enabling HTTPS Encryption"
---
# Secure Your Open WebUI with HTTPS 🔒
This guide explains how to enable HTTPS encryption for your Open WebUI instance. While **HTTPS is not strictly required** for basic operation, it's highly recommended for security and is **necessary for certain features like Voice Calls** to function in modern web browsers.
## Why HTTPS Matters 🛡️
HTTPS (Hypertext Transfer Protocol Secure) encrypts communication between your web browser and the Open WebUI server. This encryption provides several key benefits:
* **Privacy and Security:** Protects sensitive data like usernames, passwords, and chat content from eavesdropping and interception, especially on public networks.
* **Integrity:** Ensures that data transmitted between the browser and server is not tampered with during transit.
* **Feature Compatibility:** **Crucially, modern browsers block access to certain "secure context" features, such as microphone access for Voice Calls, unless the website is served over HTTPS.**
* **Trust and User Confidence:** HTTPS is indicated by a padlock icon in the browser address bar, building user trust and confidence in your Open WebUI deployment.
**When is HTTPS Especially Important?**
* **Internet-Facing Deployments:** If your Open WebUI instance is accessible from the public internet, HTTPS is strongly recommended to protect against security risks.
* **Voice Call Feature:** If you plan to use the Voice Call feature in Open WebUI, HTTPS is **mandatory**.
* **Sensitive Data Handling:** If you are concerned about the privacy of user data, enabling HTTPS is a crucial security measure.
## Choosing the Right HTTPS Solution for You 🛠️
The best HTTPS solution depends on your existing infrastructure and technical expertise. Here are some common and effective options:
* **Cloud Providers (e.g., AWS, Google Cloud, Azure):**
  * **Load Balancers:** Cloud providers typically offer managed load balancers (like AWS Elastic Load Balancer) that can handle HTTPS termination (encryption/decryption) for you. This is often the most straightforward and scalable approach in cloud environments.
* **Docker Container Environments:**
  * **Reverse Proxies (Nginx, Traefik, Caddy):** Popular reverse proxies like Nginx, Traefik, and Caddy are excellent choices for managing HTTPS in Dockerized deployments. They can automatically obtain and renew SSL/TLS certificates (e.g., using Let's Encrypt) and handle HTTPS termination.
    * **Nginx:** Highly configurable and widely used.
    * **Traefik:** Designed for modern microservices and container environments, with automatic configuration and Let's Encrypt integration.
    * **Caddy:** Focuses on ease of use and automatic HTTPS configuration (see the quick example after this list).
* **Cloudflare:**
  * **Simplified HTTPS:** Cloudflare provides a CDN (Content Delivery Network) and security services, including very easy HTTPS setup. It often requires minimal server-side configuration changes and is suitable for a wide range of deployments.
* **Ngrok:**
  * **Local Development HTTPS:** Ngrok is a convenient tool for quickly exposing your local development server over HTTPS. It's particularly useful for testing features that require HTTPS (like Voice Calls) during development and for demos. **Not recommended for production deployments.**
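If you just want HTTPS in front of an instance quickly, here is a minimal sketch using the Caddy and ngrok CLIs. It assumes Open WebUI is listening on `localhost:3000` (the host port used by the standard Docker install) and, for Caddy, that DNS for your domain (shown here as the placeholder `example.com`) already points at your server:

```bash
# Caddy: automatic HTTPS via Let's Encrypt, proxying to Open WebUI
caddy reverse-proxy --from example.com --to localhost:3000

# ngrok: temporary HTTPS tunnel for local testing and demos only
ngrok http 3000
```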
**Key Considerations When Choosing:**
* **Complexity:** Some solutions (like Cloudflare or Caddy) are simpler to set up than others (like manually configuring Nginx).
* **Automation:** Solutions like Traefik and Caddy offer automatic certificate management, which simplifies ongoing maintenance.
* **Scalability and Performance:** Consider the performance and scalability needs of your Open WebUI instance when choosing a solution, especially for high-traffic deployments.
* **Cost:** Some solutions (like cloud load balancers or Cloudflare's paid plans) may have associated costs. Let's Encrypt and many reverse proxies are free and open-source.
## 📚 Explore Deployment Tutorials for Step-by-Step Guides
For detailed, practical instructions and community-contributed tutorials on setting up HTTPS encryption with various solutions, please visit the **[Deployment Tutorials](../../tutorials/deployment/)** section.
These tutorials often provide specific, step-by-step guides for different environments and HTTPS solutions, making the process easier to follow.
By implementing HTTPS, you significantly enhance the security and functionality of your Open WebUI instance, ensuring a safer and more feature-rich experience for yourself and your users.

---
sidebar_position: 5
title: "📜 Open WebUI Logging"
title: "📜 Logging in Open WebUI"
---
# Understanding Open WebUI Logging 🪵
Logging is essential for debugging, monitoring, and understanding how Open WebUI is behaving. This guide explains how logging works in both the **browser client** (frontend) and the **application server/backend**.
## 🖥️ Browser Client Logging (Frontend)
For frontend development and debugging, Open WebUI utilizes standard browser console logging. This means you can see logs directly within your web browser's built-in developer tools.
**How to Access Browser Logs:**
1. **Open Developer Tools:** In most browsers, you can open developer tools by:
   - **Right-clicking** anywhere on the Open WebUI page and selecting "Inspect" or "Inspect Element".
   - Pressing **F12** (or Cmd+Opt+I on macOS).
2. **Navigate to the "Console" Tab:** Within the developer tools panel, find and click on the "Console" tab.
**Types of Browser Logs:**
Open WebUI primarily uses [JavaScript's](https://developer.mozilla.org/en-US/docs/Web/API/console/log_static) `console.log()` for client-side logging. You'll see various types of messages in the console, including:
- **Informational messages:** General application flow and status.
- **Warnings:** Potential issues or non-critical errors.
- **Errors:** Problems that might be affecting functionality.
**Browser-Specific Developer Tools:**
Different browsers offer slightly different developer tools, but they all provide a console for viewing JavaScript logs. Here are links to documentation for popular browsers:
- **[Blink] Chrome/Chromium (e.g., Chrome, Edge):** [Chrome DevTools Documentation](https://developer.chrome.com/docs/devtools/)
- **[Gecko] Firefox:** [Firefox Developer Tools Documentation](https://firefox-source-docs.mozilla.org/devtools-user/)
- **[WebKit] Safari:** [Safari Developer Tools Documentation](https://developer.apple.com/safari/tools/)
## ⚙️ Application Server/Backend Logging (Python)
The backend of Open WebUI uses Python's built-in `logging` module to record events and information on the server side. These logs are crucial for understanding server behavior, diagnosing errors, and monitoring performance.
**Key Concepts:**
- **Python `logging` Module:** Open WebUI leverages the standard Python `logging` library. If you're familiar with Python logging, you'll find this section straightforward. (For more in-depth information, see the [Python Logging Documentation](https://docs.python.org/3/howto/logging.html#logging-levels)).
- **Console Output:** By default, backend logs are sent to the console (standard output), making them visible in your terminal or Docker container logs.
- **Logging Levels:** Logging levels control the verbosity of the logs. You can configure Open WebUI to show more or less detailed information based on these levels.
### 🚦 Logging Levels Explained
Python logging uses a hierarchy of levels to categorize log messages by severity. Here's a breakdown of the levels, from most to least severe:
| Level | Numeric Value | Description | Use Case |
| ----------- | ------------- | --------------------------------------------------------------------------- | --------------------------------------------------------------------------- |
| `CRITICAL` | 50 | **Severe errors** that may lead to application termination. | Catastrophic failures, data corruption. |
| `ERROR` | 40 | **Errors** that indicate problems but the application might still function. | Recoverable errors, failed operations. |
| `WARNING` | 30 | **Potential issues** or unexpected situations that should be investigated. | Deprecation warnings, resource constraints. |
| `INFO` | 20 | **General informational messages** about application operation. | Startup messages, key events, normal operation flow. |
| `DEBUG` | 10 | **Detailed debugging information** for developers. | Function calls, variable values, detailed execution steps. |
| `NOTSET`    | 0             | **No level explicitly set;** the logger inherits from its parent (effectively `WARNING` if nothing is configured). | Letting a logger fall back to inherited/global configuration. |
**Default Level:** Open WebUI's default logging level is `INFO`.
### 🌍 Global Logging Level (`GLOBAL_LOG_LEVEL`)
You can change the **global** logging level for the entire Open WebUI backend using the `GLOBAL_LOG_LEVEL` environment variable. This is the most straightforward way to control overall logging verbosity.
**How it Works:**
Setting `GLOBAL_LOG_LEVEL` configures the root logger in Python, affecting all loggers in Open WebUI and potentially some third-party libraries that use [basicConfig](https://docs.python.org/3/library/logging.html#logging.basicConfig). It uses `logging.basicConfig(force=True)`, which means it will override any existing root logger configuration.
**Example: Setting to `DEBUG`**
- **Docker Parameter** (a full `docker run` example follows this list):

  ```bash
  --env GLOBAL_LOG_LEVEL="DEBUG"
  ```

- **Docker Compose (`docker-compose.yml`):**

  ```yaml
  environment:
    - GLOBAL_LOG_LEVEL=DEBUG
  ```
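For context, here is how that flag fits into a complete `docker run` invocation. This is a sketch assuming the standard install command (image `ghcr.io/open-webui/open-webui:main`, host port `3000`, named volume `open-webui`); adjust the details to your deployment:

```bash
# Start Open WebUI with verbose backend logging
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  -e GLOBAL_LOG_LEVEL=DEBUG \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```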
**Impact:** Setting `GLOBAL_LOG_LEVEL` to `DEBUG` will produce the most verbose logs, including detailed information that is helpful for development and troubleshooting. For production environments, `INFO` or `WARNING` might be more appropriate to reduce log volume.
### ⚙️ App/Backend Specific Logging Levels
For more granular control, Open WebUI provides environment variables to set logging levels for specific backend components. Logging is an ongoing work-in-progress, but these variables already let you fine-tune verbosity for different parts of the application.
**Available Environment Variables:**
| Environment Variable | Component/Module | Description |
| -------------------- | ------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| `AUDIO_LOG_LEVEL` | Audio processing | Logging related to audio transcription (faster-whisper), text-to-speech (TTS), and audio handling. |
| `COMFYUI_LOG_LEVEL` | ComfyUI Integration | Logging for interactions with ComfyUI, if you are using this integration. |
| `CONFIG_LOG_LEVEL` | Configuration Management | Logging related to loading and processing Open WebUI configuration files. |
| `DB_LOG_LEVEL` | Database Operations (Peewee) | Logging for database interactions using the Peewee ORM (Object-Relational Mapper). |
| `IMAGES_LOG_LEVEL` | Image Generation (AUTOMATIC1111/Stable Diffusion) | Logging for image generation tasks, especially when using AUTOMATIC1111 Stable Diffusion integration. |
| `MAIN_LOG_LEVEL` | Main Application Execution (Root Logger) | Logging from the main application entry point and root logger. |
| `MODELS_LOG_LEVEL` | Model Management | Logging related to loading, managing, and interacting with language models (LLMs), including authentication. |
| `OLLAMA_LOG_LEVEL` | Ollama Backend Integration | Logging for communication and interaction with the Ollama backend. |
| `OPENAI_LOG_LEVEL` | OpenAI API Integration | Logging for interactions with the OpenAI API (e.g., for models like GPT). |
| `RAG_LOG_LEVEL` | Retrieval-Augmented Generation (RAG) | Logging for the RAG pipeline, including Chroma vector database and Sentence-Transformers. |
| `WEBHOOK_LOG_LEVEL` | Authentication Webhook | Extended logging for authentication webhook functionality. |
**How to Use:**
You can set these environment variables in the same way as `GLOBAL_LOG_LEVEL` (Docker parameters, Docker Compose `environment` section). For example, to get more detailed logging for Ollama interactions, you could set:
```yaml
environment:
  - GLOBAL_LOG_LEVEL=DEBUG
  - OLLAMA_LOG_LEVEL=DEBUG
```
**Important Note:** Unlike `GLOBAL_LOG_LEVEL`, these app-specific variables might not affect logging from *all* third-party modules. They primarily control logging within Open WebUI's codebase.
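Once a level is set, you can confirm it is taking effect by watching the backend output. Assuming a Docker deployment with the container named `open-webui` (as in the standard install), something like this works:

```bash
# Follow backend logs live
docker logs -f open-webui

# Narrow the stream to one component while debugging, e.g. Ollama traffic
docker logs open-webui 2>&1 | grep -i ollama
```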
By understanding and utilizing these logging mechanisms, you can effectively monitor, debug, and gain insights into your Open WebUI instance.

---
sidebar_position: 6
title: "📊 Monitoring"
title: "📊 Monitoring Your Open WebUI"
---
# Keep Your Open WebUI Healthy with Monitoring 🩺
Monitoring your Open WebUI instance is crucial for ensuring it runs reliably, performs well, and allows you to quickly identify and resolve any issues. This guide outlines three levels of monitoring, from basic availability checks to in-depth model response testing.
**Why Monitor?**
* **Ensure Uptime:** Proactively detect outages and service disruptions.
* **Performance Insights:** Track response times and identify potential bottlenecks.
* **Early Issue Detection:** Catch problems before they impact users significantly.
* **Peace of Mind:** Gain confidence that your Open WebUI instance is running smoothly.
## 🚦 Levels of Monitoring
We'll cover three levels of monitoring, progressing from basic to more comprehensive:
1. **Basic Health Check:** Verifies if the Open WebUI service is running and responding.
2. **Model Connectivity Check:** Confirms that Open WebUI can connect to and list your configured models.
3. **Model Response Testing (Deep Health Check):** Ensures that models can actually process requests and generate responses.
## Level 1: Basic Health Check Endpoint ✅
The simplest level of monitoring is checking the `/health` endpoint. This endpoint is publicly accessible (no authentication required) and returns a `200 OK` status code when the Open WebUI service is running correctly.
**How to Test:**
You can use `curl` or any HTTP client to check this endpoint:
```bash
# Basic health check - no authentication needed
curl https://your-open-webui-instance/health
```
**Expected Output:** A successful health check will return a `200 OK` HTTP status code. The content of the response body is usually not important for a basic health check.
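If you want to script this check (for cron jobs or CI) rather than eyeball it, `curl` can report just the status code. A minimal sketch, using the same placeholder hostname:

```bash
# Exit non-zero unless /health returns HTTP 200
status=$(curl -s -o /dev/null -w "%{http_code}" https://your-open-webui-instance/health)
if [ "$status" -ne 200 ]; then
  echo "Open WebUI health check failed (HTTP $status)" >&2
  exit 1
fi
echo "Open WebUI is healthy"
```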
### Using Uptime Kuma for Basic Health Checks 🐻
[Uptime Kuma](https://github.com/louislam/uptime-kuma) is a fantastic, open-source, and easy-to-use self-hosted uptime monitoring tool. It's highly recommended for monitoring Open WebUI.
**Steps to Set Up in Uptime Kuma:**
1. **Add a New Monitor:** In your Uptime Kuma dashboard, click "Add New Monitor".
2. **Configure Monitor Settings:**
   * **Monitor Type:** Select "HTTP(s)".
   * **Name:** Give your monitor a descriptive name, e.g., "Open WebUI Health Check".
   * **URL:** Enter the health check endpoint URL: `http://your-open-webui-instance:8080/health` (Replace `your-open-webui-instance:8080` with your actual Open WebUI address and port).
   * **Monitoring Interval:** Set the frequency of checks (e.g., `60 seconds` for every minute).
   * **Retry Count:** Set the number of retries before considering the service down (e.g., `3` retries).
**What This Check Verifies:**
* **Web Server Availability:** Ensures the web server (e.g., Nginx, Uvicorn) is responding to requests.
* **Application Running:** Confirms that the Open WebUI application itself is running and initialized.
* **Basic Database Connectivity:** Typically includes a basic check to ensure the application can connect to the database.
## Level 2: Open WebUI Model Connectivity 🔗
To go beyond basic availability, you can monitor the `/api/models` endpoint. This endpoint **requires authentication** and verifies that Open WebUI can successfully communicate with your configured model providers (e.g., Ollama, OpenAI) and retrieve a list of available models.
**Why Monitor Model Connectivity?**
* **Model Provider Issues:** Detect problems with your model provider services (e.g., API outages, authentication failures).
* **Configuration Errors:** Identify misconfigurations in your model provider settings within Open WebUI.
* **Ensure Model Availability:** Confirm that the models you expect to be available are actually accessible to Open WebUI.
**API Endpoint Details:**
See the [Open WebUI API documentation](https://docs.openwebui.com/getting-started/api-endpoints/#-retrieve-all-models) for full details about the `/api/models` endpoint and its response structure.
**How to Test with `curl` (Authenticated):**
You'll need an API key to access this endpoint. See the "Authentication Setup" section below for instructions on generating an API key.
```bash
# Authenticated model connectivity check
curl -H "Authorization: Bearer YOUR_API_KEY" https://your-open-webui-instance/api/models
```
*(Replace `YOUR_API_KEY` with your actual API key and `your-open-webui-instance` with your Open WebUI address.)*
**Expected Output:** A successful request will return a `200 OK` status code and a JSON response containing a list of models.
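If you have `jq` available, a quick way to sanity-check that response is to count the entries in its `data` array, mirroring the Uptime Kuma JSON query used below:

```bash
# Print how many models Open WebUI can see; 0 usually points to a provider issue
curl -s -H "Authorization: Bearer YOUR_API_KEY" \
  https://your-open-webui-instance/api/models | jq '.data | length'
```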
### Authentication Setup for API Key 🔑
Before you can monitor the `/api/models` endpoint, you need to enable API keys in Open WebUI and generate one:
1. **Enable API Keys (Admin Required):**
   * Log in to Open WebUI as an administrator.
   * Go to **Admin Settings** (usually in the top right menu) > **General**.
   * Find the "Enable API Key" setting and **turn it ON**.
   * Click **Save Changes**.
2. **Generate an API Key (User Settings):**
   * Go to your **User Settings** (usually by clicking on your profile icon in the top right).
   * Navigate to the **Account** section.
   * Click **Generate New API Key**.
   * Give the API key a descriptive name (e.g., "Monitoring API Key").
   * **Copy the generated API key** and store it securely. You'll need this for your monitoring setup.
*(Optional but Recommended):* For security best practices, consider creating a **non-administrator user account** specifically for monitoring and generate an API key for that user. This limits the potential impact if the monitoring API key is compromised.
*If you don't see the API key generation option in your settings, contact your Open WebUI administrator to ensure API keys are enabled.*
### Using Uptime Kuma for Model Connectivity Monitoring 🐻
1. **Create a New Monitor in Uptime Kuma:**
   * Monitor Type: "HTTP(s) - JSON Query".
   * Name: "Open WebUI Model Connectivity Check".
   * URL: `http://your-open-webui-instance:8080/api/models` (Replace with your URL).
   * Method: "GET".
   * Expected Status Code: `200`.
2. **Configure JSON Query (Verify Model List):**
   * **JSON Query:** `$count(data[*])>0`
   * **Explanation:** This JSONata query checks if the `data` array in the API response (which contains the list of models) has a count greater than 0. In other words, it verifies that at least one model is returned.
   * **Expected Value:** `true` (The query should return `true` if models are listed).
3. **Add Authentication Headers:**
   * In the "Headers" section of the Uptime Kuma monitor configuration, click "Add Header".
   * **Header Name:** `Authorization`
   * **Header Value:** `Bearer YOUR_API_KEY` (Replace `YOUR_API_KEY` with the API key you generated).
4. **Set Monitoring Interval:** Recommended interval: `300 seconds` (5 minutes) or longer, as model lists don't typically change very frequently.
**Alternative JSON Queries (Advanced):**
You can use more specific JSONata queries to check for particular models or providers. Here are some examples:
* **Check for at least one Ollama model:** `$count(data[owned_by='ollama'])>0`
* **Check if a specific model exists (e.g., 'gpt-4o'):** `$exists(data[id='gpt-4o'])`
* **Check if multiple specific models exist (e.g., 'gpt-4o' and 'gpt-4o-mini'):** `$count(data[id in ['gpt-4o', 'gpt-4o-mini']]) = 2`
You can test and refine your JSONata queries at [jsonata.org](https://try.jsonata.org/) using a sample API response to ensure they work as expected.
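To experiment with a real payload, you can save a sample response locally and paste it into that playground (same placeholders as above):

```bash
# Capture a sample /api/models response for JSONata experimentation
curl -s -H "Authorization: Bearer YOUR_API_KEY" \
  https://your-open-webui-instance/api/models -o models-sample.json
```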
## Level 3: Model Response Testing (Deep Health Check) 🤖
For the most comprehensive monitoring, you can test if models are actually capable of processing requests and generating responses. This involves sending a simple chat completion request to the `/api/chat/completions` endpoint.
**Why Test Model Responses?**
* **End-to-End Verification:** Confirms that the entire model pipeline is working, from API request to model response.
* **Model Loading Issues:** Detects problems with specific models failing to load or respond.
* **Backend Processing Errors:** Catches errors in the backend logic that might prevent models from generating completions.
**How to Test with `curl` (Authenticated POST Request):**
This test requires an API key and sends a POST request with a simple message to the chat completions endpoint.
```bash
# Test model response - authenticated POST request
# (Use a model you expect to be available; temperature 0 gives consistent responses)
curl -X POST https://your-open-webui-instance/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Respond with the word HEALTHY"}],
    "model": "llama3.1",
    "temperature": 0
  }'
```
*(Replace `YOUR_API_KEY`, `your-open-webui-instance`, and `llama3.1` with your actual values.)*
**Expected Output:** A successful request will return a `200 OK` status code and a JSON response containing a chat completion. You can verify that the response includes the word "HEALTHY" (or a similar expected response based on your prompt).
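To automate that verification outside of Uptime Kuma, you can extract the completion text and grep for the expected word. This sketch assumes the response follows the OpenAI-compatible `choices[0].message.content` shape and that `jq` is installed:

```bash
# Deep health check: ask for "HEALTHY" and verify the model actually says it
response=$(curl -s -X POST https://your-open-webui-instance/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Respond with the word HEALTHY"}],"model":"llama3.1","temperature":0}')

echo "$response" | jq -r '.choices[0].message.content' | grep -q "HEALTHY" \
  && echo "Deep health check passed" \
  || { echo "Deep health check failed" >&2; exit 1; }
```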
Setting up Level 3 monitoring in Uptime Kuma involves configuring an HTTP(s) monitor with a POST request, a JSON body, authentication headers, and potentially a JSON query to validate the response content. This is a more advanced setup and can be customized based on your specific needs.
By implementing these monitoring levels, you can proactively ensure the health, reliability, and performance of your Open WebUI instance, providing a consistently positive experience for users.