Add Helicone integration tutorial for monitoring and debugging LLM applications

As DevRel at Helicone, I'm excited to contribute this integration guide to help the Open WebUI community implement production-grade observability for their LLM applications.

This tutorial provides step-by-step instructions for connecting Helicone's open-source monitoring platform with Open WebUI deployments.

Why this matters to the Open WebUI community:

1. Production readiness: As more Open WebUI deployments move from experimentation to production, proper monitoring becomes critical. This integration enables developers to track costs, performance, and usage patterns.

2. Debugging capabilities: The community frequently requests better tools for troubleshooting model responses and understanding why certain interactions fail or underperform.

3. Cost optimization: LLM usage costs can add up quickly. Helicone's tracking helps identify opportunities to reduce token usage and optimize prompts.

4. Accessibility: As an open-source platform, Helicone aligns with Open WebUI's philosophy while providing enterprise-grade observability for Open WebUI interfaces.

5. Cross-provider support: Works with OpenAI, Ollama, and other providers compatible with Open WebUI, giving users flexibility while maintaining consistent monitoring.

The tutorial includes Docker setup instructions, admin panel configuration steps, and verification guidance to ensure successful integration. I've also added screenshots demonstrating the dashboard experience.

This contribution helps bridge the gap between development and operations for LLM applications built with OpenWebUI.

---
title: "🕵🏻‍♀️ Evaluate, monitor, and debug with Helicone"
sidebar_position: 19
description: "Integrate Helicone with Open WebUI to monitor interactions across Ollama, OpenAI-compatible APIs, and custom LLM setups."
"twitter:title": "Open WebUI Integration - Helicone OSS LLM Observability"
---
# Evaluate, monitor, and debug your Open WebUI application with Helicone
Helicone is the open-source LLM observability platform for developers to monitor, debug, and improve **production-ready** applications, including your Open WebUI deployment.
By enabling Helicone, you can log your LLM requests, evaluate and experiment with prompts, and get instant insights that help you push changes to production with confidence.
- **Real-time monitoring with consolidated view across model types**: Monitor both local Ollama models and cloud APIs through a single interface
- **Request visualization and replay**: See exactly what prompts were sent to each model in Open WebUI and the outputs generated by the LLMs for evaluation
- **Local LLM performance tracking**: Measure response times and throughput of your self-hosted models
- **Usage analytics by model**: Compare usage patterns between different models in your Open WebUI setup
- **User analytics**: Understand interaction patterns (see the sketch after this list)
- **Debug capabilities**: Troubleshoot issues with model responses
- **Cost tracking**: Track LLM usage costs across providers
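To make the logging mechanics concrete: any request that passes through Helicone's OpenAI-compatible proxy is logged automatically, and optional `Helicone-*` headers enrich each entry. Below is a minimal sketch of a direct proxy call with `curl`, using the header-based auth variant of the proxy and the `Helicone-User-Id` header to attribute the request for user analytics; the model name and user ID are placeholders.
```bash
# One chat completion routed through Helicone's OpenAI-compatible proxy.
# Helicone-Auth authorizes logging; Helicone-User-Id ties the request
# to a user so it appears in Helicone's user analytics.
curl https://oai.helicone.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Helicone-Auth: Bearer $HELICONE_API_KEY" \
  -H "Helicone-User-Id: demo-user@example.com" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}'
```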
## How to integrate Helicone with Open WebUI
*(Animated walkthrough: openwebui-helicone-setup.gif)*
### Step 1: Create a Helicone account and generate your API key
Create a [Helicone account](https://www.helicone.ai/) and log in to generate an [API key](https://us.helicone.ai/settings/api-keys).
*Make sure to generate a [write-only API key](helicone-headers/helicone-auth). This allows only logging data to Helicone, without granting read access to your private data.*
### Step 2: Create an OpenAI account and generate your API key
Create an OpenAI account and log into [OpenAI's Developer Portal](https://platform.openai.com/account/api-keys) to generate an API key.
### Step 3: Run your Open WebUI application using Helicone's base URL
To launch your first Open WebUI application, use the command from the [Open WebUI docs](https://docs.openwebui.com/) and include Helicone's API base URL so your requests are routed through Helicone and monitored automatically.
```bash
# Set your environment variables
export HELICONE_API_KEY=<YOUR_HELICONE_API_KEY>
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>

# Run Open WebUI with the Helicone integration
docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL="https://oai.helicone.ai/v1/$HELICONE_API_KEY" \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```
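If the container starts but requests never reach Helicone, a quick sanity check (assuming the container name and port mapping from the command above) is to inspect the logs and confirm the UI answers on the mapped port:
```bash
# Tail recent container logs and probe the UI's HTTP status line
docker logs --tail 20 open-webui
curl -sI http://localhost:3000 | head -n 1
```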
If you already have an Open WebUI application deployed, go to `Admin Panel` > `Settings` > `Connections` and click the `+` sign for "Managing OpenAI API Connections". Update the following properties:
- Set the `API Base URL` to `https://oai.helicone.ai/v1/<YOUR_HELICONE_API_KEY>`
- Set the `API Key` to your OpenAI API key.
### Step 4: Make sure monitoring is working
To make sure your integration is working, log into Helicone's dashboard and review the “Requests” tab.
You should see the requests made through your Open WebUI interface already being logged in Helicone.
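If no requests appear, you can isolate the issue by generating a test request outside Open WebUI. This minimal sketch reuses the base-URL-with-key pattern from Step 3; OpenAI-compatible clients simply append `/chat/completions` to that base URL, and the model name here is only an example:
```bash
# Send one request straight through the Helicone proxy; it should show up
# in the "Requests" tab within a few seconds.
curl "https://oai.helicone.ai/v1/$HELICONE_API_KEY/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'
```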
**Example trace in Helicone UI:**
*(Screenshot: an example request trace in the Helicone dashboard)*
## Learn more
For a comprehensive guide to Helicone, check out [Helicone's documentation](https://docs.helicone.ai/getting-started/quick-start).