Merge pull request #538 from thiswillbeyourgithub/enh_valves_fix
fix: wrongly deduplicated valve doc + misc enhancements
commit 3d9f8be42f
@@ -27,7 +27,7 @@ Actions have a single main component called an action function. This component t
<details>
<summary>Example</summary>

```
```python
async def action(
    self,
    body: dict,
@@ -177,10 +177,10 @@ def stream(self, event: dict) -> dict:
```

> **Example Streamed Events:**
```json
{'id': 'chatcmpl-B4l99MMaP3QLGU5uV7BaBM0eDS0jb','choices': [{'delta': {'content': 'Hi'}}]}
{'id': 'chatcmpl-B4l99MMaP3QLGU5uV7BaBM0eDS0jb','choices': [{'delta': {'content': '!'}}]}
{'id': 'chatcmpl-B4l99MMaP3QLGU5uV7BaBM0eDS0jb','choices': [{'delta': {'content': ' 😊'}}]}
```jsonl
{"id": "chatcmpl-B4l99MMaP3QLGU5uV7BaBM0eDS0jb","choices": [{"delta": {"content": "Hi"}}]}
{"id": "chatcmpl-B4l99MMaP3QLGU5uV7BaBM0eDS0jb","choices": [{"delta": {"content": "!"}}]}
{"id": "chatcmpl-B4l99MMaP3QLGU5uV7BaBM0eDS0jb","choices": [{"delta": {"content": " 😊"}}]}
```
📖 **What Happens?**
- Each line represents a **small fragment** of the model's streamed response.
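The hunk above corrects the example events for a filter's `stream` hook. As a rough, hedged sketch of how such a hook might consume these fragments (assuming the event shape shown in the example; the uppercasing transformation is purely illustrative, not part of the docs):

```python
class Filter:
    def stream(self, event: dict) -> dict:
        # Each streamed event carries a "choices" list whose entries hold a partial "delta".
        for choice in event.get("choices", []):
            content = choice.get("delta", {}).get("content")
            if content:
                # Illustrative only: upper-case every streamed fragment.
                choice["delta"]["content"] = content.upper()
        return event
```

Because `stream` runs once per fragment, any transformation here is applied to each delta as it arrives.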
@@ -289,4 +289,4 @@ By now, you’ve learned:

🚀 **Your Turn**: Start experimenting! What small tweak or context addition could elevate your Open WebUI experience? Filters are fun to build, flexible to use, and can take your models to the next level!

Happy coding! ✨
@@ -56,11 +56,131 @@ Each tool must have type hints for arguments. The types may also be nested, such

Valves and UserValves are used for specifying customizable settings of the Tool; you can read more on the dedicated [Valves & UserValves](../valves/index.mdx) page.
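As a minimal, hedged sketch of the idea (the field names and defaults below are invented for illustration; see the linked page for the real conventions):

```python
from pydantic import BaseModel, Field


class Tools:
    class Valves(BaseModel):
        # Admin-level setting (hypothetical field, for illustration only)
        api_base_url: str = Field(
            default="http://localhost:11434",
            description="Base URL the tool should call",
        )

    class UserValves(BaseModel):
        # Per-user setting, surfaced to tool methods via __user__["valves"]
        # (hypothetical field, for illustration only)
        max_results: int = Field(default=5, description="How many results to return")

    def __init__(self):
        self.valves = self.Valves()
```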
### Optional Arguments
Below is a list of optional arguments your tools can depend on:
- `__event_emitter__`: Emit events (see the following section)
- `__event_call__`: Same as an event emitter, but can be used for user interactions
- `__user__`: A dictionary with user information. It also contains the `UserValves` object in `__user__["valves"]`.
- `__metadata__`: Dictionary with chat metadata
- `__messages__`: List of previous messages
- `__files__`: Attached files
- `__model__`: A dictionary with model information

Just add them as arguments to any method of your Tool class, just like `__user__` in the example above.
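A minimal sketch of a Tool method that opts into several of these (the method name, body, and the regular `query` argument are invented for illustration; only the double-underscore parameter names come from the list above):

```python
class Tools:
    async def lookup(
        self,
        query: str,  # Regular, model-supplied argument (type hint required)
        __user__: dict | None = None,  # Injected: user info, including __user__["valves"]
        __metadata__: dict | None = None,  # Injected: chat metadata
        __event_emitter__=None,  # Injected: coroutine used to emit events
    ) -> str:
        """
        Look something up.

        :param query: what to search for
        """
        if __event_emitter__:
            await __event_emitter__(
                {"type": "status", "data": {"description": "Looking up...", "done": True}}
            )
        return f"Results for {query}"
```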
### Event Emitters
Event Emitters are used to add additional information to the chat interface. Like Filter Outlets, Event Emitters can append content to the chat; unlike Filter Outlets, they cannot strip information. Additionally, emitters can be activated at any stage during the Tool.

There are two different types of Event Emitters:

If the model seems unable to call the tool, make sure it is enabled (either via the Model page or via the `+` sign next to the chat input field). You can also switch the `Function Calling` option in the `Advanced Params` section of the Model page from `Default` to `Native`.
#### Status
This is used to add statuses to a message while it is performing steps. Statuses can be emitted at any stage during the Tool and appear right above the message content. They are very useful for Tools that delay the LLM response or process large amounts of information, since they let you inform users what is being processed in real time.

```python
await __event_emitter__(
    {
        "type": "status",  # We set the type here
        "data": {"description": "Message that shows up in the chat", "done": False, "hidden": False},
        # Note done is False here indicating we are still emitting statuses
    }
)
```
<details>
<summary>Example</summary>

```python
async def test_function(
    self, prompt: str, __user__: dict, __event_emitter__=None
) -> str:
    """
    This is a demo

    :param prompt: this is a test parameter
    """

    try:
        await __event_emitter__(
            {
                "type": "status",  # We set the type here
                "data": {"description": "Message that shows up in the chat", "done": False},
                # Note done is False here indicating we are still emitting statuses
            }
        )

        # Do some other logic here
        await __event_emitter__(
            {
                "type": "status",
                "data": {"description": "Completed a task message", "done": True, "hidden": False},
                # Note done is True here indicating we are done emitting statuses
                # You can also set "hidden": True if you want to remove the status once the message is returned
            }
        )
    except Exception as e:
        await __event_emitter__(
            {
                "type": "status",
                "data": {"description": f"An error occurred: {e}", "done": True},
            }
        )

        return f"Tell the user: {e}"
```
</details>
#### Message
This type is used to append a message to the LLM at any stage in the Tool. This means that you can append messages, embed images, and even render web pages before, after, or during the LLM response.

```python
await __event_emitter__(
    {
        "type": "message",  # We set the type here
        "data": {"content": "This message will be appended to the chat."},
        # Note that with message types we do NOT have to set a done condition
    }
)
```
<details>
<summary>Example</summary>

```python
async def test_function(
    self, prompt: str, __user__: dict, __event_emitter__=None
) -> str:
    """
    This is a demo

    :param prompt: this is a test parameter
    """

    try:
        await __event_emitter__(
            {
                "type": "message",  # We set the type here
                "data": {"content": "This message will be appended to the chat."},
                # Note that with message types we do NOT have to set a done condition
            }
        )
    except Exception as e:
        await __event_emitter__(
            {
                "type": "status",
                "data": {"description": f"An error occurred: {e}", "done": True},
            }
        )

        return f"Tell the user: {e}"
```
</details>
#### Citations
This type is used to provide citations or references in the chat. You can use it to specify the content, the source, and any relevant metadata. Below is an example of how to emit a citation event:

```
```python
await __event_emitter__(
    {
        "type": "citation",
@@ -89,7 +209,7 @@ Warning: if you set `self.citation = True`, this will replace any custom citatio

<details>
<summary>Example</summary>

```
```python
class Tools:
    class UserValves(BaseModel):
        test: bool = Field(
@@ -138,7 +258,7 @@ No measures are taken to handle package conflicts with Open WebUI's requirements

<details>
<summary>Example</summary>

```
```python
"""
title: myToolName
author: myName
@@ -14,7 +14,8 @@ Valves are configurable by admins alone via the Tools or Functions menus. On the

<details>
<summary>Commented example</summary>

```
```python

from pydantic import BaseModel, Field
from typing import Literal
@@ -74,114 +75,3 @@ class Filter:

```

</details>
### Event Emitters

Event Emitters are used to add additional information to the chat interface. Like Filter Outlets, Event Emitters can append content to the chat; unlike Filter Outlets, they cannot strip information. Additionally, emitters can be activated at any stage during the function.

There are two different types of Event Emitters:
#### Status

This is used to add statuses to a message while it is performing steps. Statuses can be emitted at any stage during the Function and appear right above the message content. They are very useful for Functions that delay the LLM response or process large amounts of information, since they let you inform users what is being processed in real time.

```
await __event_emitter__(
    {
        "type": "status",  # We set the type here
        "data": {"description": "Message that shows up in the chat", "done": False},
        # Note done is False here indicating we are still emitting statuses
    }
)
```
<details>
<summary>Example</summary>

```
async def test_function(
    self, prompt: str, __user__: dict, __event_emitter__=None
) -> str:
    """
    This is a demo

    :param prompt: this is a test parameter
    """

    try:
        await __event_emitter__(
            {
                "type": "status",  # We set the type here
                "data": {"description": "Message that shows up in the chat", "done": False},
                # Note done is False here indicating we are still emitting statuses
            }
        )

        # Do some other logic here
        await __event_emitter__(
            {
                "type": "status",
                "data": {"description": "Completed a task message", "done": True},
                # Note done is True here indicating we are done emitting statuses
            }
        )
    except Exception as e:
        await __event_emitter__(
            {
                "type": "status",
                "data": {"description": f"An error occurred: {e}", "done": True},
            }
        )

        return f"Tell the user: {e}"
```

</details>
#### Message

This type is used to append a message to the LLM at any stage in the Function. This means that you can append messages, embed images, and even render web pages before, after, or during the LLM response.

```
await __event_emitter__(
    {
        "type": "message",  # We set the type here
        "data": {"content": "This message will be appended to the chat."},
        # Note that with message types we do NOT have to set a done condition
    }
)
```
<details>
<summary>Example</summary>

```
async def test_function(
    self, prompt: str, __user__: dict, __event_emitter__=None
) -> str:
    """
    This is a demo

    :param prompt: this is a test parameter
    """

    try:
        await __event_emitter__(
            {
                "type": "message",  # We set the type here
                "data": {"content": "This message will be appended to the chat."},
                # Note that with message types we do NOT have to set a done condition
            }
        )
    except Exception as e:
        await __event_emitter__(
            {
                "type": "status",
                "data": {"description": f"An error occurred: {e}", "done": True},
            }
        )

        return f"Tell the user: {e}"
```

</details>
@@ -20,7 +20,7 @@ import { TopBanners } from "@site/src/components/TopBanners";

# Pipelines: UI-Agnostic OpenAI API Plugin Framework

:::warning
**DO NOT USE PIPELINES!**
**DO NOT USE PIPELINES IF!**

If your goal is simply to add support for additional providers like Anthropic or basic filters, you likely don't need Pipelines. For those cases, Open WebUI Functions are a better fit: they're built in, much more convenient, and easier to configure. Pipelines, however, comes into play when you're dealing with computationally heavy tasks (e.g., running large models or complex logic) that you want to offload from your main Open WebUI instance for better performance and scalability.
:::
@@ -13,7 +13,7 @@ When adding valves to your pipeline, include a way to ensure that valves can be

- Use `os.getenv()` to set an environment variable to use for the pipeline, and a default value to use if the environment variable isn't set. An example can be seen below:

```
```python
self.valves = self.Valves(
    **{
        "LLAMAINDEX_OLLAMA_BASE_URL": os.getenv("LLAMAINDEX_OLLAMA_BASE_URL", "http://localhost:11434"),
@@ -25,7 +25,7 @@ self.valves = self.Valves(

- Set the valve to the `Optional` type, which will allow the pipeline to load even if no value is set for the valve.

```
```python
class Pipeline:
    class Valves(BaseModel):
        target_user_roles: List[str] = ["user"]