Mirror of https://github.com/open-webui/docs (synced 2025-05-20 19:26:22 +00:00)

Commit 9132db6fe4 (parent c971f63561): refac
@@ -9,28 +9,15 @@ Explore the essential concepts and features of Open WebUI, including models, kno
---

## 📥 Troubleshooting Ollama

Many users wish to make use of their existing Ollama instance, but encounter common issues.

If this is you, then check out the [Troubleshooting Ollama guide](./troubleshooting-ollama.mdx).

---

## 📚 Terminology

Understand key components: models, prompts, knowledge, functions, pipes, and actions.

[Read the Terminology Guide](./terminology.mdx)

---

## 🌐 Additional Resources and Integrations

Find community tools, integrations, and official resources.

[Additional Resources Guide](./resources)

---

## 📖 Community Tutorials

If you like the documentation you are reading right now, then check out this tutorial on [Configuring RAG with OpenWebUI Documentation](../../tutorials/tips/rag-tutorial.md).

Then go on to explore other community-submitted tutorials to enhance your OpenWebUI experience.

[Explore Community Tutorials](/category/-tutorials)

---

Stay tuned for more updates as we continue to expand these sections!
@@ -1,43 +0,0 @@
### 🐳 Ollama Inside Docker

If **Ollama is deployed inside Docker** (e.g., using Docker Compose or Kubernetes), the service will be available:

- **Inside the container**: `http://127.0.0.1:11434`
- **From the host**: `http://localhost:11435` (if the container's port 11434 is published to host port 11435)
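One way this layout can come about is sketched below. This is purely illustrative: the container names, the published ports, and the `ghcr.io/open-webui/open-webui:main` image tag are assumptions rather than anything mandated by this guide.

```bash
# Shared user-defined network so containers can reach each other by name
docker network create ollama-net

# Ollama: container port 11434 published to host port 11435
docker run -d --name ollama --network ollama-net \
  -p 11435:11434 -v ollama:/root/.ollama ollama/ollama

# Open WebUI: reaches Ollama by its container name on the shared network
docker run -d --name openwebui --network ollama-net \
  -p 3000:8080 -e OLLAMA_BASE_URL=http://ollama:11434 \
  ghcr.io/open-webui/open-webui:main
```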
#### Step 1: Check Available Models

- Inside the container:

```bash
docker exec -it openwebui curl http://ollama:11434/v1/models
```

- From the host (if exposed):

```bash
curl http://localhost:11435/v1/models
```

Either command lists the available models and confirms that Ollama is running.
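If you want to eyeball the result, the response follows the OpenAI-compatible model-list format. The snippet below is an illustrative sketch (it assumes `jq` is installed and that `llama3.2` has already been pulled); your own output will differ.

```bash
# Pretty-print the model list (requires jq); field values shown are illustrative
curl -s http://localhost:11435/v1/models | jq .
# {
#   "object": "list",
#   "data": [
#     { "id": "llama3.2:latest", "object": "model", "owned_by": "library" }
#   ]
# }
```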
#### Step 2: Download Llama 3.2

Run the following command:

```bash
docker exec -it ollama ollama pull llama3.2
```

**Tip:** You can download other models from Hugging Face by specifying the appropriate URL. For example, to download a higher-quality **8-bit version of Llama 3.2**:

```bash
docker exec -it ollama ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```
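To confirm the download landed inside the container, you can list the models Ollama knows about (assuming the container is named `ollama`, as in the commands above):

```bash
# List models available to the Ollama instance running in the container
docker exec -it ollama ollama list
```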
#### Step 3: Access the WebUI

Once everything is set up, access the WebUI at:

[http://localhost:3000](http://localhost:3000)
@@ -1,50 +0,0 @@
### 🛠️ Bring Your Own Ollama (BYO Ollama)

If Ollama is running on the **host machine** or another server on your network, follow these steps.
#### Step 1: Check Available Models

- If Ollama is **local**, run:

```bash
curl http://localhost:11434/v1/models
```

- If Ollama is **remote**, use:

```bash
curl http://<remote-ip>:11434/v1/models
```

This confirms that Ollama is reachable and lists the models it has available.
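If the remote check times out, one common cause (worth ruling out, though network setups vary) is that the remote Ollama server is listening only on `127.0.0.1`. On the remote machine you can tell Ollama to bind to all interfaces; a systemd-managed install takes the same setting through the service's environment instead:

```bash
# On the remote machine running Ollama: listen on all interfaces
export OLLAMA_HOST=0.0.0.0:11434
ollama serve
```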
#### Step 2: Set the OLLAMA_BASE_URL

If Ollama is running **remotely** or on the host, point the `ollama` CLI at it with the following environment variable:

```bash
export OLLAMA_HOST=<remote-ip>:11434
```

Note that `OLLAMA_HOST` affects the `ollama` command-line client; for Open WebUI itself to reach the remote instance, set the `OLLAMA_BASE_URL` environment variable (or configure the connection in its settings).
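For example, if Open WebUI runs in Docker, the base URL can be passed when the container is started. This is a sketch along the lines of the project's usual quick-start command; the port mapping and volume name here are assumptions:

```bash
# Point a Dockerized Open WebUI at the remote Ollama instance
docker run -d --name open-webui -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://<remote-ip>:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```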
#### Step 3: Download Llama 3.2

From your local or remote machine, run:

```bash
ollama pull llama3.2
```

**Tip:** Use this command to download the 8-bit version from Hugging Face:

```bash
ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```
#### Step 4: Access the WebUI

You can now access the WebUI at:

[http://localhost:3000](http://localhost:3000)
@@ -1,27 +0,0 @@
---
sidebar_position: 3
title: "📖 OpenWebUI Terminology"
---

# 📖 OpenWebUI Terminology

Enhance your understanding of OpenWebUI with key concepts and components to improve your usage and configuration.

---

## Explore the Workspace

Begin by exploring the [Workspace](../../features/workspace) to discover essential concepts such as Modelfiles, Knowledge, Prompts, Tools, and Functions.

---

## Interact with the Playground

Visit the Playground to directly engage with a Large Language Model. Here, you can experiment with different `System Prompts` to modify the model's behavior and persona.

---

## Personalize in Settings

Access the Settings to personalize your experience. Customize features like Memory, adjust Voice settings for both TTS (Text-to-Speech) and STT (Speech-to-Text), and toggle between Dark/Light mode for optimal viewing.

---

This terminology guide will help you navigate and configure OpenWebUI effectively!
@@ -1,82 +0,0 @@
---
sidebar_position: 2
title: "🤖 Troubleshooting Ollama"
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Troubleshooting Ollama

Explore how to download, load, and use models with Ollama, both via **Docker** and **Remote** setups.

---

<Tabs groupId="ollama-setup">
<TabItem value="docker-ollama" label="Ollama Inside Docker">
## 🐳 Ollama Inside Docker

If **Ollama is deployed inside Docker** (e.g., using Docker Compose or Kubernetes), the service will be available:

- **Inside the container**: `http://127.0.0.1:11434`
- **From the host**: `http://localhost:11435` (if the container's port 11434 is published to host port 11435)
### Step 1: Check Available Models

```bash
docker exec -it openwebui curl http://ollama:11434/v1/models
```

From the host (if exposed):

```bash
curl http://localhost:11435/v1/models
```
### Step 2: Download Llama 3.2

```bash
docker exec -it ollama ollama pull llama3.2
```

You can also download a higher-quality version (8-bit) from Hugging Face:

```bash
docker exec -it ollama ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```
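As a quick smoke test (optional, and assuming the container is named `ollama` as above), you can run the model once directly inside the container:

```bash
# One-off prompt to confirm the model loads and responds
docker exec -it ollama ollama run llama3.2 "Say hello in one short sentence."
```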
</TabItem>

<TabItem value="byo-ollama" label="BYO Ollama (External Ollama)">

## 🛠️ Bring Your Own Ollama (BYO Ollama)

If Ollama is running on the **host machine** or another server on your network, follow these steps.
### Step 1: Check Available Models

Local:

```bash
curl http://localhost:11434/v1/models
```

Remote:

```bash
curl http://<remote-ip>:11434/v1/models
```
### Step 2: Set the OLLAMA_BASE_URL

```bash
export OLLAMA_HOST=<remote-ip>:11434
```

This points the `ollama` CLI at the remote server; Open WebUI itself reads the `OLLAMA_BASE_URL` environment variable (or the connection configured in its settings) to reach that instance.
### Step 3: Download Llama 3.2

```bash
ollama pull llama3.2
```

Or download the 8-bit version from Hugging Face:

```bash
ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```
</TabItem>
</Tabs>

---

You now have everything you need to download and run models with **Ollama**. Happy exploring!
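As a final sanity check, you can talk to a pulled model through Ollama's OpenAI-compatible chat endpoint. The sketch below assumes Ollama is reachable at `localhost:11434` (use `localhost:11435` or your remote address for the Docker and remote setups described above) and that `llama3.2` has been pulled:

```bash
# Send one chat message to the model and print the JSON response
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```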