From 9132db6fe403d5841bed2bc0a9b7918225187ce3 Mon Sep 17 00:00:00 2001
From: Timothy Jaeryang Baek
Date: Sun, 8 Dec 2024 21:53:43 -0800
Subject: [PATCH] refac

---
 .../getting-started/using-openwebui/index.mdx | 13 ---
 .../tab-ollama/OllamaDocker.md                | 43 ----------
 .../tab-ollama/OllamaRemote.md                | 50 -----------
 .../using-openwebui/terminology.mdx           | 27 ------
 .../troubleshooting-ollama.mdx                | 82 -------------------
 5 files changed, 215 deletions(-)
 delete mode 100644 docs/getting-started/using-openwebui/tab-ollama/OllamaDocker.md
 delete mode 100644 docs/getting-started/using-openwebui/tab-ollama/OllamaRemote.md
 delete mode 100644 docs/getting-started/using-openwebui/terminology.mdx
 delete mode 100644 docs/getting-started/using-openwebui/troubleshooting-ollama.mdx

diff --git a/docs/getting-started/using-openwebui/index.mdx b/docs/getting-started/using-openwebui/index.mdx
index 28c6925..5ce08f6 100644
--- a/docs/getting-started/using-openwebui/index.mdx
+++ b/docs/getting-started/using-openwebui/index.mdx
@@ -9,28 +9,15 @@ Explore the essential concepts and features of Open WebUI, including models, kno
 
 ---
 
-## 📥 Troubleshooting Ollama
-Many users wish to connect Open WebUI to their existing Ollama instance but encounter common issues.
-If this is you, then check out the [Troubleshooting Ollama guide](./troubleshooting-ollama.mdx).
-
----
-
-## 📚 Terminology
-Understand key components: models, prompts, knowledge, functions, pipes, and actions.
-[Read the Terminology Guide](./terminology.mdx)
-
 ## 🌐 Additional Resources and Integrations
 Find community tools, integrations, and official resources.
 [Additional Resources Guide](./resources)
 
----
-
 ## 📖 Community Tutorials
 If you like the documentation you are reading right now, then check out this tutorial on [Configuring RAG with OpenWebUI Documentation](../../tutorials/tips/rag-tutorial.md).
 Then go on to explore other community-submitted tutorials to enhance your OpenWebUI experience.
 
 [Explore Community Tutorials](/category/-tutorials)
-
 ---
 
 Stay tuned for more updates as we continue to expand these sections!
\ No newline at end of file
diff --git a/docs/getting-started/using-openwebui/tab-ollama/OllamaDocker.md b/docs/getting-started/using-openwebui/tab-ollama/OllamaDocker.md
deleted file mode 100644
index 093ae6f..0000000
--- a/docs/getting-started/using-openwebui/tab-ollama/OllamaDocker.md
+++ /dev/null
@@ -1,43 +0,0 @@
-
-### 🐳 Ollama Inside Docker
-
-If **Ollama is deployed inside Docker** (e.g., using Docker Compose or Kubernetes), the service will be available:
-
-- **Inside the container**: `http://127.0.0.1:11434`
-- **From the host**: `http://localhost:11435` (if exposed via host network)
-
-#### Step 1: Check Available Models
-
-- Inside the container:
-
-  ```bash
-  docker exec -it openwebui curl http://ollama:11434/v1/models
-  ```
-
-- From the host (if exposed):
-
-  ```bash
-  curl http://localhost:11435/v1/models
-  ```
-
-This command lists all available models and confirms that Ollama is running.
-
-#### Step 2: Download Llama 3.2
-
-Run the following command:
-
-```bash
-docker exec -it ollama ollama pull llama3.2
-```
-
-**Tip:** You can download other models from Hugging Face by specifying the appropriate URL.
-For example, to download a higher-quality **8-bit version of Llama 3.2**:
-
-```bash
-ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
-```
-
-#### Step 3: Access the WebUI
-
-Once everything is set up, access the WebUI at:
-[http://localhost:3000](http://localhost:3000)
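The deleted Docker guide assumes that the `openwebui` and `ollama` containers share a Docker network and can reach each other by service name. The snippet below is a minimal sanity check of that wiring under those assumptions; the container names are taken from the commands above, and `/api/version` is a lightweight version endpoint that Ollama exposes:

```bash
# Container names ("openwebui", "ollama") follow the deleted guide above.
# Confirm the WebUI container can resolve and reach the Ollama service by name:
docker exec -it openwebui curl -sf http://ollama:11434/api/version

# Pull a model, then list what the Ollama container actually has on disk:
docker exec -it ollama ollama pull llama3.2
docker exec -it ollama ollama list
```

If the first command fails while `curl http://localhost:11435/v1/models` succeeds from the host, the two containers are most likely not attached to the same Docker network.
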
diff --git a/docs/getting-started/using-openwebui/tab-ollama/OllamaRemote.md b/docs/getting-started/using-openwebui/tab-ollama/OllamaRemote.md
deleted file mode 100644
index c99af07..0000000
--- a/docs/getting-started/using-openwebui/tab-ollama/OllamaRemote.md
+++ /dev/null
@@ -1,50 +0,0 @@
-
-### 🛠️ Bring Your Own Ollama (BYO Ollama)
-
-If Ollama is running on the **host machine** or another server on your network, follow these steps.
-
-#### Step 1: Check Available Models
-
-- If Ollama is **local**, run:
-
-  ```bash
-  curl http://localhost:11434/v1/models
-  ```
-
-- If Ollama is **remote**, use:
-
-  ```bash
-  curl http://<remote-ip>:11434/v1/models
-  ```
-
-This confirms that Ollama is reachable and lists its available models.
-
-#### Step 2: Set the OLLAMA_BASE_URL
-
-If Ollama is running **remotely** or on the host, set the following environment variable:
-
-```bash
-export OLLAMA_BASE_URL=http://<remote-ip>:11434
-```
-
-This ensures Open WebUI can reach the remote Ollama instance.
-
-#### Step 3: Download Llama 3.2
-
-From your local or remote machine, run:
-
-```bash
-ollama pull llama3.2
-```
-
-**Tip:** Use this command to download the 8-bit version from Hugging Face:
-
-```bash
-ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
-```
-
-#### Step 4: Access the WebUI
-
-You can now access the WebUI at:
-[http://localhost:3000](http://localhost:3000)
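The `export` in Step 2 only affects the shell it runs in; when Open WebUI itself runs in Docker, `OLLAMA_BASE_URL` is normally passed at container start instead. A sketch, assuming the standard `ghcr.io/open-webui/open-webui:main` image and `<remote-ip>` as a placeholder for the machine running Ollama:

```bash
# Start Open WebUI with OLLAMA_BASE_URL injected as a container env var;
# <remote-ip> is a placeholder, not a literal value.
docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://<remote-ip>:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

The remote Ollama server must also listen on an external interface (for example, by setting `OLLAMA_HOST=0.0.0.0` on that machine) rather than the default loopback-only address, or connections from Open WebUI will be refused.
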
diff --git a/docs/getting-started/using-openwebui/terminology.mdx b/docs/getting-started/using-openwebui/terminology.mdx
deleted file mode 100644
index 76c8dfa..0000000
--- a/docs/getting-started/using-openwebui/terminology.mdx
+++ /dev/null
@@ -1,27 +0,0 @@
----
-sidebar_position: 3
-title: "📖 OpenWebUI Terminology"
----
-
-# 📖 OpenWebUI Terminology
-
-Enhance your understanding of OpenWebUI with key concepts and components to improve your usage and configuration.
-
----
-
-## Explore the Workspace
-Begin by exploring the [Workspace](../../features/workspace) to discover essential concepts such as Modelfiles, Knowledge, Prompts, Tools, and Functions.
-
----
-
-## Interact with the Playground
-Visit the Playground to directly engage with a Large Language Model. Here, you can experiment with different `System Prompts` to modify the model's behavior and persona.
-
----
-
-## Personalize in Settings
-Access the Settings to personalize your experience. Customize features like Memory, adjust Voice settings for both TTS (Text-to-Speech) and STT (Speech-to-Text), and toggle between Dark/Light mode for optimal viewing.
-
----
-
-This terminology guide will help you navigate and configure OpenWebUI effectively!
\ No newline at end of file
diff --git a/docs/getting-started/using-openwebui/troubleshooting-ollama.mdx b/docs/getting-started/using-openwebui/troubleshooting-ollama.mdx
deleted file mode 100644
index 4f813a8..0000000
--- a/docs/getting-started/using-openwebui/troubleshooting-ollama.mdx
+++ /dev/null
@@ -1,82 +0,0 @@
----
-sidebar_position: 2
-title: "🤖 Troubleshooting Ollama"
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# Troubleshooting Ollama
-
-Explore how to download, load, and use models with Ollama, both via **Docker** and **Remote** setups.
-
----
-
-<Tabs>
-  <TabItem value="docker" label="Ollama Inside Docker">
-
-  ## 🐳 Ollama Inside Docker
-
-  If **Ollama is deployed inside Docker** (e.g., using Docker Compose or Kubernetes), the service will be available:
-
-  - **Inside the container**: `http://127.0.0.1:11434`
-  - **From the host**: `http://localhost:11435` (if exposed via host network)
-
-  ### Step 1: Check Available Models
-  ```bash
-  docker exec -it openwebui curl http://ollama:11434/v1/models
-  ```
-
-  From the host (if exposed):
-  ```bash
-  curl http://localhost:11435/v1/models
-  ```
-
-  ### Step 2: Download Llama 3.2
-  ```bash
-  docker exec -it ollama ollama pull llama3.2
-  ```
-
-  You can also download a higher-quality version (8-bit) from Hugging Face:
-  ```bash
-  docker exec -it ollama ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
-  ```
-
-  </TabItem>
-
-  <TabItem value="byo" label="BYO Ollama">
-
-  ## 🛠️ Bring Your Own Ollama (BYO Ollama)
-
-  If Ollama is running on the **host machine** or another server on your network, follow these steps.
-
-  ### Step 1: Check Available Models
-  Local:
-  ```bash
-  curl http://localhost:11434/v1/models
-  ```
-
-  Remote:
-  ```bash
-  curl http://<remote-ip>:11434/v1/models
-  ```
-
-  ### Step 2: Set the OLLAMA_BASE_URL
-  ```bash
-  export OLLAMA_BASE_URL=http://<remote-ip>:11434
-  ```
-
-  ### Step 3: Download Llama 3.2
-  ```bash
-  ollama pull llama3.2
-  ```
-
-  Or download the 8-bit version from Hugging Face:
-  ```bash
-  ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
-  ```
-
-  </TabItem>
-</Tabs>
-
----
-
-You now have everything you need to download and run models with **Ollama**. Happy exploring!
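As an end-to-end check of either setup, the same OpenAI-compatible `/v1` API used in the Step 1 commands can also serve a completion once a model is downloaded. A minimal smoke test, assuming the defaults from the deleted guides (Ollama reachable at `localhost:11434` and `llama3.2` already pulled):

```bash
# Request a single chat completion through Ollama's OpenAI-compatible endpoint.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Reply with one short sentence."}]
      }'
```

A JSON response containing a `choices` array confirms that the model is loaded and serving requests.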