From 486c91e487c2d62d33ba218ec23f87952e70bd05 Mon Sep 17 00:00:00 2001
From: Matthew Hand
Date: Tue, 5 Nov 2024 19:53:02 +0000
Subject: [PATCH] Add OllamaDocker.md changes from feature-nginx-combined

---
 .../tab-ollama/OllamaDocker.md | 43 +++++++++++++++++++
 1 file changed, 43 insertions(+)
 create mode 100644 docs/getting-started/using-openwebui/tab-ollama/OllamaDocker.md

diff --git a/docs/getting-started/using-openwebui/tab-ollama/OllamaDocker.md b/docs/getting-started/using-openwebui/tab-ollama/OllamaDocker.md
new file mode 100644
index 0000000..093ae6f
--- /dev/null
+++ b/docs/getting-started/using-openwebui/tab-ollama/OllamaDocker.md
@@ -0,0 +1,43 @@
+
### 🐳 Ollama Inside Docker

If **Ollama is deployed inside Docker** (e.g., using Docker Compose or Kubernetes), the service is reachable at:

- **Inside the container**: `http://127.0.0.1:11434`
- **From the host**: `http://localhost:11435` (if the container's port 11434 is published to host port 11435)

Minimal sketches of a matching container setup and a few verification commands are included at the end of this page.

#### Step 1: Check Available Models

- Inside the container:

  ```bash
  docker exec -it openwebui curl http://ollama:11434/v1/models
  ```

- From the host (if exposed):

  ```bash
  curl http://localhost:11435/v1/models
  ```

Either command lists the currently available models and confirms that Ollama is running.

#### Step 2: Download Llama 3.2

Run the following command:

```bash
docker exec -it ollama ollama pull llama3.2
```

**Tip:** You can download other models from Hugging Face by specifying the appropriate repository path and quantization tag. For example, to download a higher-quality **8-bit version of Llama 3.2** (prefix the command with `docker exec -it ollama` when Ollama runs inside Docker, as above):

```bash
ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```

#### Step 3: Access the WebUI

Once everything is set up, access the WebUI at:

[http://localhost:3000](http://localhost:3000)
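
For reference, here is a minimal sketch of one way to run the pair of containers assumed throughout this page. The container names (`ollama`, `openwebui`), the shared network name, the `11435:11434` host port mapping, and the `3000:8080` WebUI mapping are assumptions chosen to line up with the examples above, not the only valid configuration.

```bash
# Illustrative sketch only — names, ports, and volumes are assumptions
# chosen to match the examples on this page.
docker network create ollama-net

# Ollama: container port 11434 published to host port 11435
docker run -d --name ollama --network ollama-net \
  -v ollama:/root/.ollama \
  -p 11435:11434 \
  ollama/ollama

# Open WebUI: reaches Ollama by container name, UI published on host port 3000
docker run -d --name openwebui --network ollama-net \
  -e OLLAMA_BASE_URL=http://ollama:11434 \
  -p 3000:8080 \
  ghcr.io/open-webui/open-webui:main
```

With this layout, `http://ollama:11434` resolves from inside the `openwebui` container, while the host reaches the same Ollama instance at `http://localhost:11435`.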
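
When the Step 1 check succeeds, Ollama's OpenAI-compatible `/v1/models` endpoint responds with a JSON list of models; the exact entries depend on what has already been pulled. Roughly:

```bash
curl -s http://localhost:11435/v1/models
# Illustrative response shape (values will differ):
# {"object":"list","data":[{"id":"llama3.2:latest","object":"model", ...}]}
```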
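
To confirm that the model pulled in Step 2 is available, list the models known to the Ollama container (assuming the container is named `ollama`, as above):

```bash
docker exec -it ollama ollama list
# llama3.2 should appear in the listing once the pull completes
```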