Add OllamaDocker.md changes from feature-nginx-combined

Matthew Hand 2024-11-05 19:53:02 +00:00
parent 452064dfcb
commit 486c91e487


@@ -0,0 +1,43 @@
### 🐳 Ollama Inside Docker
If **Ollama is deployed inside Docker** (e.g., via Docker Compose or Kubernetes), the service is reachable:
- **Inside the Docker network**: `http://ollama:11434` (or `http://127.0.0.1:11434` from within the Ollama container itself)
- **From the host**: `http://localhost:11435` (if container port `11434` is published to host port `11435`; see the sketch below)
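For reference, here is a minimal sketch of how such a port mapping could be set up with `docker run`; the container name, volume name, and host port `11435` are assumptions chosen to match the examples in this guide:
```bash
# Start Ollama in Docker, publishing container port 11434 as host port 11435.
# The container name "ollama" is the hostname other containers on the same
# Docker network use in http://ollama:11434.
docker run -d \
  --name ollama \
  -p 11435:11434 \
  -v ollama:/root/.ollama \
  ollama/ollama
```
With Docker Compose, the equivalent is a `ports: ["11435:11434"]` entry on the `ollama` service.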
#### Step 1: Check Available Models
- From another container on the same Docker network (here, via the Open WebUI container):
```bash
docker exec -it openwebui curl http://ollama:11434/v1/models
```
- From the host (if the port is published):
```bash
curl http://localhost:11435/v1/models
```
Either command lists all available models and confirms that Ollama is running.
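If you only want the model names, you can filter the response; this sketch assumes `jq` is installed on the host and that the endpoint returns the OpenAI-compatible `data[].id` shape:
```bash
# Print only the model IDs from the /v1/models response.
curl -s http://localhost:11435/v1/models | jq -r '.data[].id'
```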
#### Step 2: Download Llama 3.2
Run the following command:
```bash
docker exec -it ollama ollama pull llama3.2
```
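Once the pull finishes, you can verify that the model is available locally:
```bash
# List locally downloaded models; llama3.2 should appear in the output.
docker exec -it ollama ollama list
```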
**Tip:** You can also pull models from Hugging Face by specifying the repository path and quantization tag. For example, to download a higher-quality **8-bit version of Llama 3.2**:
```bash
docker exec -it ollama ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```
#### Step 3: Access the WebUI
Once everything is set up, access the WebUI at:
[http://localhost:3000](http://localhost:3000)
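As a quick sanity check before opening the browser, you can confirm that something is answering on port 3000 (assuming the default Open WebUI port mapping):
```bash
# An HTTP 200 status indicates the WebUI is up and serving.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
```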