Add OllamaDocker.md changes from feature-nginx-combined
This commit is contained in: parent 7d71c2f5ca, commit a5a3e77cd2
@@ -0,0 +1,43 @@
### 🐳 Ollama Inside Docker
If **Ollama is deployed inside Docker** (e.g., via Docker Compose or Kubernetes), the service will be available at:

- **Inside the container**: `http://127.0.0.1:11434`
- **From the host**: `http://localhost:11435` (if exposed via a port mapping, as in the sketch below)
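For reference, here is a minimal sketch of a plain `docker run` setup that matches the URLs above. The container names (`ollama`, `openwebui`), the shared network, and the `11435:11434` mapping are assumptions chosen for this guide, not the project's official deployment:

```bash
# Assumed names and ports, chosen to match the URLs in this guide.
docker network create ollama-net

# Ollama: listens on 11434 inside the container, exposed as 11435 on the host.
docker run -d --name ollama --network ollama-net \
  -p 11435:11434 ollama/ollama

# Open WebUI: reaches Ollama by container name over the shared network.
docker run -d --name openwebui --network ollama-net \
  -p 3000:8080 -e OLLAMA_BASE_URL=http://ollama:11434 \
  ghcr.io/open-webui/open-webui:main
```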
#### Step 1: Check Available Models
- Inside the container:

```bash
docker exec -it openwebui curl http://ollama:11434/v1/models
```

- From the host (if exposed):

```bash
curl http://localhost:11435/v1/models
```

Either command lists the available models and confirms that Ollama is running.
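If the request is refused immediately after startup, Ollama may still be initializing. A small wait loop, sketched here against the assumed host-side `11435` mapping from above, can make scripts more robust:

```bash
# Poll the models endpoint until Ollama answers with a 2xx response.
# -s: silent output, -f: treat HTTP errors as failures.
until curl -sf http://localhost:11435/v1/models > /dev/null; do
  echo "Waiting for Ollama to come up..."
  sleep 2
done
echo "Ollama is reachable."
```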
#### Step 2: Download Llama 3.2
Run the following command:

```bash
docker exec -it ollama ollama pull llama3.2
```
**Tip:** You can download other models from Hugging Face by passing their `hf.co` path to `ollama pull`. For example, to download a higher-quality **8-bit quantized version of Llama 3.2**:

```bash
docker exec -it ollama ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```
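To confirm that a pull completed, you can list the models now stored in the container (`ollama list` is part of the standard CLI):

```bash
# Shows each local model's name, size, and modification time.
docker exec -it ollama ollama list
```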
#### Step 3: Access the WebUI
Once everything is set up, access the WebUI at:

[http://localhost:3000](http://localhost:3000)
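If the page does not load, a quick reachability check from the host (assuming the usual `3000` port mapping for the WebUI container) is:

```bash
# -I requests only the response headers; any HTTP response means
# the WebUI is serving on port 3000.
curl -I http://localhost:3000
```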