Mirror of https://github.com/open-webui/docs, synced 2025-05-20 03:08:56 +00:00
Merge pull request #37 from justinh-rahb/rocm-compose
Add AMD GPU Docker installation instructions
commit c0494a3f7e
@@ -101,6 +101,30 @@ title: "🚀 Getting Started"
docker compose -f docker-compose.yaml -f docker-compose.gpu.yaml up -d --build
```
- **For AMD GPU Support:** Some AMD GPUs require setting an environment variable for proper functionality:
```bash
HSA_OVERRIDE_GFX_VERSION=11.0.0 docker compose -f docker-compose.yaml -f docker-compose.amdgpu.yaml up -d --build
```
<details>
<summary>AMD GPU Support with HSA_OVERRIDE_GFX_VERSION</summary>
For AMD GPU users encountering compatibility issues, setting the `HSA_OVERRIDE_GFX_VERSION` environment variable is crucial. This variable instructs the ROCm platform to emulate a specific GPU architecture, ensuring compatibility with various AMD GPUs not officially supported. Depending on your GPU model, adjust the `HSA_OVERRIDE_GFX_VERSION` as follows:
- **For RDNA1 & RDNA2 GPUs** (e.g., RX 6700, Radeon 680M): Use `HSA_OVERRIDE_GFX_VERSION=10.3.0`.
- **For RDNA3 GPUs**: Set `HSA_OVERRIDE_GFX_VERSION=11.0.0`.
- **For older GCN (Graphics Core Next) GPUs**: The version to use varies. GCN 4th gen and earlier might require different settings, such as `ROC_ENABLE_PRE_VEGA=1` for GCN4, or `HSA_OVERRIDE_GFX_VERSION=9.0.0` for Vega (GCN5.0) emulation.
Set `HSA_OVERRIDE_GFX_VERSION` to the value appropriate for your GPU model, following the guidelines above. For a detailed list of compatible versions and more in-depth instructions, refer to the [ROCm documentation](https://rocm.docs.amd.com) and the [openSUSE Wiki on AMD GPGPU](https://en.opensuse.org/SDB:AMD_GPGPU).
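To find out which gfx target your GPU actually reports, the `rocminfo` utility shipped with ROCm lists it. The helper below is a hedged sketch that maps a reported target to the override values suggested above; the function name and the gfx patterns are illustrative, not part of Open WebUI or ROCm:

```bash
# Map a reported gfx target to the HSA_OVERRIDE_GFX_VERSION value suggested
# in the guidelines above (illustrative mapping, not an official table).
gfx_to_override() {
  case "$1" in
    gfx103*) echo "10.3.0" ;;   # RDNA2 (e.g. RX 6700, Radeon 680M)
    gfx101*) echo "10.3.0" ;;   # RDNA1 - same override per the guideline above
    gfx110*) echo "11.0.0" ;;   # RDNA3
    gfx9*)   echo "9.0.0"  ;;   # Vega / GCN5
    *)       echo "unknown"; return 1 ;;
  esac
}

# Discover the local GPU's target (requires ROCm's rocminfo) and print the
# matching override, e.g. "gfx1030 -> 10.3.0":
# rocminfo | grep -o 'gfx[0-9a-f]*' | sort -u | while read g; do
#   echo "$g -> $(gfx_to_override "$g")"
# done
```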
Example command for RDNA1 & RDNA2 GPUs:
```bash
HSA_OVERRIDE_GFX_VERSION=10.3.0 docker compose -f docker-compose.yaml -f docker-compose.amdgpu.yaml up -d --build
```
</details>
- **To Expose Ollama API:** Use another Docker Compose file:
```bash
@@ -155,6 +179,41 @@ When using Docker to install Open WebUI, make sure to include the `-v open-webui
- After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
### GPU Support
#### Nvidia CUDA
To run Open WebUI with Nvidia GPU support, install the NVIDIA Container Toolkit (the successor to `nvidia-docker`) so that Docker can pass the GPU through to the container with `--gpus all`:
```bash
docker run -d -p 3000:8080 \
  --gpus all \
  --add-host=host.docker.internal:host-gateway \
  --volume open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```
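Before launching the container, it can help to confirm that Docker can actually reach the GPU. A common check, assuming the NVIDIA Container Toolkit is installed on the host (the CUDA image tag below is only an example):

```bash
# If GPU passthrough works, this prints the nvidia-smi device table and
# exits; otherwise Docker reports that no GPU devices are available.
docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi
```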
#### AMD ROCm
To run Open WebUI with AMD GPU support, set the `HSA_OVERRIDE_GFX_VERSION` environment variable and ensure the Docker container can access the GPU:
```bash
docker run -d -p 3000:8080 \
  -e HSA_OVERRIDE_GFX_VERSION=11.0.0 \
  --device /dev/kfd \
  --device /dev/dri \
  --group-add video \
  --add-host=host.docker.internal:host-gateway \
  --volume open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```
Replace `HSA_OVERRIDE_GFX_VERSION=11.0.0` with the value appropriate for your AMD GPU model, as described in the earlier sections.
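You can also check from the host that the device nodes mapped in by the command above actually exist and that your user can reach them (paths as used in the command; group names vary by distro):

```bash
# /dev/kfd is the ROCm compute interface and /dev/dri holds the render
# nodes; both must exist on the host for --device passthrough to work.
ls -l /dev/kfd /dev/dri
# Unprivileged access usually requires membership in the video group
# (and on some distros also the render group); list your groups with:
id -nG
```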
#### Open WebUI: Server Connection Error
Encountering connection issues between the Open WebUI Docker container and the Ollama server? This problem often arises because distro-packaged versions of Docker—like those from the Ubuntu repository—do not support the `host.docker.internal` alias for reaching the host directly. Inside a container, referring to `localhost` or `127.0.0.1` typically points back to the container itself, not the host machine.
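The `--add-host=host.docker.internal:host-gateway` flag used in the commands above is the usual remedy: it maps the alias to the host's gateway address even on Docker builds that do not provide it automatically. One way to verify the alias from inside a running container (container name as used above; assumes the `getent` utility is present in the image):

```bash
# Should print an IP address followed by "host.docker.internal"; if it
# prints nothing, the alias is unset and the --add-host flag is missing.
docker exec open-webui getent hosts host.docker.internal
```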