Modified the Dockerfile to install PyTorch, torchvision, and torchaudio from the CUDA 11.8-specific wheel index URL. This ensures compatibility with the CUDA version in our environment and should improve performance and stability for GPU-accelerated operations.
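For reference, a minimal sketch of what that install step can look like, assuming pip is available in the build stage; the index URL is PyTorch's official cu118 wheel index, and leaving the packages unpinned is illustrative rather than the project's actual pinning:

```dockerfile
# Install GPU-enabled PyTorch wheels built against CUDA 11.8.
# Unpinned versions are illustrative only; the real Dockerfile may pin them.
RUN pip3 install --no-cache-dir \
    torch torchvision torchaudio \
    --index-url https://download.pytorch.org/whl/cu118
```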
Okay, this was driving my OCD crazy.
Corrected a spelling error in the Dockerfile's comment section to enhance documentation clarity. The typo 'persormance' was updated to 'performance,' ensuring accurate guidance on using multilingual sentence transformer models for better performance and language support.
Enabled the NVIDIA CUDA backend build stage in the Dockerfile for better performance with GPU support. Moved the environment variable that selects the device type for the embedding and TTS models so it is shared between the CPU and GPU configurations: the CPU build now explicitly defaults to "cpu", while the CUDA build retains "cuda", keeping the behavior of each hardware setup clear.
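A hedged sketch of how the shared device-type variable might be laid out across the two stages; DEVICE_TYPE and the base image tags are illustrative names, not necessarily those used in the actual Dockerfile:

```dockerfile
# CPU build stage: device type explicitly defaults to "cpu".
FROM python:3.11-slim AS base-cpu
ENV DEVICE_TYPE=cpu

# CUDA build stage: same variable, overridden to "cuda".
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04 AS base-cuda
ENV DEVICE_TYPE=cuda
```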
Switched to Chainguard images as the base for both the CPU and CUDA backend builds for improved security and compatibility. Replaced the Ubuntu base with Chainguard's Python image for the CPU builds and its PyTorch CUDA image for GPU acceleration, resolving Python requirements conflicts. Updated the package installation commands to match the new Wolfi-based images. The Dockerfile now installs only the necessary dependencies, since Python is provided by the base image.
These changes will facilitate a more secure and streamlined build process with better dependency management across different platforms.
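For illustration, the stage bases might look roughly like the following; the exact Chainguard image names and tags are assumptions here and should be taken from cgr.dev rather than this sketch:

```dockerfile
# CPU stage on Chainguard's Python image (tag is an assumption).
FROM cgr.dev/chainguard/python:latest-dev AS base-cpu

# CUDA stage on a Chainguard PyTorch image; the image name and tag are
# assumptions and must match what the registry actually publishes.
FROM cgr.dev/chainguard/pytorch-cuda12:latest AS base-cuda
```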
Standardized CUDA_VERSION as a global ARG to keep the value consistent and make version bumps easier across the Dockerfile. The CUDA version is now defined once at the top and reused, reducing the chance of mismatched versions and easing maintenance when changing CUDA versions. It also streamlines potential multi-stage builds with varying CUDA dependencies.
Refs #nvidia-update
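A sketch of the global-ARG pattern being described; the version value and stage name are illustrative. Note that an ARG declared before the first FROM is visible to FROM lines, but must be re-declared inside a stage before it can be used in that stage's instructions:

```dockerfile
# Declared once at the top so every stage pulls the same CUDA version.
ARG CUDA_VERSION=11.8.0

FROM nvidia/cuda:${CUDA_VERSION}-runtime-ubuntu22.04 AS cuda-base
# Re-declare inside the stage to bring the global ARG into scope.
ARG CUDA_VERSION
RUN echo "Building against CUDA ${CUDA_VERSION}"
```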
Refactored the Dockerfile to better organize and streamline environment variable settings, emphasizing support for a CUDA-based WebUI backend while retaining the ability to build a CPU-only image. Consolidated ENV commands to reduce layers and improve build efficiency, and set a default PORT environment variable to make the container easier to run. The backend service is now exposed on port 8080, and combined RUN directives minimize the image footprint. These changes support a more robust deployment process for both CPU and CUDA environments.
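A condensed sketch of the consolidation described above; PORT and the exposed port come from this change, while the other variable names and the requirements path are placeholders:

```dockerfile
# Single ENV layer with a default PORT so the container runs without extra flags.
ENV DEVICE_TYPE=cpu \
    PORT=8080

# One combined RUN keeps dependency install and cleanup in a single layer.
RUN pip3 install --no-cache-dir -r requirements.txt \
    && rm -rf /root/.cache

EXPOSE 8080
```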