Mirror of https://github.com/NVIDIA/nvidia-container-toolkit (synced 2024-11-22 00:08:11 +00:00)

Commit 20604621e4
For most practical purposes, it should be fine to set NVIDIA_DRIVER_CAPABILITIES=all nowadays. Historically, the different capabilities exist because they were added incrementally, with varying degrees of stability. It's fairly common to run with GPUs in containers today, but a few years ago the driver didn't support them very well, and it was important to make sure the libraries being injected into the container actually worked in a containerized environment. When they didn't, it was common to get information leaks, crashes, or even silent failures.

In the past, whenever a new set of libraries was being vetted for injection, a new capability was added so that users had explicit control to include only those libraries they were comfortable having injected into their containers. The idea was that whoever puts together a container image for use with GPUs should know what capabilities the software in that image requires, and can set the NVIDIA_DRIVER_CAPABILITIES env var in that image appropriately.

After some back and forth, we've decided it doesn't quite make sense to set it to "all" just yet, but we should set it to "utility, compute" instead of just "utility", so that at least the core CUDA libraries work by default (once installed in the container).

Signed-off-by: Kevin Klues <kklues@nvidia.com>
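As a concrete illustration of the pattern the commit message describes, an image author can pin the required capabilities directly in the image. The sketch below is hypothetical: the base image tag and the application binary are illustrative placeholders, and only the NVIDIA_DRIVER_CAPABILITIES values themselves come from the commit.

```dockerfile
# Hypothetical GPU application image. The base image and binary name
# are placeholders, not part of this commit.
FROM nvidia/cuda:12.2.0-base-ubuntu22.04

# Request only the capabilities this software actually needs, rather than "all":
#   compute -> core CUDA/OpenCL libraries
#   utility -> nvidia-smi and related tools
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

COPY my-cuda-app /usr/local/bin/my-cuda-app
CMD ["/usr/local/bin/my-cuda-app"]
```

The same value can also be supplied at run time rather than baked into the image, for example `docker run --gpus all -e NVIDIA_DRIVER_CAPABILITIES=compute,utility <image>`.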
capabilities.go
container_config.go
container_test.go
hook_config.go
hook_test.go
main.go