This change fixes a bug where the value of NVIDIA_VISIBLE_DEVICES would be used to
select devices even if the `swarm-resource` config option is specified.
Note that this does not change the value of NVIDIA_VISIBLE_DEVICES in the container.
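As an illustration of the intended precedence (the helper and config field
names below are hypothetical, not the toolkit's actual API): when
`swarm-resource` is configured and one of the listed envvars is present, its
value is used and NVIDIA_VISIBLE_DEVICES is not consulted at all.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // config mirrors the relevant toolkit option (field name is illustrative).
    type config struct {
        // SwarmResource holds the comma-separated envvar name(s) configured
        // via the `swarm-resource` option, e.g. "DOCKER_RESOURCE_GPU".
        SwarmResource string
    }

    // selectDevices returns the device list to use. When `swarm-resource` is
    // configured and one of the listed envvars is set, its value wins and
    // NVIDIA_VISIBLE_DEVICES is not consulted.
    func selectDevices(cfg config) string {
        for _, name := range strings.Split(cfg.SwarmResource, ",") {
            name = strings.TrimSpace(name)
            if name == "" {
                continue
            }
            if devices, ok := os.LookupEnv(name); ok {
                return devices
            }
        }
        // Fall back to NVIDIA_VISIBLE_DEVICES only when no swarm resource
        // envvar applies.
        return os.Getenv("NVIDIA_VISIBLE_DEVICES")
    }

    func main() {
        os.Setenv("DOCKER_RESOURCE_GPU", "GPU-feab3fd6")
        os.Setenv("NVIDIA_VISIBLE_DEVICES", "all")
        fmt.Println(selectDevices(config{SwarmResource: "DOCKER_RESOURCE_GPU"}))
    }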
Signed-off-by: Evan Lezar <elezar@nvidia.com>
This change ignores the value of NVIDIA_VISIBLE_DEVICES instead of
raising an error when launching a container with insufficient permissions.
This changes the behaviour under the following conditions:
NVIDIA_VISIBLE_DEVICES is set
and
accept-nvidia-visible-devices-envvar-when-unprivileged = false (default: true)
or
privileged = false (default: false)
This means that a user need not explicitly clear the NVIDIA_VISIBLE_DEVICES
environment variable if no GPUs are to be used in unprivileged containers.
Note that this envvar is set to 'all' by default in many CUDA images that
are used as base images.
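Roughly, the new resolution behaves like the following sketch (names are
illustrative; the real logic lives in the toolkit's hook code):

    package main

    import "fmt"

    // resolveVisibleDevices decides which devices NVIDIA_VISIBLE_DEVICES
    // selects. The unprivileged-and-not-accepted case used to raise an
    // error; it now simply yields no devices.
    func resolveVisibleDevices(value string, isSet, privileged, acceptWhenUnprivileged bool) string {
        if !isSet {
            return ""
        }
        if privileged || acceptWhenUnprivileged {
            return value
        }
        // Unprivileged container and the envvar is not accepted for
        // unprivileged containers: ignore it instead of erroring out.
        return ""
    }

    func main() {
        // e.g. a CUDA base image that sets NVIDIA_VISIBLE_DEVICES=all, run
        // unprivileged with the envvar not accepted for unprivileged containers.
        fmt.Printf("%q\n", resolveVisibleDevices("all", true, false, false)) // ""
    }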
Signed-off-by: Evan Lezar <elezar@nvidia.com>
For most practical purposes, it should be fine to set
NVIDIA_DRIVER_CAPABILITIES=all nowadays.
Historically, these different capabilities exist because they were added
incrementally, with varying degrees of stability. It's fairly common to
run with GPUs in containers today, but a few years ago the driver didn't
support them very well, and it was important to make sure the libraries
being injected into the container actually worked in a containerized
environment. When they didn't, it was common to get information leaks,
crashes, or even silent failures.
In the past, whenever a new set of libraries was being vetted for
injection, a new capability was added to make sure that users had control
to explicitly include only those libraries they were comfortable having
injected into their containers.
The idea was that whoever puts together a container image for use with
GPUs should know which capabilities the software in that image requires,
and can set the NVIDIA_DRIVER_CAPABILITIES envvar in that image
appropriately.
After some back and forth, we've decided it doesn't quite make sense to
set it to "all" just yet, but we should set it to "utility, compute"
instead of just "utility", so that at least the core CUDA libraries work
by default (once installed in the container).
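A minimal sketch of how such a default could be applied (constant and
function names, and the exact spelling of the value, are illustrative):

    package main

    import (
        "fmt"
        "os"
    )

    // defaultDriverCapabilities is the value assumed when the container
    // image does not set NVIDIA_DRIVER_CAPABILITIES itself.
    const defaultDriverCapabilities = "utility,compute"

    func driverCapabilities() string {
        if caps, ok := os.LookupEnv("NVIDIA_DRIVER_CAPABILITIES"); ok && caps != "" {
            return caps
        }
        return defaultDriverCapabilities
    }

    func main() {
        fmt.Println(driverCapabilities()) // "utility,compute" unless overridden
    }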
Signed-off-by: Kevin Klues <kklues@nvidia.com>
Also hard-code the "root" path under which these volume mounts are
looked up, rather than making it configurable.
Signed-off-by: Kevin Klues <kklues@nvidia.com>
This was added to fix a regression with support for the default runc
shipped with CentOS 7.
The version of runc that is installed by default on CentOS 7 is
1.0.0-rc2 which uses OCI spec 1.0.0-rc2-dev.
This is a prerelease of the OCI spec, which defines the capabilities
section of a process configuration to be a flat list of capabilities
(e.g. SYS_ADMIN, SYS_PTRACE, SYS_RAWIO, etc.)
https://github.com/opencontainers/runtime-spec/blob/v1.0.0-rc2/config.md#process-configuration
By the time the official 1.0.0 version of the OCI spec came out, the
capabilities section of a process configuration was expanded to include
embedded fields for effective, bounding, inheritable, permitted and
ambient (each of which can contain a flat list of capabilities of the
form SYS_ADMIN, SYS_PTRACE, SYS_RAWIO, etc.)
https://github.com/opencontainers/runtime-spec/blob/v1.0.0/config.md#linux-process
Previously, we only inspected the capabilities section of a process
configuration assuming it was in the format of OCI spec 1.0.0.
This patch makes sure we can parse the capabilities in either format.
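A sketch of parsing both shapes (type and function names are illustrative,
not the hook's actual ones): try the structured 1.0.0 form first, then fall
back to the rc2-era flat list.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // ociCapabilities mirrors the OCI 1.0.0 process capabilities object.
    type ociCapabilities struct {
        Effective   []string `json:"effective"`
        Bounding    []string `json:"bounding"`
        Inheritable []string `json:"inheritable"`
        Permitted   []string `json:"permitted"`
        Ambient     []string `json:"ambient"`
    }

    // parseCapabilities accepts either the structured OCI 1.0.0 form or the
    // flat list used by the 1.0.0-rc2-dev spec that ships with CentOS 7's
    // runc, and returns a single flat list of capability names.
    func parseCapabilities(raw json.RawMessage) ([]string, error) {
        var structured ociCapabilities
        if err := json.Unmarshal(raw, &structured); err == nil {
            seen := map[string]bool{}
            var caps []string
            for _, set := range [][]string{
                structured.Effective, structured.Bounding, structured.Inheritable,
                structured.Permitted, structured.Ambient,
            } {
                for _, c := range set {
                    if !seen[c] {
                        seen[c] = true
                        caps = append(caps, c)
                    }
                }
            }
            return caps, nil
        }
        var flat []string
        if err := json.Unmarshal(raw, &flat); err != nil {
            return nil, fmt.Errorf("unrecognized capabilities format: %w", err)
        }
        return flat, nil
    }

    func main() {
        oldFormat := json.RawMessage(`["SYS_ADMIN", "SYS_PTRACE"]`)
        newFormat := json.RawMessage(`{"effective": ["SYS_ADMIN"], "bounding": ["SYS_ADMIN"]}`)
        fmt.Println(parseCapabilities(oldFormat))
        fmt.Println(parseCapabilities(newFormat))
    }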
Signed-off-by: Kevin Klues <kklues@nvidia.com>
These flags can only be injected into privileged containers. If the
container is unprivileged and one of these flags is specified, then we
exit with an error.
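Schematically (the flag name is a placeholder, since the actual flags are
not listed here):

    package main

    import (
        "fmt"
        "os"
    )

    // requirePrivileged aborts when a privileged-only flag was requested
    // for an unprivileged container.
    func requirePrivileged(flagName string, flagSet, privileged bool) {
        if flagSet && !privileged {
            fmt.Fprintf(os.Stderr, "%s is only supported for privileged containers\n", flagName)
            os.Exit(1)
        }
    }

    func main() {
        requirePrivileged("some-privileged-only-flag", true, true) // ok: privileged
    }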
Signed-off-by: Kevin Klues <kklues@nvidia.com>
This also includes a helper to look through the capabilities contained
in the spec to determine if the container is privileged or not.
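A sketch of such a helper, under the assumption that the presence of
CAP_SYS_ADMIN (or SYS_ADMIN in the older flat form) in the flattened
capability list is what marks a container as privileged:

    package main

    import "fmt"

    // isPrivileged reports whether the container should be treated as
    // privileged, assuming the presence of (CAP_)SYS_ADMIN in the
    // flattened capability list is the deciding signal.
    func isPrivileged(capabilities []string) bool {
        for _, c := range capabilities {
            if c == "CAP_SYS_ADMIN" || c == "SYS_ADMIN" {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(isPrivileged([]string{"CAP_CHOWN", "CAP_NET_BIND_SERVICE"})) // false
        fmt.Println(isPrivileged([]string{"CAP_SYS_ADMIN"}))                     // true
    }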
Signed-off-by: Kevin Klues <kklues@nvidia.com>
This allows someone to (for example) pass the following environment
variables:
NVIDIA_VISIBLE_DEVICES_0="0,1"
NVIDIA_VISIBLE_DEVICES_1="2,3"
NVIDIA_VISIBLE_DEVICES_WHATEVER="4,5"
and have the nvidia-container-toolkit automatically merge these into:
NVIDIA_VISIBLE_DEVICES="0,1,2,3,4,5"
This is useful (for example) if the full list of devices comes
from multiple, disparate sources.
Note: if NVIDIA_VISIBLE_DEVICES already exists as an environment
variable, the merged value replaces it entirely; its original value is
*not* included in the merge. We exclude the original value to ensure
that we have a way to override the default value of
NVIDIA_VISIBLE_DEVICES set to "all" inside a container image.
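A sketch of the merge (illustrative only, not the toolkit's actual
implementation): collect every NVIDIA_VISIBLE_DEVICES_<suffix> value from
the environment, join the pieces, and drop any pre-existing
NVIDIA_VISIBLE_DEVICES value.

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    const visibleDevicesEnvvar = "NVIDIA_VISIBLE_DEVICES"

    // mergeVisibleDevices combines every NVIDIA_VISIBLE_DEVICES_* variable
    // in environ into a single device list. Any pre-existing
    // NVIDIA_VISIBLE_DEVICES value is deliberately excluded so that a
    // default of "all" baked into the image can be overridden.
    func mergeVisibleDevices(environ []string) (string, bool) {
        prefix := visibleDevicesEnvvar + "_"
        var parts []string
        for _, kv := range environ {
            key, value, ok := strings.Cut(kv, "=")
            if !ok || !strings.HasPrefix(key, prefix) || value == "" {
                continue
            }
            parts = append(parts, value)
        }
        if len(parts) == 0 {
            return "", false
        }
        sort.Strings(parts) // ordering here is illustrative only
        return strings.Join(parts, ","), true
    }

    func main() {
        environ := []string{
            "NVIDIA_VISIBLE_DEVICES=all", // excluded from the merge
            "NVIDIA_VISIBLE_DEVICES_0=0,1",
            "NVIDIA_VISIBLE_DEVICES_1=2,3",
            "NVIDIA_VISIBLE_DEVICES_WHATEVER=4,5",
        }
        if merged, ok := mergeVisibleDevices(environ); ok {
            fmt.Printf("%s=%s\n", visibleDevicesEnvvar, merged) // 0,1,2,3,4,5
        }
    }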
Signed-off-by: Kevin Klues <kklues@nvidia.com>