# NVIDIA Container Toolkit CLI
The NVIDIA Container Toolkit CLI `nvidia-ctk` provides a number of utilities that are useful for working with the NVIDIA Container Toolkit.
## Functionality
### Configure runtimes
The `runtime` command of the `nvidia-ctk` CLI provides a set of utilities related to the configuration and management of supported container engines.
For example, running the following command:
```bash
nvidia-ctk runtime configure --set-as-default
```
will ensure that the NVIDIA Container Runtime is added as the default runtime to the default container engine.
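As an illustration, the engine to configure can also be selected explicitly. The sketch below assumes Docker as the target engine and that the installed CLI version provides the `--runtime` flag; the daemon must then be restarted for the change to take effect:

```bash
# Configure Docker to use the NVIDIA Container Runtime and set it as the default
# (--runtime selects the engine; assumed available in the installed CLI version).
sudo nvidia-ctk runtime configure --runtime=docker --set-as-default

# Restart Docker so that the updated daemon configuration is picked up.
sudo systemctl restart docker
```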
### Generate CDI specifications
The Container Device Interface (CDI) provides a vendor-agnostic mechanism to make arbitrary devices accessible in containerized environments. To allow NVIDIA devices to be used in these environments, the NVIDIA Container Toolkit CLI includes functionality to generate a CDI specification for the available NVIDIA GPUs in a system.
In order to generate the CDI specification for the available devices, run the following command:
```bash
nvidia-ctk cdi generate
```
By default, the specification is printed to STDOUT; a filename can be specified using the `--output` flag.
The specification will contain the following device entries (where applicable):

- An `nvidia.com/gpu=gpu{INDEX}` device for each non-MIG-enabled full GPU in the system
- An `nvidia.com/gpu=mig{GPU_INDEX}:{MIG_INDEX}` device for each MIG device in the system
- A special device called `nvidia.com/gpu=all` which represents all available devices
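For reference, a heavily abbreviated sketch of what such a specification might look like is shown below. The device names follow the scheme above; the CDI version, device-node paths, and other details are illustrative and will differ per system and driver version:

```yaml
# Illustrative excerpt only; a real specification also contains the mounts,
# hooks, and environment edits derived from the installed driver.
cdiVersion: "0.5.0"
kind: nvidia.com/gpu
devices:
  - name: gpu0
    containerEdits:
      deviceNodes:
        - path: /dev/nvidia0
  - name: all
    containerEdits:
      deviceNodes:
        - path: /dev/nvidia0
containerEdits:
  deviceNodes:
    - path: /dev/nvidiactl
```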
For example, to generate the CDI specification in the default location where CDI-enabled tools such as `podman`, `containerd`, `cri-o`, or the NVIDIA Container Runtime can be configured to load it, the following command can be run:
```bash
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
```
(Note that `sudo` is used to ensure the correct permissions to write to the `/etc/cdi` folder.)
With the specification generated, a GPU can be requested by specifying the fully-qualified CDI device name. With `podman` as an example:
```bash
podman run --rm -ti --device=nvidia.com/gpu=gpu0 ubuntu nvidia-smi -L
```
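Similarly, the special `all` device described above can be used to request every available GPU at once:

```bash
podman run --rm -ti --device=nvidia.com/gpu=all ubuntu nvidia-smi -L
```

If the specification was generated correctly, `nvidia-smi -L` should list each GPU visible on the host.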