NVIDIA Container Toolkit CLI

The NVIDIA Container Toolkit CLI nvidia-ctk provides a number of utilities that are useful for working with the NVIDIA Container Toolkit.

Functionality

Configure runtimes

The runtime command of the nvidia-ctk CLI provides a set of utilities related to the configuration and management of supported container engines.

For example, running the following command:

nvidia-ctk runtime configure --set-as-default

will ensure that the NVIDIA Container Runtime is added to the default container engine's configuration and set as its default runtime.
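
A specific container engine can also be targeted explicitly. As an illustrative sketch (assuming the runtime command's --runtime flag accepts an engine name such as docker), the following would configure Docker and set the NVIDIA Container Runtime as its default:

nvidia-ctk runtime configure --runtime=docker --set-as-default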

Generate CDI specifications

The Container Device Interface (CDI) provides a vendor-agnostic mechanism to make arbitrary devices accessible in containerized environments. To allow NVIDIA devices to be used in these environments, the NVIDIA Container Toolkit CLI includes functionality to generate a CDI specification for the available NVIDIA GPUs in a system.

To generate the CDI specification for the available devices, run the following command:

nvidia-ctk cdi generate

By default, the specification is printed to STDOUT; a file can be specified instead using the --output flag.

The specification will contain the following device entries (where applicable):

  • An nvidia.com/gpu=gpu{INDEX} device for each non-MIG-enabled full GPU in the system
  • An nvidia.com/gpu=mig{GPU_INDEX}:{MIG_INDEX} device for each MIG device in the system
  • A special nvidia.com/gpu=all device that represents all available devices
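
For example, on a hypothetical system with two GPUs where the second GPU is MIG-enabled with two MIG devices, the generated specification would contain devices along the lines of nvidia.com/gpu=gpu0, nvidia.com/gpu=mig1:0, nvidia.com/gpu=mig1:1, and nvidia.com/gpu=all.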

For example, to generate the CDI specification in the default location where CDI-enabled tools such as podman, containerd, cri-o, or the NVIDIA Container Runtime can be configured to load it, the following command can be run:

sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

(Note that sudo is used to ensure the permissions required to write to the /etc/cdi folder.)

With the specification generated, a GPU can be requested by specifying the fully-qualified CDI device name. With podman as an example:

podman run --rm -ti --device=nvidia.com/gpu=gpu0 ubuntu nvidia-smi -L
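
Similarly, all available devices can be requested at once using the special all device described above:

podman run --rm -ti --device=nvidia.com/gpu=all ubuntu nvidia-smi -L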