nvidia-container-toolkit/cmd/nvidia-ctk
Evan Lezar 3a13747e10
Allow container runtime executable path to be specified
This change adds support for specifying the container runtime
executable path. This can be used if, for example, there are
two containerd executables and a specific one must be used.

Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-04-03 09:17:06 +02:00
cdi [no-relnote] Update CDI tests for new whitespace formatting 2025-04-02 11:38:46 +02:00
config Allow nvidia-ctk config --set to accept comma-separated lists 2025-03-12 15:47:32 +02:00
hook Move nvidia-ctk hook command into own binary 2024-05-21 12:19:44 +02:00
info run goimports -local against the entire codebase 2023-12-01 11:13:17 +01:00
runtime Allow container runtime executable path to be specified 2025-04-03 09:17:06 +02:00
system Remove watch option from create-dev-char-symlinks 2025-01-14 18:34:14 +01:00
main.go Allow nvidia-ctk config --set to accept comma-separated lists 2025-03-12 15:47:32 +02:00
README.md Use tags.cncf.io for CDI imports 2023-11-01 12:40:51 +01:00

NVIDIA Container Toolkit CLI

The NVIDIA Container Toolkit CLI nvidia-ctk provides a number of utilities that are useful for working with the NVIDIA Container Toolkit.

Functionality

Configure runtimes

The runtime command of the nvidia-ctk CLI provides a set of utilities related to the configuration and management of supported container engines.

For example, running the following command:

nvidia-ctk runtime configure --set-as-default

will ensure that the NVIDIA Container Runtime is added as the default runtime to the default container engine.
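
The engine to configure can also be specified explicitly. As a sketch (assuming the --runtime flag, which accepts values such as docker, containerd, and crio), running:

sudo nvidia-ctk runtime configure --runtime=containerd --set-as-default

would add the NVIDIA Container Runtime to the containerd configuration and set it as the default runtime there.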

Configure the NVIDIA Container Toolkit

The config command of the nvidia-ctk CLI allows a user to display and manipulate the NVIDIA Container Toolkit configuration.

For example, running the following command:

nvidia-ctk config default

will display the default config for the detected platform.

Whereas

nvidia-ctk config

will display the effective NVIDIA Container Toolkit config using the configured config file.

Individual config options can be set by specifying these as key-value pairs to the --set argument:

nvidia-ctk config --set nvidia-container-cli.no-cgroups=true
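
List-valued options can be set in the same way, since --set also accepts comma-separated lists. As an illustrative sketch (the nvidia-container-runtime.runtimes key is assumed here as an example of a list-valued option):

nvidia-ctk config --set nvidia-container-runtime.runtimes=docker-runc,runc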

By default, all commands output to STDOUT, but specifying the --output flag writes the config to the specified file.
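
For example, to persist a change to the config file itself (assuming /etc/nvidia-container-runtime/config.toml as the config file path, and that sudo is required to write to a system location), something like the following could be run:

sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups=true --output=/etc/nvidia-container-runtime/config.toml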

Generate CDI specifications

The Container Device Interface (CDI) provides a vendor-agnostic mechanism to make arbitrary devices accessible in containerized environments. To allow NVIDIA devices to be used in these environments, the NVIDIA Container Toolkit CLI includes functionality to generate a CDI specification for the available NVIDIA GPUs in a system.

To generate the CDI specification for the available devices, run the following command:

nvidia-ctk cdi generate

By default the specification is printed to STDOUT; a filename can be specified using the --output flag.

The specification will contain device entries as follows (where applicable):

  • An nvidia.com/gpu=gpu{INDEX} device for each non-MIG-enabled full GPU in the system
  • An nvidia.com/gpu=mig{GPU_INDEX}:{MIG_INDEX} device for each MIG device in the system
  • A special device called nvidia.com/gpu=all which represents all available devices.

For example, to generate the CDI specification in the default location where CDI-enabled tools such as podman, containerd, cri-o, or the NVIDIA Container Runtime can be configured to load it, the following command can be run:

sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

(Note that sudo is used to ensure the correct permissions to write to the /etc/cdi folder)
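
Once the specification has been written to a registered location, the generated device names can be inspected. As a sketch (assuming the cdi list subcommand is available in the installed toolkit version):

nvidia-ctk cdi list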

With the specification generated, a GPU can be requested by specifying the fully-qualified CDI device name. With podman as an example:

podman run --rm -ti --device=nvidia.com/gpu=gpu0 ubuntu nvidia-smi -L
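
The special all device described above can be requested in the same way. As a sketch, the following would expose all available GPUs to the container:

podman run --rm -ti --device=nvidia.com/gpu=all ubuntu nvidia-smi -L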