NVIDIA Container Toolkit CLI

The NVIDIA Container Toolkit CLI nvidia-ctk provides a number of utilities that are useful for working with the NVIDIA Container Toolkit.

Functionality

Configure runtimes

The runtime command of the nvidia-ctk CLI provides a set of utilities related to the configuration and management of supported container engines.

For example, running the following command:

nvidia-ctk runtime configure --set-as-default

will ensure that the NVIDIA Container Runtime is added as the default runtime to the default container engine.
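On a Docker host, for instance, this registers the runtime in /etc/docker/daemon.json. The resulting file is expected to look roughly like the following (an illustrative sketch, not verbatim tool output):

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "args": []
        }
    }
}
```
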

Configure the NVIDIA Container Toolkit

The config command of the nvidia-ctk CLI allows a user to display and manipulate the NVIDIA Container Toolkit configuration.

For example, running the following command:

nvidia-ctk config default

will display the default config for the detected platform.

Whereas

nvidia-ctk config

will display the effective NVIDIA Container Toolkit config using the configured config file.

Individual config options can be set by specifying these as key-value pairs to the --set argument:

nvidia-ctk config --set nvidia-container-cli.no-cgroups=true

By default, all commands output to STDOUT, but specifying the --output flag writes the config to the specified file.
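As a sketch of the effect, the --set invocation above should yield a config containing a section like the following (illustrative, assuming the standard TOML layout of the config file):

```toml
[nvidia-container-cli]
no-cgroups = true
```
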

Generate CDI specifications

The Container Device Interface (CDI) provides a vendor-agnostic mechanism to make arbitrary devices accessible in containerized environments. To allow NVIDIA devices to be used in these environments, the NVIDIA Container Toolkit CLI includes functionality to generate a CDI specification for the available NVIDIA GPUs in a system.

To generate the CDI specification for the available devices, run the following command:

nvidia-ctk cdi generate

By default, the specification is printed to STDOUT; a filename can be specified using the --output flag.

The specification will contain device entries as follows (where applicable):

  • An nvidia.com/gpu=gpu{INDEX} device for each non-MIG-enabled full GPU in the system
  • An nvidia.com/gpu=mig{GPU_INDEX}:{MIG_INDEX} device for each MIG device in the system
  • A special device called nvidia.com/gpu=all which represents all available devices.

For example, to generate the CDI specification in the default location where CDI-enabled tools such as podman, containerd, cri-o, or the NVIDIA Container Runtime can be configured to load it, the following command can be run:

sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

(Note that sudo is used to ensure the correct permissions to write to the /etc/cdi folder)
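The generated file follows the CDI specification format. A heavily abridged sketch for a single-GPU system might look like the following (the device node paths and exact cdiVersion are illustrative, not verbatim output):

```yaml
cdiVersion: 0.5.0
kind: nvidia.com/gpu
devices:
  - name: gpu0
    containerEdits:
      deviceNodes:
        - path: /dev/nvidia0
containerEdits:
  deviceNodes:
    - path: /dev/nvidiactl
```
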

With the specification generated, a GPU can be requested by specifying the fully-qualified CDI device name. Using podman as an example:

podman run --rm -ti --device=nvidia.com/gpu=gpu0 ubuntu nvidia-smi -L