Commit 445414272f by Carlos Eduardo Arango Gutierrez
Add nvidia-cdi-refresh service
Automatic regeneration of /var/run/cdi/nvidia.yaml
New units:
	•	nvidia-cdi-refresh.service – one-shot wrapper for nvidia-ctk cdi generate (adds sleep + required caps); a manual equivalent is sketched after this list.
	•	nvidia-cdi-refresh.path – fires on driver install/upgrade via modules.dep.bin changes.
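
For reference, the regeneration that the one-shot service performs can also be run by hand. This is a minimal sketch: the nvidia-ctk cdi generate command and its --output flag are real, while the sleep and capability settings mentioned above live in the unit file itself and are not shown here.

    # Regenerate the CDI spec at the path named in this commit
    nvidia-ctk cdi generate --output=/var/run/cdi/nvidia.yaml
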
Packaging:
	•	RPM %post reloads systemd and enables the path unit on fresh installs; the steps are sketched after this list.
	•	DEB postinst does the same (configure, skip on upgrade).
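
A rough sketch of what those scriptlets are described as doing; the actual %post and postinst bodies ship with the packages and may differ in detail:

    # Make systemd pick up the newly installed units
    systemctl daemon-reload
    # Enable the path unit so the CDI spec is regenerated on driver changes
    systemctl enable nvidia-cdi-refresh.path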

Result: CDI spec is always up to date

Signed-off-by: Carlos Eduardo Arango Gutierrez <eduardoa@nvidia.com>
2025-05-21 17:24:08 +02:00
.github [no-relnote] Update E2E suite 2025-05-14 11:22:14 +02:00
cmd Merge commit from fork 2025-05-16 15:15:21 +02:00
deployments Add nvidia-cdi-refresh service 2025-05-21 17:24:08 +02:00
docker Add nvidia-cdi-refresh service 2025-05-21 17:24:08 +02:00
hack [no-relnote] Fix typo in script 2024-10-16 10:53:45 +02:00
internal Merge pull request #980 from elezar/add-rprivate-to-mount-options 2025-05-16 07:53:39 +02:00
packaging Add nvidia-cdi-refresh service 2025-05-21 17:24:08 +02:00
pkg Allow container runtime executable path to be specified 2025-04-07 17:28:11 +02:00
scripts [no-relnote] Use centos:stream9 for signing container 2025-03-12 12:44:35 +02:00
testdata Add imex mode to CDI spec generation 2024-11-25 13:46:43 +01:00
tests [no-relnote] Update E2E suite 2025-05-14 11:22:14 +02:00
third_party Bump third_party/libnvidia-container from d26524a to 51a7f20 2025-05-16 08:27:41 +00:00
vendor Run update-ldcache in isolated namespaces 2025-05-15 12:45:49 +02:00
.common-ci.yml Updated .release:staging to stage images in nvstaging 2025-04-17 14:02:33 +02:00
.dockerignore
.gitignore [no-relnote] Update E2E suite 2025-05-14 11:22:14 +02:00
.gitlab-ci.yml [no-relnote] Add toolkit install unit test 2024-11-05 14:23:35 -08:00
.gitmodules Update libnvidia-container to github ref 2024-02-01 16:36:10 +01:00
.golangci.yml [no-relnote] Migrate golangci-lint config to v2 2025-04-02 14:18:32 +02:00
.nvidia-ci.yml add ngc image signing job for auto signing 2024-06-12 13:20:35 +05:30
CHANGELOG.md Bump version for v1.17.4 release 2025-02-10 13:24:41 +01:00
CONTRIBUTING.md
DEVELOPMENT.md Rename test folder to tests 2025-01-23 11:46:14 +01:00
go.mod Run update-ldcache in isolated namespaces 2025-05-15 12:45:49 +02:00
go.sum Run update-ldcache in isolated namespaces 2025-05-15 12:45:49 +02:00
LICENSE
Makefile [no-relnote] Use --exit-code instead of --quiet for mod check 2025-03-06 11:33:48 +02:00
README.md
RELEASE.md [no-relnote] Add RELEASE.md 2024-07-15 14:25:12 +02:00
SECURITY.md [no-relnote] Add SECURITY.md to repo 2025-05-15 16:38:43 +02:00
versions.mk Bump version for v1.17.4 release 2025-02-10 13:24:27 +01:00

NVIDIA Container Toolkit


(Image: nvidia-container-stack)

Introduction

The NVIDIA Container Toolkit allows users to build and run GPU-accelerated containers. The toolkit includes a container runtime library and utilities to automatically configure containers to leverage NVIDIA GPUs.

Product documentation including an architecture overview, platform support, and installation and usage guides can be found in the documentation repository.

Getting Started

Make sure you have installed the NVIDIA driver for your Linux distribution. Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver does need to be installed.
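
A quick way to confirm the driver is present is to run nvidia-smi, which ships with the driver (output varies by driver version and hardware):

    # Prints the driver version and visible GPUs if the NVIDIA driver is installed
    nvidia-smi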

For instructions on getting started with the NVIDIA Container Toolkit, refer to the installation guide.
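
For orientation only, on an apt-based distribution the guide comes down to installing the toolkit package and configuring the container runtime. This sketch omits the repository setup, so follow the installation guide for the complete, current steps:

    # Install the toolkit, wire it into Docker, then restart the daemon
    sudo apt-get install -y nvidia-container-toolkit
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker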

Usage

The user guide provides information on the configuration and command line options available when running GPU containers with Docker.
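
As an illustration, a common smoke test once Docker is configured is to run nvidia-smi inside a CUDA base image with all GPUs exposed; the image tag below is a placeholder, not something this README specifies:

    # --gpus all requests every GPU on the host for this container
    docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi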

Issues and Contributing

Check out the Contributing document!