Latest commit 8865f6d848 by Carlos Eduardo Arango Gutierrez:
Add nvidia-cdi-refresh service

Automatic regeneration of /var/run/cdi/nvidia.yaml.

New units:
- nvidia-cdi-refresh.service – one-shot wrapper for `nvidia-ctk cdi generate` (adds sleep + required caps).
- nvidia-cdi-refresh.path – fires on driver install/upgrade via modules.dep.bin changes.

Packaging:
- RPM %post reloads systemd and enables the path unit on fresh installs.
- DEB postinst does the same (configure, skip on upgrade).
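As a rough sketch, the two units described above might look like the following. The unit names come from the commit message; the `ExecStartPre` delay, capability value, and watched path are illustrative assumptions, not the exact shipped files:

```ini
# nvidia-cdi-refresh.service -- one-shot regeneration of the CDI spec
[Unit]
Description=Refresh the NVIDIA CDI specification

[Service]
Type=oneshot
# "sleep + required caps" per the commit message; exact values are assumptions
ExecStartPre=/bin/sleep 10
AmbientCapabilities=CAP_SYS_ADMIN
ExecStart=/usr/bin/nvidia-ctk cdi generate --output=/var/run/cdi/nvidia.yaml

# nvidia-cdi-refresh.path -- trigger the service on driver install/upgrade
[Unit]
Description=Watch for NVIDIA driver changes

[Path]
# assumed watch location; the real unit tracks modules.dep.bin changes
PathChanged=/lib/modules

[Install]
WantedBy=multi-user.target
```

A path unit activates the service with the same base name whenever the watched path changes, which is how a driver upgrade ends up refreshing the CDI spec without any manual step.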
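The packaging hooks above could be sketched as a maintainer-script fragment. This is a hedged illustration of the described behavior, not the shipped %post/postinst; the guards make it a no-op when systemd is not running, and the `|| :` fallbacks keep package installation from failing if enabling the unit does not succeed:

```shell
#!/bin/sh
# Enable the CDI refresh path unit on fresh installs only.
# Skipped entirely when systemd is not the running init (e.g. in containers).
if [ -d /run/systemd/system ] && command -v systemctl >/dev/null 2>&1; then
    systemctl daemon-reload || :
    systemctl enable --now nvidia-cdi-refresh.path || :
fi
```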

Result: CDI spec is always up to date

Signed-off-by: Carlos Eduardo Arango Gutierrez <eduardoa@nvidia.com>
2025-06-03 15:25:25 +02:00
| Path | Last commit | Date |
| --- | --- | --- |
| .github | [no-relnote] E2E GitHub action to run with internal runner | 2025-06-03 12:47:35 +02:00 |
| cmd | Add envvar to control debug logging in CDI hooks | 2025-05-30 15:27:52 +02:00 |
| deployments | Add nvidia-cdi-refresh service | 2025-06-03 15:25:25 +02:00 |
| docker | Add nvidia-cdi-refresh service | 2025-06-03 15:25:25 +02:00 |
| hack | [no-relnote] Fix typo in script | 2024-10-16 10:53:45 +02:00 |
| internal | Add envvar to control debug logging in CDI hooks | 2025-05-30 15:27:52 +02:00 |
| packaging | Add nvidia-cdi-refresh service | 2025-06-03 15:25:25 +02:00 |
| pkg | Add envvar to control debug logging in CDI hooks | 2025-05-30 15:27:52 +02:00 |
| scripts | [no-relnote] Remove release:archive CI step | 2025-05-30 17:10:03 +02:00 |
| testdata | Add imex mode to CDI spec generation | 2024-11-25 13:46:43 +01:00 |
| tests | [no-relnote] Update E2E suite | 2025-05-14 11:22:14 +02:00 |
| third_party | Bump third_party/libnvidia-container from caf057b to 6eda4d7 | 2025-05-28 08:16:30 +00:00 |
| vendor | Run update-ldcache in isolated namespaces | 2025-05-15 12:45:49 +02:00 |
| .common-ci.yml | Updated .release:staging to stage images in nvstaging | 2025-04-17 14:02:33 +02:00 |
| .dockerignore | Add dockerfile and makefile to build toolkit-container | 2021-10-22 11:57:55 +02:00 |
| .gitignore | [no-relnote] Update gitignore | 2025-05-21 10:19:52 +02:00 |
| .gitlab-ci.yml | [no-relnote] Add toolkit install unit test | 2024-11-05 14:23:35 -08:00 |
| .gitmodules | Update libnvidia-container to github ref | 2024-02-01 16:36:10 +01:00 |
| .golangci.yml | [no-relnote] Migrate golangci-lint config to v2 | 2025-04-02 14:18:32 +02:00 |
| .nvidia-ci.yml | [no-relnote] Remove release:archive CI step | 2025-05-30 17:10:03 +02:00 |
| CHANGELOG.md | Bump version for v1.17.4 release | 2025-02-10 13:24:41 +01:00 |
| CONTRIBUTING.md | Ensure LICENSE and CONTRIBUTING.md files are present | 2019-10-31 12:56:46 -07:00 |
| DEVELOPMENT.md | Rename test folder to tests | 2025-01-23 11:46:14 +01:00 |
| go.mod | Run update-ldcache in isolated namespaces | 2025-05-15 12:45:49 +02:00 |
| go.sum | Run update-ldcache in isolated namespaces | 2025-05-15 12:45:49 +02:00 |
| LICENSE | Ensure LICENSE and CONTRIBUTING.md files are present | 2019-10-31 12:56:46 -07:00 |
| Makefile | [no-relnote] Enable Coveralls | 2025-05-23 13:53:45 +02:00 |
| README.md | Change master references to main | 2022-04-12 14:52:38 +02:00 |
| RELEASE.md | [no-relnote] Add RELEASE.md | 2024-07-15 14:25:12 +02:00 |
| SECURITY.md | [no-relnote] Add SECURITY.md to repo | 2025-05-15 16:38:43 +02:00 |
| versions.mk | Bump version for v1.17.4 release | 2025-02-10 13:24:27 +01:00 |

NVIDIA Container Toolkit


(diagram: nvidia-container-stack architecture overview)

Introduction

The NVIDIA Container Toolkit allows users to build and run GPU accelerated containers. The toolkit includes a container runtime library and utilities to automatically configure containers to leverage NVIDIA GPUs.

Product documentation including an architecture overview, platform support, and installation and usage guides can be found in the documentation repository.

Getting Started

Make sure you have installed the NVIDIA driver for your Linux distribution. Note that you do not need to install the CUDA Toolkit on the host system; only the NVIDIA driver is required.
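One quick way to confirm the driver prerequisite is met is to run `nvidia-smi`, which ships with the driver rather than with the CUDA Toolkit. A hedged sketch (the fallback message is illustrative):

```shell
# nvidia-smi is installed by the NVIDIA driver, not the CUDA Toolkit,
# so a successful run confirms the driver is present on the host.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi
else
    echo "NVIDIA driver not found: install it before the Container Toolkit"
fi
```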

For instructions on getting started with the NVIDIA Container Toolkit, refer to the installation guide.

Usage

The user guide provides information on the configuration and command line options available when running GPU containers with Docker.
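A typical smoke test once the toolkit is configured looks like the following. This is a sketch assuming Docker is set up with the NVIDIA runtime; the guard and fallback keep it harmless on machines without Docker or a GPU:

```shell
# Run nvidia-smi inside a container, exposing all GPUs to it.
if command -v docker >/dev/null 2>&1; then
    docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi \
        || echo "GPU container run failed (NVIDIA runtime not configured?)"
else
    echo "docker not available; skipping GPU container example"
fi
```

If the toolkit is working, the container prints the same GPU table that `nvidia-smi` shows on the host.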

Issues and Contributing

Check out the Contributing document!