Carlos Eduardo Arango Gutierrez dede03f322 Refactor extracting requested devices from the container image
This change consolidates the logic for determining requested devices
from the container image. The logic for this has been integrated into
the image.CUDA type so that multiple implementations are not required.

Signed-off-by: Carlos Eduardo Arango Gutierrez <eduardoa@nvidia.com>
Co-authored-by: Evan Lezar <elezar@nvidia.com>
2025-06-05 12:38:45 +02:00

NVIDIA Container Toolkit


[Image: nvidia-container-stack architecture diagram]

Introduction

The NVIDIA Container Toolkit allows users to build and run GPU accelerated containers. The toolkit includes a container runtime library and utilities to automatically configure containers to leverage NVIDIA GPUs.

Product documentation including an architecture overview, platform support, and installation and usage guides can be found in the documentation repository.

Getting Started

Make sure you have installed the NVIDIA driver for your Linux distribution. Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver does need to be installed.
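A quick way to confirm that the driver is present on the host (this assumes a machine with an NVIDIA GPU and the driver's userspace tools installed) is to query it directly:

```shell
# Prints the installed driver version; fails if the NVIDIA kernel
# driver is not loaded or nvidia-smi is not on the PATH.
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```

If this command reports a version string, the host is ready for the toolkit; no CUDA Toolkit installation is required on the host itself.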

For instructions on getting started with the NVIDIA Container Toolkit, refer to the installation guide.

Usage

The user guide provides information on the configuration and command line options available when running GPU containers with Docker.
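As a minimal illustration of those options (the CUDA image tag below is only an example; any CUDA-enabled image works), a container can be given GPU access via Docker's `--gpus` flag, or via the `NVIDIA_VISIBLE_DEVICES` environment variable when using the `nvidia` runtime:

```shell
# Run nvidia-smi inside a CUDA container with access to all host GPUs.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# With the nvidia runtime configured, individual devices can instead be
# requested through NVIDIA_VISIBLE_DEVICES (here, GPU 0 only).
docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 \
    nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

Both commands require a GPU-equipped host with the toolkit installed; see the user guide for the full set of configuration options.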

Issues and Contributing

Check out the Contributing document!
