This change adds support for containerd configs with version=3. From the perspective of the runtime configuration, the contents of the config are the same, so we only have to load the new version and ensure that it is propagated to the generated config. Note that a v3 config also requires a switch to the 'io.containerd.cri.v1.runtime' CRI runtime plugin.

See:
- https://github.com/containerd/containerd/blob/v2.0.0/docs/PLUGINS.md
- https://github.com/containerd/containerd/issues/10132

Note that we still use a default config of version=2, since we need to ensure compatibility with older containerd versions (1.6.x and 1.7.x).

Signed-off-by: Sam Lockart <sam.lockart@zendesk.com>
Signed-off-by: Evan Lezar <elezar@nvidia.com>
Signed-off-by: Christopher Desiniotis <cdesiniotis@nvidia.com>
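For context, a v2 config registers runtimes under the 'io.containerd.grpc.v1.cri' plugin, whereas a v3 config uses 'io.containerd.cri.v1.runtime'. The snippets below are an illustrative sketch of the two forms; the runtime name and binary path are typical defaults and are not taken verbatim from this change:

```toml
# containerd config with version = 2 (the default generated by the toolkit,
# compatible with containerd 1.6.x and 1.7.x).
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
    BinaryName = "/usr/bin/nvidia-container-runtime"
```

With a version=3 config, the equivalent entry lives under the new CRI runtime plugin name:

```toml
# containerd config with version = 3 (containerd 2.x); note the
# io.containerd.cri.v1.runtime plugin in the table names.
version = 3

[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.nvidia]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.cri.v1.runtime".containerd.runtimes.nvidia.options]
    BinaryName = "/usr/bin/nvidia-container-runtime"
```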
NVIDIA Container Toolkit
Introduction
The NVIDIA Container Toolkit allows users to build and run GPU-accelerated containers. The toolkit includes a container runtime library and utilities to automatically configure containers to leverage NVIDIA GPUs.
Product documentation including an architecture overview, platform support, and installation and usage guides can be found in the documentation repository.
Getting Started
Make sure you have installed the NVIDIA driver for your Linux distribution. Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver does need to be installed.
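A quick way to confirm that the driver is installed, assuming a typical setup where nvidia-smi ships with the driver:

```shell
# Verify that the NVIDIA driver is installed and a GPU is visible.
# nvidia-smi is installed with the driver; the CUDA Toolkit is not required on the host.
nvidia-smi
```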
For instructions on getting started with the NVIDIA Container Toolkit, refer to the installation guide.
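As a rough sketch of what the installation guide walks through on an apt-based system (repository setup steps omitted; follow the guide for your distribution):

```shell
# Install the toolkit from NVIDIA's package repository.
sudo apt-get install -y nvidia-container-toolkit

# Configure Docker to use the NVIDIA runtime, then restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```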
Usage
The user guide provides information on the configuration and command line options available when running GPU containers with Docker.
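For example, a container can typically be granted access to all GPUs on the host with Docker's --gpus flag (a minimal sketch; the image tag is illustrative):

```shell
# Run nvidia-smi inside a CUDA base image, exposing all GPUs to the container.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```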
Issues and Contributing
Check out the Contributing document!
- Please let us know by filing a new issue
- You can contribute by creating a merge request to our public GitLab repository