Compare commits

..

58 Commits

Author SHA1 Message Date
dependabot[bot]
3ba9073d21 Bump github.com/NVIDIA/go-nvlib from 0.7.2 to 0.7.3
Bumps [github.com/NVIDIA/go-nvlib](https://github.com/NVIDIA/go-nvlib) from 0.7.2 to 0.7.3.
- [Release notes](https://github.com/NVIDIA/go-nvlib/releases)
- [Commits](https://github.com/NVIDIA/go-nvlib/compare/v0.7.2...v0.7.3)

---
updated-dependencies:
- dependency-name: github.com/NVIDIA/go-nvlib
  dependency-version: 0.7.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-15 08:36:54 +00:00
Evan Lezar
002148a4e5 Merge pull request #1137 from NVIDIA/dependabot/docker/deployments/devel/release-1.17/golang-1.23.10
Some checks failed
CI Pipeline / code-scanning (push) Has been cancelled
CI Pipeline / variables (push) Has been cancelled
CI Pipeline / golang (push) Has been cancelled
CI Pipeline / image (push) Has been cancelled
CI Pipeline / e2e-test (push) Has been cancelled
Bump golang from 1.23.9 to 1.23.10 in /deployments/devel
2025-06-11 14:54:27 +02:00
dependabot[bot]
6eba1b7a8e Bump golang from 1.23.9 to 1.23.10 in /deployments/devel
Bumps golang from 1.23.9 to 1.23.10.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.23.10
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-08 08:32:04 +00:00
Evan Lezar
483204d807 Merge pull request #1122 from NVIDIA/dependabot/github_actions/release-1.17/NVIDIA/holodeck-0.2.11
Bump NVIDIA/holodeck from 0.2.7 to 0.2.12
2025-06-03 10:10:17 +02:00
dependabot[bot]
f91b894b84 Bump NVIDIA/holodeck from 0.2.7 to 0.2.11
Bumps [NVIDIA/holodeck](https://github.com/nvidia/holodeck) from 0.2.7 to 0.2.11.
- [Release notes](https://github.com/nvidia/holodeck/releases)
- [Commits](https://github.com/nvidia/holodeck/compare/v0.2.7...v0.2.11)

---
updated-dependencies:
- dependency-name: NVIDIA/holodeck
  dependency-version: 0.2.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-03 04:45:02 +00:00
Evan Lezar
f202b80a9b Merge pull request #1117 from elezar/fix-unit-tests
Fix unit tests
2025-05-30 15:44:16 +02:00
Evan Lezar
54af66f48c [no-relnotes] Fix missed unit tests
Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-30 15:41:32 +02:00
Evan Lezar
d6f610790f Merge commit from fork
Add `NVIDIA_CTK_DEBUG=false` to hook envs
2025-05-30 15:31:26 +02:00
Evan Lezar
007faf8491 Add envvar to control debug logging in CDI hooks
This change allows hooks to be configured with debug logging. The
setting is currently always false, but may be extended in the future.

Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-30 15:30:18 +02:00
Evan Lezar
d7f498ade7 Merge pull request #1115 from elezar/bump-release-v1.17.8
Bump release v1.17.8
2025-05-28 11:03:16 +02:00
Evan Lezar
e34b8cebdb Update CHANGELOG for v1.17.8 release
Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-28 10:49:29 +02:00
Evan Lezar
76bb848f40 Bump version for v1.17.8 release
Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-28 10:49:23 +02:00
Evan Lezar
02000c07f9 Merge pull request #1114 from elezar/bump-libnvidia-container-6eda4d7
Bump third_party/libnvidia-container from `caf057b` to `6eda4d7`
2025-05-28 10:44:29 +02:00
dependabot[bot]
b3b6b824cd Bump third_party/libnvidia-container from caf057b to 6eda4d7
Bumps [third_party/libnvidia-container](https://github.com/NVIDIA/libnvidia-container) from `caf057b` to `6eda4d7`.
- [Release notes](https://github.com/NVIDIA/libnvidia-container/releases)
- [Commits](caf057b009...6eda4d76c8)

---
updated-dependencies:
- dependency-name: third_party/libnvidia-container
  dependency-version: 6eda4d76c8c5f8fc174e4abca83e513fb4dd63b0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-28 10:40:31 +02:00
Carlos Eduardo Arango Gutierrez
1aed5f4aa2 Merge pull request #1104 from ArangoGutierrez/bp/1102
[release-1.17] Edit discover.mounts to have a deterministic output
2025-05-23 11:07:11 +02:00
Carlos Eduardo Arango Gutierrez
dd40dadbdc Edit discover.mounts to have a deterministic output
Signed-off-by: Carlos Eduardo Arango Gutierrez <eduardoa@nvidia.com>
Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-23 10:32:05 +02:00
Evan Lezar
77326385ea Merge pull request #1096 from elezar/bump-libnvidia-container
Bump third_party/libnvidia-container from d26524a to caf057b
2025-05-22 13:19:29 +02:00
Evan Lezar
fe56514d01 Bump third_party/libnvidia-container from d26524a to caf057b
Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-20 20:38:49 +02:00
Evan Lezar
bae3e7842e Merge pull request #1087 from elezar/update-changelog-1.17
Update changelog for v1.17.7 release
2025-05-16 15:23:02 +02:00
Evan Lezar
e78999b08c Update changelog for v1.17.7 release
Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-16 15:22:23 +02:00
Evan Lezar
462ca9f93f Merge commit from fork
Run update-ldcache in isolated namespaces
2025-05-16 15:15:21 +02:00
Evan Lezar
ac9146832b Run update-ldcache in isolated namespaces
This change uses the reexec package to run the ldcache update for
a container in a process with isolated namespaces. Since the hook
is invoked as a createContainer hook, these namespaces are cloned
from the container's namespaces.

In the reexec handler, we further isolate the proc filesystem,
mount the host ldconfig to a tmpfs, and pivot into the container's
root.

Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-15 12:51:13 +02:00
Evan Lezar
a734438ce2 [no-relnote] Minor edits to update-ldcache
Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-15 12:50:02 +02:00
Evan Lezar
61d94f7856 Merge pull request #1080 from elezar/bump-release-v1.17.7
Bump release v1.17.7
2025-05-13 22:21:42 +02:00
Evan Lezar
e2ff6830f5 Update CHANGELOG for v1.17.7 release
Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-13 22:03:40 +02:00
Evan Lezar
ab050837ce Bump version for v1.17.7 release
Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-13 22:03:40 +02:00
Evan Lezar
becddb70e6 Merge pull request #1082 from elezar/backport-add-cuda-compat-mode
Add cuda-compat-mode config option
2025-05-13 22:00:18 +02:00
Evan Lezar
8069346746 Add cuda-compat-mode config option
This change adds an nvidia-container-runtime.modes.legacy.cuda-compat-mode
config option. This can be set to one of four values:

* ldconfig (default): the --cuda-compat-mode=ldconfig flag is passed to the nvidia-container-cli
* mount: the --cuda-compat-mode=mount flag is passed to the nvidia-container-cli
* disabled: the --cuda-compat-mode=disabled flag is passed to the nvidia-container-cli
* hook: the --cuda-compat-mode=disabled flag is passed to the nvidia-container-cli AND the
  enable-cuda-compat hook is used to provide forward compatibility.

Note that the disable-cuda-compat-lib-hook feature flag will prevent the enable-cuda-compat
hook from being used. This change also means that the allow-cuda-compat-libs-from-container
feature flag no longer has any effect.

Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-13 21:52:01 +02:00
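Based on the option name in the commit message, the setting would appear in the runtime's TOML configuration roughly as follows; the file path and exact layout are assumptions for illustration, only the key name and its four values come from the commit above:

```toml
# /etc/nvidia-container-runtime/config.toml (illustrative path)
[nvidia-container-runtime.modes.legacy]
# One of: "ldconfig" (default), "mount", "disabled", "hook"
cuda-compat-mode = "ldconfig"
```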
dependabot[bot]
34526b19c0 Bump third_party/libnvidia-container from a198166 to d26524a
Bumps [third_party/libnvidia-container](https://github.com/NVIDIA/libnvidia-container) from `a198166` to `d26524a`.
- [Release notes](https://github.com/NVIDIA/libnvidia-container/releases)
- [Commits](a198166e1c...d26524ab5d)

---
updated-dependencies:
- dependency-name: third_party/libnvidia-container
  dependency-version: d26524ab5db96a55ae86033f53de50d3794fb547
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-13 21:51:32 +02:00
Evan Lezar
f8b0b43a3f Merge pull request #1079 from elezar/backport-add-thor-support
Fix mode detection on Thor-based systems
2025-05-13 21:36:08 +02:00
Evan Lezar
ce6928ccca Fix mode detection on Thor-based systems
This change updates github.com/NVIDIA/go-nvlib from v0.7.1 to v0.7.2
to allow Thor systems to be detected as Tegra-based. This allows
automatic mode detection to work on these systems.

Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-13 21:30:27 +02:00
Evan Lezar
63e8ecbc8e Merge pull request #1070 from NVIDIA/dependabot/github_actions/release-1.17/slackapi/slack-github-action-2.1.0
Bump slackapi/slack-github-action from 2.0.0 to 2.1.0
2025-05-13 17:10:00 +02:00
Evan Lezar
d4739cb17f Merge pull request #1073 from NVIDIA/dependabot/docker/deployments/devel/release-1.17/golang-1.23.9
Bump golang from 1.23.8 to 1.23.9 in /deployments/devel
2025-05-13 17:09:18 +02:00
Evan Lezar
e8ac80146f Merge pull request #1068 from elezar/resolve-ldcache-libs-on-arm64
Fix resolution of libs in LDCache on ARM
2025-05-12 14:13:47 +02:00
Evan Lezar
dc0dee1f33 Merge pull request #1069 from elezar/skip-nill-discoverers
[no-relnote] Skip nil discoverers in merge
2025-05-12 14:13:38 +02:00
dependabot[bot]
21827ad367 Bump golang from 1.23.8 to 1.23.9 in /deployments/devel
Bumps golang from 1.23.8 to 1.23.9.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.23.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-12 09:52:42 +00:00
Evan Lezar
651e9f541a Merge pull request #1072 from NVIDIA/dependabot/docker/deployments/container/release-1.17/nvidia/cuda-12.9.0-base-ubuntu20.04
Bump nvidia/cuda from 12.8.1-base-ubuntu20.04 to 12.9.0-base-ubuntu20.04 in /deployments/container
2025-05-12 11:51:38 +02:00
dependabot[bot]
56b80c94b0 Bump nvidia/cuda in /deployments/container
Bumps nvidia/cuda from 12.8.1-base-ubuntu20.04 to 12.9.0-base-ubuntu20.04.

---
updated-dependencies:
- dependency-name: nvidia/cuda
  dependency-version: 12.9.0-base-ubuntu20.04
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-11 08:36:47 +00:00
dependabot[bot]
e096251183 Bump slackapi/slack-github-action from 2.0.0 to 2.1.0
Bumps [slackapi/slack-github-action](https://github.com/slackapi/slack-github-action) from 2.0.0 to 2.1.0.
- [Release notes](https://github.com/slackapi/slack-github-action/releases)
- [Commits](https://github.com/slackapi/slack-github-action/compare/v2.0.0...v2.1.0)

---
updated-dependencies:
- dependency-name: slackapi/slack-github-action
  dependency-version: 2.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-11 08:06:11 +00:00
Evan Lezar
cf35409004 [no-relnote] Skip nil discoverers in merge
When constructing a list of discoverers using discover.Merge we
skip `nil` discoverers to simplify usage, since we then don't
have to explicitly check validity when processing the discoverers
in the list.

Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-09 15:07:19 +02:00
Evan Lezar
8012e4f1be Fix resolution of libs in LDCache on ARM
Since we explicitly check for the architecture of the
libraries in the ldcache, we need to also check the architecture
flag against the ARM constants.

Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-05-09 15:04:49 +02:00
Evan Lezar
570e223276 Merge pull request #1023 from NVIDIA/dependabot/github_actions/release-1.17/NVIDIA/holodeck-0.2.7
Bump NVIDIA/holodeck from 0.2.6 to 0.2.7
2025-04-30 15:51:23 +02:00
Evan Lezar
e627eb2e21 Merge pull request #1022 from NVIDIA/dependabot/docker/deployments/devel/release-1.17/golang-1.23.8
Bump golang from 1.23.5 to 1.23.8 in /deployments/devel
2025-04-22 10:25:21 +02:00
Evan Lezar
24859f56d2 Merge pull request #1044 from elezar/bump-release-v1.17.6
Bump version for v1.17.6 release
2025-04-22 10:24:52 +02:00
Evan Lezar
8676b5625a Bump version for v1.17.6 release
Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-04-22 10:24:02 +02:00
Evan Lezar
6bb4a5c7de Merge pull request #1043 from elezar/bump-libnvidia-container-a198166
Bump third_party/libnvidia-container from `95d3e86` to `a198166`
2025-04-22 10:19:22 +02:00
dependabot[bot]
a8e7ffcc95 Bump third_party/libnvidia-container from 95d3e86 to a198166
Bumps [third_party/libnvidia-container](https://github.com/NVIDIA/libnvidia-container) from `95d3e86` to `a198166`.
- [Release notes](https://github.com/NVIDIA/libnvidia-container/releases)
- [Commits](95d3e86522...a198166e1c)

---
updated-dependencies:
- dependency-name: third_party/libnvidia-container
  dependency-version: a198166e1c1166f4847598438115ea97dacc7a92
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-04-22 09:39:53 +02:00
Evan Lezar
58f54b937a Merge pull request #1029 from elezar/allow-runtime-path
Allow container runtime executable path to be specified
2025-04-09 12:10:30 +02:00
Evan Lezar
8176ac40ee Allow container runtime executable path to be specified
This change adds support for specifying the container runtime
executable path. This can be used if, for example, there are
two containerd or crio executables and a specific one must be used.

Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-04-08 17:51:54 +02:00
Evan Lezar
01e55461e8 [no-relnote] Remove unused runtimeConfigOverideJSON variable
Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-04-08 17:36:25 +02:00
dependabot[bot]
32fe41a3d5 Bump NVIDIA/holodeck from 0.2.6 to 0.2.7
Bumps [NVIDIA/holodeck](https://github.com/nvidia/holodeck) from 0.2.6 to 0.2.7.
- [Release notes](https://github.com/nvidia/holodeck/releases)
- [Commits](https://github.com/nvidia/holodeck/compare/v0.2.6...v0.2.7)

---
updated-dependencies:
- dependency-name: NVIDIA/holodeck
  dependency-version: 0.2.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-06 08:50:59 +00:00
dependabot[bot]
3436b5b032 Bump golang from 1.23.5 to 1.23.8 in /deployments/devel
Bumps golang from 1.23.5 to 1.23.8.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.23.8
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-06 08:34:22 +00:00
Evan Lezar
c4f46e7354 Merge pull request #1005 from NVIDIA/dependabot/go_modules/release-1.17/github.com/opencontainers/runc-1.2.6
Bump github.com/opencontainers/runc from 1.2.5 to 1.2.6
2025-04-02 14:17:07 +02:00
dependabot[bot]
753b5d1595 Bump github.com/opencontainers/runc from 1.2.5 to 1.2.6
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.2.5 to 1.2.6.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.2.6/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.2.5...v1.2.6)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-23 08:09:31 +00:00
Evan Lezar
e0b651668d Merge pull request #997 from NVIDIA/dependabot/docker/deployments/container/release-1.17/nvidia/cuda-12.8.1-base-ubuntu20.04
Bump nvidia/cuda from 12.8.0-base-ubuntu20.04 to 12.8.1-base-ubuntu20.04 in /deployments/container
2025-03-17 12:41:30 +02:00
dependabot[bot]
6e59255149 Bump nvidia/cuda in /deployments/container
Bumps nvidia/cuda from 12.8.0-base-ubuntu20.04 to 12.8.1-base-ubuntu20.04.

---
updated-dependencies:
- dependency-name: nvidia/cuda
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-16 08:41:14 +00:00
Evan Lezar
a152a2fd7e Merge pull request #986 from elezar/fix-signing-container
[no-relnote] Use centos:stream9 for signing container
2025-03-12 12:47:07 +02:00
Evan Lezar
b43c8c424e [no-relnote] Use centos:stream9 for signing container
The signing container need not be based on a legacy centos version.
This change updates the signing container to be centos:stream9 based.

Signed-off-by: Evan Lezar <elezar@nvidia.com>
2025-03-12 12:46:24 +02:00
95 changed files with 12273 additions and 2536 deletions

View File

@@ -55,7 +55,7 @@ jobs:
go-version: ${{ env.GOLANG_VERSION }}
- name: Set up Holodeck
-        uses: NVIDIA/holodeck@v0.2.6
+        uses: NVIDIA/holodeck@v0.2.12
with:
aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -86,7 +86,7 @@ jobs:
- name: Send Slack alert notification
if: ${{ failure() }}
-        uses: slackapi/slack-github-action@v2.0.0
+        uses: slackapi/slack-github-action@v2.1.0
with:
method: chat.postMessage
token: ${{ secrets.SLACK_BOT_TOKEN }}

View File

@@ -1,5 +1,43 @@
# NVIDIA Container Toolkit Changelog
## v1.17.8
- Updated the ordering of Mounts in CDI to have a deterministic output. This makes testing more consistent.
- Added NVIDIA_CTK_DEBUG envvar to hooks.
### Changes in libnvidia-container
- Fixed bug in setting default for `--cuda-compat-mode` flag. This caused failures in use cases invoking the `nvidia-container-cli` directly.
- Added additional logging to the `nvidia-container-cli`.
- Fixed variable initialisation when updating the ldcache. This caused failures on Arch Linux or other platforms where the `nvidia-container-cli` was built from source.
## v1.17.7
- Fix mode detection on Thor-based systems. This correctly resolves `auto` mode to `csv`.
- Fix resolution of libs in LDCache on ARM. This fixes CDI spec generation on ARM-based systems using NVML.
- Run update-ldcache hook in isolated namespaces.
### Changes in the Toolkit Container
- Bump CUDA base image version to 12.9.0
### Changes in libnvidia-container
- Add `--cuda-compat-mode` flag to the `nvidia-container-cli configure` command.
## v1.17.6
### Changes in the Toolkit Container
- Allow container runtime executable path to be specified when configuring containerd.
- Bump CUDA base image version to 12.8.1
### Changes in libnvidia-container
- Skip files when user has insufficient permissions. This prevents errors when discovering IPC sockets when the `nvidia-container-cli` is run as a non-root user.
- Fix building with Go 1.24
- Fix some typos in text.
## v1.17.5
- Allow the `enabled-cuda-compat` hook to be skipped when generating CDI specifications. This improves compatibility with older NVIDIA Container Toolkit installations. The hook is explicitly ignored for management CDI specifications.

View File

@@ -58,13 +58,15 @@ func main() {
Aliases: []string{"d"},
Usage: "Enable debug-level logging",
Destination: &opts.Debug,
-			EnvVars:     []string{"NVIDIA_CDI_DEBUG"},
+			// TODO: Support for NVIDIA_CDI_DEBUG is deprecated and NVIDIA_CTK_DEBUG should be used instead.
+			EnvVars:     []string{"NVIDIA_CTK_DEBUG", "NVIDIA_CDI_DEBUG"},
},
&cli.BoolFlag{
Name: "quiet",
Usage: "Suppress all output except for errors; overrides --debug",
Destination: &opts.Quiet,
-			EnvVars:     []string{"NVIDIA_CDI_QUIET"},
+			// TODO: Support for NVIDIA_CDI_QUIET is deprecated and NVIDIA_CTK_QUIET should be used instead.
+			EnvVars:     []string{"NVIDIA_CTK_QUIET", "NVIDIA_CDI_QUIET"},
},
}

View File

@@ -0,0 +1,200 @@
//go:build linux
/**
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
**/
package ldcache
import (
"errors"
"fmt"
"os"
"os/exec"
"path/filepath"
"strconv"
"syscall"
securejoin "github.com/cyphar/filepath-securejoin"
"github.com/moby/sys/reexec"
"github.com/opencontainers/runc/libcontainer/utils"
"golang.org/x/sys/unix"
)
// pivotRoot will call pivot_root such that rootfs becomes the new root
// filesystem, and everything else is cleaned up.
// This is adapted from the implementation here:
//
// https://github.com/opencontainers/runc/blob/e89a29929c775025419ab0d218a43588b4c12b9a/libcontainer/rootfs_linux.go#L1056-L1113
//
// With the `mount` and `unmount` calls changed to direct unix.Mount and unix.Unmount calls.
func pivotRoot(rootfs string) error {
// While the documentation may claim otherwise, pivot_root(".", ".") is
// actually valid. What this results in is / being the new root but
// /proc/self/cwd being the old root. Since we can play around with the cwd
// with pivot_root this allows us to pivot without creating directories in
// the rootfs. Shout-outs to the LXC developers for giving us this idea.
oldroot, err := unix.Open("/", unix.O_DIRECTORY|unix.O_RDONLY, 0)
if err != nil {
return &os.PathError{Op: "open", Path: "/", Err: err}
}
defer unix.Close(oldroot) //nolint: errcheck
newroot, err := unix.Open(rootfs, unix.O_DIRECTORY|unix.O_RDONLY, 0)
if err != nil {
return &os.PathError{Op: "open", Path: rootfs, Err: err}
}
defer unix.Close(newroot) //nolint: errcheck
// Change to the new root so that the pivot_root actually acts on it.
if err := unix.Fchdir(newroot); err != nil {
return &os.PathError{Op: "fchdir", Path: "fd " + strconv.Itoa(newroot), Err: err}
}
if err := unix.PivotRoot(".", "."); err != nil {
return &os.PathError{Op: "pivot_root", Path: ".", Err: err}
}
// Currently our "." is oldroot (according to the current kernel code).
// However, purely for safety, we will fchdir(oldroot) since there isn't
// really any guarantee from the kernel what /proc/self/cwd will be after a
// pivot_root(2).
if err := unix.Fchdir(oldroot); err != nil {
return &os.PathError{Op: "fchdir", Path: "fd " + strconv.Itoa(oldroot), Err: err}
}
// Make oldroot rslave to make sure our unmounts don't propagate to the
// host (and thus bork the machine). We don't use rprivate because this is
// known to cause issues due to races where we still have a reference to a
// mount while a process in the host namespace are trying to operate on
// something they think has no mounts (devicemapper in particular).
if err := unix.Mount("", ".", "", unix.MS_SLAVE|unix.MS_REC, ""); err != nil {
return err
}
// Perform the unmount. MNT_DETACH allows us to unmount /proc/self/cwd.
if err := unix.Unmount(".", unix.MNT_DETACH); err != nil {
return err
}
// Switch back to our shiny new root.
if err := unix.Chdir("/"); err != nil {
return &os.PathError{Op: "chdir", Path: "/", Err: err}
}
return nil
}
// mountLdConfig mounts the host ldconfig to the mount namespace of the hook.
// We use WithProcfd to perform the mount operations to ensure that the changes
// are persisted across the pivot root.
func mountLdConfig(hostLdconfigPath string, containerRootDirPath string) (string, error) {
hostLdconfigInfo, err := os.Stat(hostLdconfigPath)
if err != nil {
return "", fmt.Errorf("error reading host ldconfig: %w", err)
}
hookScratchDirPath := "/var/run/nvidia-ctk-hook"
ldconfigPath := filepath.Join(hookScratchDirPath, "ldconfig")
if err := utils.MkdirAllInRoot(containerRootDirPath, hookScratchDirPath, 0755); err != nil {
return "", fmt.Errorf("error creating hook scratch folder: %w", err)
}
err = utils.WithProcfd(containerRootDirPath, hookScratchDirPath, func(hookScratchDirFdPath string) error {
return createTmpFs(hookScratchDirFdPath, int(hostLdconfigInfo.Size()))
})
if err != nil {
return "", fmt.Errorf("error creating tmpfs: %w", err)
}
if _, err := createFileInRoot(containerRootDirPath, ldconfigPath, hostLdconfigInfo.Mode()); err != nil {
return "", fmt.Errorf("error creating ldconfig: %w", err)
}
err = utils.WithProcfd(containerRootDirPath, ldconfigPath, func(ldconfigFdPath string) error {
return unix.Mount(hostLdconfigPath, ldconfigFdPath, "", unix.MS_BIND|unix.MS_RDONLY|unix.MS_NODEV|unix.MS_PRIVATE|unix.MS_NOSYMFOLLOW, "")
})
if err != nil {
return "", fmt.Errorf("error bind mounting host ldconfig: %w", err)
}
return ldconfigPath, nil
}
func createFileInRoot(containerRootDirPath string, destinationPath string, mode os.FileMode) (string, error) {
dest, err := securejoin.SecureJoin(containerRootDirPath, destinationPath)
if err != nil {
return "", err
}
// Make the parent directory.
destDir, destBase := filepath.Split(dest)
destDirFd, err := utils.MkdirAllInRootOpen(containerRootDirPath, destDir, 0755)
if err != nil {
return "", fmt.Errorf("error creating parent dir: %w", err)
}
defer destDirFd.Close()
// Make the target file. We want to avoid opening any file that is
// already there because it could be a "bad" file like an invalid
// device or hung tty that might cause a DoS, so we use mknodat.
// destBase does not contain any "/" components, and mknodat does
// not follow trailing symlinks, so we can safely just call mknodat
// here.
if err := unix.Mknodat(int(destDirFd.Fd()), destBase, unix.S_IFREG|uint32(mode), 0); err != nil {
// If we get EEXIST, there was already an inode there and
// we can consider that a success.
if !errors.Is(err, unix.EEXIST) {
return "", fmt.Errorf("error creating empty file: %w", err)
}
}
return dest, nil
}
// mountProc mounts a clean proc filesystem in the new root.
func mountProc(newroot string) error {
target := filepath.Join(newroot, "/proc")
if err := os.MkdirAll(target, 0755); err != nil {
return fmt.Errorf("error creating directory: %w", err)
}
return unix.Mount("proc", target, "proc", 0, "")
}
// createTmpFs creates a tmpfs at the specified location with the specified size.
func createTmpFs(target string, size int) error {
return unix.Mount("tmpfs", target, "tmpfs", 0, fmt.Sprintf("size=%d", size))
}
// createReexecCommand creates a command that can be used to trigger the reexec
// initializer.
// On linux this command runs in new namespaces.
func createReexecCommand(args []string) *exec.Cmd {
cmd := reexec.Command(args...)
cmd.Stdin = os.Stdin
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
cmd.SysProcAttr = &syscall.SysProcAttr{
Cloneflags: syscall.CLONE_NEWNS |
syscall.CLONE_NEWUTS |
syscall.CLONE_NEWIPC |
syscall.CLONE_NEWPID |
syscall.CLONE_NEWNET,
}
return cmd
}

View File

@@ -0,0 +1,51 @@
//go:build !linux
/**
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
**/
package ldcache
import (
"fmt"
"os"
"os/exec"
"github.com/moby/sys/reexec"
)
func pivotRoot(newroot string) error {
return fmt.Errorf("not supported")
}
func mountLdConfig(hostLdconfigPath string, containerRootDirPath string) (string, error) {
return "", fmt.Errorf("not supported")
}
func mountProc(newroot string) error {
return fmt.Errorf("not supported")
}
// createReexecCommand creates a command that can be used to trigger the reexec
// initializer.
func createReexecCommand(args []string) *exec.Cmd {
cmd := reexec.Command(args...)
cmd.Stdin = os.Stdin
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
return cmd
}

View File

@@ -1,3 +1,5 @@
+//go:build linux
/**
# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
#
@@ -26,10 +28,9 @@ import (
)
// SafeExec attempts to clone the specified binary (as a memfd, for example) before executing it.
func (m command) SafeExec(path string, args []string, envv []string) error {
func SafeExec(path string, args []string, envv []string) error {
safeExe, err := cloneBinary(path)
if err != nil {
m.logger.Warningf("Failed to clone binary %q: %v; falling back to Exec", path, err)
//nolint:gosec // TODO: Can we harden this so that there is less risk of command injection
return syscall.Exec(path, args, envv)
}

View File

@@ -1,5 +1,4 @@
//go:build !linux
// +build !linux
/**
# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
@@ -23,7 +22,7 @@ import "syscall"
// SafeExec is not implemented on non-linux systems and forwards directly to the
// Exec syscall.
func (m *command) SafeExec(path string, args []string, envv []string) error {
func SafeExec(path string, args []string, envv []string) error {
//nolint:gosec // TODO: Can we harden this so that there is less risk of command injection
return syscall.Exec(path, args, envv)
}

View File

@@ -19,10 +19,11 @@ package ldcache
import (
"errors"
"fmt"
"log"
"os"
"path/filepath"
"strings"
"github.com/moby/sys/reexec"
"github.com/urfave/cli/v2"
"github.com/NVIDIA/nvidia-container-toolkit/internal/config"
@@ -37,6 +38,8 @@ const (
// higher precedence than other libraries on the system, but lower than
// the 00-cuda-compat that is included in some containers.
ldsoconfdFilenamePattern = "00-nvcr-*.conf"
reexecUpdateLdCacheCommandName = "reexec-update-ldcache"
)
type command struct {
@@ -49,6 +52,13 @@ type options struct {
containerSpec string
}
func init() {
reexec.Register(reexecUpdateLdCacheCommandName, updateLdCacheHandler)
if reexec.Init() {
os.Exit(0)
}
}
// NewCommand constructs an update-ldcache command with the specified logger
func NewCommand(logger logger.Interface) *cli.Command {
c := command{
@@ -109,62 +119,109 @@ func (m command) run(c *cli.Context, cfg *options) error {
}
containerRootDir, err := s.GetContainerRoot()
if err != nil {
if err != nil || containerRootDir == "" || containerRootDir == "/" {
return fmt.Errorf("failed to determine container root: %v", err)
}
ldconfigPath := m.resolveLDConfigPath(cfg.ldconfigPath)
args := []string{filepath.Base(ldconfigPath)}
if containerRootDir != "" {
args = append(args, "-r", containerRootDir)
args := []string{
reexecUpdateLdCacheCommandName,
strings.TrimPrefix(config.NormalizeLDConfigPath("@"+cfg.ldconfigPath), "@"),
containerRootDir,
}
args = append(args, cfg.folders.Value()...)
cmd := createReexecCommand(args)
return cmd.Run()
}
// updateLdCacheHandler wraps updateLdCache with error handling.
func updateLdCacheHandler() {
if err := updateLdCache(os.Args); err != nil {
log.Printf("Error updating ldcache: %v", err)
os.Exit(1)
}
}
// updateLdCache is invoked from a reexec'd handler and provides namespace
// isolation for the operations performed by this hook.
// At the point where this is invoked, we are in a new mount namespace that is
// cloned from the parent.
//
// args[0] is the reexec initializer function name
// args[1] is the path of the ldconfig binary on the host
// args[2] is the container root directory
// The remaining args are folders that need to be added to the ldcache.
func updateLdCache(args []string) error {
if len(args) < 3 {
return fmt.Errorf("incorrect arguments: %v", args)
}
hostLdconfigPath := args[1]
containerRootDirPath := args[2]
// To prevent leaking the parent proc filesystem, we create a new proc mount
// in the container root.
if err := mountProc(containerRootDirPath); err != nil {
return fmt.Errorf("error mounting /proc: %w", err)
}
containerRoot := containerRoot(containerRootDir)
// We mount the host ldconfig before we pivot root since host paths are not
// visible after the pivot root operation.
ldconfigPath, err := mountLdConfig(hostLdconfigPath, containerRootDirPath)
if err != nil {
return fmt.Errorf("error mounting host ldconfig: %w", err)
}
// We pivot to the container root for the new process, this further limits
// access to the host.
if err := pivotRoot(containerRootDirPath); err != nil {
return fmt.Errorf("error running pivot_root: %w", err)
}
return runLdconfig(ldconfigPath, args[3:]...)
}
// runLdconfig runs the ldconfig binary and ensures that the specified directories
// are processed for the ldcache.
func runLdconfig(ldconfigPath string, directories ...string) error {
args := []string{
"ldconfig",
// Explicitly specify using /etc/ld.so.conf since the host's ldconfig may
// be configured to use a different config file by default.
// Note that since we have already pivoted to the container root,
// /etc/ld.so.conf here refers to the file in the container.
"-f", "/etc/ld.so.conf",
}
containerRoot := containerRoot("/")
if containerRoot.hasPath("/etc/ld.so.cache") {
args = append(args, "-C", "/etc/ld.so.cache")
} else {
m.logger.Debugf("No ld.so.cache found, skipping update")
args = append(args, "-N")
}
folders := cfg.folders.Value()
if containerRoot.hasPath("/etc/ld.so.conf.d") {
err := m.createLdsoconfdFile(containerRoot, ldsoconfdFilenamePattern, folders...)
err := createLdsoconfdFile(ldsoconfdFilenamePattern, directories...)
if err != nil {
return fmt.Errorf("failed to update ld.so.conf.d: %v", err)
return fmt.Errorf("failed to update ld.so.conf.d: %w", err)
}
} else {
args = append(args, folders...)
args = append(args, directories...)
}
// Explicitly specify using /etc/ld.so.conf since the host's ldconfig may
// be configured to use a different config file by default.
args = append(args, "-f", "/etc/ld.so.conf")
return m.SafeExec(ldconfigPath, args, nil)
return SafeExec(ldconfigPath, args, nil)
}
// resolveLDConfigPath determines the LDConfig path to use for the system.
// On systems such as Ubuntu where `/sbin/ldconfig` is a wrapper around
// /sbin/ldconfig.real, the latter is returned.
func (m command) resolveLDConfigPath(path string) string {
return strings.TrimPrefix(config.NormalizeLDConfigPath("@"+path), "@")
}
// createLdsoconfdFile creates a file at /etc/ld.so.conf.d/ in the specified root.
// createLdsoconfdFile creates a file at /etc/ld.so.conf.d/.
// The file is created at /etc/ld.so.conf.d/{{ .pattern }} using `CreateTemp` and
// contains the specified directories on each line.
func (m command) createLdsoconfdFile(in containerRoot, pattern string, dirs ...string) error {
func createLdsoconfdFile(pattern string, dirs ...string) error {
if len(dirs) == 0 {
m.logger.Debugf("No directories to add to /etc/ld.so.conf")
return nil
}
ldsoconfdDir, err := in.resolve("/etc/ld.so.conf.d")
if err != nil {
return err
}
ldsoconfdDir := "/etc/ld.so.conf.d"
if err := os.MkdirAll(ldsoconfdDir, 0755); err != nil {
return fmt.Errorf("failed to create ld.so.conf.d: %w", err)
}
@@ -173,16 +230,16 @@ func (m command) createLdsoconfdFile(in containerRoot, pattern string, dirs ...s
if err != nil {
return fmt.Errorf("failed to create config file: %w", err)
}
defer configFile.Close()
m.logger.Debugf("Adding directories %v to %v", dirs, configFile.Name())
defer func() {
_ = configFile.Close()
}()
added := make(map[string]bool)
for _, dir := range dirs {
if added[dir] {
continue
}
_, err = configFile.WriteString(fmt.Sprintf("%s\n", dir))
_, err = fmt.Fprintf(configFile, "%s\n", dir)
if err != nil {
return fmt.Errorf("failed to update config file: %w", err)
}

View File

@@ -104,3 +104,26 @@ func (c *hookConfig) getSwarmResourceEnvvars() []string {
return envvars
}
// nvidiaContainerCliCUDACompatModeFlags returns required --cuda-compat-mode
// flag(s) depending on the hook and runtime configurations.
func (c *hookConfig) nvidiaContainerCliCUDACompatModeFlags() []string {
var flag string
switch c.NVIDIAContainerRuntimeConfig.Modes.Legacy.CUDACompatMode {
case config.CUDACompatModeLdconfig:
flag = "--cuda-compat-mode=ldconfig"
case config.CUDACompatModeMount:
flag = "--cuda-compat-mode=mount"
case config.CUDACompatModeDisabled, config.CUDACompatModeHook:
flag = "--cuda-compat-mode=disabled"
default:
if !c.Features.AllowCUDACompatLibsFromContainer.IsEnabled() {
flag = "--cuda-compat-mode=disabled"
}
}
if flag == "" {
return nil
}
return []string{flag}
}

View File

@@ -114,9 +114,8 @@ func doPrestart() {
}
args = append(args, "configure")
if !hook.Features.AllowCUDACompatLibsFromContainer.IsEnabled() {
args = append(args, "--no-cntlibs")
}
args = append(args, hook.nvidiaContainerCliCUDACompatModeFlags()...)
if ldconfigPath := cli.NormalizeLDConfigPath(); ldconfigPath != "" {
args = append(args, fmt.Sprintf("--ldconfig=%s", ldconfigPath))
}

View File

@@ -68,12 +68,11 @@ type config struct {
dryRun bool
runtime string
configFilePath string
executablePath string
configSource string
mode string
hookFilePath string
runtimeConfigOverrideJSON string
nvidiaRuntime struct {
name string
path string
@@ -120,6 +119,11 @@ func (m command) build() *cli.Command {
Usage: "path to the config file for the target runtime",
Destination: &config.configFilePath,
},
&cli.StringFlag{
Name: "executable-path",
Usage: "The path to the runtime executable. This is used to extract the current config",
Destination: &config.executablePath,
},
&cli.StringFlag{
Name: "config-mode",
Usage: "the config mode for runtimes that support multiple configuration mechanisms",
@@ -208,9 +212,9 @@ func (m command) validateFlags(c *cli.Context, config *config) error {
config.cdi.enabled = false
}
if config.runtimeConfigOverrideJSON != "" && config.runtime != "containerd" {
m.logger.Warningf("Ignoring runtime-config-override flag for %v", config.runtime)
config.runtimeConfigOverrideJSON = ""
if config.executablePath != "" && config.runtime == "docker" {
m.logger.Warningf("Ignoring executable-path=%q flag for %v", config.executablePath, config.runtime)
config.executablePath = ""
}
switch config.configSource {
@@ -330,9 +334,9 @@ func (c *config) resolveConfigSource() (toml.Loader, error) {
func (c *config) getCommandConfigSource() toml.Loader {
switch c.runtime {
case "containerd":
return containerd.CommandLineSource("")
return containerd.CommandLineSource("", c.executablePath)
case "crio":
return crio.CommandLineSource("")
return crio.CommandLineSource("", c.executablePath)
}
return toml.Empty
}

View File

@@ -14,7 +14,7 @@
ARG GOLANG_VERSION=x.x.x
FROM nvidia/cuda:12.8.0-base-ubuntu20.04
FROM nvidia/cuda:12.9.0-base-ubuntu20.04
ARG ARTIFACTS_ROOT
COPY ${ARTIFACTS_ROOT} /artifacts/packages/

View File

@@ -15,7 +15,7 @@
ARG GOLANG_VERSION=x.x.x
ARG VERSION="N/A"
FROM nvidia/cuda:12.8.0-base-ubi8 as build
FROM nvidia/cuda:12.9.0-base-ubi8 as build
RUN yum install -y \
wget make git gcc \
@@ -48,7 +48,7 @@ COPY . .
RUN GOPATH=/artifacts go install -ldflags="-s -w -X 'main.Version=${VERSION}'" ./tools/...
FROM nvidia/cuda:12.8.0-base-ubi8
FROM nvidia/cuda:12.9.0-base-ubi8
ENV NVIDIA_DISABLE_REQUIRE="true"
ENV NVIDIA_VISIBLE_DEVICES=void

View File

@@ -15,7 +15,7 @@
ARG GOLANG_VERSION=x.x.x
ARG VERSION="N/A"
FROM nvidia/cuda:12.8.0-base-ubuntu20.04 as build
FROM nvidia/cuda:12.9.0-base-ubuntu20.04 as build
RUN apt-get update && \
apt-get install -y wget make git gcc \
@@ -47,7 +47,7 @@ COPY . .
RUN GOPATH=/artifacts go install -ldflags="-s -w -X 'main.Version=${VERSION}'" ./tools/...
FROM nvcr.io/nvidia/cuda:12.8.0-base-ubuntu20.04
FROM nvcr.io/nvidia/cuda:12.9.0-base-ubuntu20.04
# Remove the CUDA repository configurations to avoid issues with rotated GPG keys
RUN rm -f /etc/apt/sources.list.d/cuda.list

View File

@@ -14,7 +14,7 @@
# This Dockerfile is also used to define the golang version used in this project
# This allows dependabot to manage this version in addition to other images.
FROM golang:1.23.5
FROM golang:1.23.10
WORKDIR /work
COPY * .

go.mod
View File

@@ -3,15 +3,17 @@ module github.com/NVIDIA/nvidia-container-toolkit
go 1.22
require (
github.com/NVIDIA/go-nvlib v0.6.1
github.com/NVIDIA/go-nvml v0.12.4-1
github.com/NVIDIA/go-nvlib v0.7.3
github.com/NVIDIA/go-nvml v0.12.9-0
github.com/cyphar/filepath-securejoin v0.4.1
github.com/fsnotify/fsnotify v1.7.0
github.com/moby/sys/reexec v0.1.0
github.com/moby/sys/symlink v0.3.0
github.com/opencontainers/runc v1.2.5
github.com/opencontainers/runc v1.2.6
github.com/opencontainers/runtime-spec v1.2.1
github.com/pelletier/go-toml v1.9.5
github.com/sirupsen/logrus v1.9.3
github.com/stretchr/testify v1.9.0
github.com/stretchr/testify v1.10.0
github.com/urfave/cli/v2 v2.27.5
golang.org/x/mod v0.20.0
golang.org/x/sys v0.28.0
@@ -21,13 +23,13 @@ require (
require (
github.com/cpuguy83/go-md2man/v2 v2.0.5 // indirect
github.com/cyphar/filepath-securejoin v0.4.1 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/kr/pretty v0.3.1 // indirect
github.com/opencontainers/runtime-tools v0.9.1-0.20221107090550-2e043c6bd626 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/rogpeppe/go-internal v1.11.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect

go.sum
View File

@@ -1,7 +1,7 @@
github.com/NVIDIA/go-nvlib v0.6.1 h1:0/5FvaKvDJoJeJ+LFlh+NDQMxMlVw9wOXrOVrGXttfE=
github.com/NVIDIA/go-nvlib v0.6.1/go.mod h1:9UrsLGx/q1OrENygXjOuM5Ey5KCtiZhbvBlbUIxtGWY=
github.com/NVIDIA/go-nvml v0.12.4-1 h1:WKUvqshhWSNTfm47ETRhv0A0zJyr1ncCuHiXwoTrBEc=
github.com/NVIDIA/go-nvml v0.12.4-1/go.mod h1:8Llmj+1Rr+9VGGwZuRer5N/aCjxGuR5nPb/9ebBiIEQ=
github.com/NVIDIA/go-nvlib v0.7.3 h1:kXc8PkWUlrwedSpM4fR8xT/DAq1NKy8HqhpgteFcGAw=
github.com/NVIDIA/go-nvlib v0.7.3/go.mod h1:i95Je7GinMy/+BDs++DAdbPmT2TubjNP8i8joC7DD7I=
github.com/NVIDIA/go-nvml v0.12.9-0 h1:e344UK8ZkeMeeLkdQtRhmXRxNf+u532LDZPGMtkdus0=
github.com/NVIDIA/go-nvml v0.12.9-0/go.mod h1:+KNA7c7gIBH7SKSJ1ntlwkfN80zdx8ovl4hrK3LmPt4=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/cpuguy83/go-md2man/v2 v2.0.5 h1:ZtcqGrnekaHpVLArFSe4HK5DoKx1T0rq2DwVB0alcyc=
@@ -30,11 +30,13 @@ github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mndrix/tap-go v0.0.0-20171203230836-629fa407e90b/go.mod h1:pzzDgJWZ34fGzaAZGFW22KVZDfyrYW+QABMrWnJBnSs=
github.com/moby/sys/reexec v0.1.0 h1:RrBi8e0EBTLEgfruBOFcxtElzRGTEUkeIFaVXgU7wok=
github.com/moby/sys/reexec v0.1.0/go.mod h1:EqjBg8F3X7iZe5pU6nRZnYCMUTXoxsjiIfHup5wYIN8=
github.com/moby/sys/symlink v0.3.0 h1:GZX89mEZ9u53f97npBy4Rc3vJKj7JBDj/PN2I22GrNU=
github.com/moby/sys/symlink v0.3.0/go.mod h1:3eNdhduHmYPcgsJtZXW1W4XUJdZGBIkttZ8xKqPUJq0=
github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/opencontainers/runc v1.2.5 h1:8KAkq3Wrem8bApgOHyhRI/8IeLXIfmZ6Qaw6DNSLnA4=
github.com/opencontainers/runc v1.2.5/go.mod h1:dOQeFo29xZKBNeRBI0B19mJtfHv68YgCTh1X+YphA+4=
github.com/opencontainers/runc v1.2.6 h1:P7Hqg40bsMvQGCS4S7DJYhUZOISMLJOB2iGX5COWiPk=
github.com/opencontainers/runc v1.2.6/go.mod h1:dOQeFo29xZKBNeRBI0B19mJtfHv68YgCTh1X+YphA+4=
github.com/opencontainers/runtime-spec v1.0.3-0.20220825212826-86290f6a00fb/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.2.1 h1:S4k4ryNgEpxW1dzyqffOmhI1BHYcjzU8lpJfSlR0xww=
github.com/opencontainers/runtime-spec v1.2.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
@@ -48,8 +50,9 @@ github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCko
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
@@ -59,8 +62,8 @@ github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 h1:kdXcSzyDtseVEc4yCz2qF8ZrQvIDBJLl4S1c3GCXmoI=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/urfave/cli v1.19.1/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=

View File

@@ -121,6 +121,9 @@ func GetDefault() (*Config, error) {
AnnotationPrefixes: []string{cdi.AnnotationPrefix},
SpecDirs: cdi.DefaultSpecDirs,
},
Legacy: legacyModeConfig{
CUDACompatMode: defaultCUDACompatMode,
},
},
},
NVIDIAContainerRuntimeHookConfig: RuntimeHookConfig{

View File

@@ -74,6 +74,9 @@ func TestGetConfig(t *testing.T) {
AnnotationPrefixes: []string{"cdi.k8s.io/"},
SpecDirs: []string{"/etc/cdi", "/var/run/cdi"},
},
Legacy: legacyModeConfig{
CUDACompatMode: "ldconfig",
},
},
},
NVIDIAContainerRuntimeHookConfig: RuntimeHookConfig{
@@ -93,6 +96,7 @@ func TestGetConfig(t *testing.T) {
"nvidia-container-cli.load-kmods = false",
"nvidia-container-cli.ldconfig = \"@/foo/bar/ldconfig\"",
"nvidia-container-cli.user = \"foo:bar\"",
"nvidia-container-cli.cuda-compat-mode = \"mount\"",
"nvidia-container-runtime.debug = \"/foo/bar\"",
"nvidia-container-runtime.discover-mode = \"not-legacy\"",
"nvidia-container-runtime.log-level = \"debug\"",
@@ -102,6 +106,7 @@ func TestGetConfig(t *testing.T) {
"nvidia-container-runtime.modes.cdi.annotation-prefixes = [\"cdi.k8s.io/\", \"example.vendor.com/\",]",
"nvidia-container-runtime.modes.cdi.spec-dirs = [\"/except/etc/cdi\", \"/not/var/run/cdi\",]",
"nvidia-container-runtime.modes.csv.mount-spec-path = \"/not/etc/nvidia-container-runtime/host-files-for-container.d\"",
"nvidia-container-runtime.modes.legacy.cuda-compat-mode = \"mount\"",
"nvidia-container-runtime-hook.path = \"/foo/bar/nvidia-container-runtime-hook\"",
"nvidia-ctk.path = \"/foo/bar/nvidia-ctk\"",
},
@@ -134,6 +139,9 @@ func TestGetConfig(t *testing.T) {
"/not/var/run/cdi",
},
},
Legacy: legacyModeConfig{
CUDACompatMode: "mount",
},
},
},
NVIDIAContainerRuntimeHookConfig: RuntimeHookConfig{
@@ -178,6 +186,9 @@ func TestGetConfig(t *testing.T) {
"/var/run/cdi",
},
},
Legacy: legacyModeConfig{
CUDACompatMode: "ldconfig",
},
},
},
NVIDIAContainerRuntimeHookConfig: RuntimeHookConfig{
@@ -200,6 +211,7 @@ func TestGetConfig(t *testing.T) {
"root = \"/bar/baz\"",
"load-kmods = false",
"ldconfig = \"@/foo/bar/ldconfig\"",
"cuda-compat-mode = \"mount\"",
"user = \"foo:bar\"",
"[nvidia-container-runtime]",
"debug = \"/foo/bar\"",
@@ -213,6 +225,8 @@ func TestGetConfig(t *testing.T) {
"spec-dirs = [\"/except/etc/cdi\", \"/not/var/run/cdi\",]",
"[nvidia-container-runtime.modes.csv]",
"mount-spec-path = \"/not/etc/nvidia-container-runtime/host-files-for-container.d\"",
"[nvidia-container-runtime.modes.legacy]",
"cuda-compat-mode = \"mount\"",
"[nvidia-container-runtime-hook]",
"path = \"/foo/bar/nvidia-container-runtime-hook\"",
"[nvidia-ctk]",
@@ -247,6 +261,9 @@ func TestGetConfig(t *testing.T) {
"/not/var/run/cdi",
},
},
Legacy: legacyModeConfig{
CUDACompatMode: "mount",
},
},
},
NVIDIAContainerRuntimeHookConfig: RuntimeHookConfig{
@@ -283,6 +300,9 @@ func TestGetConfig(t *testing.T) {
AnnotationPrefixes: []string{"cdi.k8s.io/"},
SpecDirs: []string{"/etc/cdi", "/var/run/cdi"},
},
Legacy: legacyModeConfig{
CUDACompatMode: "ldconfig",
},
},
},
NVIDIAContainerRuntimeHookConfig: RuntimeHookConfig{
@@ -322,6 +342,9 @@ func TestGetConfig(t *testing.T) {
AnnotationPrefixes: []string{"cdi.k8s.io/"},
SpecDirs: []string{"/etc/cdi", "/var/run/cdi"},
},
Legacy: legacyModeConfig{
CUDACompatMode: "ldconfig",
},
},
},
NVIDIAContainerRuntimeHookConfig: RuntimeHookConfig{

View File

@@ -29,8 +29,9 @@ type RuntimeConfig struct {
// modesConfig defines (optional) per-mode configs
type modesConfig struct {
CSV csvModeConfig `toml:"csv"`
CDI cdiModeConfig `toml:"cdi"`
CSV csvModeConfig `toml:"csv"`
CDI cdiModeConfig `toml:"cdi"`
Legacy legacyModeConfig `toml:"legacy"`
}
type cdiModeConfig struct {
@@ -45,3 +46,31 @@ type cdiModeConfig struct {
type csvModeConfig struct {
MountSpecPath string `toml:"mount-spec-path"`
}
type legacyModeConfig struct {
// CUDACompatMode sets the mode to be used to make CUDA Forward Compat
// libraries discoverable in the container.
CUDACompatMode cudaCompatMode `toml:"cuda-compat-mode,omitempty"`
}
type cudaCompatMode string
const (
defaultCUDACompatMode = CUDACompatModeLdconfig
// CUDACompatModeDisabled explicitly disables the handling of CUDA Forward
// Compatibility in the NVIDIA Container Runtime and NVIDIA Container
// Runtime Hook.
CUDACompatModeDisabled = cudaCompatMode("disabled")
// CUDACompatModeHook uses a container lifecycle hook to implement CUDA
// Forward Compatibility support. This requires the use of the NVIDIA
// Container Runtime and is not compatible with use cases where only the
// NVIDIA Container Runtime Hook is used (e.g. the Docker --gpus flag).
CUDACompatModeHook = cudaCompatMode("hook")
// CUDACompatModeLdconfig adds the folders containing CUDA Forward Compat
// libraries to the ldconfig command invoked from the NVIDIA Container
// Runtime Hook.
CUDACompatModeLdconfig = cudaCompatMode("ldconfig")
// CUDACompatModeMount mounts CUDA Forward Compat folders from the container
// image into the container when using the NVIDIA Container Runtime Hook.
CUDACompatModeMount = cudaCompatMode("mount")
)
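The new legacy-mode option maps onto the runtime config file as follows (a hypothetical `config.toml` fragment; per `defaultCUDACompatMode` above, the default is `"ldconfig"`):

```toml
[nvidia-container-runtime.modes.legacy]
# One of: "disabled", "hook", "ldconfig" (default), "mount".
cuda-compat-mode = "mount"
```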

View File

@@ -74,6 +74,9 @@ spec-dirs = ["/etc/cdi", "/var/run/cdi"]
[nvidia-container-runtime.modes.csv]
mount-spec-path = "/etc/nvidia-container-runtime/host-files-for-container.d"
[nvidia-container-runtime.modes.legacy]
cuda-compat-mode = "ldconfig"
[nvidia-container-runtime-hook]
path = "nvidia-container-runtime-hook"
skip-mode-detection = false

View File

@@ -34,6 +34,7 @@ type Hook struct {
Lifecycle string
Path string
Args []string
Env []string
}
// Discover defines an interface for discovering the devices, mounts, and hooks available on a system

View File

@@ -70,6 +70,7 @@ func TestGraphicsLibrariesDiscoverer(t *testing.T) {
Args: []string{"nvidia-cdi-hook", "create-symlinks",
"--link", "../libnvidia-allocator.so.1::/usr/lib64/gbm/nvidia-drm_gbm.so",
},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},
@@ -97,6 +98,7 @@ func TestGraphicsLibrariesDiscoverer(t *testing.T) {
Args: []string{"nvidia-cdi-hook", "create-symlinks",
"--link", "libnvidia-vulkan-producer.so.123.45.67::/usr/lib64/libnvidia-vulkan-producer.so",
},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},
@@ -128,6 +130,7 @@ func TestGraphicsLibrariesDiscoverer(t *testing.T) {
"--link", "../libnvidia-allocator.so.1::/usr/lib64/gbm/nvidia-drm_gbm.so",
"--link", "libnvidia-vulkan-producer.so.123.45.67::/usr/lib64/libnvidia-vulkan-producer.so",
},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},

View File

@@ -17,6 +17,7 @@
package discover
import (
"fmt"
"path/filepath"
"tags.cncf.io/container-device-interface/pkg/cdi"
@@ -69,6 +70,7 @@ func (c cdiHook) Create(name string, args ...string) Hook {
Lifecycle: cdi.CreateContainerHook,
Path: string(c),
Args: append(c.requiredArgs(name), args...),
Env: []string{fmt.Sprintf("NVIDIA_CTK_DEBUG=%v", false)},
}
}
func (c cdiHook) requiredArgs(name string) []string {

View File

@@ -95,6 +95,7 @@ func TestLDCacheUpdateHook(t *testing.T) {
Path: testNvidiaCDIHookPath,
Args: tc.expectedArgs,
Lifecycle: "createContainer",
Env: []string{"NVIDIA_CTK_DEBUG=false"},
}
d, err := NewLDCacheUpdateHook(logger, mountMock, testNvidiaCDIHookPath, tc.ldconfigPath)

View File

@@ -21,26 +21,28 @@ import "fmt"
// list is a discoverer that contains a list of Discoverers. The output of the
// Mounts functions is the concatenation of the output for each of the
// elements in the list.
type list struct {
discoverers []Discover
}
type list []Discover
var _ Discover = (*list)(nil)
// Merge creates a discoverer that is the composite of a list of discoverers.
func Merge(d ...Discover) Discover {
l := list{
discoverers: d,
func Merge(discoverers ...Discover) Discover {
var l list
for _, d := range discoverers {
if d == nil {
continue
}
l = append(l, d)
}
return &l
return l
}
// Devices returns all devices from the included discoverers
func (d list) Devices() ([]Device, error) {
var allDevices []Device
for i, di := range d.discoverers {
for i, di := range d {
devices, err := di.Devices()
if err != nil {
return nil, fmt.Errorf("error discovering devices for discoverer %v: %v", i, err)
@@ -55,7 +57,7 @@ func (d list) Devices() ([]Device, error) {
func (d list) Mounts() ([]Mount, error) {
var allMounts []Mount
for i, di := range d.discoverers {
for i, di := range d {
mounts, err := di.Mounts()
if err != nil {
return nil, fmt.Errorf("error discovering mounts for discoverer %v: %v", i, err)
@@ -70,7 +72,7 @@ func (d list) Mounts() ([]Mount, error) {
func (d list) Hooks() ([]Hook, error) {
var allHooks []Hook
for i, di := range d.discoverers {
for i, di := range d {
hooks, err := di.Hooks()
if err != nil {
return nil, fmt.Errorf("error discovering hooks for discoverer %v: %v", i, err)

View File

@@ -69,8 +69,8 @@ func (d *mounts) Mounts() ([]Mount, error) {
d.Lock()
defer d.Unlock()
uniqueMounts := make(map[string]Mount)
var mounts []Mount
seen := make(map[string]bool)
for _, candidate := range d.required {
d.logger.Debugf("Locating %v", candidate)
located, err := d.lookup.Locate(candidate)
@@ -84,7 +84,7 @@ func (d *mounts) Mounts() ([]Mount, error) {
}
d.logger.Debugf("Located %v as %v", candidate, located)
for _, p := range located {
if _, ok := uniqueMounts[p]; ok {
if seen[p] {
d.logger.Debugf("Skipping duplicate mount %v", p)
continue
}
@@ -95,7 +95,7 @@ func (d *mounts) Mounts() ([]Mount, error) {
}
d.logger.Infof("Selecting %v as %v", p, r)
uniqueMounts[p] = Mount{
mount := Mount{
HostPath: p,
Path: r,
Options: []string{
@@ -105,14 +105,11 @@ func (d *mounts) Mounts() ([]Mount, error) {
"bind",
},
}
mounts = append(mounts, mount)
seen[p] = true
}
}
var mounts []Mount
for _, m := range uniqueMounts {
mounts = append(mounts, m)
}
d.cache = mounts
return d.cache, nil

View File

@@ -44,13 +44,14 @@ func TestMounts(t *testing.T) {
"bind",
}
logger, logHook := testlog.NewNullLogger()
logger, _ := testlog.NewNullLogger()
testCases := []struct {
description string
expectedError error
expectedMounts []Mount
input *mounts
repeat int
}{
{
description: "nil lookup returns error",
@@ -159,31 +160,68 @@ func TestMounts(t *testing.T) {
{Path: "/located", HostPath: "/some/root/located", Options: mountOptions},
},
},
{
description: "multiple mounts ordering",
input: &mounts{
lookup: &lookup.LocatorMock{
LocateFunc: func(s string) ([]string, error) {
return []string{
"first",
"second",
"third",
"fourth",
"second",
"second",
"second",
"fifth",
"sixth"}, nil
},
},
required: []string{""},
},
expectedMounts: []Mount{
{Path: "first", HostPath: "first", Options: mountOptions},
{Path: "second", HostPath: "second", Options: mountOptions},
{Path: "third", HostPath: "third", Options: mountOptions},
{Path: "fourth", HostPath: "fourth", Options: mountOptions},
{Path: "fifth", HostPath: "fifth", Options: mountOptions},
{Path: "sixth", HostPath: "sixth", Options: mountOptions},
},
repeat: 10,
},
}
for _, tc := range testCases {
logHook.Reset()
t.Run(tc.description, func(t *testing.T) {
tc.input.logger = logger
mounts, err := tc.input.Mounts()
if tc.expectedError != nil {
require.Error(t, err)
} else {
require.NoError(t, err)
for i := 1; ; i++ {
testName := tc.description
if tc.repeat > 1 {
testName += fmt.Sprintf("/%d", i)
}
require.ElementsMatch(t, tc.expectedMounts, mounts)
success := t.Run(testName, func(t *testing.T) {
tc.input.logger = logger
mounts, err := tc.input.Mounts()
// We check that the mock is called for each element of required
if tc.input.lookup != nil {
mock := tc.input.lookup.(*lookup.LocatorMock)
require.Len(t, mock.LocateCalls(), len(tc.input.required))
var args []string
for _, c := range mock.LocateCalls() {
args = append(args, c.S)
if tc.expectedError != nil {
require.Error(t, err)
} else {
require.NoError(t, err)
}
require.EqualValues(t, args, tc.input.required)
require.EqualValues(t, tc.expectedMounts, mounts)
// We check that the mock is called for each element of required
if i == 1 && tc.input.lookup != nil {
mock := tc.input.lookup.(*lookup.LocatorMock)
require.Len(t, mock.LocateCalls(), len(tc.input.required))
var args []string
for _, c := range mock.LocateCalls() {
args = append(args, c.S)
}
require.EqualValues(t, args, tc.input.required)
}
})
if !success || i >= tc.repeat {
break
}
})
}
}
}

View File

@@ -115,6 +115,7 @@ func TestWithWithDriverDotSoSymlinks(t *testing.T) {
Lifecycle: "createContainer",
Path: "/path/to/nvidia-cdi-hook",
Args: []string{"nvidia-cdi-hook", "create-symlinks", "--link", "libcuda.so.1::/usr/lib/libcuda.so"},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},
@@ -147,6 +148,7 @@ func TestWithWithDriverDotSoSymlinks(t *testing.T) {
Lifecycle: "createContainer",
Path: "/path/to/nvidia-cdi-hook",
Args: []string{"nvidia-cdi-hook", "create-symlinks", "--link", "libcuda.so.1::/usr/lib/libcuda.so"},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},
@@ -178,6 +180,7 @@ func TestWithWithDriverDotSoSymlinks(t *testing.T) {
Lifecycle: "createContainer",
Path: "/path/to/nvidia-cdi-hook",
Args: []string{"nvidia-cdi-hook", "create-symlinks", "--link", "libcuda.so.1::/usr/lib/libcuda.so"},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},
@@ -247,6 +250,7 @@ func TestWithWithDriverDotSoSymlinks(t *testing.T) {
Lifecycle: "createContainer",
Path: "/path/to/nvidia-cdi-hook",
Args: []string{"nvidia-cdi-hook", "create-symlinks", "--link", "libcuda.so.1::/usr/lib/libcuda.so"},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},
@@ -301,6 +305,7 @@ func TestWithWithDriverDotSoSymlinks(t *testing.T) {
"--link", "libGLX_nvidia.so.1.2.3::/usr/lib/libGLX_indirect.so.0",
"--link", "libnvidia-opticalflow.so.1::/usr/lib/libnvidia-opticalflow.so",
},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},

View File

@@ -38,10 +38,15 @@ func (d hook) toEdits() *cdi.ContainerEdits {
// toSpec converts a discovered Hook to a CDI Spec Hook. Note
// that missing info is filled in when edits are applied by querying the Hook node.
func (d hook) toSpec() *specs.Hook {
env := d.Env
if env == nil {
env = []string{"NVIDIA_CTK_DEBUG=false"}
}
s := specs.Hook{
HookName: d.Lifecycle,
Path: d.Path,
Args: d.Args,
Env: env,
}
return &s

View File

@@ -216,7 +216,7 @@ func TestResolveAutoMode(t *testing.T) {
HasTegraFilesFunc: func() (bool, string) {
return tc.info["tegra"], "tegra"
},
UsesOnlyNVGPUModuleFunc: func() (bool, string) {
HasOnlyIntegratedGPUsFunc: func() (bool, string) {
return tc.info["nvgpu"], "nvgpu"
},
}

View File

@@ -47,6 +47,11 @@ const (
flagArchX8664 = 0x0300
flagArchX32 = 0x0800
flagArchPpc64le = 0x0500
// flagArch_ARM_LIBHF is the flag value for 32-bit ARM libs using hard-float.
flagArch_ARM_LIBHF = 0x0900
// flagArch_AARCH64_LIB64 is the flag value for 64-bit ARM libs.
flagArch_AARCH64_LIB64 = 0x0a00
)
var errInvalidCache = errors.New("invalid ld.so.cache file")
@@ -195,10 +200,14 @@ func (c *ldcache) getEntries() []entry {
switch e.Flags & flagArchMask {
case flagArchX8664:
fallthrough
case flagArch_AARCH64_LIB64:
fallthrough
case flagArchPpc64le:
bits = 64
case flagArchX32:
fallthrough
case flagArch_ARM_LIBHF:
fallthrough
case flagArchI386:
bits = 32
default:


@@ -78,12 +78,14 @@ func TestDiscoverModifier(t *testing.T) {
{
Path: "/hook/a",
Args: []string{"/hook/a", "arga"},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
CreateContainer: []specs.Hook{
{
Path: "/hook/b",
Args: []string{"/hook/b", "argb"},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},
@@ -123,6 +125,7 @@ func TestDiscoverModifier(t *testing.T) {
{
Path: "/hook/b",
Args: []string{"/hook/b", "argb"},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},


@@ -79,24 +79,41 @@ func NewFeatureGatedModifier(logger logger.Interface, cfg *config.Config, image
discoverers = append(discoverers, d)
}
if !cfg.Features.AllowCUDACompatLibsFromContainer.IsEnabled() && !cfg.Features.DisableCUDACompatLibHook.IsEnabled() {
compatLibHookDiscoverer := discover.NewCUDACompatHookDiscoverer(logger, cfg.NVIDIACTKConfig.Path, driver)
discoverers = append(discoverers, compatLibHookDiscoverer)
// For legacy mode, we also need to inject a hook to update the LDCache
// after we have modified the configuration.
if cfg.NVIDIAContainerRuntimeConfig.Mode == "legacy" {
ldcacheUpdateHookDiscoverer, err := discover.NewLDCacheUpdateHook(
logger,
discover.None{},
cfg.NVIDIACTKConfig.Path,
"",
)
if err != nil {
return nil, fmt.Errorf("failed to construct ldcache update discoverer: %w", err)
}
discoverers = append(discoverers, ldcacheUpdateHookDiscoverer)
// If the feature flag has explicitly been toggled, we don't make any modification.
if !cfg.Features.DisableCUDACompatLibHook.IsEnabled() {
cudaCompatDiscoverer, err := getCudaCompatModeDiscoverer(logger, cfg, driver)
if err != nil {
return nil, fmt.Errorf("failed to construct CUDA Compat discoverer: %w", err)
}
discoverers = append(discoverers, cudaCompatDiscoverer)
}
return NewModifierFromDiscoverer(logger, discover.Merge(discoverers...))
}
func getCudaCompatModeDiscoverer(logger logger.Interface, cfg *config.Config, driver *root.Driver) (discover.Discover, error) {
// For legacy mode, we only include the enable-cuda-compat hook if cuda-compat-mode is set to hook.
if cfg.NVIDIAContainerRuntimeConfig.Mode == "legacy" && cfg.NVIDIAContainerRuntimeConfig.Modes.Legacy.CUDACompatMode != config.CUDACompatModeHook {
return nil, nil
}
compatLibHookDiscoverer := discover.NewCUDACompatHookDiscoverer(logger, cfg.NVIDIACTKConfig.Path, driver)
// For non-legacy modes we return the hook as is. These modes *should* already include the update-ldcache hook.
if cfg.NVIDIAContainerRuntimeConfig.Mode != "legacy" {
return compatLibHookDiscoverer, nil
}
// For legacy mode, we also need to inject a hook to update the LDCache
// after we have modified the configuration.
ldcacheUpdateHookDiscoverer, err := discover.NewLDCacheUpdateHook(
logger,
discover.None{},
cfg.NVIDIACTKConfig.Path,
"",
)
if err != nil {
return nil, fmt.Errorf("failed to construct ldcache update discoverer: %w", err)
}
return discover.Merge(compatLibHookDiscoverer, ldcacheUpdateHookDiscoverer), nil
}
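The refactor above splits the CUDA compat decision into three cases: legacy mode without `cuda-compat-mode=hook` injects nothing, non-legacy modes get only the compat hook (they already carry `update-ldcache`), and legacy mode with the hook enabled gets both. A sketch of that decision table, with discoverers reduced to hook-name strings (the names follow the diff; everything else is simplified):

```go
package main

import "fmt"

// compatHooks mirrors the branch structure of getCudaCompatModeDiscoverer
// above, returning which hooks would be injected for a given configuration.
func compatHooks(mode, cudaCompatMode string) []string {
	// Legacy mode only injects the compat hook when cuda-compat-mode=hook.
	if mode == "legacy" && cudaCompatMode != "hook" {
		return nil
	}
	// Non-legacy modes *should* already include the update-ldcache hook.
	if mode != "legacy" {
		return []string{"enable-cuda-compat"}
	}
	// Legacy mode also needs an explicit ldcache update afterwards.
	return []string{"enable-cuda-compat", "update-ldcache"}
}

func main() {
	fmt.Println(compatHooks("legacy", ""))     // []
	fmt.Println(compatHooks("csv", ""))        // [enable-cuda-compat]
	fmt.Println(compatHooks("legacy", "hook")) // [enable-cuda-compat update-ldcache]
}
```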


@@ -97,6 +97,7 @@ func TestDiscovererFromCSVFiles(t *testing.T) {
"--link",
"/usr/lib/aarch64-linux-gnu/tegra/libv4l2_nvargus.so::/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvargus.so",
},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},
@@ -153,6 +154,7 @@ func TestDiscovererFromCSVFiles(t *testing.T) {
"--link",
"/usr/lib/aarch64-linux-gnu/tegra/libv4l2_nvargus.so::/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvargus.so",
},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},


@@ -162,8 +162,11 @@ func (c *Config) GetRuntimeConfig(name string) (engine.RuntimeConfig, error) {
}
// CommandLineSource returns the CLI-based containerd config loader
func CommandLineSource(hostRoot string) toml.Loader {
return toml.FromCommandLine(chrootIfRequired(hostRoot, "containerd", "config", "dump")...)
func CommandLineSource(hostRoot string, executablePath string) toml.Loader {
if executablePath == "" {
executablePath = "containerd"
}
return toml.FromCommandLine(chrootIfRequired(hostRoot, executablePath, "config", "dump")...)
}
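The hunk above threads a new `executablePath` through `CommandLineSource`, falling back to `"containerd"` when it is empty. A self-contained sketch of the resulting command-line construction; the chroot-wrapping behavior of `chrootIfRequired` is assumed here, not taken from the diff:

```go
package main

import "fmt"

// commandLine sketches CommandLineSource + chrootIfRequired: default the
// executable, then wrap the command in chroot when a host root is set
// (the wrapping rule is an assumption for illustration).
func commandLine(hostRoot, executablePath string) []string {
	if executablePath == "" {
		executablePath = "containerd"
	}
	cmd := []string{executablePath, "config", "dump"}
	if hostRoot != "" && hostRoot != "/" {
		cmd = append([]string{"chroot", hostRoot}, cmd...)
	}
	return cmd
}

func main() {
	fmt.Println(commandLine("", ""))
	fmt.Println(commandLine("/host", "/usr/local/bin/containerd"))
}
```

Keeping the default inside the loader means existing callers that pass `""` keep the old behavior unchanged.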
func chrootIfRequired(hostRoot string, commandLine ...string) []string {


@@ -157,9 +157,12 @@ func (c *Config) GetRuntimeConfig(name string) (engine.RuntimeConfig, error) {
func (c *Config) EnableCDI() {}
// CommandLineSource returns the CLI-based crio config loader
func CommandLineSource(hostRoot string) toml.Loader {
func CommandLineSource(hostRoot string, executablePath string) toml.Loader {
if executablePath == "" {
executablePath = "crio"
}
return toml.LoadFirst(
toml.FromCommandLine(chrootIfRequired(hostRoot, "crio", "status", "config")...),
toml.FromCommandLine(chrootIfRequired(hostRoot, executablePath, "status", "config")...),
toml.FromCommandLine(chrootIfRequired(hostRoot, "crio-status", "config")...),
)
}


@@ -95,6 +95,7 @@ func TestNvidiaSMISymlinkHook(t *testing.T) {
Path: "nvidia-cdi-hook",
Args: []string{"nvidia-cdi-hook", "create-symlinks",
"--link", "nvidia-smi::/usr/bin/nvidia-smi"},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},
@@ -115,6 +116,7 @@ func TestNvidiaSMISymlinkHook(t *testing.T) {
Path: "nvidia-cdi-hook",
Args: []string{"nvidia-cdi-hook", "create-symlinks",
"--link", "/some/path/nvidia-smi::/usr/bin/nvidia-smi"},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},
@@ -135,6 +137,7 @@ func TestNvidiaSMISymlinkHook(t *testing.T) {
Path: "nvidia-cdi-hook",
Args: []string{"nvidia-cdi-hook", "create-symlinks",
"--link", "/some/path/nvidia-smi::/usr/bin/nvidia-smi"},
Env: []string{"NVIDIA_CTK_DEBUG=false"},
},
},
},


@@ -1,7 +1,3 @@
FROM quay.io/centos/centos:stream8
RUN sed -i -e "s|mirrorlist=|#mirrorlist=|g" \
-e "s|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g" \
/etc/yum.repos.d/CentOS-Stream-*
FROM quay.io/centos/centos:stream9
RUN yum install -y createrepo rpm-sign pinentry


@@ -38,6 +38,11 @@ const (
type Options struct {
Config string
Socket string
// ExecutablePath specifies the path to the container runtime executable.
// This is used to extract the current config, for example.
// If a HostRootMount is specified, this path is relative to the host root
// mount.
ExecutablePath string
// EnabledCDI indicates whether CDI should be enabled.
EnableCDI bool
RuntimeName string


@@ -136,7 +136,7 @@ func validateFlags(c *cli.Context, o *options) error {
if err := toolkit.ValidateOptions(&o.toolkitOptions, o.toolkitRoot()); err != nil {
return err
}
if err := runtime.ValidateOptions(c, &o.runtimeOptions, o.runtime, o.toolkitRoot(), &o.toolkitOptions); err != nil {
if err := o.runtimeOptions.Validate(c, o.runtime, o.toolkitRoot(), &o.toolkitOptions); err != nil {
return err
}
return nil


@@ -173,7 +173,7 @@ func getRuntimeConfig(o *container.Options, co *Options) (engine.Interface, erro
containerd.WithPath(o.Config),
containerd.WithConfigSource(
toml.LoadFirst(
containerd.CommandLineSource(o.HostRootMount),
containerd.CommandLineSource(o.HostRootMount, o.ExecutablePath),
toml.FromFile(o.Config),
),
),


@@ -202,7 +202,7 @@ func getRuntimeConfig(o *container.Options) (engine.Interface, error) {
crio.WithPath(o.Config),
crio.WithConfigSource(
toml.LoadFirst(
crio.CommandLineSource(o.HostRootMount),
crio.CommandLineSource(o.HostRootMount, o.ExecutablePath),
toml.FromFile(o.Config),
),
),


@@ -19,6 +19,7 @@ package runtime
import (
"fmt"
log "github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
"github.com/NVIDIA/nvidia-container-toolkit/tools/container"
@@ -53,6 +54,12 @@ func Flags(opts *Options) []cli.Flag {
Destination: &opts.Config,
EnvVars: []string{"RUNTIME_CONFIG", "CONTAINERD_CONFIG", "DOCKER_CONFIG"},
},
&cli.StringFlag{
Name: "executable-path",
Usage: "The path to the runtime executable. This is used to extract the current config",
Destination: &opts.ExecutablePath,
EnvVars: []string{"RUNTIME_EXECUTABLE_PATH"},
},
&cli.StringFlag{
Name: "socket",
Usage: "Path to the runtime socket file",
@@ -104,8 +111,8 @@ func Flags(opts *Options) []cli.Flag {
return flags
}
// ValidateOptions checks whether the specified options are valid
func ValidateOptions(c *cli.Context, opts *Options, runtime string, toolkitRoot string, to *toolkit.Options) error {
// Validate checks whether the specified options are valid
func (opts *Options) Validate(c *cli.Context, runtime string, toolkitRoot string, to *toolkit.Options) error {
// We set this option here to ensure that it is available in future calls.
opts.RuntimeDir = toolkitRoot
@@ -113,6 +120,11 @@ func ValidateOptions(c *cli.Context, opts *Options, runtime string, toolkitRoot
opts.EnableCDI = to.CDI.Enabled
}
if opts.ExecutablePath != "" && opts.RuntimeName == docker.Name {
log.Warningf("Ignoring executable-path=%q flag for %v", opts.ExecutablePath, opts.RuntimeName)
opts.ExecutablePath = ""
}
// Apply the runtime-specific config changes.
switch runtime {
case containerd.Name:


@@ -32,6 +32,7 @@ type Device interface {
GetMigDevices() ([]MigDevice, error)
GetMigProfiles() ([]MigProfile, error)
GetPCIBusID() (string, error)
IsFabricAttached() (bool, error)
IsMigCapable() (bool, error)
IsMigEnabled() (bool, error)
VisitMigDevices(func(j int, m MigDevice) error) error
@@ -85,9 +86,13 @@ func (d *device) GetArchitectureAsString() (string, error) {
case nvml.DEVICE_ARCH_AMPERE:
return "Ampere", nil
case nvml.DEVICE_ARCH_ADA:
return "Ada", nil
return "Ada Lovelace", nil
case nvml.DEVICE_ARCH_HOPPER:
return "Hopper", nil
case nvml.DEVICE_ARCH_BLACKWELL:
return "Blackwell", nil
case nvml.DEVICE_ARCH_T23X:
return "Orin", nil
case nvml.DEVICE_ARCH_UNKNOWN:
return "Unknown", nil
}
@@ -124,7 +129,7 @@ func (d *device) GetBrandAsString() (string, error) {
case nvml.BRAND_NVIDIA_VWS:
return "NvidiaVWS", nil
// Deprecated in favor of nvml.BRAND_NVIDIA_CLOUD_GAMING
//case nvml.BRAND_NVIDIA_VGAMING:
// case nvml.BRAND_NVIDIA_VGAMING:
// return "VGaming", nil
case nvml.BRAND_NVIDIA_CLOUD_GAMING:
return "NvidiaCloudGaming", nil
@@ -208,6 +213,53 @@ func (d *device) IsMigEnabled() (bool, error) {
return (mode == nvml.DEVICE_MIG_ENABLE), nil
}
// IsFabricAttached checks if a device is attached to a GPU fabric.
func (d *device) IsFabricAttached() (bool, error) {
if d.lib.hasSymbol("nvmlDeviceGetGpuFabricInfo") {
info, ret := d.GetGpuFabricInfo()
if ret == nvml.ERROR_NOT_SUPPORTED {
return false, nil
}
if ret != nvml.SUCCESS {
return false, fmt.Errorf("error getting GPU Fabric Info: %v", ret)
}
if info.State != nvml.GPU_FABRIC_STATE_COMPLETED {
return false, nil
}
if info.ClusterUuid == [16]uint8{} {
return false, nil
}
if nvml.Return(info.Status) != nvml.SUCCESS {
return false, nil
}
return true, nil
}
if d.lib.hasSymbol("nvmlDeviceGetGpuFabricInfoV") {
info, ret := d.GetGpuFabricInfoV().V2()
if ret == nvml.ERROR_NOT_SUPPORTED {
return false, nil
}
if ret != nvml.SUCCESS {
return false, fmt.Errorf("error getting GPU Fabric Info: %v", ret)
}
if info.State != nvml.GPU_FABRIC_STATE_COMPLETED {
return false, nil
}
if info.ClusterUuid == [16]uint8{} {
return false, nil
}
if nvml.Return(info.Status) != nvml.SUCCESS {
return false, nil
}
return true, nil
}
return false, nil
}
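`IsFabricAttached` above applies the same three checks against either fabric-info API: the probe state must be completed, the cluster UUID must be non-zero, and the reported status must be success. A sketch of just that predicate, decoupled from NVML (the struct fields follow the diff; the `fabricStateCompleted` value is an assumed placeholder, not the real `nvml.GPU_FABRIC_STATE_COMPLETED` constant):

```go
package main

import "fmt"

// fabricInfo holds the fields IsFabricAttached inspects, simplified from
// the nvml fabric-info structs used in the diff above.
type fabricInfo struct {
	State       int
	ClusterUuid [16]uint8
	Status      int
}

// fabricStateCompleted is an assumed placeholder value for illustration.
const fabricStateCompleted = 3

// attached applies the three checks from the diff: completed probe,
// non-zero cluster UUID, and a success (zero) status.
func attached(info fabricInfo) bool {
	if info.State != fabricStateCompleted {
		return false
	}
	if info.ClusterUuid == [16]uint8{} { // Go arrays compare element-wise
		return false
	}
	return info.Status == 0
}

func main() {
	fmt.Println(attached(fabricInfo{})) // false
	fmt.Println(attached(fabricInfo{State: fabricStateCompleted, ClusterUuid: [16]uint8{1}})) // true
}
```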
// VisitMigDevices walks a top-level device and invokes a callback function for each MIG device configured on it.
func (d *device) VisitMigDevices(visit func(int, MigDevice) error) error {
capable, err := d.IsMigCapable()


@@ -63,7 +63,7 @@ func (m *migdevice) GetProfile() (MigProfile, error) {
return m.profile, nil
}
parent, ret := m.Device.GetDeviceHandleFromMigDeviceHandle()
parent, ret := m.GetDeviceHandleFromMigDeviceHandle()
if ret != nvml.SUCCESS {
return nil, fmt.Errorf("error getting parent device handle: %v", ret)
}
@@ -73,17 +73,17 @@ func (m *migdevice) GetProfile() (MigProfile, error) {
return nil, fmt.Errorf("error getting parent memory info: %v", ret)
}
attributes, ret := m.Device.GetAttributes()
attributes, ret := m.GetAttributes()
if ret != nvml.SUCCESS {
return nil, fmt.Errorf("error getting MIG device attributes: %v", ret)
}
giID, ret := m.Device.GetGpuInstanceId()
giID, ret := m.GetGpuInstanceId()
if ret != nvml.SUCCESS {
return nil, fmt.Errorf("error getting MIG device GPU Instance ID: %v", ret)
}
ciID, ret := m.Device.GetComputeInstanceId()
ciID, ret := m.GetComputeInstanceId()
if ret != nvml.SUCCESS {
return nil, fmt.Errorf("error getting MIG device Compute Instance ID: %v", ret)
}


@@ -30,12 +30,14 @@ type PlatformResolver interface {
// PropertyExtractor provides a set of functions to query capabilities of the
// system.
//
//go:generate moq -rm -out property-extractor_mock.go . PropertyExtractor
//go:generate moq -rm -fmt=goimports -out property-extractor_mock.go . PropertyExtractor
type PropertyExtractor interface {
HasDXCore() (bool, string)
HasNvml() (bool, string)
HasTegraFiles() (bool, string)
// Deprecated: Use HasTegraFiles instead.
IsTegraSystem() (bool, string)
// Deprecated: Use HasOnlyIntegratedGPUs
UsesOnlyNVGPUModule() (bool, string)
HasOnlyIntegratedGPUs() (bool, string)
}


@@ -90,16 +90,24 @@ func (i *propertyExtractor) HasTegraFiles() (bool, string) {
}
// UsesOnlyNVGPUModule checks whether the only the nvgpu module is used.
// This kernel module is used on Tegra-based systems when using the iGPU.
// Since some of these systems also support NVML, we use the device name
// reported by NVML to determine whether the system is an iGPU system.
//
// Devices that use the nvgpu module have their device names as:
// Deprecated: UsesOnlyNVGPUModule is deprecated, use HasOnlyIntegratedGPUs instead.
func (i *propertyExtractor) UsesOnlyNVGPUModule() (uses bool, reason string) {
return i.HasOnlyIntegratedGPUs()
}
// HasOnlyIntegratedGPUs checks whether all GPUs are iGPUs that use NVML.
//
// As of Orin-based systems iGPUs also support limited NVML queries.
// In the absence of a robust API, we rely on heuristics to make this decision.
//
// The following device names are checked:
//
// GPU 0: Orin (nvgpu) (UUID: 54d0709b-558d-5a59-9c65-0c5fc14a21a4)
// GPU 0: NVIDIA Thor (UUID: 54d0709b-558d-5a59-9c65-0c5fc14a21a4)
//
// This function returns true if ALL devices use the nvgpu module.
func (i *propertyExtractor) UsesOnlyNVGPUModule() (uses bool, reason string) {
// This function returns true if ALL devices are detected as iGPUs.
func (i *propertyExtractor) HasOnlyIntegratedGPUs() (uses bool, reason string) {
// We ensure that this function never panics
defer func() {
if err := recover(); err != nil {
@@ -135,9 +143,19 @@ func (i *propertyExtractor) UsesOnlyNVGPUModule() (uses bool, reason string) {
}
for _, name := range names {
if !strings.Contains(name, "(nvgpu)") {
if !isIntegratedGPUName(name) {
return false, fmt.Sprintf("device %q does not use nvgpu module", name)
}
}
return true, "all devices use nvgpu module"
}
func isIntegratedGPUName(name string) bool {
if strings.Contains(name, "(nvgpu)") {
return true
}
if strings.Contains(name, "NVIDIA Thor") {
return true
}
return false
}
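The renamed heuristic above accepts a system as iGPU-only when every NVML device name matches one of the two patterns shown (`(nvgpu)` or `NVIDIA Thor`). A compact sketch of the all-devices check, using the example names from the doc comment:

```go
package main

import (
	"fmt"
	"strings"
)

// hasOnlyIntegratedGPUs mirrors the heuristic in the hunk above: every
// device name must look like an iGPU, via the "(nvgpu)" marker or the
// "NVIDIA Thor" product name.
func hasOnlyIntegratedGPUs(names []string) bool {
	for _, name := range names {
		if strings.Contains(name, "(nvgpu)") || strings.Contains(name, "NVIDIA Thor") {
			continue
		}
		return false // one discrete GPU is enough to fail the check
	}
	return true
}

func main() {
	fmt.Println(hasOnlyIntegratedGPUs([]string{"Orin (nvgpu)", "NVIDIA Thor"})) // true
	fmt.Println(hasOnlyIntegratedGPUs([]string{"NVIDIA H100"}))                 // false
}
```

As the diff notes, this is a name-based heuristic standing in for a robust API, so new iGPU product names would need to be added to the match list.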


@@ -23,6 +23,9 @@ var _ PropertyExtractor = &PropertyExtractorMock{}
// HasNvmlFunc: func() (bool, string) {
// panic("mock out the HasNvml method")
// },
// HasOnlyIntegratedGPUsFunc: func() (bool, string) {
// panic("mock out the HasOnlyIntegratedGPUs method")
// },
// HasTegraFilesFunc: func() (bool, string) {
// panic("mock out the HasTegraFiles method")
// },
@@ -45,6 +48,9 @@ type PropertyExtractorMock struct {
// HasNvmlFunc mocks the HasNvml method.
HasNvmlFunc func() (bool, string)
// HasOnlyIntegratedGPUsFunc mocks the HasOnlyIntegratedGPUs method.
HasOnlyIntegratedGPUsFunc func() (bool, string)
// HasTegraFilesFunc mocks the HasTegraFiles method.
HasTegraFilesFunc func() (bool, string)
@@ -62,6 +68,9 @@ type PropertyExtractorMock struct {
// HasNvml holds details about calls to the HasNvml method.
HasNvml []struct {
}
// HasOnlyIntegratedGPUs holds details about calls to the HasOnlyIntegratedGPUs method.
HasOnlyIntegratedGPUs []struct {
}
// HasTegraFiles holds details about calls to the HasTegraFiles method.
HasTegraFiles []struct {
}
@@ -72,11 +81,12 @@ type PropertyExtractorMock struct {
UsesOnlyNVGPUModule []struct {
}
}
lockHasDXCore sync.RWMutex
lockHasNvml sync.RWMutex
lockHasTegraFiles sync.RWMutex
lockIsTegraSystem sync.RWMutex
lockUsesOnlyNVGPUModule sync.RWMutex
lockHasDXCore sync.RWMutex
lockHasNvml sync.RWMutex
lockHasOnlyIntegratedGPUs sync.RWMutex
lockHasTegraFiles sync.RWMutex
lockIsTegraSystem sync.RWMutex
lockUsesOnlyNVGPUModule sync.RWMutex
}
// HasDXCore calls HasDXCoreFunc.
@@ -133,6 +143,33 @@ func (mock *PropertyExtractorMock) HasNvmlCalls() []struct {
return calls
}
// HasOnlyIntegratedGPUs calls HasOnlyIntegratedGPUsFunc.
func (mock *PropertyExtractorMock) HasOnlyIntegratedGPUs() (bool, string) {
if mock.HasOnlyIntegratedGPUsFunc == nil {
panic("PropertyExtractorMock.HasOnlyIntegratedGPUsFunc: method is nil but PropertyExtractor.HasOnlyIntegratedGPUs was just called")
}
callInfo := struct {
}{}
mock.lockHasOnlyIntegratedGPUs.Lock()
mock.calls.HasOnlyIntegratedGPUs = append(mock.calls.HasOnlyIntegratedGPUs, callInfo)
mock.lockHasOnlyIntegratedGPUs.Unlock()
return mock.HasOnlyIntegratedGPUsFunc()
}
// HasOnlyIntegratedGPUsCalls gets all the calls that were made to HasOnlyIntegratedGPUs.
// Check the length with:
//
// len(mockedPropertyExtractor.HasOnlyIntegratedGPUsCalls())
func (mock *PropertyExtractorMock) HasOnlyIntegratedGPUsCalls() []struct {
} {
var calls []struct {
}
mock.lockHasOnlyIntegratedGPUs.RLock()
calls = mock.calls.HasOnlyIntegratedGPUs
mock.lockHasOnlyIntegratedGPUs.RUnlock()
return calls
}
// HasTegraFiles calls HasTegraFilesFunc.
func (mock *PropertyExtractorMock) HasTegraFiles() (bool, string) {
if mock.HasTegraFilesFunc == nil {


@@ -48,13 +48,13 @@ func (p platformResolver) ResolvePlatform() Platform {
hasNVML, reason := p.propertyExtractor.HasNvml()
p.logger.Debugf("Is NVML-based system? %v: %v", hasNVML, reason)
usesOnlyNVGPUModule, reason := p.propertyExtractor.UsesOnlyNVGPUModule()
p.logger.Debugf("Uses nvgpu kernel module? %v: %v", usesOnlyNVGPUModule, reason)
hasOnlyIntegratedGPUs, reason := p.propertyExtractor.HasOnlyIntegratedGPUs()
p.logger.Debugf("Has only integrated GPUs? %v: %v", hasOnlyIntegratedGPUs, reason)
switch {
case hasDXCore:
return PlatformWSL
case (hasTegraFiles && !hasNVML), usesOnlyNVGPUModule:
case (hasTegraFiles && !hasNVML), hasOnlyIntegratedGPUs:
return PlatformTegra
case hasNVML:
return PlatformNVML


@@ -107,7 +107,7 @@ func (m *mmio) BigEndian() Mmio {
}
func (m *mmio) Close() error {
err := syscall.Munmap(*m.Bytes.Raw())
err := syscall.Munmap(*m.Raw())
if err != nil {
return fmt.Errorf("failed to munmap file: %v", err)
}
@@ -117,7 +117,7 @@ func (m *mmio) Close() error {
func (m *mmio) Sync() error {
_, _, errno := syscall.Syscall(
syscall.SYS_MSYNC,
uintptr(unsafe.Pointer(&(*m.Bytes.Raw())[0])),
uintptr(unsafe.Pointer(&(*m.Raw())[0])),
uintptr(m.Len()),
uintptr(syscall.MS_SYNC|syscall.MS_INVALIDATE))
if errno != 0 {


@@ -70,8 +70,8 @@ func (m *mockMmio) Sync() error {
if !m.rw {
return fmt.Errorf("opened read-only")
}
for i := range *m.Bytes.Raw() {
(*m.source)[m.offset+i] = (*m.Bytes.Raw())[i]
for i := range *m.Raw() {
(*m.source)[m.offset+i] = (*m.Raw())[i]
}
return nil
}


@@ -98,7 +98,7 @@ func (m *MockNvpci) AddMockA100(address string, numaNode int, sriov *SriovInfo)
if err != nil {
return err
}
_, err = numa.WriteString(fmt.Sprintf("%v", numaNode))
_, err = fmt.Fprintf(numa, "%v", numaNode)
if err != nil {
return err
}
@@ -137,7 +137,7 @@ func createNVIDIAgpuFiles(deviceDir string) error {
if err != nil {
return err
}
_, err = vendor.WriteString(fmt.Sprintf("0x%x", PCINvidiaVendorID))
_, err = fmt.Fprintf(vendor, "0x%x", PCINvidiaVendorID)
if err != nil {
return err
}
@@ -146,7 +146,7 @@ func createNVIDIAgpuFiles(deviceDir string) error {
if err != nil {
return err
}
_, err = class.WriteString(fmt.Sprintf("0x%x", PCI3dControllerClass))
_, err = fmt.Fprintf(class, "0x%x", PCI3dControllerClass)
if err != nil {
return err
}
@@ -188,7 +188,7 @@ func createNVIDIAgpuFiles(deviceDir string) error {
if err != nil {
return err
}
_, err = resource.WriteString(fmt.Sprintf("0x%x 0x%x 0x%x", bar0[0], bar0[1], bar0[2]))
_, err = fmt.Fprintf(resource, "0x%x 0x%x 0x%x", bar0[0], bar0[1], bar0[2])
if err != nil {
return err
}
@@ -246,7 +246,7 @@ func (m *MockNvpci) createVf(pfAddress string, id, iommu_group, numaNode int) er
if err != nil {
return err
}
_, err = numa.WriteString(fmt.Sprintf("%v", numaNode))
_, err = fmt.Fprintf(numa, "%v", numaNode)
if err != nil {
return err
}


@@ -280,27 +280,14 @@ func (p *nvpci) getGPUByPciBusID(address string, cache map[string]*NvidiaPCIDevi
return nil, fmt.Errorf("unable to convert device string to uint16: %v", deviceStr)
}
driver, err := filepath.EvalSymlinks(path.Join(devicePath, "driver"))
if err == nil {
driver = filepath.Base(driver)
} else if os.IsNotExist(err) {
driver = ""
} else {
return nil, fmt.Errorf("unable to detect driver for %s: %v", address, err)
driver, err := getDriver(devicePath)
if err != nil {
return nil, fmt.Errorf("unable to detect driver for %s: %w", address, err)
}
var iommuGroup int64
iommu, err := filepath.EvalSymlinks(path.Join(devicePath, "iommu_group"))
if err == nil {
iommuGroupStr := strings.TrimSpace(filepath.Base(iommu))
iommuGroup, err = strconv.ParseInt(iommuGroupStr, 0, 64)
if err != nil {
return nil, fmt.Errorf("unable to convert iommu_group string to int64: %v", iommuGroupStr)
}
} else if os.IsNotExist(err) {
iommuGroup = -1
} else {
return nil, fmt.Errorf("unable to detect iommu_group for %s: %v", address, err)
iommuGroup, err := getIOMMUGroup(devicePath)
if err != nil {
return nil, fmt.Errorf("unable to detect IOMMU group for %s: %w", address, err)
}
numa, err := os.ReadFile(path.Join(devicePath, "numa_node"))
@@ -359,7 +346,8 @@ func (p *nvpci) getGPUByPciBusID(address string, cache map[string]*NvidiaPCIDevi
var sriovInfo SriovInfo
// Device is a virtual function (VF) if "physfn" symlink exists.
physFnAddress, err := filepath.EvalSymlinks(path.Join(devicePath, "physfn"))
if err == nil {
switch {
case err == nil:
physFn, err := p.getGPUByPciBusID(filepath.Base(physFnAddress), cache)
if err != nil {
return nil, fmt.Errorf("unable to detect physfn for %s: %v", address, err)
@@ -369,12 +357,12 @@ func (p *nvpci) getGPUByPciBusID(address string, cache map[string]*NvidiaPCIDevi
PhysicalFunction: physFn,
},
}
} else if os.IsNotExist(err) {
case os.IsNotExist(err):
sriovInfo, err = p.getSriovInfoForPhysicalFunction(devicePath)
if err != nil {
return nil, fmt.Errorf("unable to read SRIOV physical function details for %s: %v", devicePath, err)
}
} else {
default:
return nil, fmt.Errorf("unable to read %s: %v", path.Join(devicePath, "physfn"), err)
}
@@ -521,3 +509,31 @@ func (p *nvpci) getSriovInfoForPhysicalFunction(devicePath string) (sriovInfo Sr
}
return sriovInfo, nil
}
func getDriver(devicePath string) (string, error) {
driver, err := filepath.EvalSymlinks(path.Join(devicePath, "driver"))
switch {
case os.IsNotExist(err):
return "", nil
case err == nil:
return filepath.Base(driver), nil
}
return "", err
}
func getIOMMUGroup(devicePath string) (int64, error) {
var iommuGroup int64
iommu, err := filepath.EvalSymlinks(path.Join(devicePath, "iommu_group"))
switch {
case os.IsNotExist(err):
return -1, nil
case err == nil:
iommuGroupStr := strings.TrimSpace(filepath.Base(iommu))
iommuGroup, err = strconv.ParseInt(iommuGroupStr, 0, 64)
if err != nil {
return 0, fmt.Errorf("unable to convert iommu_group string to int64: %v", iommuGroupStr)
}
return iommuGroup, nil
}
return 0, err
}


@@ -112,7 +112,7 @@ func (mrs MemoryResources) GetTotalAddressableMemory(roundUp bool) (uint64, uint
if key >= pciIOVNumBAR || numBAR == pciIOVNumBAR {
break
}
numBAR = numBAR + 1
numBAR++
region := mrs[key]
@@ -123,10 +123,10 @@ func (mrs MemoryResources) GetTotalAddressableMemory(roundUp bool) (uint64, uint
memSize := (region.End - region.Start) + 1
if memType32bit {
memSize32bit = memSize32bit + uint64(memSize)
memSize32bit += uint64(memSize)
}
if memType64bit {
memSize64bit = memSize64bit + uint64(memSize)
memSize64bit += uint64(memSize)
}
}

File diff suppressed because it is too large


@@ -396,7 +396,7 @@ func (p *parser) parse() Interface {
hkClass = db.classes[uint32(id)]
hkFullID = uint32(id) << 16
hkFullID = hkFullID & 0xFFFF0000
hkFullID &= 0xFFFF0000
hkFullName[0] = fmt.Sprintf("%s (%02x)", lit.name, id)
}
@@ -408,11 +408,11 @@ func (p *parser) parse() Interface {
}
hkSubClass = hkClass.subClasses[uint32(id)]
// Clear the last detected sub class.
hkFullID = hkFullID & 0xFFFF0000
hkFullID = hkFullID | uint32(id)<<8
// Clear the last detected subclass.
hkFullID &= 0xFFFF0000
hkFullID |= uint32(id) << 8
// Clear the last detected prog iface.
hkFullID = hkFullID & 0xFFFFFF00
hkFullID &= 0xFFFFFF00
hkFullName[1] = fmt.Sprintf("%s (%02x)", lit.name, id)
db.classes[uint32(hkFullID)] = class{
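The parser hunk above switches to compound bit operations while composing a PCI class identifier: class in the high bits, subclass in the middle byte, programming interface in the low byte. A small sketch of that composition using the same masks (the exact bit layout is inferred from the masks in the diff):

```go
package main

import "fmt"

// fullID composes a PCI class identifier the way the parser above does:
// shift the class into the high bits, mask, then layer in the subclass
// and programming interface.
func fullID(class, subclass, progIface uint32) uint32 {
	id := class << 16
	id &= 0xFFFF0000    // keep only the class bits
	id |= subclass << 8 // set the subclass
	id &= 0xFFFFFF00    // clear any stale prog-iface bits
	id |= progIface
	return id
}

func main() {
	fmt.Printf("0x%06x\n", fullID(0x03, 0x00, 0x00)) // 0x030000
	fmt.Printf("0x%06x\n", fullID(0x0c, 0x03, 0x30)) // 0x0c0330
}
```

`x &= m` and `x |= v` behave identically to the spelled-out forms they replace; the change is purely stylistic.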


@@ -52,6 +52,10 @@ const (
MAX_PHYSICAL_BRIDGE = 128
// MAX_THERMAL_SENSORS_PER_GPU as defined in nvml/nvml.h
MAX_THERMAL_SENSORS_PER_GPU = 3
// DEVICE_UUID_ASCII_LEN as defined in nvml/nvml.h
DEVICE_UUID_ASCII_LEN = 41
// DEVICE_UUID_BINARY_LEN as defined in nvml/nvml.h
DEVICE_UUID_BINARY_LEN = 16
// FlagDefault as defined in nvml/nvml.h
FlagDefault = 0
// FlagForce as defined in nvml/nvml.h
@@ -62,54 +66,8 @@ const (
DOUBLE_BIT_ECC = 0
// MAX_GPU_PERF_PSTATES as defined in nvml/nvml.h
MAX_GPU_PERF_PSTATES = 16
// GRID_LICENSE_EXPIRY_NOT_AVAILABLE as defined in nvml/nvml.h
GRID_LICENSE_EXPIRY_NOT_AVAILABLE = 0
// GRID_LICENSE_EXPIRY_INVALID as defined in nvml/nvml.h
GRID_LICENSE_EXPIRY_INVALID = 1
// GRID_LICENSE_EXPIRY_VALID as defined in nvml/nvml.h
GRID_LICENSE_EXPIRY_VALID = 2
// GRID_LICENSE_EXPIRY_NOT_APPLICABLE as defined in nvml/nvml.h
GRID_LICENSE_EXPIRY_NOT_APPLICABLE = 3
// GRID_LICENSE_EXPIRY_PERMANENT as defined in nvml/nvml.h
GRID_LICENSE_EXPIRY_PERMANENT = 4
// GRID_LICENSE_BUFFER_SIZE as defined in nvml/nvml.h
GRID_LICENSE_BUFFER_SIZE = 128
// VGPU_NAME_BUFFER_SIZE as defined in nvml/nvml.h
VGPU_NAME_BUFFER_SIZE = 64
// GRID_LICENSE_FEATURE_MAX_COUNT as defined in nvml/nvml.h
GRID_LICENSE_FEATURE_MAX_COUNT = 3
// INVALID_VGPU_PLACEMENT_ID as defined in nvml/nvml.h
INVALID_VGPU_PLACEMENT_ID = 65535
// VGPU_SCHEDULER_POLICY_UNKNOWN as defined in nvml/nvml.h
VGPU_SCHEDULER_POLICY_UNKNOWN = 0
// VGPU_SCHEDULER_POLICY_BEST_EFFORT as defined in nvml/nvml.h
VGPU_SCHEDULER_POLICY_BEST_EFFORT = 1
// VGPU_SCHEDULER_POLICY_EQUAL_SHARE as defined in nvml/nvml.h
VGPU_SCHEDULER_POLICY_EQUAL_SHARE = 2
// VGPU_SCHEDULER_POLICY_FIXED_SHARE as defined in nvml/nvml.h
VGPU_SCHEDULER_POLICY_FIXED_SHARE = 3
// SUPPORTED_VGPU_SCHEDULER_POLICY_COUNT as defined in nvml/nvml.h
SUPPORTED_VGPU_SCHEDULER_POLICY_COUNT = 3
// SCHEDULER_SW_MAX_LOG_ENTRIES as defined in nvml/nvml.h
SCHEDULER_SW_MAX_LOG_ENTRIES = 200
// VGPU_SCHEDULER_ARR_DEFAULT as defined in nvml/nvml.h
VGPU_SCHEDULER_ARR_DEFAULT = 0
// VGPU_SCHEDULER_ARR_DISABLE as defined in nvml/nvml.h
VGPU_SCHEDULER_ARR_DISABLE = 1
// VGPU_SCHEDULER_ARR_ENABLE as defined in nvml/nvml.h
VGPU_SCHEDULER_ARR_ENABLE = 2
// GRID_LICENSE_STATE_UNKNOWN as defined in nvml/nvml.h
GRID_LICENSE_STATE_UNKNOWN = 0
// GRID_LICENSE_STATE_UNINITIALIZED as defined in nvml/nvml.h
GRID_LICENSE_STATE_UNINITIALIZED = 1
// GRID_LICENSE_STATE_UNLICENSED_UNRESTRICTED as defined in nvml/nvml.h
GRID_LICENSE_STATE_UNLICENSED_UNRESTRICTED = 2
// GRID_LICENSE_STATE_UNLICENSED_RESTRICTED as defined in nvml/nvml.h
GRID_LICENSE_STATE_UNLICENSED_RESTRICTED = 3
// GRID_LICENSE_STATE_UNLICENSED as defined in nvml/nvml.h
GRID_LICENSE_STATE_UNLICENSED = 4
// GRID_LICENSE_STATE_LICENSED as defined in nvml/nvml.h
GRID_LICENSE_STATE_LICENSED = 5
// PERF_MODES_BUFFER_SIZE as defined in nvml/nvml.h
PERF_MODES_BUFFER_SIZE = 2048
// GSP_FIRMWARE_VERSION_BUF_SIZE as defined in nvml/nvml.h
GSP_FIRMWARE_VERSION_BUF_SIZE = 64
// DEVICE_ARCH_KEPLER as defined in nvml/nvml.h
@@ -128,6 +86,10 @@ const (
DEVICE_ARCH_ADA = 8
// DEVICE_ARCH_HOPPER as defined in nvml/nvml.h
DEVICE_ARCH_HOPPER = 9
// DEVICE_ARCH_BLACKWELL as defined in nvml/nvml.h
DEVICE_ARCH_BLACKWELL = 10
// DEVICE_ARCH_T23X as defined in nvml/nvml.h
DEVICE_ARCH_T23X = 11
// DEVICE_ARCH_UNKNOWN as defined in nvml/nvml.h
DEVICE_ARCH_UNKNOWN = 4294967295
// BUS_TYPE_UNKNOWN as defined in nvml/nvml.h
@@ -170,6 +132,82 @@ const (
ADAPTIVE_CLOCKING_INFO_STATUS_ENABLED = 1
// MAX_GPU_UTILIZATIONS as defined in nvml/nvml.h
MAX_GPU_UTILIZATIONS = 8
// PCIE_ATOMICS_CAP_FETCHADD32 as defined in nvml/nvml.h
PCIE_ATOMICS_CAP_FETCHADD32 = 1
// PCIE_ATOMICS_CAP_FETCHADD64 as defined in nvml/nvml.h
PCIE_ATOMICS_CAP_FETCHADD64 = 2
// PCIE_ATOMICS_CAP_SWAP32 as defined in nvml/nvml.h
PCIE_ATOMICS_CAP_SWAP32 = 4
// PCIE_ATOMICS_CAP_SWAP64 as defined in nvml/nvml.h
PCIE_ATOMICS_CAP_SWAP64 = 8
// PCIE_ATOMICS_CAP_CAS32 as defined in nvml/nvml.h
PCIE_ATOMICS_CAP_CAS32 = 16
// PCIE_ATOMICS_CAP_CAS64 as defined in nvml/nvml.h
PCIE_ATOMICS_CAP_CAS64 = 32
// PCIE_ATOMICS_CAP_CAS128 as defined in nvml/nvml.h
PCIE_ATOMICS_CAP_CAS128 = 64
// PCIE_ATOMICS_OPS_MAX as defined in nvml/nvml.h
PCIE_ATOMICS_OPS_MAX = 7
// POWER_SCOPE_GPU as defined in nvml/nvml.h
POWER_SCOPE_GPU = 0
// POWER_SCOPE_MODULE as defined in nvml/nvml.h
POWER_SCOPE_MODULE = 1
// POWER_SCOPE_MEMORY as defined in nvml/nvml.h
POWER_SCOPE_MEMORY = 2
// GRID_LICENSE_EXPIRY_NOT_AVAILABLE as defined in nvml/nvml.h
GRID_LICENSE_EXPIRY_NOT_AVAILABLE = 0
// GRID_LICENSE_EXPIRY_INVALID as defined in nvml/nvml.h
GRID_LICENSE_EXPIRY_INVALID = 1
// GRID_LICENSE_EXPIRY_VALID as defined in nvml/nvml.h
GRID_LICENSE_EXPIRY_VALID = 2
// GRID_LICENSE_EXPIRY_NOT_APPLICABLE as defined in nvml/nvml.h
GRID_LICENSE_EXPIRY_NOT_APPLICABLE = 3
// GRID_LICENSE_EXPIRY_PERMANENT as defined in nvml/nvml.h
GRID_LICENSE_EXPIRY_PERMANENT = 4
// GRID_LICENSE_BUFFER_SIZE as defined in nvml/nvml.h
GRID_LICENSE_BUFFER_SIZE = 128
// VGPU_NAME_BUFFER_SIZE as defined in nvml/nvml.h
VGPU_NAME_BUFFER_SIZE = 64
// GRID_LICENSE_FEATURE_MAX_COUNT as defined in nvml/nvml.h
GRID_LICENSE_FEATURE_MAX_COUNT = 3
// INVALID_VGPU_PLACEMENT_ID as defined in nvml/nvml.h
INVALID_VGPU_PLACEMENT_ID = 65535
// VGPU_PGPU_HETEROGENEOUS_MODE as defined in nvml/nvml.h
VGPU_PGPU_HETEROGENEOUS_MODE = 0
// VGPU_PGPU_HOMOGENEOUS_MODE as defined in nvml/nvml.h
VGPU_PGPU_HOMOGENEOUS_MODE = 1
// VGPU_SCHEDULER_POLICY_UNKNOWN as defined in nvml/nvml.h
VGPU_SCHEDULER_POLICY_UNKNOWN = 0
// VGPU_SCHEDULER_POLICY_BEST_EFFORT as defined in nvml/nvml.h
VGPU_SCHEDULER_POLICY_BEST_EFFORT = 1
// VGPU_SCHEDULER_POLICY_EQUAL_SHARE as defined in nvml/nvml.h
VGPU_SCHEDULER_POLICY_EQUAL_SHARE = 2
// VGPU_SCHEDULER_POLICY_FIXED_SHARE as defined in nvml/nvml.h
VGPU_SCHEDULER_POLICY_FIXED_SHARE = 3
// SUPPORTED_VGPU_SCHEDULER_POLICY_COUNT as defined in nvml/nvml.h
SUPPORTED_VGPU_SCHEDULER_POLICY_COUNT = 3
// SCHEDULER_SW_MAX_LOG_ENTRIES as defined in nvml/nvml.h
SCHEDULER_SW_MAX_LOG_ENTRIES = 200
// VGPU_SCHEDULER_ARR_DEFAULT as defined in nvml/nvml.h
VGPU_SCHEDULER_ARR_DEFAULT = 0
// VGPU_SCHEDULER_ARR_DISABLE as defined in nvml/nvml.h
VGPU_SCHEDULER_ARR_DISABLE = 1
// VGPU_SCHEDULER_ARR_ENABLE as defined in nvml/nvml.h
VGPU_SCHEDULER_ARR_ENABLE = 2
// VGPU_SCHEDULER_ENGINE_TYPE_GRAPHICS as defined in nvml/nvml.h
VGPU_SCHEDULER_ENGINE_TYPE_GRAPHICS = 1
// GRID_LICENSE_STATE_UNKNOWN as defined in nvml/nvml.h
GRID_LICENSE_STATE_UNKNOWN = 0
// GRID_LICENSE_STATE_UNINITIALIZED as defined in nvml/nvml.h
GRID_LICENSE_STATE_UNINITIALIZED = 1
// GRID_LICENSE_STATE_UNLICENSED_UNRESTRICTED as defined in nvml/nvml.h
GRID_LICENSE_STATE_UNLICENSED_UNRESTRICTED = 2
// GRID_LICENSE_STATE_UNLICENSED_RESTRICTED as defined in nvml/nvml.h
GRID_LICENSE_STATE_UNLICENSED_RESTRICTED = 3
// GRID_LICENSE_STATE_UNLICENSED as defined in nvml/nvml.h
GRID_LICENSE_STATE_UNLICENSED = 4
// GRID_LICENSE_STATE_LICENSED as defined in nvml/nvml.h
GRID_LICENSE_STATE_LICENSED = 5
// FI_DEV_ECC_CURRENT as defined in nvml/nvml.h
FI_DEV_ECC_CURRENT = 1
// FI_DEV_ECC_PENDING as defined in nvml/nvml.h
@@ -562,10 +600,188 @@ const (
FI_DEV_TEMPERATURE_MEM_MAX_TLIMIT = 195
// FI_DEV_TEMPERATURE_GPU_MAX_TLIMIT as defined in nvml/nvml.h
FI_DEV_TEMPERATURE_GPU_MAX_TLIMIT = 196
// FI_DEV_PCIE_COUNT_TX_BYTES as defined in nvml/nvml.h
FI_DEV_PCIE_COUNT_TX_BYTES = 197
// FI_DEV_PCIE_COUNT_RX_BYTES as defined in nvml/nvml.h
FI_DEV_PCIE_COUNT_RX_BYTES = 198
// FI_DEV_IS_MIG_MODE_INDEPENDENT_MIG_QUERY_CAPABLE as defined in nvml/nvml.h
FI_DEV_IS_MIG_MODE_INDEPENDENT_MIG_QUERY_CAPABLE = 199
// FI_DEV_NVLINK_GET_POWER_THRESHOLD_MAX as defined in nvml/nvml.h
FI_DEV_NVLINK_GET_POWER_THRESHOLD_MAX = 200
// FI_DEV_NVLINK_COUNT_XMIT_PACKETS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_XMIT_PACKETS = 201
// FI_DEV_NVLINK_COUNT_XMIT_BYTES as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_XMIT_BYTES = 202
// FI_DEV_NVLINK_COUNT_RCV_PACKETS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_RCV_PACKETS = 203
// FI_DEV_NVLINK_COUNT_RCV_BYTES as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_RCV_BYTES = 204
// FI_DEV_NVLINK_COUNT_VL15_DROPPED as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_VL15_DROPPED = 205
// FI_DEV_NVLINK_COUNT_MALFORMED_PACKET_ERRORS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_MALFORMED_PACKET_ERRORS = 206
// FI_DEV_NVLINK_COUNT_BUFFER_OVERRUN_ERRORS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_BUFFER_OVERRUN_ERRORS = 207
// FI_DEV_NVLINK_COUNT_RCV_ERRORS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_RCV_ERRORS = 208
// FI_DEV_NVLINK_COUNT_RCV_REMOTE_ERRORS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_RCV_REMOTE_ERRORS = 209
// FI_DEV_NVLINK_COUNT_RCV_GENERAL_ERRORS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_RCV_GENERAL_ERRORS = 210
// FI_DEV_NVLINK_COUNT_LOCAL_LINK_INTEGRITY_ERRORS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_LOCAL_LINK_INTEGRITY_ERRORS = 211
// FI_DEV_NVLINK_COUNT_XMIT_DISCARDS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_XMIT_DISCARDS = 212
// FI_DEV_NVLINK_COUNT_LINK_RECOVERY_SUCCESSFUL_EVENTS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_LINK_RECOVERY_SUCCESSFUL_EVENTS = 213
// FI_DEV_NVLINK_COUNT_LINK_RECOVERY_FAILED_EVENTS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_LINK_RECOVERY_FAILED_EVENTS = 214
// FI_DEV_NVLINK_COUNT_LINK_RECOVERY_EVENTS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_LINK_RECOVERY_EVENTS = 215
// FI_DEV_NVLINK_COUNT_RAW_BER_LANE0 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_RAW_BER_LANE0 = 216
// FI_DEV_NVLINK_COUNT_RAW_BER_LANE1 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_RAW_BER_LANE1 = 217
// FI_DEV_NVLINK_COUNT_RAW_BER as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_RAW_BER = 218
// FI_DEV_NVLINK_COUNT_EFFECTIVE_ERRORS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_EFFECTIVE_ERRORS = 219
// FI_DEV_NVLINK_COUNT_EFFECTIVE_BER as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_EFFECTIVE_BER = 220
// FI_DEV_NVLINK_COUNT_SYMBOL_ERRORS as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_SYMBOL_ERRORS = 221
// FI_DEV_NVLINK_COUNT_SYMBOL_BER as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_SYMBOL_BER = 222
// FI_DEV_NVLINK_GET_POWER_THRESHOLD_MIN as defined in nvml/nvml.h
FI_DEV_NVLINK_GET_POWER_THRESHOLD_MIN = 223
// FI_DEV_NVLINK_GET_POWER_THRESHOLD_UNITS as defined in nvml/nvml.h
FI_DEV_NVLINK_GET_POWER_THRESHOLD_UNITS = 224
// FI_DEV_NVLINK_GET_POWER_THRESHOLD_SUPPORTED as defined in nvml/nvml.h
FI_DEV_NVLINK_GET_POWER_THRESHOLD_SUPPORTED = 225
// FI_DEV_RESET_STATUS as defined in nvml/nvml.h
FI_DEV_RESET_STATUS = 226
// FI_DEV_DRAIN_AND_RESET_STATUS as defined in nvml/nvml.h
FI_DEV_DRAIN_AND_RESET_STATUS = 227
// FI_DEV_PCIE_OUTBOUND_ATOMICS_MASK as defined in nvml/nvml.h
FI_DEV_PCIE_OUTBOUND_ATOMICS_MASK = 228
// FI_DEV_PCIE_INBOUND_ATOMICS_MASK as defined in nvml/nvml.h
FI_DEV_PCIE_INBOUND_ATOMICS_MASK = 229
// FI_DEV_GET_GPU_RECOVERY_ACTION as defined in nvml/nvml.h
FI_DEV_GET_GPU_RECOVERY_ACTION = 230
// FI_DEV_C2C_LINK_ERROR_INTR as defined in nvml/nvml.h
FI_DEV_C2C_LINK_ERROR_INTR = 231
// FI_DEV_C2C_LINK_ERROR_REPLAY as defined in nvml/nvml.h
FI_DEV_C2C_LINK_ERROR_REPLAY = 232
// FI_DEV_C2C_LINK_ERROR_REPLAY_B2B as defined in nvml/nvml.h
FI_DEV_C2C_LINK_ERROR_REPLAY_B2B = 233
// FI_DEV_C2C_LINK_POWER_STATE as defined in nvml/nvml.h
FI_DEV_C2C_LINK_POWER_STATE = 234
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_0 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_0 = 235
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_1 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_1 = 236
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_2 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_2 = 237
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_3 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_3 = 238
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_4 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_4 = 239
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_5 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_5 = 240
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_6 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_6 = 241
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_7 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_7 = 242
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_8 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_8 = 243
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_9 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_9 = 244
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_10 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_10 = 245
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_11 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_11 = 246
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_12 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_12 = 247
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_13 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_13 = 248
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_14 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_14 = 249
// FI_DEV_NVLINK_COUNT_FEC_HISTORY_15 as defined in nvml/nvml.h
FI_DEV_NVLINK_COUNT_FEC_HISTORY_15 = 250
// FI_DEV_CLOCKS_EVENT_REASON_SW_POWER_CAP as defined in nvml/nvml.h
FI_DEV_CLOCKS_EVENT_REASON_SW_POWER_CAP = 74
// FI_DEV_CLOCKS_EVENT_REASON_SYNC_BOOST as defined in nvml/nvml.h
FI_DEV_CLOCKS_EVENT_REASON_SYNC_BOOST = 76
// FI_DEV_CLOCKS_EVENT_REASON_SW_THERM_SLOWDOWN as defined in nvml/nvml.h
FI_DEV_CLOCKS_EVENT_REASON_SW_THERM_SLOWDOWN = 251
// FI_DEV_CLOCKS_EVENT_REASON_HW_THERM_SLOWDOWN as defined in nvml/nvml.h
FI_DEV_CLOCKS_EVENT_REASON_HW_THERM_SLOWDOWN = 252
// FI_DEV_CLOCKS_EVENT_REASON_HW_POWER_BRAKE_SLOWDOWN as defined in nvml/nvml.h
FI_DEV_CLOCKS_EVENT_REASON_HW_POWER_BRAKE_SLOWDOWN = 253
// FI_DEV_POWER_SYNC_BALANCING_FREQ as defined in nvml/nvml.h
FI_DEV_POWER_SYNC_BALANCING_FREQ = 254
// FI_DEV_POWER_SYNC_BALANCING_AF as defined in nvml/nvml.h
FI_DEV_POWER_SYNC_BALANCING_AF = 255
// FI_PWR_SMOOTHING_ENABLED as defined in nvml/nvml.h
FI_PWR_SMOOTHING_ENABLED = 256
// FI_PWR_SMOOTHING_PRIV_LVL as defined in nvml/nvml.h
FI_PWR_SMOOTHING_PRIV_LVL = 257
// FI_PWR_SMOOTHING_IMM_RAMP_DOWN_ENABLED as defined in nvml/nvml.h
FI_PWR_SMOOTHING_IMM_RAMP_DOWN_ENABLED = 258
// FI_PWR_SMOOTHING_APPLIED_TMP_CEIL as defined in nvml/nvml.h
FI_PWR_SMOOTHING_APPLIED_TMP_CEIL = 259
// FI_PWR_SMOOTHING_APPLIED_TMP_FLOOR as defined in nvml/nvml.h
FI_PWR_SMOOTHING_APPLIED_TMP_FLOOR = 260
// FI_PWR_SMOOTHING_MAX_PERCENT_TMP_FLOOR_SETTING as defined in nvml/nvml.h
FI_PWR_SMOOTHING_MAX_PERCENT_TMP_FLOOR_SETTING = 261
// FI_PWR_SMOOTHING_MIN_PERCENT_TMP_FLOOR_SETTING as defined in nvml/nvml.h
FI_PWR_SMOOTHING_MIN_PERCENT_TMP_FLOOR_SETTING = 262
// FI_PWR_SMOOTHING_HW_CIRCUITRY_PERCENT_LIFETIME_REMAINING as defined in nvml/nvml.h
FI_PWR_SMOOTHING_HW_CIRCUITRY_PERCENT_LIFETIME_REMAINING = 263
// FI_PWR_SMOOTHING_MAX_NUM_PRESET_PROFILES as defined in nvml/nvml.h
FI_PWR_SMOOTHING_MAX_NUM_PRESET_PROFILES = 264
// FI_PWR_SMOOTHING_PROFILE_PERCENT_TMP_FLOOR as defined in nvml/nvml.h
FI_PWR_SMOOTHING_PROFILE_PERCENT_TMP_FLOOR = 265
// FI_PWR_SMOOTHING_PROFILE_RAMP_UP_RATE as defined in nvml/nvml.h
FI_PWR_SMOOTHING_PROFILE_RAMP_UP_RATE = 266
// FI_PWR_SMOOTHING_PROFILE_RAMP_DOWN_RATE as defined in nvml/nvml.h
FI_PWR_SMOOTHING_PROFILE_RAMP_DOWN_RATE = 267
// FI_PWR_SMOOTHING_PROFILE_RAMP_DOWN_HYST_VAL as defined in nvml/nvml.h
FI_PWR_SMOOTHING_PROFILE_RAMP_DOWN_HYST_VAL = 268
// FI_PWR_SMOOTHING_ACTIVE_PRESET_PROFILE as defined in nvml/nvml.h
FI_PWR_SMOOTHING_ACTIVE_PRESET_PROFILE = 269
// FI_PWR_SMOOTHING_ADMIN_OVERRIDE_PERCENT_TMP_FLOOR as defined in nvml/nvml.h
FI_PWR_SMOOTHING_ADMIN_OVERRIDE_PERCENT_TMP_FLOOR = 270
// FI_PWR_SMOOTHING_ADMIN_OVERRIDE_RAMP_UP_RATE as defined in nvml/nvml.h
FI_PWR_SMOOTHING_ADMIN_OVERRIDE_RAMP_UP_RATE = 271
// FI_PWR_SMOOTHING_ADMIN_OVERRIDE_RAMP_DOWN_RATE as defined in nvml/nvml.h
FI_PWR_SMOOTHING_ADMIN_OVERRIDE_RAMP_DOWN_RATE = 272
// FI_PWR_SMOOTHING_ADMIN_OVERRIDE_RAMP_DOWN_HYST_VAL as defined in nvml/nvml.h
FI_PWR_SMOOTHING_ADMIN_OVERRIDE_RAMP_DOWN_HYST_VAL = 273
// FI_MAX as defined in nvml/nvml.h
FI_MAX = 274
// NVLINK_LOW_POWER_THRESHOLD_UNIT_100US as defined in nvml/nvml.h
NVLINK_LOW_POWER_THRESHOLD_UNIT_100US = 0
// NVLINK_LOW_POWER_THRESHOLD_UNIT_50US as defined in nvml/nvml.h
NVLINK_LOW_POWER_THRESHOLD_UNIT_50US = 1
// NVLINK_POWER_STATE_HIGH_SPEED as defined in nvml/nvml.h
NVLINK_POWER_STATE_HIGH_SPEED = 0
// NVLINK_POWER_STATE_LOW as defined in nvml/nvml.h
NVLINK_POWER_STATE_LOW = 1
// NVLINK_LOW_POWER_THRESHOLD_MIN as defined in nvml/nvml.h
NVLINK_LOW_POWER_THRESHOLD_MIN = 1
// NVLINK_LOW_POWER_THRESHOLD_MAX as defined in nvml/nvml.h
NVLINK_LOW_POWER_THRESHOLD_MAX = 8191
// NVLINK_LOW_POWER_THRESHOLD_RESET as defined in nvml/nvml.h
NVLINK_LOW_POWER_THRESHOLD_RESET = 4294967295
// NVLINK_LOW_POWER_THRESHOLD_DEFAULT as defined in nvml/nvml.h
NVLINK_LOW_POWER_THRESHOLD_DEFAULT = 4294967295
// C2C_POWER_STATE_FULL_POWER as defined in nvml/nvml.h
C2C_POWER_STATE_FULL_POWER = 0
// C2C_POWER_STATE_LOW_POWER as defined in nvml/nvml.h
C2C_POWER_STATE_LOW_POWER = 1
// EventTypeNone as defined in nvml/nvml.h
EventTypeNone = 0
// EventTypeSingleBitEccError as defined in nvml/nvml.h
EventTypeSingleBitEccError = 1
// EventTypeDoubleBitEccError as defined in nvml/nvml.h
@@ -580,10 +796,28 @@ const (
EventTypePowerSourceChange = 128
// EventMigConfigChange as defined in nvml/nvml.h
EventMigConfigChange = 256
// EventTypeSingleBitEccErrorStorm as defined in nvml/nvml.h
EventTypeSingleBitEccErrorStorm = 512
// EventTypeDramRetirementEvent as defined in nvml/nvml.h
EventTypeDramRetirementEvent = 1024
// EventTypeDramRetirementFailure as defined in nvml/nvml.h
EventTypeDramRetirementFailure = 2048
// EventTypeNonFatalPoisonError as defined in nvml/nvml.h
EventTypeNonFatalPoisonError = 4096
// EventTypeFatalPoisonError as defined in nvml/nvml.h
EventTypeFatalPoisonError = 8192
// EventTypeGpuUnavailableError as defined in nvml/nvml.h
EventTypeGpuUnavailableError = 16384
// EventTypeGpuRecoveryAction as defined in nvml/nvml.h
EventTypeGpuRecoveryAction = 32768
// EventTypeAll as defined in nvml/nvml.h
EventTypeAll = 65439
// SystemEventTypeGpuDriverUnbind as defined in nvml/nvml.h
SystemEventTypeGpuDriverUnbind = 1
// SystemEventTypeGpuDriverBind as defined in nvml/nvml.h
SystemEventTypeGpuDriverBind = 2
// SystemEventTypeCount as defined in nvml/nvml.h
SystemEventTypeCount = 2
// ClocksEventReasonGpuIdle as defined in nvml/nvml.h
ClocksEventReasonGpuIdle = 1
// ClocksEventReasonApplicationsClocksSetting as defined in nvml/nvml.h
@@ -640,6 +874,10 @@ const (
CC_SYSTEM_CPU_CAPS_AMD_SEV = 1
// CC_SYSTEM_CPU_CAPS_INTEL_TDX as defined in nvml/nvml.h
CC_SYSTEM_CPU_CAPS_INTEL_TDX = 2
// CC_SYSTEM_CPU_CAPS_AMD_SEV_SNP as defined in nvml/nvml.h
CC_SYSTEM_CPU_CAPS_AMD_SEV_SNP = 3
// CC_SYSTEM_CPU_CAPS_AMD_SNP_VTOM as defined in nvml/nvml.h
CC_SYSTEM_CPU_CAPS_AMD_SNP_VTOM = 4
// CC_SYSTEM_GPUS_CC_NOT_CAPABLE as defined in nvml/nvml.h
CC_SYSTEM_GPUS_CC_NOT_CAPABLE = 0
// CC_SYSTEM_GPUS_CC_CAPABLE as defined in nvml/nvml.h
@@ -683,7 +921,7 @@ const (
// CC_KEY_ROTATION_THRESHOLD_ATTACKER_ADVANTAGE_MIN as defined in nvml/nvml.h
CC_KEY_ROTATION_THRESHOLD_ATTACKER_ADVANTAGE_MIN = 50
// CC_KEY_ROTATION_THRESHOLD_ATTACKER_ADVANTAGE_MAX as defined in nvml/nvml.h
CC_KEY_ROTATION_THRESHOLD_ATTACKER_ADVANTAGE_MAX = 65
// GPU_FABRIC_UUID_LEN as defined in nvml/nvml.h
GPU_FABRIC_UUID_LEN = 16
// GPU_FABRIC_STATE_NOT_SUPPORTED as defined in nvml/nvml.h
@@ -703,13 +941,37 @@ const (
// GPU_FABRIC_HEALTH_MASK_SHIFT_DEGRADED_BW as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_SHIFT_DEGRADED_BW = 0
// GPU_FABRIC_HEALTH_MASK_WIDTH_DEGRADED_BW as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_WIDTH_DEGRADED_BW = 3
// GPU_FABRIC_HEALTH_MASK_ROUTE_RECOVERY_NOT_SUPPORTED as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_ROUTE_RECOVERY_NOT_SUPPORTED = 0
// GPU_FABRIC_HEALTH_MASK_ROUTE_RECOVERY_TRUE as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_ROUTE_RECOVERY_TRUE = 1
// GPU_FABRIC_HEALTH_MASK_ROUTE_RECOVERY_FALSE as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_ROUTE_RECOVERY_FALSE = 2
// GPU_FABRIC_HEALTH_MASK_SHIFT_ROUTE_RECOVERY as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_SHIFT_ROUTE_RECOVERY = 2
// GPU_FABRIC_HEALTH_MASK_WIDTH_ROUTE_RECOVERY as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_WIDTH_ROUTE_RECOVERY = 3
// GPU_FABRIC_HEALTH_MASK_ROUTE_UNHEALTHY_NOT_SUPPORTED as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_ROUTE_UNHEALTHY_NOT_SUPPORTED = 0
// GPU_FABRIC_HEALTH_MASK_ROUTE_UNHEALTHY_TRUE as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_ROUTE_UNHEALTHY_TRUE = 1
// GPU_FABRIC_HEALTH_MASK_ROUTE_UNHEALTHY_FALSE as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_ROUTE_UNHEALTHY_FALSE = 2
// GPU_FABRIC_HEALTH_MASK_SHIFT_ROUTE_UNHEALTHY as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_SHIFT_ROUTE_UNHEALTHY = 4
// GPU_FABRIC_HEALTH_MASK_WIDTH_ROUTE_UNHEALTHY as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_WIDTH_ROUTE_UNHEALTHY = 3
// GPU_FABRIC_HEALTH_MASK_ACCESS_TIMEOUT_RECOVERY_NOT_SUPPORTED as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_ACCESS_TIMEOUT_RECOVERY_NOT_SUPPORTED = 0
// GPU_FABRIC_HEALTH_MASK_ACCESS_TIMEOUT_RECOVERY_TRUE as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_ACCESS_TIMEOUT_RECOVERY_TRUE = 1
// GPU_FABRIC_HEALTH_MASK_ACCESS_TIMEOUT_RECOVERY_FALSE as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_ACCESS_TIMEOUT_RECOVERY_FALSE = 2
// GPU_FABRIC_HEALTH_MASK_SHIFT_ACCESS_TIMEOUT_RECOVERY as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_SHIFT_ACCESS_TIMEOUT_RECOVERY = 6
// GPU_FABRIC_HEALTH_MASK_WIDTH_ACCESS_TIMEOUT_RECOVERY as defined in nvml/nvml.h
GPU_FABRIC_HEALTH_MASK_WIDTH_ACCESS_TIMEOUT_RECOVERY = 3
// INIT_FLAG_NO_GPUS as defined in nvml/nvml.h
INIT_FLAG_NO_GPUS = 1
// INIT_FLAG_NO_ATTACH as defined in nvml/nvml.h
@@ -738,6 +1000,22 @@ const (
AFFINITY_SCOPE_NODE = 0
// AFFINITY_SCOPE_SOCKET as defined in nvml/nvml.h
AFFINITY_SCOPE_SOCKET = 1
// NVLINK_BER_MANTISSA_SHIFT as defined in nvml/nvml.h
NVLINK_BER_MANTISSA_SHIFT = 8
// NVLINK_BER_MANTISSA_WIDTH as defined in nvml/nvml.h
NVLINK_BER_MANTISSA_WIDTH = 15
// NVLINK_BER_EXP_SHIFT as defined in nvml/nvml.h
NVLINK_BER_EXP_SHIFT = 0
// NVLINK_BER_EXP_WIDTH as defined in nvml/nvml.h
NVLINK_BER_EXP_WIDTH = 255
// NVLINK_STATE_INACTIVE as defined in nvml/nvml.h
NVLINK_STATE_INACTIVE = 0
// NVLINK_STATE_ACTIVE as defined in nvml/nvml.h
NVLINK_STATE_ACTIVE = 1
// NVLINK_STATE_SLEEP as defined in nvml/nvml.h
NVLINK_STATE_SLEEP = 2
// NVLINK_TOTAL_SUPPORTED_BW_MODES as defined in nvml/nvml.h
NVLINK_TOTAL_SUPPORTED_BW_MODES = 23
// DEVICE_MIG_DISABLE as defined in nvml/nvml.h
DEVICE_MIG_DISABLE = 0
// DEVICE_MIG_ENABLE as defined in nvml/nvml.h
@@ -762,10 +1040,30 @@ const (
GPU_INSTANCE_PROFILE_2_SLICE_REV1 = 8
// GPU_INSTANCE_PROFILE_1_SLICE_REV2 as defined in nvml/nvml.h
GPU_INSTANCE_PROFILE_1_SLICE_REV2 = 9
// GPU_INSTANCE_PROFILE_1_SLICE_GFX as defined in nvml/nvml.h
GPU_INSTANCE_PROFILE_1_SLICE_GFX = 10
// GPU_INSTANCE_PROFILE_2_SLICE_GFX as defined in nvml/nvml.h
GPU_INSTANCE_PROFILE_2_SLICE_GFX = 11
// GPU_INSTANCE_PROFILE_4_SLICE_GFX as defined in nvml/nvml.h
GPU_INSTANCE_PROFILE_4_SLICE_GFX = 12
// GPU_INSTANCE_PROFILE_1_SLICE_NO_ME as defined in nvml/nvml.h
GPU_INSTANCE_PROFILE_1_SLICE_NO_ME = 13
// GPU_INSTANCE_PROFILE_2_SLICE_NO_ME as defined in nvml/nvml.h
GPU_INSTANCE_PROFILE_2_SLICE_NO_ME = 14
// GPU_INSTANCE_PROFILE_1_SLICE_ALL_ME as defined in nvml/nvml.h
GPU_INSTANCE_PROFILE_1_SLICE_ALL_ME = 15
// GPU_INSTANCE_PROFILE_2_SLICE_ALL_ME as defined in nvml/nvml.h
GPU_INSTANCE_PROFILE_2_SLICE_ALL_ME = 16
// GPU_INSTANCE_PROFILE_COUNT as defined in nvml/nvml.h
GPU_INSTANCE_PROFILE_COUNT = 17
// GPU_INSTANCE_PROFILE_CAPS_P2P as defined in nvml/nvml.h
GPU_INSTANCE_PROFILE_CAPS_P2P = 1
// GPU_INTSTANCE_PROFILE_CAPS_P2P as defined in nvml/nvml.h
GPU_INTSTANCE_PROFILE_CAPS_P2P = 1
// GPU_INSTANCE_PROFILE_CAPS_GFX as defined in nvml/nvml.h
GPU_INSTANCE_PROFILE_CAPS_GFX = 2
// COMPUTE_INSTANCE_PROFILE_CAPS_GFX as defined in nvml/nvml.h
COMPUTE_INSTANCE_PROFILE_CAPS_GFX = 1
// COMPUTE_INSTANCE_PROFILE_1_SLICE as defined in nvml/nvml.h
COMPUTE_INSTANCE_PROFILE_1_SLICE = 0
// COMPUTE_INSTANCE_PROFILE_2_SLICE as defined in nvml/nvml.h
@@ -792,16 +1090,24 @@ const (
GPM_METRICS_GET_VERSION = 1
// GPM_SUPPORT_VERSION as defined in nvml/nvml.h
GPM_SUPPORT_VERSION = 1
// DEV_CAP_EGM as defined in nvml/nvml.h
DEV_CAP_EGM = 1
// WORKLOAD_POWER_MAX_PROFILES as defined in nvml/nvml.h
WORKLOAD_POWER_MAX_PROFILES = 255
// POWER_SMOOTHING_MAX_NUM_PROFILES as defined in nvml/nvml.h
POWER_SMOOTHING_MAX_NUM_PROFILES = 5
// POWER_SMOOTHING_NUM_PROFILE_PARAMS as defined in nvml/nvml.h
POWER_SMOOTHING_NUM_PROFILE_PARAMS = 4
// POWER_SMOOTHING_ADMIN_OVERRIDE_NOT_SET as defined in nvml/nvml.h
POWER_SMOOTHING_ADMIN_OVERRIDE_NOT_SET = 4294967295
// POWER_SMOOTHING_PROFILE_PARAM_PERCENT_TMP_FLOOR as defined in nvml/nvml.h
POWER_SMOOTHING_PROFILE_PARAM_PERCENT_TMP_FLOOR = 0
// POWER_SMOOTHING_PROFILE_PARAM_RAMP_UP_RATE as defined in nvml/nvml.h
POWER_SMOOTHING_PROFILE_PARAM_RAMP_UP_RATE = 1
// POWER_SMOOTHING_PROFILE_PARAM_RAMP_DOWN_RATE as defined in nvml/nvml.h
POWER_SMOOTHING_PROFILE_PARAM_RAMP_DOWN_RATE = 2
// POWER_SMOOTHING_PROFILE_PARAM_RAMP_DOWN_HYSTERESIS as defined in nvml/nvml.h
POWER_SMOOTHING_PROFILE_PARAM_RAMP_DOWN_HYSTERESIS = 3
)
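The constants above include two bit-level conventions worth making concrete: the `EventType*` values are individual flag bits whose OR is `EventTypeAll` (65439), and the `GPU_FABRIC_HEALTH_MASK_SHIFT_*`/`WIDTH_*` pairs describe small bitfields inside the fabric-health mask. A minimal sketch, not the go-nvml API; `EventTypePState` (4), `EventTypeXidCriticalError` (8), and `EventTypeClock` (16) fall outside the hunk shown here and are assumed from nvml.h, as is the assumption that each `WIDTH` constant acts as a mask applied after the shift:

```go
package main

import "fmt"

// Event-type flag bits. The first group predates this diff; the second
// group is added by it (EventTypeSingleBitEccErrorStorm .. GpuRecoveryAction).
const (
	eventBitsOld = 1 | 2 | 4 | 8 | 16 | 128 | 256
	eventBitsNew = 512 | 1024 | 2048 | 4096 | 8192 | 16384 | 32768
)

// fabricHealthField extracts one field from a fabric-health mask, assuming
// the WIDTH constants (e.g. GPU_FABRIC_HEALTH_MASK_WIDTH_ROUTE_RECOVERY = 3)
// are bit masks applied after the corresponding SHIFT.
func fabricHealthField(mask, shift, width uint32) uint32 {
	return (mask >> shift) & width
}

func main() {
	fmt.Println(eventBitsOld | eventBitsNew) // 65439 == EventTypeAll

	// A mask with the ROUTE_RECOVERY field (shift 2, width 0x3) set to TRUE (1).
	var health uint32 = 1 << 2
	fmt.Println(fabricHealthField(health, 2, 3)) // 1 == ..._ROUTE_RECOVERY_TRUE
}
```

Note that the pre-existing bits alone OR to 415, the old `EventTypeAll`, which is why that constant changes in this diff when the new event bits are introduced.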
// BridgeChipType as declared in nvml/nvml.h
@@ -960,7 +1266,8 @@ const (
VALUE_TYPE_UNSIGNED_LONG_LONG ValueType = 3
VALUE_TYPE_SIGNED_LONG_LONG ValueType = 4
VALUE_TYPE_SIGNED_INT ValueType = 5
VALUE_TYPE_UNSIGNED_SHORT ValueType = 6
VALUE_TYPE_COUNT ValueType = 7
)
// PerfPolicyType as declared in nvml/nvml.h
@@ -979,6 +1286,29 @@ const (
PERF_POLICY_COUNT PerfPolicyType = 12
)
// CoolerControl as declared in nvml/nvml.h
type CoolerControl int32
// CoolerControl enumeration from nvml/nvml.h
const (
THERMAL_COOLER_SIGNAL_NONE CoolerControl = iota
THERMAL_COOLER_SIGNAL_TOGGLE CoolerControl = 1
THERMAL_COOLER_SIGNAL_VARIABLE CoolerControl = 2
THERMAL_COOLER_SIGNAL_COUNT CoolerControl = 3
)
// CoolerTarget as declared in nvml/nvml.h
type CoolerTarget int32
// CoolerTarget enumeration from nvml/nvml.h
const (
THERMAL_COOLER_TARGET_NONE CoolerTarget = 1
THERMAL_COOLER_TARGET_GPU CoolerTarget = 2
THERMAL_COOLER_TARGET_MEMORY CoolerTarget = 4
THERMAL_COOLER_TARGET_POWER_SUPPLY CoolerTarget = 8
THERMAL_COOLER_TARGET_GPU_RELATED CoolerTarget = 14
)
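The `CoolerTarget` values read as bit flags, so `THERMAL_COOLER_TARGET_GPU_RELATED` (14) appears to be the OR of the GPU, memory, and power-supply targets. A small sketch under that assumption, not the go-nvml API:

```go
package main

import "fmt"

// CoolerTarget flag bits, copied from the listing above.
const (
	targetGPU         = 2 // THERMAL_COOLER_TARGET_GPU
	targetMemory      = 4 // THERMAL_COOLER_TARGET_MEMORY
	targetPowerSupply = 8 // THERMAL_COOLER_TARGET_POWER_SUPPLY
)

func main() {
	gpuRelated := targetGPU | targetMemory | targetPowerSupply
	fmt.Println(gpuRelated) // 14 == THERMAL_COOLER_TARGET_GPU_RELATED
}
```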
// EnableState as declared in nvml/nvml.h
type EnableState int32
@@ -1026,7 +1356,8 @@ const (
TEMPERATURE_THRESHOLD_ACOUSTIC_MIN TemperatureThresholds = 4
TEMPERATURE_THRESHOLD_ACOUSTIC_CURR TemperatureThresholds = 5
TEMPERATURE_THRESHOLD_ACOUSTIC_MAX TemperatureThresholds = 6
TEMPERATURE_THRESHOLD_GPS_CURR TemperatureThresholds = 7
TEMPERATURE_THRESHOLD_COUNT TemperatureThresholds = 8
)
// TemperatureSensors as declared in nvml/nvml.h
@@ -1060,6 +1391,21 @@ const (
MEMORY_ERROR_TYPE_COUNT MemoryErrorType = 2
)
// NvlinkVersion as declared in nvml/nvml.h
type NvlinkVersion int32
// NvlinkVersion enumeration from nvml/nvml.h
const (
NVLINK_VERSION_INVALID NvlinkVersion = iota
NVLINK_VERSION_1_0 NvlinkVersion = 1
NVLINK_VERSION_2_0 NvlinkVersion = 2
NVLINK_VERSION_2_2 NvlinkVersion = 3
NVLINK_VERSION_3_0 NvlinkVersion = 4
NVLINK_VERSION_3_1 NvlinkVersion = 5
NVLINK_VERSION_4_0 NvlinkVersion = 6
NVLINK_VERSION_5_0 NvlinkVersion = 7
)
// EccCounterType as declared in nvml/nvml.h
type EccCounterType int32
@@ -1101,6 +1447,7 @@ type DriverModel int32
const (
DRIVER_WDDM DriverModel = iota
DRIVER_WDM DriverModel = 1
DRIVER_MCDM DriverModel = 2
)
// Pstates as declared in nvml/nvml.h
@@ -1145,7 +1492,8 @@ const (
INFOROM_OEM InforomObject = iota
INFOROM_ECC InforomObject = 1
INFOROM_POWER InforomObject = 2
INFOROM_DEN InforomObject = 3
INFOROM_COUNT InforomObject = 4
)
// Return as declared in nvml/nvml.h
@@ -1223,6 +1571,17 @@ const (
RESTRICTED_API_COUNT RestrictedAPI = 2
)
// GpuUtilizationDomainId as declared in nvml/nvml.h
type GpuUtilizationDomainId int32
// GpuUtilizationDomainId enumeration from nvml/nvml.h
const (
GPU_UTILIZATION_DOMAIN_GPU GpuUtilizationDomainId = iota
GPU_UTILIZATION_DOMAIN_FB GpuUtilizationDomainId = 1
GPU_UTILIZATION_DOMAIN_VID GpuUtilizationDomainId = 2
GPU_UTILIZATION_DOMAIN_BUS GpuUtilizationDomainId = 3
)
// GpuVirtualizationMode as declared in nvml/nvml.h
type GpuVirtualizationMode int32
@@ -1281,7 +1640,8 @@ type VgpuDriverCapability int32
// VgpuDriverCapability enumeration from nvml/nvml.h
const (
VGPU_DRIVER_CAP_HETEROGENEOUS_MULTI_VGPU VgpuDriverCapability = iota
VGPU_DRIVER_CAP_WARM_UPDATE VgpuDriverCapability = 1
VGPU_DRIVER_CAP_COUNT VgpuDriverCapability = 2
)
// DeviceVgpuCapability as declared in nvml/nvml.h
@@ -1297,18 +1657,23 @@ const (
DEVICE_VGPU_CAP_DEVICE_STREAMING DeviceVgpuCapability = 5
DEVICE_VGPU_CAP_MINI_QUARTER_GPU DeviceVgpuCapability = 6
DEVICE_VGPU_CAP_COMPUTE_MEDIA_ENGINE_GPU DeviceVgpuCapability = 7
DEVICE_VGPU_CAP_WARM_UPDATE DeviceVgpuCapability = 8
DEVICE_VGPU_CAP_HOMOGENEOUS_PLACEMENTS DeviceVgpuCapability = 9
DEVICE_VGPU_CAP_MIG_TIMESLICING_SUPPORTED DeviceVgpuCapability = 10
DEVICE_VGPU_CAP_MIG_TIMESLICING_ENABLED DeviceVgpuCapability = 11
DEVICE_VGPU_CAP_COUNT DeviceVgpuCapability = 12
)
// DeviceGpuRecoveryAction as declared in nvml/nvml.h
type DeviceGpuRecoveryAction int32
// DeviceGpuRecoveryAction enumeration from nvml/nvml.h
const (
GPU_RECOVERY_ACTION_NONE DeviceGpuRecoveryAction = iota
GPU_RECOVERY_ACTION_GPU_RESET DeviceGpuRecoveryAction = 1
GPU_RECOVERY_ACTION_NODE_REBOOT DeviceGpuRecoveryAction = 2
GPU_RECOVERY_ACTION_DRAIN_P2P DeviceGpuRecoveryAction = 3
GPU_RECOVERY_ACTION_DRAIN_AND_RESET DeviceGpuRecoveryAction = 4
)
// FanState as declared in nvml/nvml.h
@@ -1447,6 +1812,16 @@ const (
THERMAL_CONTROLLER_UNKNOWN ThermalController = -1
)
// UUIDType as declared in nvml/nvml.h
type UUIDType int32
// UUIDType enumeration from nvml/nvml.h
const (
UUID_TYPE_NONE UUIDType = iota
UUID_TYPE_ASCII UUIDType = 1
UUID_TYPE_BINARY UUIDType = 2
)
// GridLicenseFeatureCode as declared in nvml/nvml.h
type GridLicenseFeatureCode int32
@@ -1465,74 +1840,208 @@ type GpmMetricId int32
// GpmMetricId enumeration from nvml/nvml.h
const (
GPM_METRIC_GRAPHICS_UTIL GpmMetricId = 1
GPM_METRIC_SM_UTIL GpmMetricId = 2
GPM_METRIC_SM_OCCUPANCY GpmMetricId = 3
GPM_METRIC_INTEGER_UTIL GpmMetricId = 4
GPM_METRIC_ANY_TENSOR_UTIL GpmMetricId = 5
GPM_METRIC_DFMA_TENSOR_UTIL GpmMetricId = 6
GPM_METRIC_HMMA_TENSOR_UTIL GpmMetricId = 7
GPM_METRIC_IMMA_TENSOR_UTIL GpmMetricId = 9
GPM_METRIC_DRAM_BW_UTIL GpmMetricId = 10
GPM_METRIC_FP64_UTIL GpmMetricId = 11
GPM_METRIC_FP32_UTIL GpmMetricId = 12
GPM_METRIC_FP16_UTIL GpmMetricId = 13
GPM_METRIC_PCIE_TX_PER_SEC GpmMetricId = 20
GPM_METRIC_PCIE_RX_PER_SEC GpmMetricId = 21
GPM_METRIC_NVDEC_0_UTIL GpmMetricId = 30
GPM_METRIC_NVDEC_1_UTIL GpmMetricId = 31
GPM_METRIC_NVDEC_2_UTIL GpmMetricId = 32
GPM_METRIC_NVDEC_3_UTIL GpmMetricId = 33
GPM_METRIC_NVDEC_4_UTIL GpmMetricId = 34
GPM_METRIC_NVDEC_5_UTIL GpmMetricId = 35
GPM_METRIC_NVDEC_6_UTIL GpmMetricId = 36
GPM_METRIC_NVDEC_7_UTIL GpmMetricId = 37
GPM_METRIC_NVJPG_0_UTIL GpmMetricId = 40
GPM_METRIC_NVJPG_1_UTIL GpmMetricId = 41
GPM_METRIC_NVJPG_2_UTIL GpmMetricId = 42
GPM_METRIC_NVJPG_3_UTIL GpmMetricId = 43
GPM_METRIC_NVJPG_4_UTIL GpmMetricId = 44
GPM_METRIC_NVJPG_5_UTIL GpmMetricId = 45
GPM_METRIC_NVJPG_6_UTIL GpmMetricId = 46
GPM_METRIC_NVJPG_7_UTIL GpmMetricId = 47
GPM_METRIC_NVOFA_0_UTIL GpmMetricId = 50
GPM_METRIC_NVOFA_1_UTIL GpmMetricId = 51
GPM_METRIC_NVLINK_TOTAL_RX_PER_SEC GpmMetricId = 60
GPM_METRIC_NVLINK_TOTAL_TX_PER_SEC GpmMetricId = 61
GPM_METRIC_NVLINK_L0_RX_PER_SEC GpmMetricId = 62
GPM_METRIC_NVLINK_L0_TX_PER_SEC GpmMetricId = 63
GPM_METRIC_NVLINK_L1_RX_PER_SEC GpmMetricId = 64
GPM_METRIC_NVLINK_L1_TX_PER_SEC GpmMetricId = 65
GPM_METRIC_NVLINK_L2_RX_PER_SEC GpmMetricId = 66
GPM_METRIC_NVLINK_L2_TX_PER_SEC GpmMetricId = 67
GPM_METRIC_NVLINK_L3_RX_PER_SEC GpmMetricId = 68
GPM_METRIC_NVLINK_L3_TX_PER_SEC GpmMetricId = 69
GPM_METRIC_NVLINK_L4_RX_PER_SEC GpmMetricId = 70
GPM_METRIC_NVLINK_L4_TX_PER_SEC GpmMetricId = 71
GPM_METRIC_NVLINK_L5_RX_PER_SEC GpmMetricId = 72
GPM_METRIC_NVLINK_L5_TX_PER_SEC GpmMetricId = 73
GPM_METRIC_NVLINK_L6_RX_PER_SEC GpmMetricId = 74
GPM_METRIC_NVLINK_L6_TX_PER_SEC GpmMetricId = 75
GPM_METRIC_NVLINK_L7_RX_PER_SEC GpmMetricId = 76
GPM_METRIC_NVLINK_L7_TX_PER_SEC GpmMetricId = 77
GPM_METRIC_NVLINK_L8_RX_PER_SEC GpmMetricId = 78
GPM_METRIC_NVLINK_L8_TX_PER_SEC GpmMetricId = 79
GPM_METRIC_NVLINK_L9_RX_PER_SEC GpmMetricId = 80
GPM_METRIC_NVLINK_L9_TX_PER_SEC GpmMetricId = 81
GPM_METRIC_NVLINK_L10_RX_PER_SEC GpmMetricId = 82
GPM_METRIC_NVLINK_L10_TX_PER_SEC GpmMetricId = 83
GPM_METRIC_NVLINK_L11_RX_PER_SEC GpmMetricId = 84
GPM_METRIC_NVLINK_L11_TX_PER_SEC GpmMetricId = 85
GPM_METRIC_NVLINK_L12_RX_PER_SEC GpmMetricId = 86
GPM_METRIC_NVLINK_L12_TX_PER_SEC GpmMetricId = 87
GPM_METRIC_NVLINK_L13_RX_PER_SEC GpmMetricId = 88
GPM_METRIC_NVLINK_L13_TX_PER_SEC GpmMetricId = 89
GPM_METRIC_NVLINK_L14_RX_PER_SEC GpmMetricId = 90
GPM_METRIC_NVLINK_L14_TX_PER_SEC GpmMetricId = 91
GPM_METRIC_NVLINK_L15_RX_PER_SEC GpmMetricId = 92
GPM_METRIC_NVLINK_L15_TX_PER_SEC GpmMetricId = 93
GPM_METRIC_NVLINK_L16_RX_PER_SEC GpmMetricId = 94
GPM_METRIC_NVLINK_L16_TX_PER_SEC GpmMetricId = 95
GPM_METRIC_NVLINK_L17_RX_PER_SEC GpmMetricId = 96
GPM_METRIC_NVLINK_L17_TX_PER_SEC GpmMetricId = 97
GPM_METRIC_C2C_TOTAL_TX_PER_SEC GpmMetricId = 100
GPM_METRIC_C2C_TOTAL_RX_PER_SEC GpmMetricId = 101
GPM_METRIC_C2C_DATA_TX_PER_SEC GpmMetricId = 102
GPM_METRIC_C2C_DATA_RX_PER_SEC GpmMetricId = 103
GPM_METRIC_C2C_LINK0_TOTAL_TX_PER_SEC GpmMetricId = 104
GPM_METRIC_C2C_LINK0_TOTAL_RX_PER_SEC GpmMetricId = 105
GPM_METRIC_C2C_LINK0_DATA_TX_PER_SEC GpmMetricId = 106
GPM_METRIC_C2C_LINK0_DATA_RX_PER_SEC GpmMetricId = 107
GPM_METRIC_C2C_LINK1_TOTAL_TX_PER_SEC GpmMetricId = 108
GPM_METRIC_C2C_LINK1_TOTAL_RX_PER_SEC GpmMetricId = 109
GPM_METRIC_C2C_LINK1_DATA_TX_PER_SEC GpmMetricId = 110
GPM_METRIC_C2C_LINK1_DATA_RX_PER_SEC GpmMetricId = 111
GPM_METRIC_C2C_LINK2_TOTAL_TX_PER_SEC GpmMetricId = 112
GPM_METRIC_C2C_LINK2_TOTAL_RX_PER_SEC GpmMetricId = 113
GPM_METRIC_C2C_LINK2_DATA_TX_PER_SEC GpmMetricId = 114
GPM_METRIC_C2C_LINK2_DATA_RX_PER_SEC GpmMetricId = 115
GPM_METRIC_C2C_LINK3_TOTAL_TX_PER_SEC GpmMetricId = 116
GPM_METRIC_C2C_LINK3_TOTAL_RX_PER_SEC GpmMetricId = 117
GPM_METRIC_C2C_LINK3_DATA_TX_PER_SEC GpmMetricId = 118
GPM_METRIC_C2C_LINK3_DATA_RX_PER_SEC GpmMetricId = 119
GPM_METRIC_C2C_LINK4_TOTAL_TX_PER_SEC GpmMetricId = 120
GPM_METRIC_C2C_LINK4_TOTAL_RX_PER_SEC GpmMetricId = 121
GPM_METRIC_C2C_LINK4_DATA_TX_PER_SEC GpmMetricId = 122
GPM_METRIC_C2C_LINK4_DATA_RX_PER_SEC GpmMetricId = 123
GPM_METRIC_C2C_LINK5_TOTAL_TX_PER_SEC GpmMetricId = 124
GPM_METRIC_C2C_LINK5_TOTAL_RX_PER_SEC GpmMetricId = 125
GPM_METRIC_C2C_LINK5_DATA_TX_PER_SEC GpmMetricId = 126
GPM_METRIC_C2C_LINK5_DATA_RX_PER_SEC GpmMetricId = 127
GPM_METRIC_C2C_LINK6_TOTAL_TX_PER_SEC GpmMetricId = 128
GPM_METRIC_C2C_LINK6_TOTAL_RX_PER_SEC GpmMetricId = 129
GPM_METRIC_C2C_LINK6_DATA_TX_PER_SEC GpmMetricId = 130
GPM_METRIC_C2C_LINK6_DATA_RX_PER_SEC GpmMetricId = 131
GPM_METRIC_C2C_LINK7_TOTAL_TX_PER_SEC GpmMetricId = 132
GPM_METRIC_C2C_LINK7_TOTAL_RX_PER_SEC GpmMetricId = 133
GPM_METRIC_C2C_LINK7_DATA_TX_PER_SEC GpmMetricId = 134
GPM_METRIC_C2C_LINK7_DATA_RX_PER_SEC GpmMetricId = 135
GPM_METRIC_C2C_LINK8_TOTAL_TX_PER_SEC GpmMetricId = 136
GPM_METRIC_C2C_LINK8_TOTAL_RX_PER_SEC GpmMetricId = 137
GPM_METRIC_C2C_LINK8_DATA_TX_PER_SEC GpmMetricId = 138
GPM_METRIC_C2C_LINK8_DATA_RX_PER_SEC GpmMetricId = 139
GPM_METRIC_C2C_LINK9_TOTAL_TX_PER_SEC GpmMetricId = 140
GPM_METRIC_C2C_LINK9_TOTAL_RX_PER_SEC GpmMetricId = 141
GPM_METRIC_C2C_LINK9_DATA_TX_PER_SEC GpmMetricId = 142
GPM_METRIC_C2C_LINK9_DATA_RX_PER_SEC GpmMetricId = 143
GPM_METRIC_C2C_LINK10_TOTAL_TX_PER_SEC GpmMetricId = 144
GPM_METRIC_C2C_LINK10_TOTAL_RX_PER_SEC GpmMetricId = 145
GPM_METRIC_C2C_LINK10_DATA_TX_PER_SEC GpmMetricId = 146
GPM_METRIC_C2C_LINK10_DATA_RX_PER_SEC GpmMetricId = 147
GPM_METRIC_C2C_LINK11_TOTAL_TX_PER_SEC GpmMetricId = 148
GPM_METRIC_C2C_LINK11_TOTAL_RX_PER_SEC GpmMetricId = 149
GPM_METRIC_C2C_LINK11_DATA_TX_PER_SEC GpmMetricId = 150
GPM_METRIC_C2C_LINK11_DATA_RX_PER_SEC GpmMetricId = 151
GPM_METRIC_C2C_LINK12_TOTAL_TX_PER_SEC GpmMetricId = 152
GPM_METRIC_C2C_LINK12_TOTAL_RX_PER_SEC GpmMetricId = 153
GPM_METRIC_C2C_LINK12_DATA_TX_PER_SEC GpmMetricId = 154
GPM_METRIC_C2C_LINK12_DATA_RX_PER_SEC GpmMetricId = 155
GPM_METRIC_C2C_LINK13_TOTAL_TX_PER_SEC GpmMetricId = 156
GPM_METRIC_C2C_LINK13_TOTAL_RX_PER_SEC GpmMetricId = 157
GPM_METRIC_C2C_LINK13_DATA_TX_PER_SEC GpmMetricId = 158
GPM_METRIC_C2C_LINK13_DATA_RX_PER_SEC GpmMetricId = 159
GPM_METRIC_HOSTMEM_CACHE_HIT GpmMetricId = 160
GPM_METRIC_HOSTMEM_CACHE_MISS GpmMetricId = 161
GPM_METRIC_PEERMEM_CACHE_HIT GpmMetricId = 162
GPM_METRIC_PEERMEM_CACHE_MISS GpmMetricId = 163
GPM_METRIC_DRAM_CACHE_HIT GpmMetricId = 164
GPM_METRIC_DRAM_CACHE_MISS GpmMetricId = 165
GPM_METRIC_NVENC_0_UTIL GpmMetricId = 166
GPM_METRIC_NVENC_1_UTIL GpmMetricId = 167
GPM_METRIC_NVENC_2_UTIL GpmMetricId = 168
GPM_METRIC_NVENC_3_UTIL GpmMetricId = 169
GPM_METRIC_GR0_CTXSW_CYCLES_ELAPSED GpmMetricId = 170
GPM_METRIC_GR0_CTXSW_CYCLES_ACTIVE GpmMetricId = 171
GPM_METRIC_GR0_CTXSW_REQUESTS GpmMetricId = 172
GPM_METRIC_GR0_CTXSW_CYCLES_PER_REQ GpmMetricId = 173
GPM_METRIC_GR0_CTXSW_ACTIVE_PCT GpmMetricId = 174
GPM_METRIC_GR1_CTXSW_CYCLES_ELAPSED GpmMetricId = 175
GPM_METRIC_GR1_CTXSW_CYCLES_ACTIVE GpmMetricId = 176
GPM_METRIC_GR1_CTXSW_REQUESTS GpmMetricId = 177
GPM_METRIC_GR1_CTXSW_CYCLES_PER_REQ GpmMetricId = 178
GPM_METRIC_GR1_CTXSW_ACTIVE_PCT GpmMetricId = 179
GPM_METRIC_GR2_CTXSW_CYCLES_ELAPSED GpmMetricId = 180
GPM_METRIC_GR2_CTXSW_CYCLES_ACTIVE GpmMetricId = 181
GPM_METRIC_GR2_CTXSW_REQUESTS GpmMetricId = 182
GPM_METRIC_GR2_CTXSW_CYCLES_PER_REQ GpmMetricId = 183
GPM_METRIC_GR2_CTXSW_ACTIVE_PCT GpmMetricId = 184
GPM_METRIC_GR3_CTXSW_CYCLES_ELAPSED GpmMetricId = 185
GPM_METRIC_GR3_CTXSW_CYCLES_ACTIVE GpmMetricId = 186
GPM_METRIC_GR3_CTXSW_REQUESTS GpmMetricId = 187
GPM_METRIC_GR3_CTXSW_CYCLES_PER_REQ GpmMetricId = 188
GPM_METRIC_GR3_CTXSW_ACTIVE_PCT GpmMetricId = 189
GPM_METRIC_GR4_CTXSW_CYCLES_ELAPSED GpmMetricId = 190
GPM_METRIC_GR4_CTXSW_CYCLES_ACTIVE GpmMetricId = 191
GPM_METRIC_GR4_CTXSW_REQUESTS GpmMetricId = 192
GPM_METRIC_GR4_CTXSW_CYCLES_PER_REQ GpmMetricId = 193
GPM_METRIC_GR4_CTXSW_ACTIVE_PCT GpmMetricId = 194
GPM_METRIC_GR5_CTXSW_CYCLES_ELAPSED GpmMetricId = 195
GPM_METRIC_GR5_CTXSW_CYCLES_ACTIVE GpmMetricId = 196
GPM_METRIC_GR5_CTXSW_REQUESTS GpmMetricId = 197
GPM_METRIC_GR5_CTXSW_CYCLES_PER_REQ GpmMetricId = 198
GPM_METRIC_GR5_CTXSW_ACTIVE_PCT GpmMetricId = 199
GPM_METRIC_GR6_CTXSW_CYCLES_ELAPSED GpmMetricId = 200
GPM_METRIC_GR6_CTXSW_CYCLES_ACTIVE GpmMetricId = 201
GPM_METRIC_GR6_CTXSW_REQUESTS GpmMetricId = 202
GPM_METRIC_GR6_CTXSW_CYCLES_PER_REQ GpmMetricId = 203
GPM_METRIC_GR6_CTXSW_ACTIVE_PCT GpmMetricId = 204
GPM_METRIC_GR7_CTXSW_CYCLES_ELAPSED GpmMetricId = 205
GPM_METRIC_GR7_CTXSW_CYCLES_ACTIVE GpmMetricId = 206
GPM_METRIC_GR7_CTXSW_REQUESTS GpmMetricId = 207
GPM_METRIC_GR7_CTXSW_CYCLES_PER_REQ GpmMetricId = 208
GPM_METRIC_GR7_CTXSW_ACTIVE_PCT GpmMetricId = 209
GPM_METRIC_MAX GpmMetricId = 210
)
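The C2C per-link metric IDs above follow a fixed layout: LINK0 occupies IDs 104-107 (TOTAL_TX, TOTAL_RX, DATA_TX, DATA_RX) and each subsequent link is offset by 4. A small stdlib-only sketch of that arithmetic (the helper name is hypothetical, not part of the library):

```go
package main

import "fmt"

// GpmMetricId mirrors the enum above; only the base constant is redeclared here.
type GpmMetricId int32

const (
	// C2C LINK0 starts at ID 104 in the enumeration above, and each link
	// contributes four consecutive IDs: TOTAL_TX, TOTAL_RX, DATA_TX, DATA_RX.
	gpmMetricC2CLink0TotalTxPerSec GpmMetricId = 104
	c2cIDsPerLink                              = 4
)

// c2cLinkMetricID computes the metric ID for a given C2C link and slot
// (0 = TOTAL_TX, 1 = TOTAL_RX, 2 = DATA_TX, 3 = DATA_RX).
func c2cLinkMetricID(link, slot int) GpmMetricId {
	return gpmMetricC2CLink0TotalTxPerSec + GpmMetricId(link*c2cIDsPerLink+slot)
}

func main() {
	// LINK3 TOTAL_TX, matching GPM_METRIC_C2C_LINK3_TOTAL_TX_PER_SEC above.
	fmt.Println(c2cLinkMetricID(3, 0)) // 116
}
```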
// PowerProfileType as declared in nvml/nvml.h
type PowerProfileType int32
// PowerProfileType enumeration from nvml/nvml.h
const (
POWER_PROFILE_MAX_P PowerProfileType = iota
POWER_PROFILE_MAX_Q PowerProfileType = 1
POWER_PROFILE_COMPUTE PowerProfileType = 2
POWER_PROFILE_MEMORY_BOUND PowerProfileType = 3
POWER_PROFILE_NETWORK PowerProfileType = 4
POWER_PROFILE_BALANCED PowerProfileType = 5
POWER_PROFILE_LLM_INFERENCE PowerProfileType = 6
POWER_PROFILE_LLM_TRAINING PowerProfileType = 7
POWER_PROFILE_RBM PowerProfileType = 8
POWER_PROFILE_DCPCIE PowerProfileType = 9
POWER_PROFILE_HMMA_SPARSE PowerProfileType = 10
POWER_PROFILE_HMMA_DENSE PowerProfileType = 11
POWER_PROFILE_SYNC_BALANCED PowerProfileType = 12
POWER_PROFILE_HPC PowerProfileType = 13
POWER_PROFILE_MIG PowerProfileType = 14
POWER_PROFILE_MAX PowerProfileType = 15
)


@@ -68,16 +68,6 @@ type GpuInstanceInfo struct {
Placement GpuInstancePlacement
}
func (g GpuInstanceInfo) convert() nvmlGpuInstanceInfo {
out := nvmlGpuInstanceInfo{
Device: g.Device.(nvmlDevice),
Id: g.Id,
ProfileId: g.ProfileId,
Placement: g.Placement,
}
return out
}
func (g nvmlGpuInstanceInfo) convert() GpuInstanceInfo {
out := GpuInstanceInfo{
Device: g.Device,
@@ -97,17 +87,6 @@ type ComputeInstanceInfo struct {
Placement ComputeInstancePlacement
}
func (c ComputeInstanceInfo) convert() nvmlComputeInstanceInfo {
out := nvmlComputeInstanceInfo{
Device: c.Device.(nvmlDevice),
GpuInstance: c.GpuInstance.(nvmlGpuInstance),
Id: c.Id,
ProfileId: c.ProfileId,
Placement: c.Placement,
}
return out
}
func (c nvmlComputeInstanceInfo) convert() ComputeInstanceInfo {
out := ComputeInstanceInfo{
Device: c.Device,
@@ -147,6 +126,13 @@ func (l *library) DeviceGetHandleByUUID(uuid string) (Device, Return) {
return device, ret
}
// nvml.DeviceGetHandleByUUIDV()
func (l *library) DeviceGetHandleByUUIDV(uuid *UUID) (Device, Return) {
var device nvmlDevice
ret := nvmlDeviceGetHandleByUUIDV(uuid, &device)
return device, ret
}
// nvml.DeviceGetHandleByPciBusId()
func (l *library) DeviceGetHandleByPciBusId(pciBusId string) (Device, Return) {
var device nvmlDevice
@@ -2101,6 +2087,13 @@ func (handler GpuInstanceProfileInfoHandler) V2() (GpuInstanceProfileInfo_v2, Re
return info, ret
}
func (handler GpuInstanceProfileInfoHandler) V3() (GpuInstanceProfileInfo_v3, Return) {
var info GpuInstanceProfileInfo_v3
info.Version = STRUCT_VERSION(info, 3)
ret := nvmlDeviceGetGpuInstanceProfileInfoV(handler.device, uint32(handler.profile), (*GpuInstanceProfileInfo_v2)(unsafe.Pointer(&info)))
return info, ret
}
func (l *library) DeviceGetGpuInstanceProfileInfoV(device Device, profile int) GpuInstanceProfileInfoHandler {
return device.GetGpuInstanceProfileInfoV(profile)
}
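The new V3 method extends the handler pattern used throughout these bindings: GetGpuInstanceProfileInfoV returns a handler that captures the device and profile, and the caller picks the struct version at the call site via V1/V2/V3, each of which stamps the version into the struct before the query. A minimal stdlib-only sketch of that shape (type and field names are simplified stand-ins; the real methods call into NVML):

```go
package main

import "fmt"

// profileInfoV2 and profileInfoV3 stand in for the versioned NVML structs;
// the version travels inside the struct itself, as with STRUCT_VERSION above.
type profileInfoV2 struct{ Version, SliceCount uint32 }
type profileInfoV3 struct{ Version, SliceCount, Capacity uint32 }

// profileInfoHandler captures the query arguments and defers the version
// choice to the caller, mirroring GpuInstanceProfileInfoHandler.
type profileInfoHandler struct{ profile int }

func (h profileInfoHandler) V2() (profileInfoV2, error) {
	info := profileInfoV2{Version: 2}
	// real code: nvmlDeviceGetGpuInstanceProfileInfoV(device, profile, &info)
	info.SliceCount = uint32(h.profile)
	return info, nil
}

func (h profileInfoHandler) V3() (profileInfoV3, error) {
	info := profileInfoV3{Version: 3}
	info.SliceCount = uint32(h.profile)
	return info, nil
}

func main() {
	h := profileInfoHandler{profile: 1}
	v3, _ := h.V3()
	fmt.Println(v3.Version) // the chosen struct version is carried in the struct
}
```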
@@ -2191,7 +2184,7 @@ func (device nvmlDevice) GetGpuInstances(info *GpuInstanceProfileInfo) ([]GpuIns
if info == nil {
return nil, ERROR_INVALID_ARGUMENT
}
var count uint32 = info.InstanceCount
var count = info.InstanceCount
gpuInstances := make([]nvmlGpuInstance, count)
ret := nvmlDeviceGetGpuInstances(device, info.Id, &gpuInstances[0], &count)
return convertSlice[nvmlGpuInstance, GpuInstance](gpuInstances[:count]), ret
@@ -2248,6 +2241,13 @@ func (handler ComputeInstanceProfileInfoHandler) V2() (ComputeInstanceProfileInf
return info, ret
}
func (handler ComputeInstanceProfileInfoHandler) V3() (ComputeInstanceProfileInfo_v3, Return) {
var info ComputeInstanceProfileInfo_v3
info.Version = STRUCT_VERSION(info, 3)
ret := nvmlGpuInstanceGetComputeInstanceProfileInfoV(handler.gpuInstance, uint32(handler.profile), uint32(handler.engProfile), (*ComputeInstanceProfileInfo_v2)(unsafe.Pointer(&info)))
return info, ret
}
func (l *library) GpuInstanceGetComputeInstanceProfileInfoV(gpuInstance GpuInstance, profile int, engProfile int) ComputeInstanceProfileInfoHandler {
return gpuInstance.GetComputeInstanceProfileInfoV(profile, engProfile)
}
@@ -2302,7 +2302,7 @@ func (gpuInstance nvmlGpuInstance) GetComputeInstances(info *ComputeInstanceProf
if info == nil {
return nil, ERROR_INVALID_ARGUMENT
}
var count uint32 = info.InstanceCount
var count = info.InstanceCount
computeInstances := make([]nvmlComputeInstance, count)
ret := nvmlGpuInstanceGetComputeInstances(gpuInstance, info.Id, &computeInstances[0], &count)
return convertSlice[nvmlComputeInstance, ComputeInstance](computeInstances[:count]), ret
@@ -3062,3 +3062,353 @@ func (device nvmlDevice) GetSramEccErrorStatus() (EccSramErrorStatus, Return) {
ret := nvmlDeviceGetSramEccErrorStatus(device, &status)
return status, ret
}
// nvml.DeviceGetClockOffsets()
func (l *library) DeviceGetClockOffsets(device Device) (ClockOffset, Return) {
return device.GetClockOffsets()
}
func (device nvmlDevice) GetClockOffsets() (ClockOffset, Return) {
var info ClockOffset
info.Version = STRUCT_VERSION(info, 1)
ret := nvmlDeviceGetClockOffsets(device, &info)
return info, ret
}
// nvml.DeviceSetClockOffsets()
func (l *library) DeviceSetClockOffsets(device Device, info ClockOffset) Return {
return device.SetClockOffsets(info)
}
func (device nvmlDevice) SetClockOffsets(info ClockOffset) Return {
return nvmlDeviceSetClockOffsets(device, &info)
}
// nvml.DeviceGetDriverModel_v2()
func (l *library) DeviceGetDriverModel_v2(device Device) (DriverModel, DriverModel, Return) {
return device.GetDriverModel_v2()
}
func (device nvmlDevice) GetDriverModel_v2() (DriverModel, DriverModel, Return) {
var current, pending DriverModel
ret := nvmlDeviceGetDriverModel_v2(device, &current, &pending)
return current, pending, ret
}
// nvml.DeviceGetCapabilities()
func (l *library) DeviceGetCapabilities(device Device) (DeviceCapabilities, Return) {
return device.GetCapabilities()
}
func (device nvmlDevice) GetCapabilities() (DeviceCapabilities, Return) {
var caps DeviceCapabilities
caps.Version = STRUCT_VERSION(caps, 1)
ret := nvmlDeviceGetCapabilities(device, &caps)
return caps, ret
}
// nvml.DeviceGetFanSpeedRPM()
func (l *library) DeviceGetFanSpeedRPM(device Device) (FanSpeedInfo, Return) {
return device.GetFanSpeedRPM()
}
func (device nvmlDevice) GetFanSpeedRPM() (FanSpeedInfo, Return) {
var fanSpeed FanSpeedInfo
fanSpeed.Version = STRUCT_VERSION(fanSpeed, 1)
ret := nvmlDeviceGetFanSpeedRPM(device, &fanSpeed)
return fanSpeed, ret
}
// nvml.DeviceGetCoolerInfo()
func (l *library) DeviceGetCoolerInfo(device Device) (CoolerInfo, Return) {
return device.GetCoolerInfo()
}
func (device nvmlDevice) GetCoolerInfo() (CoolerInfo, Return) {
var coolerInfo CoolerInfo
coolerInfo.Version = STRUCT_VERSION(coolerInfo, 1)
ret := nvmlDeviceGetCoolerInfo(device, &coolerInfo)
return coolerInfo, ret
}
// nvml.DeviceGetTemperatureV()
type TemperatureHandler struct {
device nvmlDevice
}
func (handler TemperatureHandler) V1() (Temperature, Return) {
var temperature Temperature
temperature.Version = STRUCT_VERSION(temperature, 1)
ret := nvmlDeviceGetTemperatureV(handler.device, &temperature)
return temperature, ret
}
func (l *library) DeviceGetTemperatureV(device Device) TemperatureHandler {
return device.GetTemperatureV()
}
func (device nvmlDevice) GetTemperatureV() TemperatureHandler {
return TemperatureHandler{device}
}
// nvml.DeviceGetMarginTemperature()
func (l *library) DeviceGetMarginTemperature(device Device) (MarginTemperature, Return) {
return device.GetMarginTemperature()
}
func (device nvmlDevice) GetMarginTemperature() (MarginTemperature, Return) {
var marginTemp MarginTemperature
marginTemp.Version = STRUCT_VERSION(marginTemp, 1)
ret := nvmlDeviceGetMarginTemperature(device, &marginTemp)
return marginTemp, ret
}
// nvml.DeviceGetPerformanceModes()
func (l *library) DeviceGetPerformanceModes(device Device) (DevicePerfModes, Return) {
return device.GetPerformanceModes()
}
func (device nvmlDevice) GetPerformanceModes() (DevicePerfModes, Return) {
var perfModes DevicePerfModes
perfModes.Version = STRUCT_VERSION(perfModes, 1)
ret := nvmlDeviceGetPerformanceModes(device, &perfModes)
return perfModes, ret
}
// nvml.DeviceGetCurrentClockFreqs()
func (l *library) DeviceGetCurrentClockFreqs(device Device) (DeviceCurrentClockFreqs, Return) {
return device.GetCurrentClockFreqs()
}
func (device nvmlDevice) GetCurrentClockFreqs() (DeviceCurrentClockFreqs, Return) {
var currentClockFreqs DeviceCurrentClockFreqs
currentClockFreqs.Version = STRUCT_VERSION(currentClockFreqs, 1)
ret := nvmlDeviceGetCurrentClockFreqs(device, &currentClockFreqs)
return currentClockFreqs, ret
}
// nvml.DeviceGetDramEncryptionMode()
func (l *library) DeviceGetDramEncryptionMode(device Device) (DramEncryptionInfo, DramEncryptionInfo, Return) {
return device.GetDramEncryptionMode()
}
func (device nvmlDevice) GetDramEncryptionMode() (DramEncryptionInfo, DramEncryptionInfo, Return) {
var current, pending DramEncryptionInfo
current.Version = STRUCT_VERSION(current, 1)
pending.Version = STRUCT_VERSION(pending, 1)
ret := nvmlDeviceGetDramEncryptionMode(device, &current, &pending)
return current, pending, ret
}
// nvml.DeviceSetDramEncryptionMode()
func (l *library) DeviceSetDramEncryptionMode(device Device, dramEncryption *DramEncryptionInfo) Return {
return device.SetDramEncryptionMode(dramEncryption)
}
func (device nvmlDevice) SetDramEncryptionMode(dramEncryption *DramEncryptionInfo) Return {
return nvmlDeviceSetDramEncryptionMode(device, dramEncryption)
}
// nvml.DeviceGetPlatformInfo()
func (l *library) DeviceGetPlatformInfo(device Device) (PlatformInfo, Return) {
return device.GetPlatformInfo()
}
func (device nvmlDevice) GetPlatformInfo() (PlatformInfo, Return) {
var platformInfo PlatformInfo
platformInfo.Version = STRUCT_VERSION(platformInfo, 1)
ret := nvmlDeviceGetPlatformInfo(device, &platformInfo)
return platformInfo, ret
}
// nvml.DeviceGetNvlinkSupportedBwModes()
func (l *library) DeviceGetNvlinkSupportedBwModes(device Device) (NvlinkSupportedBwModes, Return) {
return device.GetNvlinkSupportedBwModes()
}
func (device nvmlDevice) GetNvlinkSupportedBwModes() (NvlinkSupportedBwModes, Return) {
var supportedBwMode NvlinkSupportedBwModes
supportedBwMode.Version = STRUCT_VERSION(supportedBwMode, 1)
ret := nvmlDeviceGetNvlinkSupportedBwModes(device, &supportedBwMode)
return supportedBwMode, ret
}
// nvml.DeviceGetNvlinkBwMode()
func (l *library) DeviceGetNvlinkBwMode(device Device) (NvlinkGetBwMode, Return) {
return device.GetNvlinkBwMode()
}
func (device nvmlDevice) GetNvlinkBwMode() (NvlinkGetBwMode, Return) {
var getBwMode NvlinkGetBwMode
getBwMode.Version = STRUCT_VERSION(getBwMode, 1)
ret := nvmlDeviceGetNvlinkBwMode(device, &getBwMode)
return getBwMode, ret
}
// nvml.DeviceSetNvlinkBwMode()
func (l *library) DeviceSetNvlinkBwMode(device Device, setBwMode *NvlinkSetBwMode) Return {
return device.SetNvlinkBwMode(setBwMode)
}
func (device nvmlDevice) SetNvlinkBwMode(setBwMode *NvlinkSetBwMode) Return {
return nvmlDeviceSetNvlinkBwMode(device, setBwMode)
}
// nvml.DeviceWorkloadPowerProfileGetProfilesInfo()
func (l *library) DeviceWorkloadPowerProfileGetProfilesInfo(device Device) (WorkloadPowerProfileProfilesInfo, Return) {
return device.WorkloadPowerProfileGetProfilesInfo()
}
func (device nvmlDevice) WorkloadPowerProfileGetProfilesInfo() (WorkloadPowerProfileProfilesInfo, Return) {
var profilesInfo WorkloadPowerProfileProfilesInfo
profilesInfo.Version = STRUCT_VERSION(profilesInfo, 1)
ret := nvmlDeviceWorkloadPowerProfileGetProfilesInfo(device, &profilesInfo)
return profilesInfo, ret
}
// nvml.DeviceWorkloadPowerProfileGetCurrentProfiles()
func (l *library) DeviceWorkloadPowerProfileGetCurrentProfiles(device Device) (WorkloadPowerProfileCurrentProfiles, Return) {
return device.WorkloadPowerProfileGetCurrentProfiles()
}
func (device nvmlDevice) WorkloadPowerProfileGetCurrentProfiles() (WorkloadPowerProfileCurrentProfiles, Return) {
var currentProfiles WorkloadPowerProfileCurrentProfiles
currentProfiles.Version = STRUCT_VERSION(currentProfiles, 1)
ret := nvmlDeviceWorkloadPowerProfileGetCurrentProfiles(device, &currentProfiles)
return currentProfiles, ret
}
// nvml.DeviceWorkloadPowerProfileSetRequestedProfiles()
func (l *library) DeviceWorkloadPowerProfileSetRequestedProfiles(device Device, requestedProfiles *WorkloadPowerProfileRequestedProfiles) Return {
return device.WorkloadPowerProfileSetRequestedProfiles(requestedProfiles)
}
func (device nvmlDevice) WorkloadPowerProfileSetRequestedProfiles(requestedProfiles *WorkloadPowerProfileRequestedProfiles) Return {
return nvmlDeviceWorkloadPowerProfileSetRequestedProfiles(device, requestedProfiles)
}
// nvml.DeviceWorkloadPowerProfileClearRequestedProfiles()
func (l *library) DeviceWorkloadPowerProfileClearRequestedProfiles(device Device, requestedProfiles *WorkloadPowerProfileRequestedProfiles) Return {
return device.WorkloadPowerProfileClearRequestedProfiles(requestedProfiles)
}
func (device nvmlDevice) WorkloadPowerProfileClearRequestedProfiles(requestedProfiles *WorkloadPowerProfileRequestedProfiles) Return {
return nvmlDeviceWorkloadPowerProfileClearRequestedProfiles(device, requestedProfiles)
}
// nvml.DevicePowerSmoothingActivatePresetProfile()
func (l *library) DevicePowerSmoothingActivatePresetProfile(device Device, profile *PowerSmoothingProfile) Return {
return device.PowerSmoothingActivatePresetProfile(profile)
}
func (device nvmlDevice) PowerSmoothingActivatePresetProfile(profile *PowerSmoothingProfile) Return {
return nvmlDevicePowerSmoothingActivatePresetProfile(device, profile)
}
// nvml.DevicePowerSmoothingUpdatePresetProfileParam()
func (l *library) DevicePowerSmoothingUpdatePresetProfileParam(device Device, profile *PowerSmoothingProfile) Return {
return device.PowerSmoothingUpdatePresetProfileParam(profile)
}
func (device nvmlDevice) PowerSmoothingUpdatePresetProfileParam(profile *PowerSmoothingProfile) Return {
return nvmlDevicePowerSmoothingUpdatePresetProfileParam(device, profile)
}
// nvml.DevicePowerSmoothingSetState()
func (l *library) DevicePowerSmoothingSetState(device Device, state *PowerSmoothingState) Return {
return device.PowerSmoothingSetState(state)
}
func (device nvmlDevice) PowerSmoothingSetState(state *PowerSmoothingState) Return {
return nvmlDevicePowerSmoothingSetState(device, state)
}
// nvml.GpuInstanceGetCreatableVgpus()
func (l *library) GpuInstanceGetCreatableVgpus(gpuInstance GpuInstance) (VgpuTypeIdInfo, Return) {
return gpuInstance.GetCreatableVgpus()
}
func (gpuInstance nvmlGpuInstance) GetCreatableVgpus() (VgpuTypeIdInfo, Return) {
var vgpuTypeIdInfo VgpuTypeIdInfo
vgpuTypeIdInfo.Version = STRUCT_VERSION(vgpuTypeIdInfo, 1)
ret := nvmlGpuInstanceGetCreatableVgpus(gpuInstance, &vgpuTypeIdInfo)
return vgpuTypeIdInfo, ret
}
// nvml.GpuInstanceGetActiveVgpus()
func (l *library) GpuInstanceGetActiveVgpus(gpuInstance GpuInstance) (ActiveVgpuInstanceInfo, Return) {
return gpuInstance.GetActiveVgpus()
}
func (gpuInstance nvmlGpuInstance) GetActiveVgpus() (ActiveVgpuInstanceInfo, Return) {
var activeVgpuInstanceInfo ActiveVgpuInstanceInfo
activeVgpuInstanceInfo.Version = STRUCT_VERSION(activeVgpuInstanceInfo, 1)
ret := nvmlGpuInstanceGetActiveVgpus(gpuInstance, &activeVgpuInstanceInfo)
return activeVgpuInstanceInfo, ret
}
// nvml.GpuInstanceSetVgpuSchedulerState()
func (l *library) GpuInstanceSetVgpuSchedulerState(gpuInstance GpuInstance, scheduler *VgpuSchedulerState) Return {
return gpuInstance.SetVgpuSchedulerState(scheduler)
}
func (gpuInstance nvmlGpuInstance) SetVgpuSchedulerState(scheduler *VgpuSchedulerState) Return {
return nvmlGpuInstanceSetVgpuSchedulerState(gpuInstance, scheduler)
}
// nvml.GpuInstanceGetVgpuSchedulerState()
func (l *library) GpuInstanceGetVgpuSchedulerState(gpuInstance GpuInstance) (VgpuSchedulerStateInfo, Return) {
return gpuInstance.GetVgpuSchedulerState()
}
func (gpuInstance nvmlGpuInstance) GetVgpuSchedulerState() (VgpuSchedulerStateInfo, Return) {
var schedulerStateInfo VgpuSchedulerStateInfo
schedulerStateInfo.Version = STRUCT_VERSION(schedulerStateInfo, 1)
ret := nvmlGpuInstanceGetVgpuSchedulerState(gpuInstance, &schedulerStateInfo)
return schedulerStateInfo, ret
}
// nvml.GpuInstanceGetVgpuSchedulerLog()
func (l *library) GpuInstanceGetVgpuSchedulerLog(gpuInstance GpuInstance) (VgpuSchedulerLogInfo, Return) {
return gpuInstance.GetVgpuSchedulerLog()
}
func (gpuInstance nvmlGpuInstance) GetVgpuSchedulerLog() (VgpuSchedulerLogInfo, Return) {
var schedulerLogInfo VgpuSchedulerLogInfo
schedulerLogInfo.Version = STRUCT_VERSION(schedulerLogInfo, 1)
ret := nvmlGpuInstanceGetVgpuSchedulerLog(gpuInstance, &schedulerLogInfo)
return schedulerLogInfo, ret
}
// nvml.GpuInstanceGetVgpuTypeCreatablePlacements()
func (l *library) GpuInstanceGetVgpuTypeCreatablePlacements(gpuInstance GpuInstance) (VgpuCreatablePlacementInfo, Return) {
return gpuInstance.GetVgpuTypeCreatablePlacements()
}
func (gpuInstance nvmlGpuInstance) GetVgpuTypeCreatablePlacements() (VgpuCreatablePlacementInfo, Return) {
var creatablePlacementInfo VgpuCreatablePlacementInfo
creatablePlacementInfo.Version = STRUCT_VERSION(creatablePlacementInfo, 1)
ret := nvmlGpuInstanceGetVgpuTypeCreatablePlacements(gpuInstance, &creatablePlacementInfo)
return creatablePlacementInfo, ret
}
// nvml.GpuInstanceGetVgpuHeterogeneousMode()
func (l *library) GpuInstanceGetVgpuHeterogeneousMode(gpuInstance GpuInstance) (VgpuHeterogeneousMode, Return) {
return gpuInstance.GetVgpuHeterogeneousMode()
}
func (gpuInstance nvmlGpuInstance) GetVgpuHeterogeneousMode() (VgpuHeterogeneousMode, Return) {
var heterogeneousMode VgpuHeterogeneousMode
heterogeneousMode.Version = STRUCT_VERSION(heterogeneousMode, 1)
ret := nvmlGpuInstanceGetVgpuHeterogeneousMode(gpuInstance, &heterogeneousMode)
return heterogeneousMode, ret
}
// nvml.GpuInstanceSetVgpuHeterogeneousMode()
func (l *library) GpuInstanceSetVgpuHeterogeneousMode(gpuInstance GpuInstance, heterogeneousMode *VgpuHeterogeneousMode) Return {
return gpuInstance.SetVgpuHeterogeneousMode(heterogeneousMode)
}
func (gpuInstance nvmlGpuInstance) SetVgpuHeterogeneousMode(heterogeneousMode *VgpuHeterogeneousMode) Return {
return nvmlGpuInstanceSetVgpuHeterogeneousMode(gpuInstance, heterogeneousMode)
}


@@ -23,17 +23,6 @@ type EventData struct {
ComputeInstanceId uint32
}
func (e EventData) convert() nvmlEventData {
out := nvmlEventData{
Device: e.Device.(nvmlDevice),
EventType: e.EventType,
EventData: e.EventData,
GpuInstanceId: e.GpuInstanceId,
ComputeInstanceId: e.ComputeInstanceId,
}
return out
}
func (e nvmlEventData) convert() EventData {
out := EventData{
Device: e.Device,
@@ -71,3 +60,23 @@ func (l *library) EventSetFree(set EventSet) Return {
func (set nvmlEventSet) Free() Return {
return nvmlEventSetFree(set)
}
// nvml.SystemEventSetCreate()
func (l *library) SystemEventSetCreate(request *SystemEventSetCreateRequest) Return {
return nvmlSystemEventSetCreate(request)
}
// nvml.SystemEventSetFree()
func (l *library) SystemEventSetFree(request *SystemEventSetFreeRequest) Return {
return nvmlSystemEventSetFree(request)
}
// nvml.SystemRegisterEvents()
func (l *library) SystemRegisterEvents(request *SystemRegisterEventRequest) Return {
return nvmlSystemRegisterEvents(request)
}
// nvml.SystemEventSetWait()
func (l *library) SystemEventSetWait(request *SystemEventSetWaitRequest) Return {
return nvmlSystemEventSetWait(request)
}


@@ -20,7 +20,7 @@ type GpmMetricsGetType struct {
NumMetrics uint32
Sample1 GpmSample
Sample2 GpmSample
Metrics [98]GpmMetric
Metrics [210]GpmMetric
}
func (g *GpmMetricsGetType) convert() *nvmlGpmMetricsGetType {
@@ -30,9 +30,8 @@ func (g *GpmMetricsGetType) convert() *nvmlGpmMetricsGetType {
Sample1: g.Sample1.(nvmlGpmSample),
Sample2: g.Sample2.(nvmlGpmSample),
}
for i := range g.Metrics {
out.Metrics[i] = g.Metrics[i]
}
copy(out.Metrics[:], g.Metrics[:])
return out
}
@@ -43,9 +42,8 @@ func (g *nvmlGpmMetricsGetType) convert() *GpmMetricsGetType {
Sample1: g.Sample1,
Sample2: g.Sample2,
}
for i := range g.Metrics {
out.Metrics[i] = g.Metrics[i]
}
copy(out.Metrics[:], g.Metrics[:])
return out
}
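The hunks above swap an element-wise index loop for the builtin copy, which is equivalent for arrays of values and reports how many elements were moved. A tiny stdlib-only illustration (the helper name is hypothetical):

```go
package main

import "fmt"

// copyMetrics mirrors the convert helpers above: copy over the array slices
// replaces the element-wise loop without changing behavior.
func copyMetrics(src [4]int) [4]int {
	var dst [4]int
	copy(dst[:], src[:])
	return dst
}

func main() {
	fmt.Println(copyMetrics([4]int{0, 1, 4, 9})) // [0 1 4 9]
}
```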


@@ -163,6 +163,7 @@ var GetBlacklistDeviceCount = GetExcludedDeviceCount
var GetBlacklistDeviceInfoByIndex = GetExcludedDeviceInfoByIndex
var nvmlDeviceGetGpuInstancePossiblePlacements = nvmlDeviceGetGpuInstancePossiblePlacements_v1
var nvmlVgpuInstanceGetLicenseInfo = nvmlVgpuInstanceGetLicenseInfo_v1
var nvmlDeviceGetDriverModel = nvmlDeviceGetDriverModel_v1
// BlacklistDeviceInfo was replaced by ExcludedDeviceInfo
type BlacklistDeviceInfo = ExcludedDeviceInfo
@@ -288,4 +289,8 @@ func (l *library) updateVersionedSymbols() {
if err == nil {
nvmlVgpuInstanceGetLicenseInfo = nvmlVgpuInstanceGetLicenseInfo_v2
}
err = l.dl.Lookup("nvmlDeviceGetDriverModel_v2")
if err == nil {
nvmlDeviceGetDriverModel = nvmlDeviceGetDriverModel_v2
}
}
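The updateVersionedSymbols change above follows the library's versioned-symbol pattern: a package-level function variable defaults to the oldest implementation and is rebound to _v2 only when the loaded driver actually exports the newer symbol. A stdlib-only sketch of that pattern (the real code uses a dynamic-library Lookup; the map here is a stand-in for the exported-symbol table):

```go
package main

import (
	"errors"
	"fmt"
)

// v1 and v2 implementations of the same call.
func getDriverModelV1() string { return "v1" }
func getDriverModelV2() string { return "v2" }

// The package-level variable defaults to the oldest symbol, so the binding
// still works against drivers that predate the _v2 entry point.
var getDriverModel = getDriverModelV1

// lookup mimics dl.Lookup: it reports whether the library exports a symbol.
func lookup(exported map[string]bool, sym string) error {
	if exported[sym] {
		return nil
	}
	return errors.New("symbol not found: " + sym)
}

// updateVersionedSymbols upgrades the binding only when the newer symbol exists.
func updateVersionedSymbols(exported map[string]bool) {
	if err := lookup(exported, "nvmlDeviceGetDriverModel_v2"); err == nil {
		getDriverModel = getDriverModelV2
	}
}

func main() {
	updateVersionedSymbols(map[string]bool{"nvmlDeviceGetDriverModel_v2": true})
	fmt.Println(getDriverModel()) // "v2" once the newer symbol is found
}
```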

File diff suppressed because it is too large.


@@ -27,6 +27,9 @@ var _ nvml.GpuInstance = &GpuInstance{}
// DestroyFunc: func() nvml.Return {
// panic("mock out the Destroy method")
// },
// GetActiveVgpusFunc: func() (nvml.ActiveVgpuInstanceInfo, nvml.Return) {
// panic("mock out the GetActiveVgpus method")
// },
// GetComputeInstanceByIdFunc: func(n int) (nvml.ComputeInstance, nvml.Return) {
// panic("mock out the GetComputeInstanceById method")
// },
@@ -45,9 +48,30 @@ var _ nvml.GpuInstance = &GpuInstance{}
// GetComputeInstancesFunc: func(computeInstanceProfileInfo *nvml.ComputeInstanceProfileInfo) ([]nvml.ComputeInstance, nvml.Return) {
// panic("mock out the GetComputeInstances method")
// },
// GetCreatableVgpusFunc: func() (nvml.VgpuTypeIdInfo, nvml.Return) {
// panic("mock out the GetCreatableVgpus method")
// },
// GetInfoFunc: func() (nvml.GpuInstanceInfo, nvml.Return) {
// panic("mock out the GetInfo method")
// },
// GetVgpuHeterogeneousModeFunc: func() (nvml.VgpuHeterogeneousMode, nvml.Return) {
// panic("mock out the GetVgpuHeterogeneousMode method")
// },
// GetVgpuSchedulerLogFunc: func() (nvml.VgpuSchedulerLogInfo, nvml.Return) {
// panic("mock out the GetVgpuSchedulerLog method")
// },
// GetVgpuSchedulerStateFunc: func() (nvml.VgpuSchedulerStateInfo, nvml.Return) {
// panic("mock out the GetVgpuSchedulerState method")
// },
// GetVgpuTypeCreatablePlacementsFunc: func() (nvml.VgpuCreatablePlacementInfo, nvml.Return) {
// panic("mock out the GetVgpuTypeCreatablePlacements method")
// },
// SetVgpuHeterogeneousModeFunc: func(vgpuHeterogeneousMode *nvml.VgpuHeterogeneousMode) nvml.Return {
// panic("mock out the SetVgpuHeterogeneousMode method")
// },
// SetVgpuSchedulerStateFunc: func(vgpuSchedulerState *nvml.VgpuSchedulerState) nvml.Return {
// panic("mock out the SetVgpuSchedulerState method")
// },
// }
//
// // use mockedGpuInstance in code that requires nvml.GpuInstance
@@ -64,6 +88,9 @@ type GpuInstance struct {
// DestroyFunc mocks the Destroy method.
DestroyFunc func() nvml.Return
// GetActiveVgpusFunc mocks the GetActiveVgpus method.
GetActiveVgpusFunc func() (nvml.ActiveVgpuInstanceInfo, nvml.Return)
// GetComputeInstanceByIdFunc mocks the GetComputeInstanceById method.
GetComputeInstanceByIdFunc func(n int) (nvml.ComputeInstance, nvml.Return)
@@ -82,9 +109,30 @@ type GpuInstance struct {
// GetComputeInstancesFunc mocks the GetComputeInstances method.
GetComputeInstancesFunc func(computeInstanceProfileInfo *nvml.ComputeInstanceProfileInfo) ([]nvml.ComputeInstance, nvml.Return)
// GetCreatableVgpusFunc mocks the GetCreatableVgpus method.
GetCreatableVgpusFunc func() (nvml.VgpuTypeIdInfo, nvml.Return)
// GetInfoFunc mocks the GetInfo method.
GetInfoFunc func() (nvml.GpuInstanceInfo, nvml.Return)
// GetVgpuHeterogeneousModeFunc mocks the GetVgpuHeterogeneousMode method.
GetVgpuHeterogeneousModeFunc func() (nvml.VgpuHeterogeneousMode, nvml.Return)
// GetVgpuSchedulerLogFunc mocks the GetVgpuSchedulerLog method.
GetVgpuSchedulerLogFunc func() (nvml.VgpuSchedulerLogInfo, nvml.Return)
// GetVgpuSchedulerStateFunc mocks the GetVgpuSchedulerState method.
GetVgpuSchedulerStateFunc func() (nvml.VgpuSchedulerStateInfo, nvml.Return)
// GetVgpuTypeCreatablePlacementsFunc mocks the GetVgpuTypeCreatablePlacements method.
GetVgpuTypeCreatablePlacementsFunc func() (nvml.VgpuCreatablePlacementInfo, nvml.Return)
// SetVgpuHeterogeneousModeFunc mocks the SetVgpuHeterogeneousMode method.
SetVgpuHeterogeneousModeFunc func(vgpuHeterogeneousMode *nvml.VgpuHeterogeneousMode) nvml.Return
// SetVgpuSchedulerStateFunc mocks the SetVgpuSchedulerState method.
SetVgpuSchedulerStateFunc func(vgpuSchedulerState *nvml.VgpuSchedulerState) nvml.Return
// calls tracks calls to the methods.
calls struct {
// CreateComputeInstance holds details about calls to the CreateComputeInstance method.
@@ -102,6 +150,9 @@ type GpuInstance struct {
// Destroy holds details about calls to the Destroy method.
Destroy []struct {
}
// GetActiveVgpus holds details about calls to the GetActiveVgpus method.
GetActiveVgpus []struct {
}
// GetComputeInstanceById holds details about calls to the GetComputeInstanceById method.
GetComputeInstanceById []struct {
// N is the n argument value.
@@ -136,20 +187,53 @@ type GpuInstance struct {
// ComputeInstanceProfileInfo is the computeInstanceProfileInfo argument value.
ComputeInstanceProfileInfo *nvml.ComputeInstanceProfileInfo
}
// GetCreatableVgpus holds details about calls to the GetCreatableVgpus method.
GetCreatableVgpus []struct {
}
// GetInfo holds details about calls to the GetInfo method.
GetInfo []struct {
}
// GetVgpuHeterogeneousMode holds details about calls to the GetVgpuHeterogeneousMode method.
GetVgpuHeterogeneousMode []struct {
}
// GetVgpuSchedulerLog holds details about calls to the GetVgpuSchedulerLog method.
GetVgpuSchedulerLog []struct {
}
// GetVgpuSchedulerState holds details about calls to the GetVgpuSchedulerState method.
GetVgpuSchedulerState []struct {
}
// GetVgpuTypeCreatablePlacements holds details about calls to the GetVgpuTypeCreatablePlacements method.
GetVgpuTypeCreatablePlacements []struct {
}
// SetVgpuHeterogeneousMode holds details about calls to the SetVgpuHeterogeneousMode method.
SetVgpuHeterogeneousMode []struct {
// VgpuHeterogeneousMode is the vgpuHeterogeneousMode argument value.
VgpuHeterogeneousMode *nvml.VgpuHeterogeneousMode
}
// SetVgpuSchedulerState holds details about calls to the SetVgpuSchedulerState method.
SetVgpuSchedulerState []struct {
// VgpuSchedulerState is the vgpuSchedulerState argument value.
VgpuSchedulerState *nvml.VgpuSchedulerState
}
}
lockCreateComputeInstance sync.RWMutex
lockCreateComputeInstanceWithPlacement sync.RWMutex
lockDestroy sync.RWMutex
lockGetActiveVgpus sync.RWMutex
lockGetComputeInstanceById sync.RWMutex
lockGetComputeInstancePossiblePlacements sync.RWMutex
lockGetComputeInstanceProfileInfo sync.RWMutex
lockGetComputeInstanceProfileInfoV sync.RWMutex
lockGetComputeInstanceRemainingCapacity sync.RWMutex
lockGetComputeInstances sync.RWMutex
lockGetCreatableVgpus sync.RWMutex
lockGetInfo sync.RWMutex
lockGetVgpuHeterogeneousMode sync.RWMutex
lockGetVgpuSchedulerLog sync.RWMutex
lockGetVgpuSchedulerState sync.RWMutex
lockGetVgpuTypeCreatablePlacements sync.RWMutex
lockSetVgpuHeterogeneousMode sync.RWMutex
lockSetVgpuSchedulerState sync.RWMutex
}
// CreateComputeInstance calls CreateComputeInstanceFunc.
@@ -247,6 +331,33 @@ func (mock *GpuInstance) DestroyCalls() []struct {
return calls
}
// GetActiveVgpus calls GetActiveVgpusFunc.
func (mock *GpuInstance) GetActiveVgpus() (nvml.ActiveVgpuInstanceInfo, nvml.Return) {
if mock.GetActiveVgpusFunc == nil {
panic("GpuInstance.GetActiveVgpusFunc: method is nil but GpuInstance.GetActiveVgpus was just called")
}
callInfo := struct {
}{}
mock.lockGetActiveVgpus.Lock()
mock.calls.GetActiveVgpus = append(mock.calls.GetActiveVgpus, callInfo)
mock.lockGetActiveVgpus.Unlock()
return mock.GetActiveVgpusFunc()
}
// GetActiveVgpusCalls gets all the calls that were made to GetActiveVgpus.
// Check the length with:
//
// len(mockedGpuInstance.GetActiveVgpusCalls())
func (mock *GpuInstance) GetActiveVgpusCalls() []struct {
} {
var calls []struct {
}
mock.lockGetActiveVgpus.RLock()
calls = mock.calls.GetActiveVgpus
mock.lockGetActiveVgpus.RUnlock()
return calls
}
// GetComputeInstanceById calls GetComputeInstanceByIdFunc.
func (mock *GpuInstance) GetComputeInstanceById(n int) (nvml.ComputeInstance, nvml.Return) {
if mock.GetComputeInstanceByIdFunc == nil {
@@ -447,6 +558,33 @@ func (mock *GpuInstance) GetComputeInstancesCalls() []struct {
return calls
}
// GetCreatableVgpus calls GetCreatableVgpusFunc.
func (mock *GpuInstance) GetCreatableVgpus() (nvml.VgpuTypeIdInfo, nvml.Return) {
if mock.GetCreatableVgpusFunc == nil {
panic("GpuInstance.GetCreatableVgpusFunc: method is nil but GpuInstance.GetCreatableVgpus was just called")
}
callInfo := struct {
}{}
mock.lockGetCreatableVgpus.Lock()
mock.calls.GetCreatableVgpus = append(mock.calls.GetCreatableVgpus, callInfo)
mock.lockGetCreatableVgpus.Unlock()
return mock.GetCreatableVgpusFunc()
}
// GetCreatableVgpusCalls gets all the calls that were made to GetCreatableVgpus.
// Check the length with:
//
// len(mockedGpuInstance.GetCreatableVgpusCalls())
func (mock *GpuInstance) GetCreatableVgpusCalls() []struct {
} {
var calls []struct {
}
mock.lockGetCreatableVgpus.RLock()
calls = mock.calls.GetCreatableVgpus
mock.lockGetCreatableVgpus.RUnlock()
return calls
}
// GetInfo calls GetInfoFunc.
func (mock *GpuInstance) GetInfo() (nvml.GpuInstanceInfo, nvml.Return) {
if mock.GetInfoFunc == nil {
@@ -473,3 +611,175 @@ func (mock *GpuInstance) GetInfoCalls() []struct {
mock.lockGetInfo.RUnlock()
return calls
}
// GetVgpuHeterogeneousMode calls GetVgpuHeterogeneousModeFunc.
func (mock *GpuInstance) GetVgpuHeterogeneousMode() (nvml.VgpuHeterogeneousMode, nvml.Return) {
if mock.GetVgpuHeterogeneousModeFunc == nil {
panic("GpuInstance.GetVgpuHeterogeneousModeFunc: method is nil but GpuInstance.GetVgpuHeterogeneousMode was just called")
}
callInfo := struct {
}{}
mock.lockGetVgpuHeterogeneousMode.Lock()
mock.calls.GetVgpuHeterogeneousMode = append(mock.calls.GetVgpuHeterogeneousMode, callInfo)
mock.lockGetVgpuHeterogeneousMode.Unlock()
return mock.GetVgpuHeterogeneousModeFunc()
}
// GetVgpuHeterogeneousModeCalls gets all the calls that were made to GetVgpuHeterogeneousMode.
// Check the length with:
//
// len(mockedGpuInstance.GetVgpuHeterogeneousModeCalls())
func (mock *GpuInstance) GetVgpuHeterogeneousModeCalls() []struct {
} {
var calls []struct {
}
mock.lockGetVgpuHeterogeneousMode.RLock()
calls = mock.calls.GetVgpuHeterogeneousMode
mock.lockGetVgpuHeterogeneousMode.RUnlock()
return calls
}
// GetVgpuSchedulerLog calls GetVgpuSchedulerLogFunc.
func (mock *GpuInstance) GetVgpuSchedulerLog() (nvml.VgpuSchedulerLogInfo, nvml.Return) {
if mock.GetVgpuSchedulerLogFunc == nil {
panic("GpuInstance.GetVgpuSchedulerLogFunc: method is nil but GpuInstance.GetVgpuSchedulerLog was just called")
}
callInfo := struct {
}{}
mock.lockGetVgpuSchedulerLog.Lock()
mock.calls.GetVgpuSchedulerLog = append(mock.calls.GetVgpuSchedulerLog, callInfo)
mock.lockGetVgpuSchedulerLog.Unlock()
return mock.GetVgpuSchedulerLogFunc()
}
// GetVgpuSchedulerLogCalls gets all the calls that were made to GetVgpuSchedulerLog.
// Check the length with:
//
// len(mockedGpuInstance.GetVgpuSchedulerLogCalls())
func (mock *GpuInstance) GetVgpuSchedulerLogCalls() []struct {
} {
var calls []struct {
}
mock.lockGetVgpuSchedulerLog.RLock()
calls = mock.calls.GetVgpuSchedulerLog
mock.lockGetVgpuSchedulerLog.RUnlock()
return calls
}
// GetVgpuSchedulerState calls GetVgpuSchedulerStateFunc.
func (mock *GpuInstance) GetVgpuSchedulerState() (nvml.VgpuSchedulerStateInfo, nvml.Return) {
if mock.GetVgpuSchedulerStateFunc == nil {
panic("GpuInstance.GetVgpuSchedulerStateFunc: method is nil but GpuInstance.GetVgpuSchedulerState was just called")
}
callInfo := struct {
}{}
mock.lockGetVgpuSchedulerState.Lock()
mock.calls.GetVgpuSchedulerState = append(mock.calls.GetVgpuSchedulerState, callInfo)
mock.lockGetVgpuSchedulerState.Unlock()
return mock.GetVgpuSchedulerStateFunc()
}
// GetVgpuSchedulerStateCalls gets all the calls that were made to GetVgpuSchedulerState.
// Check the length with:
//
// len(mockedGpuInstance.GetVgpuSchedulerStateCalls())
func (mock *GpuInstance) GetVgpuSchedulerStateCalls() []struct {
} {
var calls []struct {
}
mock.lockGetVgpuSchedulerState.RLock()
calls = mock.calls.GetVgpuSchedulerState
mock.lockGetVgpuSchedulerState.RUnlock()
return calls
}
// GetVgpuTypeCreatablePlacements calls GetVgpuTypeCreatablePlacementsFunc.
func (mock *GpuInstance) GetVgpuTypeCreatablePlacements() (nvml.VgpuCreatablePlacementInfo, nvml.Return) {
if mock.GetVgpuTypeCreatablePlacementsFunc == nil {
panic("GpuInstance.GetVgpuTypeCreatablePlacementsFunc: method is nil but GpuInstance.GetVgpuTypeCreatablePlacements was just called")
}
callInfo := struct {
}{}
mock.lockGetVgpuTypeCreatablePlacements.Lock()
mock.calls.GetVgpuTypeCreatablePlacements = append(mock.calls.GetVgpuTypeCreatablePlacements, callInfo)
mock.lockGetVgpuTypeCreatablePlacements.Unlock()
return mock.GetVgpuTypeCreatablePlacementsFunc()
}
// GetVgpuTypeCreatablePlacementsCalls gets all the calls that were made to GetVgpuTypeCreatablePlacements.
// Check the length with:
//
// len(mockedGpuInstance.GetVgpuTypeCreatablePlacementsCalls())
func (mock *GpuInstance) GetVgpuTypeCreatablePlacementsCalls() []struct {
} {
var calls []struct {
}
mock.lockGetVgpuTypeCreatablePlacements.RLock()
calls = mock.calls.GetVgpuTypeCreatablePlacements
mock.lockGetVgpuTypeCreatablePlacements.RUnlock()
return calls
}
// SetVgpuHeterogeneousMode calls SetVgpuHeterogeneousModeFunc.
func (mock *GpuInstance) SetVgpuHeterogeneousMode(vgpuHeterogeneousMode *nvml.VgpuHeterogeneousMode) nvml.Return {
if mock.SetVgpuHeterogeneousModeFunc == nil {
panic("GpuInstance.SetVgpuHeterogeneousModeFunc: method is nil but GpuInstance.SetVgpuHeterogeneousMode was just called")
}
callInfo := struct {
VgpuHeterogeneousMode *nvml.VgpuHeterogeneousMode
}{
VgpuHeterogeneousMode: vgpuHeterogeneousMode,
}
mock.lockSetVgpuHeterogeneousMode.Lock()
mock.calls.SetVgpuHeterogeneousMode = append(mock.calls.SetVgpuHeterogeneousMode, callInfo)
mock.lockSetVgpuHeterogeneousMode.Unlock()
return mock.SetVgpuHeterogeneousModeFunc(vgpuHeterogeneousMode)
}
// SetVgpuHeterogeneousModeCalls gets all the calls that were made to SetVgpuHeterogeneousMode.
// Check the length with:
//
// len(mockedGpuInstance.SetVgpuHeterogeneousModeCalls())
func (mock *GpuInstance) SetVgpuHeterogeneousModeCalls() []struct {
VgpuHeterogeneousMode *nvml.VgpuHeterogeneousMode
} {
var calls []struct {
VgpuHeterogeneousMode *nvml.VgpuHeterogeneousMode
}
mock.lockSetVgpuHeterogeneousMode.RLock()
calls = mock.calls.SetVgpuHeterogeneousMode
mock.lockSetVgpuHeterogeneousMode.RUnlock()
return calls
}
// SetVgpuSchedulerState calls SetVgpuSchedulerStateFunc.
func (mock *GpuInstance) SetVgpuSchedulerState(vgpuSchedulerState *nvml.VgpuSchedulerState) nvml.Return {
if mock.SetVgpuSchedulerStateFunc == nil {
panic("GpuInstance.SetVgpuSchedulerStateFunc: method is nil but GpuInstance.SetVgpuSchedulerState was just called")
}
callInfo := struct {
VgpuSchedulerState *nvml.VgpuSchedulerState
}{
VgpuSchedulerState: vgpuSchedulerState,
}
mock.lockSetVgpuSchedulerState.Lock()
mock.calls.SetVgpuSchedulerState = append(mock.calls.SetVgpuSchedulerState, callInfo)
mock.lockSetVgpuSchedulerState.Unlock()
return mock.SetVgpuSchedulerStateFunc(vgpuSchedulerState)
}
// SetVgpuSchedulerStateCalls gets all the calls that were made to SetVgpuSchedulerState.
// Check the length with:
//
// len(mockedGpuInstance.SetVgpuSchedulerStateCalls())
func (mock *GpuInstance) SetVgpuSchedulerStateCalls() []struct {
VgpuSchedulerState *nvml.VgpuSchedulerState
} {
var calls []struct {
VgpuSchedulerState *nvml.VgpuSchedulerState
}
mock.lockSetVgpuSchedulerState.RLock()
calls = mock.calls.SetVgpuSchedulerState
mock.lockSetVgpuSchedulerState.RUnlock()
return calls
}

File diff suppressed because it is too large

@@ -72,6 +72,9 @@ var _ nvml.VgpuInstance = &VgpuInstance{}
// GetMetadataFunc: func() (nvml.VgpuMetadata, nvml.Return) {
// panic("mock out the GetMetadata method")
// },
// GetRuntimeStateSizeFunc: func() (nvml.VgpuRuntimeState, nvml.Return) {
// panic("mock out the GetRuntimeStateSize method")
// },
// GetTypeFunc: func() (nvml.VgpuTypeId, nvml.Return) {
// panic("mock out the GetType method")
// },
@@ -148,6 +151,9 @@ type VgpuInstance struct {
// GetMetadataFunc mocks the GetMetadata method.
GetMetadataFunc func() (nvml.VgpuMetadata, nvml.Return)
// GetRuntimeStateSizeFunc mocks the GetRuntimeStateSize method.
GetRuntimeStateSizeFunc func() (nvml.VgpuRuntimeState, nvml.Return)
// GetTypeFunc mocks the GetType method.
GetTypeFunc func() (nvml.VgpuTypeId, nvml.Return)
@@ -221,6 +227,9 @@ type VgpuInstance struct {
// GetMetadata holds details about calls to the GetMetadata method.
GetMetadata []struct {
}
// GetRuntimeStateSize holds details about calls to the GetRuntimeStateSize method.
GetRuntimeStateSize []struct {
}
// GetType holds details about calls to the GetType method.
GetType []struct {
}
@@ -257,6 +266,7 @@ type VgpuInstance struct {
lockGetLicenseStatus sync.RWMutex
lockGetMdevUUID sync.RWMutex
lockGetMetadata sync.RWMutex
lockGetRuntimeStateSize sync.RWMutex
lockGetType sync.RWMutex
lockGetUUID sync.RWMutex
lockGetVmDriverVersion sync.RWMutex
@@ -755,6 +765,33 @@ func (mock *VgpuInstance) GetMetadataCalls() []struct {
return calls
}
// GetRuntimeStateSize calls GetRuntimeStateSizeFunc.
func (mock *VgpuInstance) GetRuntimeStateSize() (nvml.VgpuRuntimeState, nvml.Return) {
if mock.GetRuntimeStateSizeFunc == nil {
panic("VgpuInstance.GetRuntimeStateSizeFunc: method is nil but VgpuInstance.GetRuntimeStateSize was just called")
}
callInfo := struct {
}{}
mock.lockGetRuntimeStateSize.Lock()
mock.calls.GetRuntimeStateSize = append(mock.calls.GetRuntimeStateSize, callInfo)
mock.lockGetRuntimeStateSize.Unlock()
return mock.GetRuntimeStateSizeFunc()
}
// GetRuntimeStateSizeCalls gets all the calls that were made to GetRuntimeStateSize.
// Check the length with:
//
// len(mockedVgpuInstance.GetRuntimeStateSizeCalls())
func (mock *VgpuInstance) GetRuntimeStateSizeCalls() []struct {
} {
var calls []struct {
}
mock.lockGetRuntimeStateSize.RLock()
calls = mock.calls.GetRuntimeStateSize
mock.lockGetRuntimeStateSize.RUnlock()
return calls
}
// GetType calls GetTypeFunc.
func (mock *VgpuInstance) GetType() (nvml.VgpuTypeId, nvml.Return) {
if mock.GetTypeFunc == nil {


@@ -18,6 +18,9 @@ var _ nvml.VgpuTypeId = &VgpuTypeId{}
//
// // make and configure a mocked nvml.VgpuTypeId
// mockedVgpuTypeId := &VgpuTypeId{
// GetBAR1InfoFunc: func() (nvml.VgpuTypeBar1Info, nvml.Return) {
// panic("mock out the GetBAR1Info method")
// },
// GetCapabilitiesFunc: func(vgpuCapability nvml.VgpuCapability) (bool, nvml.Return) {
// panic("mock out the GetCapabilities method")
// },
@@ -67,6 +70,9 @@ var _ nvml.VgpuTypeId = &VgpuTypeId{}
//
// }
type VgpuTypeId struct {
// GetBAR1InfoFunc mocks the GetBAR1Info method.
GetBAR1InfoFunc func() (nvml.VgpuTypeBar1Info, nvml.Return)
// GetCapabilitiesFunc mocks the GetCapabilities method.
GetCapabilitiesFunc func(vgpuCapability nvml.VgpuCapability) (bool, nvml.Return)
@@ -111,6 +117,9 @@ type VgpuTypeId struct {
// calls tracks calls to the methods.
calls struct {
// GetBAR1Info holds details about calls to the GetBAR1Info method.
GetBAR1Info []struct {
}
// GetCapabilities holds details about calls to the GetCapabilities method.
GetCapabilities []struct {
// VgpuCapability is the vgpuCapability argument value.
@@ -164,6 +173,7 @@ type VgpuTypeId struct {
Device nvml.Device
}
}
lockGetBAR1Info sync.RWMutex
lockGetCapabilities sync.RWMutex
lockGetClass sync.RWMutex
lockGetCreatablePlacements sync.RWMutex
@@ -180,6 +190,33 @@ type VgpuTypeId struct {
lockGetSupportedPlacements sync.RWMutex
}
// GetBAR1Info calls GetBAR1InfoFunc.
func (mock *VgpuTypeId) GetBAR1Info() (nvml.VgpuTypeBar1Info, nvml.Return) {
if mock.GetBAR1InfoFunc == nil {
panic("VgpuTypeId.GetBAR1InfoFunc: method is nil but VgpuTypeId.GetBAR1Info was just called")
}
callInfo := struct {
}{}
mock.lockGetBAR1Info.Lock()
mock.calls.GetBAR1Info = append(mock.calls.GetBAR1Info, callInfo)
mock.lockGetBAR1Info.Unlock()
return mock.GetBAR1InfoFunc()
}
// GetBAR1InfoCalls gets all the calls that were made to GetBAR1Info.
// Check the length with:
//
// len(mockedVgpuTypeId.GetBAR1InfoCalls())
func (mock *VgpuTypeId) GetBAR1InfoCalls() []struct {
} {
var calls []struct {
}
mock.lockGetBAR1Info.RLock()
calls = mock.calls.GetBAR1Info
mock.lockGetBAR1Info.RUnlock()
return calls
}
// GetCapabilities calls GetCapabilitiesFunc.
func (mock *VgpuTypeId) GetCapabilities(vgpuCapability nvml.VgpuCapability) (bool, nvml.Return) {
if mock.GetCapabilitiesFunc == nil {


@@ -121,6 +121,15 @@ func nvmlSystemGetTopologyGpuSet(CpuNumber uint32, Count *uint32, DeviceArray *n
return __v
}
// nvmlSystemGetDriverBranch function as declared in nvml/nvml.h
func nvmlSystemGetDriverBranch(BranchInfo *SystemDriverBranchInfo, Length uint32) Return {
cBranchInfo, _ := (*C.nvmlSystemDriverBranchInfo_t)(unsafe.Pointer(BranchInfo)), cgoAllocsUnknown
cLength, _ := (C.uint)(Length), cgoAllocsUnknown
__ret := C.nvmlSystemGetDriverBranch(cBranchInfo, cLength)
__v := (Return)(__ret)
return __v
}
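Each generated wrapper follows one pattern: reinterpret the Go binding struct's pointer as the layout-identical C type via `unsafe.Pointer`, invoke the C symbol, and cast the integer result back to `Return`. The reinterpretation step can be sketched in pure Go without cgo, using hypothetical stand-in types and a fake C call:

```go
package main

import (
	"fmt"
	"unsafe"
)

// goInfo and cInfo are hypothetical stand-ins for a Go binding struct and
// its layout-identical C counterpart (e.g. nvmlSystemDriverBranchInfo_t).
// The pointer cast below is only valid because the layouts match exactly.
type goInfo struct {
	Version uint32
	Length  uint32
}
type cInfo struct {
	version uint32
	length  uint32
}

// fakeCCall stands in for a call like C.nvmlSystemGetDriverBranch: it
// writes through the reinterpreted pointer and returns a status code.
func fakeCCall(p *cInfo) int32 {
	p.version = 575 // the "C side" fills in the struct
	return 0        // stands in for NVML_SUCCESS
}

// getDriverBranch performs the same cast the generated bindings do:
// reinterpret the Go struct pointer as the C type, call, return the status.
func getDriverBranch(info *goInfo) int32 {
	cp := (*cInfo)(unsafe.Pointer(info))
	return fakeCCall(cp)
}

func main() {
	var info goInfo
	ret := getDriverBranch(&info)
	fmt.Println(ret, info.Version)
}
```

The real bindings pair each cast with `cgoAllocsUnknown` bookkeeping; the cast itself is the load-bearing part, and it silently breaks if the Go and C struct layouts ever diverge.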
// nvmlUnitGetCount function as declared in nvml/nvml.h
func nvmlUnitGetCount(UnitCount *uint32) Return {
cUnitCount, _ := (*C.uint)(unsafe.Pointer(UnitCount)), cgoAllocsUnknown
@@ -238,6 +247,15 @@ func nvmlDeviceGetHandleByUUID(Uuid string, nvmlDevice *nvmlDevice) Return {
return __v
}
// nvmlDeviceGetHandleByUUIDV function as declared in nvml/nvml.h
func nvmlDeviceGetHandleByUUIDV(Uuid *UUID, nvmlDevice *nvmlDevice) Return {
cUuid, _ := (*C.nvmlUUID_t)(unsafe.Pointer(Uuid)), cgoAllocsUnknown
cnvmlDevice, _ := (*C.nvmlDevice_t)(unsafe.Pointer(nvmlDevice)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetHandleByUUIDV(cUuid, cnvmlDevice)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetHandleByPciBusId_v2 function as declared in nvml/nvml.h
func nvmlDeviceGetHandleByPciBusId_v2(PciBusId string, nvmlDevice *nvmlDevice) Return {
cPciBusId, _ := unpackPCharString(PciBusId)
@@ -698,6 +716,15 @@ func nvmlDeviceGetFanSpeed_v2(nvmlDevice nvmlDevice, Fan uint32, Speed *uint32)
return __v
}
// nvmlDeviceGetFanSpeedRPM function as declared in nvml/nvml.h
func nvmlDeviceGetFanSpeedRPM(nvmlDevice nvmlDevice, FanSpeed *FanSpeedInfo) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cFanSpeed, _ := (*C.nvmlFanSpeedInfo_t)(unsafe.Pointer(FanSpeed)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetFanSpeedRPM(cnvmlDevice, cFanSpeed)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetTargetFanSpeed function as declared in nvml/nvml.h
func nvmlDeviceGetTargetFanSpeed(nvmlDevice nvmlDevice, Fan uint32, TargetSpeed *uint32) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
@@ -747,6 +774,24 @@ func nvmlDeviceGetTemperature(nvmlDevice nvmlDevice, SensorType TemperatureSenso
return __v
}
// nvmlDeviceGetCoolerInfo function as declared in nvml/nvml.h
func nvmlDeviceGetCoolerInfo(nvmlDevice nvmlDevice, CoolerInfo *CoolerInfo) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cCoolerInfo, _ := (*C.nvmlCoolerInfo_t)(unsafe.Pointer(CoolerInfo)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetCoolerInfo(cnvmlDevice, cCoolerInfo)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetTemperatureV function as declared in nvml/nvml.h
func nvmlDeviceGetTemperatureV(nvmlDevice nvmlDevice, Temperature *Temperature) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cTemperature, _ := (*C.nvmlTemperature_t)(unsafe.Pointer(Temperature)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetTemperatureV(cnvmlDevice, cTemperature)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetTemperatureThreshold function as declared in nvml/nvml.h
func nvmlDeviceGetTemperatureThreshold(nvmlDevice nvmlDevice, ThresholdType TemperatureThresholds, Temp *uint32) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
@@ -757,6 +802,15 @@ func nvmlDeviceGetTemperatureThreshold(nvmlDevice nvmlDevice, ThresholdType Temp
return __v
}
// nvmlDeviceGetMarginTemperature function as declared in nvml/nvml.h
func nvmlDeviceGetMarginTemperature(nvmlDevice nvmlDevice, MarginTempInfo *MarginTemperature) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cMarginTempInfo, _ := (*C.nvmlMarginTemperature_t)(unsafe.Pointer(MarginTempInfo)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetMarginTemperature(cnvmlDevice, cMarginTempInfo)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetThermalSettings function as declared in nvml/nvml.h
func nvmlDeviceGetThermalSettings(nvmlDevice nvmlDevice, SensorIndex uint32, PThermalSettings *GpuThermalSettings) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
@@ -881,6 +935,42 @@ func nvmlDeviceGetMemClkMinMaxVfOffset(nvmlDevice nvmlDevice, MinOffset *int32,
return __v
}
// nvmlDeviceGetClockOffsets function as declared in nvml/nvml.h
func nvmlDeviceGetClockOffsets(nvmlDevice nvmlDevice, Info *ClockOffset) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cInfo, _ := (*C.nvmlClockOffset_t)(unsafe.Pointer(Info)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetClockOffsets(cnvmlDevice, cInfo)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceSetClockOffsets function as declared in nvml/nvml.h
func nvmlDeviceSetClockOffsets(nvmlDevice nvmlDevice, Info *ClockOffset) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cInfo, _ := (*C.nvmlClockOffset_t)(unsafe.Pointer(Info)), cgoAllocsUnknown
__ret := C.nvmlDeviceSetClockOffsets(cnvmlDevice, cInfo)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetPerformanceModes function as declared in nvml/nvml.h
func nvmlDeviceGetPerformanceModes(nvmlDevice nvmlDevice, PerfModes *DevicePerfModes) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cPerfModes, _ := (*C.nvmlDevicePerfModes_t)(unsafe.Pointer(PerfModes)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetPerformanceModes(cnvmlDevice, cPerfModes)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetCurrentClockFreqs function as declared in nvml/nvml.h
func nvmlDeviceGetCurrentClockFreqs(nvmlDevice nvmlDevice, CurrentClockFreqs *DeviceCurrentClockFreqs) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cCurrentClockFreqs, _ := (*C.nvmlDeviceCurrentClockFreqs_t)(unsafe.Pointer(CurrentClockFreqs)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetCurrentClockFreqs(cnvmlDevice, cCurrentClockFreqs)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetPowerManagementMode function as declared in nvml/nvml.h
func nvmlDeviceGetPowerManagementMode(nvmlDevice nvmlDevice, Mode *EnableState) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
@@ -992,6 +1082,25 @@ func nvmlDeviceGetCudaComputeCapability(nvmlDevice nvmlDevice, Major *int32, Min
return __v
}
// nvmlDeviceGetDramEncryptionMode function as declared in nvml/nvml.h
func nvmlDeviceGetDramEncryptionMode(nvmlDevice nvmlDevice, Current *DramEncryptionInfo, Pending *DramEncryptionInfo) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cCurrent, _ := (*C.nvmlDramEncryptionInfo_t)(unsafe.Pointer(Current)), cgoAllocsUnknown
cPending, _ := (*C.nvmlDramEncryptionInfo_t)(unsafe.Pointer(Pending)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetDramEncryptionMode(cnvmlDevice, cCurrent, cPending)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceSetDramEncryptionMode function as declared in nvml/nvml.h
func nvmlDeviceSetDramEncryptionMode(nvmlDevice nvmlDevice, DramEncryption *DramEncryptionInfo) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cDramEncryption, _ := (*C.nvmlDramEncryptionInfo_t)(unsafe.Pointer(DramEncryption)), cgoAllocsUnknown
__ret := C.nvmlDeviceSetDramEncryptionMode(cnvmlDevice, cDramEncryption)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetEccMode function as declared in nvml/nvml.h
func nvmlDeviceGetEccMode(nvmlDevice nvmlDevice, Current *EnableState, Pending *EnableState) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
@@ -1162,12 +1271,12 @@ func nvmlDeviceGetFBCSessions(nvmlDevice nvmlDevice, SessionCount *uint32, Sessi
return __v
}
// nvmlDeviceGetDriverModel function as declared in nvml/nvml.h
func nvmlDeviceGetDriverModel(nvmlDevice nvmlDevice, Current *DriverModel, Pending *DriverModel) Return {
// nvmlDeviceGetDriverModel_v2 function as declared in nvml/nvml.h
func nvmlDeviceGetDriverModel_v2(nvmlDevice nvmlDevice, Current *DriverModel, Pending *DriverModel) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cCurrent, _ := (*C.nvmlDriverModel_t)(unsafe.Pointer(Current)), cgoAllocsUnknown
cPending, _ := (*C.nvmlDriverModel_t)(unsafe.Pointer(Pending)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetDriverModel(cnvmlDevice, cCurrent, cPending)
__ret := C.nvmlDeviceGetDriverModel_v2(cnvmlDevice, cCurrent, cPending)
__v := (Return)(__ret)
return __v
}
@@ -1440,6 +1549,31 @@ func nvmlSystemGetConfComputeKeyRotationThresholdInfo(PKeyRotationThrInfo *ConfC
return __v
}
// nvmlDeviceSetConfComputeUnprotectedMemSize function as declared in nvml/nvml.h
func nvmlDeviceSetConfComputeUnprotectedMemSize(nvmlDevice nvmlDevice, SizeKiB uint64) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cSizeKiB, _ := (C.ulonglong)(SizeKiB), cgoAllocsUnknown
__ret := C.nvmlDeviceSetConfComputeUnprotectedMemSize(cnvmlDevice, cSizeKiB)
__v := (Return)(__ret)
return __v
}
// nvmlSystemSetConfComputeGpusReadyState function as declared in nvml/nvml.h
func nvmlSystemSetConfComputeGpusReadyState(IsAcceptingWork uint32) Return {
cIsAcceptingWork, _ := (C.uint)(IsAcceptingWork), cgoAllocsUnknown
__ret := C.nvmlSystemSetConfComputeGpusReadyState(cIsAcceptingWork)
__v := (Return)(__ret)
return __v
}
// nvmlSystemSetConfComputeKeyRotationThresholdInfo function as declared in nvml/nvml.h
func nvmlSystemSetConfComputeKeyRotationThresholdInfo(PKeyRotationThrInfo *ConfComputeSetKeyRotationThresholdInfo) Return {
cPKeyRotationThrInfo, _ := (*C.nvmlConfComputeSetKeyRotationThresholdInfo_t)(unsafe.Pointer(PKeyRotationThrInfo)), cgoAllocsUnknown
__ret := C.nvmlSystemSetConfComputeKeyRotationThresholdInfo(cPKeyRotationThrInfo)
__v := (Return)(__ret)
return __v
}
// nvmlSystemGetConfComputeSettings function as declared in nvml/nvml.h
func nvmlSystemGetConfComputeSettings(Settings *SystemConfComputeSettings) Return {
cSettings, _ := (*C.nvmlSystemConfComputeSettings_t)(unsafe.Pointer(Settings)), cgoAllocsUnknown
@@ -1467,6 +1601,15 @@ func nvmlDeviceGetGspFirmwareMode(nvmlDevice nvmlDevice, IsEnabled *uint32, Defa
return __v
}
// nvmlDeviceGetSramEccErrorStatus function as declared in nvml/nvml.h
func nvmlDeviceGetSramEccErrorStatus(nvmlDevice nvmlDevice, Status *EccSramErrorStatus) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cStatus, _ := (*C.nvmlEccSramErrorStatus_t)(unsafe.Pointer(Status)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetSramEccErrorStatus(cnvmlDevice, cStatus)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetAccountingMode function as declared in nvml/nvml.h
func nvmlDeviceGetAccountingMode(nvmlDevice nvmlDevice, Mode *EnableState) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
@@ -1596,6 +1739,15 @@ func nvmlDeviceGetProcessesUtilizationInfo(nvmlDevice nvmlDevice, ProcesesUtilIn
return __v
}
// nvmlDeviceGetPlatformInfo function as declared in nvml/nvml.h
func nvmlDeviceGetPlatformInfo(nvmlDevice nvmlDevice, PlatformInfo *PlatformInfo) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cPlatformInfo, _ := (*C.nvmlPlatformInfo_t)(unsafe.Pointer(PlatformInfo)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetPlatformInfo(cnvmlDevice, cPlatformInfo)
__v := (Return)(__ret)
return __v
}
// nvmlUnitSetLedState function as declared in nvml/nvml.h
func nvmlUnitSetLedState(nvmlUnit nvmlUnit, Color LedColor) Return {
cnvmlUnit, _ := *(*C.nvmlUnit_t)(unsafe.Pointer(&nvmlUnit)), cgoAllocsUnknown
@@ -1809,31 +1961,6 @@ func nvmlDeviceSetMemClkVfOffset(nvmlDevice nvmlDevice, Offset int32) Return {
return __v
}
// nvmlDeviceSetConfComputeUnprotectedMemSize function as declared in nvml/nvml.h
func nvmlDeviceSetConfComputeUnprotectedMemSize(nvmlDevice nvmlDevice, SizeKiB uint64) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cSizeKiB, _ := (C.ulonglong)(SizeKiB), cgoAllocsUnknown
__ret := C.nvmlDeviceSetConfComputeUnprotectedMemSize(cnvmlDevice, cSizeKiB)
__v := (Return)(__ret)
return __v
}
// nvmlSystemSetConfComputeGpusReadyState function as declared in nvml/nvml.h
func nvmlSystemSetConfComputeGpusReadyState(IsAcceptingWork uint32) Return {
cIsAcceptingWork, _ := (C.uint)(IsAcceptingWork), cgoAllocsUnknown
__ret := C.nvmlSystemSetConfComputeGpusReadyState(cIsAcceptingWork)
__v := (Return)(__ret)
return __v
}
// nvmlSystemSetConfComputeKeyRotationThresholdInfo function as declared in nvml/nvml.h
func nvmlSystemSetConfComputeKeyRotationThresholdInfo(PKeyRotationThrInfo *ConfComputeSetKeyRotationThresholdInfo) Return {
cPKeyRotationThrInfo, _ := (*C.nvmlConfComputeSetKeyRotationThresholdInfo_t)(unsafe.Pointer(PKeyRotationThrInfo)), cgoAllocsUnknown
__ret := C.nvmlSystemSetConfComputeKeyRotationThresholdInfo(cPKeyRotationThrInfo)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceSetAccountingMode function as declared in nvml/nvml.h
func nvmlDeviceSetAccountingMode(nvmlDevice nvmlDevice, Mode EnableState) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
@@ -1851,6 +1978,15 @@ func nvmlDeviceClearAccountingPids(nvmlDevice nvmlDevice) Return {
return __v
}
// nvmlDeviceSetPowerManagementLimit_v2 function as declared in nvml/nvml.h
func nvmlDeviceSetPowerManagementLimit_v2(nvmlDevice nvmlDevice, PowerValue *PowerValue_v2) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cPowerValue, _ := (*C.nvmlPowerValue_v2_t)(unsafe.Pointer(PowerValue)), cgoAllocsUnknown
__ret := C.nvmlDeviceSetPowerManagementLimit_v2(cnvmlDevice, cPowerValue)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetNvLinkState function as declared in nvml/nvml.h
func nvmlDeviceGetNvLinkState(nvmlDevice nvmlDevice, Link uint32, IsActive *EnableState) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
@@ -1978,6 +2114,58 @@ func nvmlDeviceGetNvLinkRemoteDeviceType(nvmlDevice nvmlDevice, Link uint32, PNv
return __v
}
// nvmlDeviceSetNvLinkDeviceLowPowerThreshold function as declared in nvml/nvml.h
func nvmlDeviceSetNvLinkDeviceLowPowerThreshold(nvmlDevice nvmlDevice, Info *NvLinkPowerThres) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cInfo, _ := (*C.nvmlNvLinkPowerThres_t)(unsafe.Pointer(Info)), cgoAllocsUnknown
__ret := C.nvmlDeviceSetNvLinkDeviceLowPowerThreshold(cnvmlDevice, cInfo)
__v := (Return)(__ret)
return __v
}
// nvmlSystemSetNvlinkBwMode function as declared in nvml/nvml.h
func nvmlSystemSetNvlinkBwMode(NvlinkBwMode uint32) Return {
cNvlinkBwMode, _ := (C.uint)(NvlinkBwMode), cgoAllocsUnknown
__ret := C.nvmlSystemSetNvlinkBwMode(cNvlinkBwMode)
__v := (Return)(__ret)
return __v
}
// nvmlSystemGetNvlinkBwMode function as declared in nvml/nvml.h
func nvmlSystemGetNvlinkBwMode(NvlinkBwMode *uint32) Return {
cNvlinkBwMode, _ := (*C.uint)(unsafe.Pointer(NvlinkBwMode)), cgoAllocsUnknown
__ret := C.nvmlSystemGetNvlinkBwMode(cNvlinkBwMode)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetNvlinkSupportedBwModes function as declared in nvml/nvml.h
func nvmlDeviceGetNvlinkSupportedBwModes(nvmlDevice nvmlDevice, SupportedBwMode *NvlinkSupportedBwModes) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cSupportedBwMode, _ := (*C.nvmlNvlinkSupportedBwModes_t)(unsafe.Pointer(SupportedBwMode)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetNvlinkSupportedBwModes(cnvmlDevice, cSupportedBwMode)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetNvlinkBwMode function as declared in nvml/nvml.h
func nvmlDeviceGetNvlinkBwMode(nvmlDevice nvmlDevice, GetBwMode *NvlinkGetBwMode) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cGetBwMode, _ := (*C.nvmlNvlinkGetBwMode_t)(unsafe.Pointer(GetBwMode)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetNvlinkBwMode(cnvmlDevice, cGetBwMode)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceSetNvlinkBwMode function as declared in nvml/nvml.h
func nvmlDeviceSetNvlinkBwMode(nvmlDevice nvmlDevice, SetBwMode *NvlinkSetBwMode) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cSetBwMode, _ := (*C.nvmlNvlinkSetBwMode_t)(unsafe.Pointer(SetBwMode)), cgoAllocsUnknown
__ret := C.nvmlDeviceSetNvlinkBwMode(cnvmlDevice, cSetBwMode)
__v := (Return)(__ret)
return __v
}
// nvmlEventSetCreate function as declared in nvml/nvml.h
func nvmlEventSetCreate(Set *nvmlEventSet) Return {
cSet, _ := (*C.nvmlEventSet_t)(unsafe.Pointer(Set)), cgoAllocsUnknown
@@ -2023,6 +2211,38 @@ func nvmlEventSetFree(Set nvmlEventSet) Return {
return __v
}
// nvmlSystemEventSetCreate function as declared in nvml/nvml.h
func nvmlSystemEventSetCreate(Request *SystemEventSetCreateRequest) Return {
cRequest, _ := (*C.nvmlSystemEventSetCreateRequest_t)(unsafe.Pointer(Request)), cgoAllocsUnknown
__ret := C.nvmlSystemEventSetCreate(cRequest)
__v := (Return)(__ret)
return __v
}
// nvmlSystemEventSetFree function as declared in nvml/nvml.h
func nvmlSystemEventSetFree(Request *SystemEventSetFreeRequest) Return {
cRequest, _ := (*C.nvmlSystemEventSetFreeRequest_t)(unsafe.Pointer(Request)), cgoAllocsUnknown
__ret := C.nvmlSystemEventSetFree(cRequest)
__v := (Return)(__ret)
return __v
}
// nvmlSystemRegisterEvents function as declared in nvml/nvml.h
func nvmlSystemRegisterEvents(Request *SystemRegisterEventRequest) Return {
cRequest, _ := (*C.nvmlSystemRegisterEventRequest_t)(unsafe.Pointer(Request)), cgoAllocsUnknown
__ret := C.nvmlSystemRegisterEvents(cRequest)
__v := (Return)(__ret)
return __v
}
// nvmlSystemEventSetWait function as declared in nvml/nvml.h
func nvmlSystemEventSetWait(Request *SystemEventSetWaitRequest) Return {
cRequest, _ := (*C.nvmlSystemEventSetWaitRequest_t)(unsafe.Pointer(Request)), cgoAllocsUnknown
__ret := C.nvmlSystemEventSetWait(cRequest)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceModifyDrainState function as declared in nvml/nvml.h
func nvmlDeviceModifyDrainState(PciInfo *PciInfo, NewState EnableState) Return {
cPciInfo, _ := (*C.nvmlPciInfo_t)(unsafe.Pointer(PciInfo)), cgoAllocsUnknown
@@ -2171,6 +2391,15 @@ func nvmlVgpuTypeGetFbReservation(nvmlVgpuTypeId nvmlVgpuTypeId, FbReservation *
return __v
}
// nvmlVgpuInstanceGetRuntimeStateSize function as declared in nvml/nvml.h
func nvmlVgpuInstanceGetRuntimeStateSize(nvmlVgpuInstance nvmlVgpuInstance, PState *VgpuRuntimeState) Return {
cnvmlVgpuInstance, _ := (C.nvmlVgpuInstance_t)(nvmlVgpuInstance), cgoAllocsUnknown
cPState, _ := (*C.nvmlVgpuRuntimeState_t)(unsafe.Pointer(PState)), cgoAllocsUnknown
__ret := C.nvmlVgpuInstanceGetRuntimeStateSize(cnvmlVgpuInstance, cPState)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceSetVgpuCapabilities function as declared in nvml/nvml.h
func nvmlDeviceSetVgpuCapabilities(nvmlDevice nvmlDevice, Capability DeviceVgpuCapability, State EnableState) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
@@ -2335,6 +2564,15 @@ func nvmlVgpuTypeGetMaxInstancesPerVm(nvmlVgpuTypeId nvmlVgpuTypeId, VgpuInstanc
return __v
}
// nvmlVgpuTypeGetBAR1Info function as declared in nvml/nvml.h
func nvmlVgpuTypeGetBAR1Info(nvmlVgpuTypeId nvmlVgpuTypeId, Bar1Info *VgpuTypeBar1Info) Return {
cnvmlVgpuTypeId, _ := (C.nvmlVgpuTypeId_t)(nvmlVgpuTypeId), cgoAllocsUnknown
cBar1Info, _ := (*C.nvmlVgpuTypeBar1Info_t)(unsafe.Pointer(Bar1Info)), cgoAllocsUnknown
__ret := C.nvmlVgpuTypeGetBAR1Info(cnvmlVgpuTypeId, cBar1Info)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetActiveVgpus function as declared in nvml/nvml.h
func nvmlDeviceGetActiveVgpus(nvmlDevice nvmlDevice, VgpuCount *uint32, VgpuInstances *nvmlVgpuInstance) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
@@ -2518,6 +2756,86 @@ func nvmlVgpuInstanceGetMdevUUID(nvmlVgpuInstance nvmlVgpuInstance, MdevUuid *by
return __v
}
// nvmlGpuInstanceGetCreatableVgpus function as declared in nvml/nvml.h
func nvmlGpuInstanceGetCreatableVgpus(nvmlGpuInstance nvmlGpuInstance, PVgpus *VgpuTypeIdInfo) Return {
cnvmlGpuInstance, _ := *(*C.nvmlGpuInstance_t)(unsafe.Pointer(&nvmlGpuInstance)), cgoAllocsUnknown
cPVgpus, _ := (*C.nvmlVgpuTypeIdInfo_t)(unsafe.Pointer(PVgpus)), cgoAllocsUnknown
__ret := C.nvmlGpuInstanceGetCreatableVgpus(cnvmlGpuInstance, cPVgpus)
__v := (Return)(__ret)
return __v
}
// nvmlVgpuTypeGetMaxInstancesPerGpuInstance function as declared in nvml/nvml.h
func nvmlVgpuTypeGetMaxInstancesPerGpuInstance(PMaxInstance *VgpuTypeMaxInstance) Return {
cPMaxInstance, _ := (*C.nvmlVgpuTypeMaxInstance_t)(unsafe.Pointer(PMaxInstance)), cgoAllocsUnknown
__ret := C.nvmlVgpuTypeGetMaxInstancesPerGpuInstance(cPMaxInstance)
__v := (Return)(__ret)
return __v
}
// nvmlGpuInstanceGetActiveVgpus function as declared in nvml/nvml.h
func nvmlGpuInstanceGetActiveVgpus(nvmlGpuInstance nvmlGpuInstance, PVgpuInstanceInfo *ActiveVgpuInstanceInfo) Return {
cnvmlGpuInstance, _ := *(*C.nvmlGpuInstance_t)(unsafe.Pointer(&nvmlGpuInstance)), cgoAllocsUnknown
cPVgpuInstanceInfo, _ := (*C.nvmlActiveVgpuInstanceInfo_t)(unsafe.Pointer(PVgpuInstanceInfo)), cgoAllocsUnknown
__ret := C.nvmlGpuInstanceGetActiveVgpus(cnvmlGpuInstance, cPVgpuInstanceInfo)
__v := (Return)(__ret)
return __v
}
// nvmlGpuInstanceSetVgpuSchedulerState function as declared in nvml/nvml.h
func nvmlGpuInstanceSetVgpuSchedulerState(nvmlGpuInstance nvmlGpuInstance, PScheduler *VgpuSchedulerState) Return {
cnvmlGpuInstance, _ := *(*C.nvmlGpuInstance_t)(unsafe.Pointer(&nvmlGpuInstance)), cgoAllocsUnknown
cPScheduler, _ := (*C.nvmlVgpuSchedulerState_t)(unsafe.Pointer(PScheduler)), cgoAllocsUnknown
__ret := C.nvmlGpuInstanceSetVgpuSchedulerState(cnvmlGpuInstance, cPScheduler)
__v := (Return)(__ret)
return __v
}
// nvmlGpuInstanceGetVgpuSchedulerState function as declared in nvml/nvml.h
func nvmlGpuInstanceGetVgpuSchedulerState(nvmlGpuInstance nvmlGpuInstance, PSchedulerStateInfo *VgpuSchedulerStateInfo) Return {
cnvmlGpuInstance, _ := *(*C.nvmlGpuInstance_t)(unsafe.Pointer(&nvmlGpuInstance)), cgoAllocsUnknown
cPSchedulerStateInfo, _ := (*C.nvmlVgpuSchedulerStateInfo_t)(unsafe.Pointer(PSchedulerStateInfo)), cgoAllocsUnknown
__ret := C.nvmlGpuInstanceGetVgpuSchedulerState(cnvmlGpuInstance, cPSchedulerStateInfo)
__v := (Return)(__ret)
return __v
}
// nvmlGpuInstanceGetVgpuSchedulerLog function as declared in nvml/nvml.h
func nvmlGpuInstanceGetVgpuSchedulerLog(nvmlGpuInstance nvmlGpuInstance, PSchedulerLogInfo *VgpuSchedulerLogInfo) Return {
cnvmlGpuInstance, _ := *(*C.nvmlGpuInstance_t)(unsafe.Pointer(&nvmlGpuInstance)), cgoAllocsUnknown
cPSchedulerLogInfo, _ := (*C.nvmlVgpuSchedulerLogInfo_t)(unsafe.Pointer(PSchedulerLogInfo)), cgoAllocsUnknown
__ret := C.nvmlGpuInstanceGetVgpuSchedulerLog(cnvmlGpuInstance, cPSchedulerLogInfo)
__v := (Return)(__ret)
return __v
}
// nvmlGpuInstanceGetVgpuTypeCreatablePlacements function as declared in nvml/nvml.h
func nvmlGpuInstanceGetVgpuTypeCreatablePlacements(nvmlGpuInstance nvmlGpuInstance, PCreatablePlacementInfo *VgpuCreatablePlacementInfo) Return {
cnvmlGpuInstance, _ := *(*C.nvmlGpuInstance_t)(unsafe.Pointer(&nvmlGpuInstance)), cgoAllocsUnknown
cPCreatablePlacementInfo, _ := (*C.nvmlVgpuCreatablePlacementInfo_t)(unsafe.Pointer(PCreatablePlacementInfo)), cgoAllocsUnknown
__ret := C.nvmlGpuInstanceGetVgpuTypeCreatablePlacements(cnvmlGpuInstance, cPCreatablePlacementInfo)
__v := (Return)(__ret)
return __v
}
// nvmlGpuInstanceGetVgpuHeterogeneousMode function as declared in nvml/nvml.h
func nvmlGpuInstanceGetVgpuHeterogeneousMode(nvmlGpuInstance nvmlGpuInstance, PHeterogeneousMode *VgpuHeterogeneousMode) Return {
cnvmlGpuInstance, _ := *(*C.nvmlGpuInstance_t)(unsafe.Pointer(&nvmlGpuInstance)), cgoAllocsUnknown
cPHeterogeneousMode, _ := (*C.nvmlVgpuHeterogeneousMode_t)(unsafe.Pointer(PHeterogeneousMode)), cgoAllocsUnknown
__ret := C.nvmlGpuInstanceGetVgpuHeterogeneousMode(cnvmlGpuInstance, cPHeterogeneousMode)
__v := (Return)(__ret)
return __v
}
// nvmlGpuInstanceSetVgpuHeterogeneousMode function as declared in nvml/nvml.h
func nvmlGpuInstanceSetVgpuHeterogeneousMode(nvmlGpuInstance nvmlGpuInstance, PHeterogeneousMode *VgpuHeterogeneousMode) Return {
cnvmlGpuInstance, _ := *(*C.nvmlGpuInstance_t)(unsafe.Pointer(&nvmlGpuInstance)), cgoAllocsUnknown
cPHeterogeneousMode, _ := (*C.nvmlVgpuHeterogeneousMode_t)(unsafe.Pointer(PHeterogeneousMode)), cgoAllocsUnknown
__ret := C.nvmlGpuInstanceSetVgpuHeterogeneousMode(cnvmlGpuInstance, cPHeterogeneousMode)
__v := (Return)(__ret)
return __v
}
// nvmlVgpuInstanceGetMetadata function as declared in nvml/nvml.h
func nvmlVgpuInstanceGetMetadata(nvmlVgpuInstance nvmlVgpuInstance, nvmlVgpuMetadata *nvmlVgpuMetadata, BufferSize *uint32) Return {
cnvmlVgpuInstance, _ := (C.nvmlVgpuInstance_t)(nvmlVgpuInstance), cgoAllocsUnknown
@@ -3062,45 +3380,74 @@ func nvmlGpmSetStreamingEnabled(nvmlDevice nvmlDevice, State uint32) Return {
return __v
}
-// nvmlDeviceSetNvLinkDeviceLowPowerThreshold function as declared in nvml/nvml.h
-func nvmlDeviceSetNvLinkDeviceLowPowerThreshold(nvmlDevice nvmlDevice, Info *NvLinkPowerThres) Return {
+// nvmlDeviceGetCapabilities function as declared in nvml/nvml.h
+func nvmlDeviceGetCapabilities(nvmlDevice nvmlDevice, Caps *DeviceCapabilities) Return {
 cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
-cInfo, _ := (*C.nvmlNvLinkPowerThres_t)(unsafe.Pointer(Info)), cgoAllocsUnknown
-__ret := C.nvmlDeviceSetNvLinkDeviceLowPowerThreshold(cnvmlDevice, cInfo)
+cCaps, _ := (*C.nvmlDeviceCapabilities_t)(unsafe.Pointer(Caps)), cgoAllocsUnknown
+__ret := C.nvmlDeviceGetCapabilities(cnvmlDevice, cCaps)
 __v := (Return)(__ret)
 return __v
 }
// nvmlSystemSetNvlinkBwMode function as declared in nvml/nvml.h
func nvmlSystemSetNvlinkBwMode(NvlinkBwMode uint32) Return {
cNvlinkBwMode, _ := (C.uint)(NvlinkBwMode), cgoAllocsUnknown
__ret := C.nvmlSystemSetNvlinkBwMode(cNvlinkBwMode)
__v := (Return)(__ret)
return __v
}
// nvmlSystemGetNvlinkBwMode function as declared in nvml/nvml.h
func nvmlSystemGetNvlinkBwMode(NvlinkBwMode *uint32) Return {
cNvlinkBwMode, _ := (*C.uint)(unsafe.Pointer(NvlinkBwMode)), cgoAllocsUnknown
__ret := C.nvmlSystemGetNvlinkBwMode(cNvlinkBwMode)
__v := (Return)(__ret)
return __v
}
-// nvmlDeviceSetPowerManagementLimit_v2 function as declared in nvml/nvml.h
-func nvmlDeviceSetPowerManagementLimit_v2(nvmlDevice nvmlDevice, PowerValue *PowerValue_v2) Return {
+// nvmlDeviceWorkloadPowerProfileGetProfilesInfo function as declared in nvml/nvml.h
+func nvmlDeviceWorkloadPowerProfileGetProfilesInfo(nvmlDevice nvmlDevice, ProfilesInfo *WorkloadPowerProfileProfilesInfo) Return {
 cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
-cPowerValue, _ := (*C.nvmlPowerValue_v2_t)(unsafe.Pointer(PowerValue)), cgoAllocsUnknown
-__ret := C.nvmlDeviceSetPowerManagementLimit_v2(cnvmlDevice, cPowerValue)
+cProfilesInfo, _ := (*C.nvmlWorkloadPowerProfileProfilesInfo_t)(unsafe.Pointer(ProfilesInfo)), cgoAllocsUnknown
+__ret := C.nvmlDeviceWorkloadPowerProfileGetProfilesInfo(cnvmlDevice, cProfilesInfo)
 __v := (Return)(__ret)
 return __v
 }
-// nvmlDeviceGetSramEccErrorStatus function as declared in nvml/nvml.h
-func nvmlDeviceGetSramEccErrorStatus(nvmlDevice nvmlDevice, Status *EccSramErrorStatus) Return {
+// nvmlDeviceWorkloadPowerProfileGetCurrentProfiles function as declared in nvml/nvml.h
+func nvmlDeviceWorkloadPowerProfileGetCurrentProfiles(nvmlDevice nvmlDevice, CurrentProfiles *WorkloadPowerProfileCurrentProfiles) Return {
 cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
-cStatus, _ := (*C.nvmlEccSramErrorStatus_t)(unsafe.Pointer(Status)), cgoAllocsUnknown
-__ret := C.nvmlDeviceGetSramEccErrorStatus(cnvmlDevice, cStatus)
+cCurrentProfiles, _ := (*C.nvmlWorkloadPowerProfileCurrentProfiles_t)(unsafe.Pointer(CurrentProfiles)), cgoAllocsUnknown
+__ret := C.nvmlDeviceWorkloadPowerProfileGetCurrentProfiles(cnvmlDevice, cCurrentProfiles)
 __v := (Return)(__ret)
 return __v
 }
// nvmlDeviceWorkloadPowerProfileSetRequestedProfiles function as declared in nvml/nvml.h
func nvmlDeviceWorkloadPowerProfileSetRequestedProfiles(nvmlDevice nvmlDevice, RequestedProfiles *WorkloadPowerProfileRequestedProfiles) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cRequestedProfiles, _ := (*C.nvmlWorkloadPowerProfileRequestedProfiles_t)(unsafe.Pointer(RequestedProfiles)), cgoAllocsUnknown
__ret := C.nvmlDeviceWorkloadPowerProfileSetRequestedProfiles(cnvmlDevice, cRequestedProfiles)
__v := (Return)(__ret)
return __v
}
// nvmlDeviceWorkloadPowerProfileClearRequestedProfiles function as declared in nvml/nvml.h
func nvmlDeviceWorkloadPowerProfileClearRequestedProfiles(nvmlDevice nvmlDevice, RequestedProfiles *WorkloadPowerProfileRequestedProfiles) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cRequestedProfiles, _ := (*C.nvmlWorkloadPowerProfileRequestedProfiles_t)(unsafe.Pointer(RequestedProfiles)), cgoAllocsUnknown
__ret := C.nvmlDeviceWorkloadPowerProfileClearRequestedProfiles(cnvmlDevice, cRequestedProfiles)
__v := (Return)(__ret)
return __v
}
// nvmlDevicePowerSmoothingActivatePresetProfile function as declared in nvml/nvml.h
func nvmlDevicePowerSmoothingActivatePresetProfile(nvmlDevice nvmlDevice, Profile *PowerSmoothingProfile) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cProfile, _ := (*C.nvmlPowerSmoothingProfile_t)(unsafe.Pointer(Profile)), cgoAllocsUnknown
__ret := C.nvmlDevicePowerSmoothingActivatePresetProfile(cnvmlDevice, cProfile)
__v := (Return)(__ret)
return __v
}
// nvmlDevicePowerSmoothingUpdatePresetProfileParam function as declared in nvml/nvml.h
func nvmlDevicePowerSmoothingUpdatePresetProfileParam(nvmlDevice nvmlDevice, Profile *PowerSmoothingProfile) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cProfile, _ := (*C.nvmlPowerSmoothingProfile_t)(unsafe.Pointer(Profile)), cgoAllocsUnknown
__ret := C.nvmlDevicePowerSmoothingUpdatePresetProfileParam(cnvmlDevice, cProfile)
__v := (Return)(__ret)
return __v
}
// nvmlDevicePowerSmoothingSetState function as declared in nvml/nvml.h
func nvmlDevicePowerSmoothingSetState(nvmlDevice nvmlDevice, State *PowerSmoothingState) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cState, _ := (*C.nvmlPowerSmoothingState_t)(unsafe.Pointer(State)), cgoAllocsUnknown
__ret := C.nvmlDevicePowerSmoothingSetState(cnvmlDevice, cState)
__v := (Return)(__ret)
return __v
}
@@ -3308,3 +3655,13 @@ func nvmlVgpuInstanceGetLicenseInfo_v1(nvmlVgpuInstance nvmlVgpuInstance, Licens
__v := (Return)(__ret)
return __v
}
// nvmlDeviceGetDriverModel_v1 function as declared in nvml/nvml.h
func nvmlDeviceGetDriverModel_v1(nvmlDevice nvmlDevice, Current *DriverModel, Pending *DriverModel) Return {
cnvmlDevice, _ := *(*C.nvmlDevice_t)(unsafe.Pointer(&nvmlDevice)), cgoAllocsUnknown
cCurrent, _ := (*C.nvmlDriverModel_t)(unsafe.Pointer(Current)), cgoAllocsUnknown
cPending, _ := (*C.nvmlDriverModel_t)(unsafe.Pointer(Pending)), cgoAllocsUnknown
__ret := C.nvmlDeviceGetDriverModel(cnvmlDevice, cCurrent, cPending)
__v := (Return)(__ret)
return __v
}
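The generated bindings above all follow one pattern: reinterpret a pointer to the Go-side struct as a pointer to the layout-identical C-side struct via `unsafe.Pointer`, with no copy. A minimal self-contained sketch of that cast, using hypothetical stand-in types (no cgo required):

```go
package main

import (
	"fmt"
	"unsafe"
)

// Two layout-identical struct types standing in for the exported
// Go-side and internal C-side versions of an NVML struct.
// These names are illustrative, not part of the real API.
type goSideState struct {
	Version uint32
	State   uint32
}

type cSideState struct {
	version uint32
	state   uint32
}

// setState mirrors the generated binding pattern: reinterpret the
// Go struct pointer as the C struct pointer without copying, so the
// callee sees the same memory the caller filled in.
func setState(s *goSideState) uint32 {
	c := (*cSideState)(unsafe.Pointer(s))
	return c.state
}

func main() {
	s := goSideState{Version: 1, State: 42}
	fmt.Println(setState(&s)) // 42
}
```

The cast is only safe because both types have identical field sizes, order, and alignment, which is exactly what the generator guarantees for each `nvmlXxx_t` pair.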

File diff suppressed because it is too large.


@@ -88,31 +88,31 @@ func (l *library) SystemGetConfComputeCapabilities() (ConfComputeSystemCaps, Ret
}
 // nvml.SystemGetConfComputeState()
-func SystemGetConfComputeState() (ConfComputeSystemState, Return) {
+func (l *library) SystemGetConfComputeState() (ConfComputeSystemState, Return) {
 var state ConfComputeSystemState
 ret := nvmlSystemGetConfComputeState(&state)
 return state, ret
 }
 // nvml.SystemGetConfComputeGpusReadyState()
-func SystemGetConfComputeGpusReadyState() (uint32, Return) {
+func (l *library) SystemGetConfComputeGpusReadyState() (uint32, Return) {
 var isAcceptingWork uint32
 ret := nvmlSystemGetConfComputeGpusReadyState(&isAcceptingWork)
 return isAcceptingWork, ret
 }
 // nvml.SystemSetConfComputeGpusReadyState()
-func SystemSetConfComputeGpusReadyState(isAcceptingWork uint32) Return {
+func (l *library) SystemSetConfComputeGpusReadyState(isAcceptingWork uint32) Return {
 return nvmlSystemSetConfComputeGpusReadyState(isAcceptingWork)
 }
 // nvml.SystemSetNvlinkBwMode()
-func SystemSetNvlinkBwMode(nvlinkBwMode uint32) Return {
+func (l *library) SystemSetNvlinkBwMode(nvlinkBwMode uint32) Return {
 return nvmlSystemSetNvlinkBwMode(nvlinkBwMode)
 }
 // nvml.SystemGetNvlinkBwMode()
-func SystemGetNvlinkBwMode() (uint32, Return) {
+func (l *library) SystemGetNvlinkBwMode() (uint32, Return) {
 var nvlinkBwMode uint32
 ret := nvmlSystemGetNvlinkBwMode(&nvlinkBwMode)
 return nvlinkBwMode, ret
@@ -138,3 +138,11 @@ func (l *library) SystemGetConfComputeSettings() (SystemConfComputeSettings, Ret
func (l *library) SystemSetConfComputeKeyRotationThresholdInfo(keyRotationThresholdInfo ConfComputeSetKeyRotationThresholdInfo) Return {
return nvmlSystemSetConfComputeKeyRotationThresholdInfo(&keyRotationThresholdInfo)
}
// nvml.SystemGetDriverBranch()
func (l *library) SystemGetDriverBranch() (SystemDriverBranchInfo, Return) {
var branchInfo SystemDriverBranchInfo
branchInfo.Version = STRUCT_VERSION(branchInfo, 1)
ret := nvmlSystemGetDriverBranch(&branchInfo, SYSTEM_DRIVER_VERSION_BUFFER_SIZE)
return branchInfo, ret
}


@@ -9,6 +9,10 @@ type nvmlDevice struct {
Handle *_Ctype_struct_nvmlDevice_st
}
type nvmlGpuInstance struct {
Handle *_Ctype_struct_nvmlGpuInstance_st
}
type PciInfoExt_v1 struct {
Version uint32
Domain uint32
@@ -182,6 +186,58 @@ type GpuThermalSettings struct {
Sensor [3]GpuThermalSettingsSensor
}
type CoolerInfo_v1 struct {
Version uint32
Index uint32
SignalType uint32
Target uint32
}
type CoolerInfo struct {
Version uint32
Index uint32
SignalType uint32
Target uint32
}
const sizeofUUIDValue = unsafe.Sizeof([41]byte{})
type UUIDValue [sizeofUUIDValue]byte
type UUID_v1 struct {
Version uint32
Type uint32
Value [41]byte
Pad_cgo_0 [3]byte
}
type UUID struct {
Version uint32
Type uint32
Value [41]byte
Pad_cgo_0 [3]byte
}
type DramEncryptionInfo_v1 struct {
Version uint32
EncryptionState uint32
}
type DramEncryptionInfo struct {
Version uint32
EncryptionState uint32
}
type MarginTemperature_v1 struct {
Version uint32
MarginTemperature int32
}
type MarginTemperature struct {
Version uint32
MarginTemperature int32
}
type ClkMonFaultInfo struct {
ClkApiDomain uint32
ClkDomainFaultMask uint32
@@ -193,6 +249,189 @@ type ClkMonStatus struct {
ClkMonList [32]ClkMonFaultInfo
}
type ClockOffset_v1 struct {
Version uint32
Type uint32
Pstate uint32
ClockOffsetMHz int32
MinClockOffsetMHz int32
MaxClockOffsetMHz int32
}
type ClockOffset struct {
Version uint32
Type uint32
Pstate uint32
ClockOffsetMHz int32
MinClockOffsetMHz int32
MaxClockOffsetMHz int32
}
type FanSpeedInfo_v1 struct {
Version uint32
Fan uint32
Speed uint32
}
type FanSpeedInfo struct {
Version uint32
Fan uint32
Speed uint32
}
type DevicePerfModes_v1 struct {
Version uint32
Str [2048]int8
}
type DevicePerfModes struct {
Version uint32
Str [2048]int8
}
type DeviceCurrentClockFreqs_v1 struct {
Version uint32
Str [2048]int8
}
type DeviceCurrentClockFreqs struct {
Version uint32
Str [2048]int8
}
type ProcessUtilizationSample struct {
Pid uint32
TimeStamp uint64
SmUtil uint32
MemUtil uint32
EncUtil uint32
DecUtil uint32
}
type ProcessUtilizationInfo_v1 struct {
TimeStamp uint64
Pid uint32
SmUtil uint32
MemUtil uint32
EncUtil uint32
DecUtil uint32
JpgUtil uint32
OfaUtil uint32
Pad_cgo_0 [4]byte
}
type ProcessesUtilizationInfo_v1 struct {
Version uint32
ProcessSamplesCount uint32
LastSeenTimeStamp uint64
ProcUtilArray *ProcessUtilizationInfo_v1
}
type ProcessesUtilizationInfo struct {
Version uint32
ProcessSamplesCount uint32
LastSeenTimeStamp uint64
ProcUtilArray *ProcessUtilizationInfo_v1
}
type EccSramErrorStatus_v1 struct {
Version uint32
AggregateUncParity uint64
AggregateUncSecDed uint64
AggregateCor uint64
VolatileUncParity uint64
VolatileUncSecDed uint64
VolatileCor uint64
AggregateUncBucketL2 uint64
AggregateUncBucketSm uint64
AggregateUncBucketPcie uint64
AggregateUncBucketMcu uint64
AggregateUncBucketOther uint64
BThresholdExceeded uint32
Pad_cgo_0 [4]byte
}
type EccSramErrorStatus struct {
Version uint32
AggregateUncParity uint64
AggregateUncSecDed uint64
AggregateCor uint64
VolatileUncParity uint64
VolatileUncSecDed uint64
VolatileCor uint64
AggregateUncBucketL2 uint64
AggregateUncBucketSm uint64
AggregateUncBucketPcie uint64
AggregateUncBucketMcu uint64
AggregateUncBucketOther uint64
BThresholdExceeded uint32
Pad_cgo_0 [4]byte
}
type PlatformInfo_v1 struct {
Version uint32
IbGuid [16]uint8
RackGuid [16]uint8
ChassisPhysicalSlotNumber uint8
ComputeSlotIndex uint8
NodeIndex uint8
PeerType uint8
ModuleId uint8
Pad_cgo_0 [3]byte
}
type PlatformInfo_v2 struct {
Version uint32
IbGuid [16]uint8
ChassisSerialNumber [16]uint8
SlotNumber uint8
TrayIndex uint8
HostId uint8
PeerType uint8
ModuleId uint8
Pad_cgo_0 [3]byte
}
type PlatformInfo struct {
Version uint32
IbGuid [16]uint8
ChassisSerialNumber [16]uint8
SlotNumber uint8
TrayIndex uint8
HostId uint8
PeerType uint8
ModuleId uint8
Pad_cgo_0 [3]byte
}
type DeviceArchitecture uint32
type BusType uint32
type FanControlPolicy uint32
type PowerSource uint32
type GpuDynamicPstatesInfoUtilization struct {
BIsPresent uint32
Percentage uint32
IncThreshold uint32
DecThreshold uint32
}
type GpuDynamicPstatesInfo struct {
Flags uint32
Utilization [8]GpuDynamicPstatesInfoUtilization
}
type PowerScopeType byte
type PowerValue_v2 struct {
Version uint32
PowerScope uint8
PowerValueMw uint32
}
type nvmlVgpuTypeId uint32
type nvmlVgpuInstance uint32
@@ -224,11 +463,32 @@ type VgpuPlacementList_v1 struct {
PlacementIds *uint32
}
type VgpuPlacementList_v2 struct {
Version uint32
PlacementSize uint32
Count uint32
PlacementIds *uint32
Mode uint32
Pad_cgo_0 [4]byte
}
type VgpuPlacementList struct {
Version uint32
PlacementSize uint32
Count uint32
PlacementIds *uint32
Mode uint32
Pad_cgo_0 [4]byte
}
type VgpuTypeBar1Info_v1 struct {
Version uint32
Bar1Size uint64
}
type VgpuTypeBar1Info struct {
Version uint32
Bar1Size uint64
}
type VgpuInstanceUtilizationSample struct {
@@ -306,6 +566,16 @@ type VgpuProcessesUtilizationInfo struct {
VgpuProcUtilArray *VgpuProcessUtilizationInfo_v1
}
type VgpuRuntimeState_v1 struct {
Version uint32
Size uint64
}
type VgpuRuntimeState struct {
Version uint32
Size uint64
}
type VgpuSchedulerParamsVgpuSchedDataWithARR struct {
AvgFactor uint32
Timeslice uint32
@@ -390,41 +660,6 @@ type VgpuLicenseInfo struct {
CurrentState uint32
}
-type ProcessUtilizationSample struct {
-Pid uint32
-TimeStamp uint64
-SmUtil uint32
-MemUtil uint32
-EncUtil uint32
-DecUtil uint32
-}
-type ProcessUtilizationInfo_v1 struct {
-TimeStamp uint64
-Pid uint32
-SmUtil uint32
-MemUtil uint32
-EncUtil uint32
-DecUtil uint32
-JpgUtil uint32
-OfaUtil uint32
-Pad_cgo_0 [4]byte
-}
-type ProcessesUtilizationInfo_v1 struct {
-Version uint32
-ProcessSamplesCount uint32
-LastSeenTimeStamp uint64
-ProcUtilArray *ProcessUtilizationInfo_v1
-}
-type ProcessesUtilizationInfo struct {
-Version uint32
-ProcessSamplesCount uint32
-LastSeenTimeStamp uint64
-ProcUtilArray *ProcessUtilizationInfo_v1
-}
type GridLicenseExpiry struct {
Year uint32
Month uint16
@@ -451,58 +686,114 @@ type GridLicensableFeatures struct {
GridLicensableFeatures [3]GridLicensableFeature
}
-type EccSramErrorStatus_v1 struct {
-Version uint32
-AggregateUncParity uint64
-AggregateUncSecDed uint64
-AggregateCor uint64
-VolatileUncParity uint64
-VolatileUncSecDed uint64
-VolatileCor uint64
-AggregateUncBucketL2 uint64
-AggregateUncBucketSm uint64
-AggregateUncBucketPcie uint64
-AggregateUncBucketMcu uint64
-AggregateUncBucketOther uint64
-BThresholdExceeded uint32
-Pad_cgo_0 [4]byte
+type VgpuTypeIdInfo_v1 struct {
+Version uint32
+VgpuCount uint32
+VgpuTypeIds *uint32
 }
-type EccSramErrorStatus struct {
-Version uint32
-AggregateUncParity uint64
-AggregateUncSecDed uint64
-AggregateCor uint64
-VolatileUncParity uint64
-VolatileUncSecDed uint64
-VolatileCor uint64
-AggregateUncBucketL2 uint64
-AggregateUncBucketSm uint64
-AggregateUncBucketPcie uint64
-AggregateUncBucketMcu uint64
-AggregateUncBucketOther uint64
-BThresholdExceeded uint32
-Pad_cgo_0 [4]byte
+type VgpuTypeIdInfo struct {
+Version uint32
+VgpuCount uint32
+VgpuTypeIds *uint32
 }
-type DeviceArchitecture uint32
-type BusType uint32
-type FanControlPolicy uint32
-type PowerSource uint32
-type GpuDynamicPstatesInfoUtilization struct {
-BIsPresent uint32
-Percentage uint32
-IncThreshold uint32
-DecThreshold uint32
+type VgpuTypeMaxInstance_v1 struct {
+Version uint32
+VgpuTypeId uint32
+MaxInstancePerGI uint32
 }
-type GpuDynamicPstatesInfo struct {
-Flags uint32
-Utilization [8]GpuDynamicPstatesInfoUtilization
+type VgpuTypeMaxInstance struct {
+Version uint32
+VgpuTypeId uint32
+MaxInstancePerGI uint32
 }
type ActiveVgpuInstanceInfo_v1 struct {
Version uint32
VgpuCount uint32
VgpuInstances *uint32
}
type ActiveVgpuInstanceInfo struct {
Version uint32
VgpuCount uint32
VgpuInstances *uint32
}
type VgpuSchedulerState_v1 struct {
Version uint32
EngineId uint32
SchedulerPolicy uint32
EnableARRMode uint32
SchedulerParams [8]byte
}
type VgpuSchedulerState struct {
Version uint32
EngineId uint32
SchedulerPolicy uint32
EnableARRMode uint32
SchedulerParams [8]byte
}
type VgpuSchedulerStateInfo_v1 struct {
Version uint32
EngineId uint32
SchedulerPolicy uint32
ArrMode uint32
SchedulerParams [8]byte
}
type VgpuSchedulerStateInfo struct {
Version uint32
EngineId uint32
SchedulerPolicy uint32
ArrMode uint32
SchedulerParams [8]byte
}
type VgpuSchedulerLogInfo_v1 struct {
Version uint32
EngineId uint32
SchedulerPolicy uint32
ArrMode uint32
SchedulerParams [8]byte
EntriesCount uint32
LogEntries [200]VgpuSchedulerLogEntry
}
type VgpuSchedulerLogInfo struct {
Version uint32
EngineId uint32
SchedulerPolicy uint32
ArrMode uint32
SchedulerParams [8]byte
EntriesCount uint32
LogEntries [200]VgpuSchedulerLogEntry
}
type VgpuCreatablePlacementInfo_v1 struct {
Version uint32
VgpuTypeId uint32
Count uint32
PlacementIds *uint32
PlacementSize uint32
Pad_cgo_0 [4]byte
}
type VgpuCreatablePlacementInfo struct {
Version uint32
VgpuTypeId uint32
Count uint32
PlacementIds *uint32
PlacementSize uint32
Pad_cgo_0 [4]byte
}
type NvLinkPowerThres struct {
LowPwrThreshold uint32
}
type FieldValue struct {
@@ -565,6 +856,66 @@ type nvmlEventData struct {
ComputeInstanceId uint32
}
type SystemEventSet struct {
Handle *_Ctype_struct_nvmlSystemEventSet_st
}
type SystemEventSetCreateRequest_v1 struct {
Version uint32
Set SystemEventSet
}
type SystemEventSetCreateRequest struct {
Version uint32
Set SystemEventSet
}
type SystemEventSetFreeRequest_v1 struct {
Version uint32
Set SystemEventSet
}
type SystemEventSetFreeRequest struct {
Version uint32
Set SystemEventSet
}
type SystemRegisterEventRequest_v1 struct {
Version uint32
EventTypes uint64
Set SystemEventSet
}
type SystemRegisterEventRequest struct {
Version uint32
EventTypes uint64
Set SystemEventSet
}
type SystemEventData_v1 struct {
EventType uint64
GpuId uint32
Pad_cgo_0 [4]byte
}
type SystemEventSetWaitRequest_v1 struct {
Version uint32
Timeoutms uint32
Set SystemEventSet
Data *SystemEventData_v1
DataSize uint32
NumEvent uint32
}
type SystemEventSetWaitRequest struct {
Version uint32
Timeoutms uint32
Set SystemEventSet
Data *SystemEventData_v1
DataSize uint32
NumEvent uint32
}
type AccountingStats struct {
GpuUtilization uint32
MemoryUtilization uint32
@@ -703,16 +1054,70 @@ type GpuFabricInfoV struct {
HealthMask uint32
}
-type PowerScopeType byte
+type SystemDriverBranchInfo_v1 struct {
+Version uint32
+Branch [80]int8
+}
-type PowerValue_v2 struct {
-Version uint32
-PowerScope uint8
-PowerValueMw uint32
+type SystemDriverBranchInfo struct {
+Version uint32
+Branch [80]int8
 }
type AffinityScope uint32
type Temperature_v1 struct {
Version uint32
SensorType uint32
Temperature int32
}
type Temperature struct {
Version uint32
SensorType uint32
Temperature int32
}
type NvlinkSupportedBwModes_v1 struct {
Version uint32
BwModes [23]uint8
TotalBwModes uint8
}
type NvlinkSupportedBwModes struct {
Version uint32
BwModes [23]uint8
TotalBwModes uint8
}
type NvlinkGetBwMode_v1 struct {
Version uint32
BIsBest uint32
BwMode uint8
Pad_cgo_0 [3]byte
}
type NvlinkGetBwMode struct {
Version uint32
BIsBest uint32
BwMode uint8
Pad_cgo_0 [3]byte
}
type NvlinkSetBwMode_v1 struct {
Version uint32
BSetBest uint32
BwMode uint8
Pad_cgo_0 [3]byte
}
type NvlinkSetBwMode struct {
Version uint32
BSetBest uint32
BwMode uint8
Pad_cgo_0 [3]byte
}
type VgpuVersion struct {
MinVersion uint32
MaxVersion uint32
@@ -811,10 +1216,6 @@ type nvmlGpuInstanceInfo struct {
Placement GpuInstancePlacement
}
-type nvmlGpuInstance struct {
-Handle *_Ctype_struct_nvmlGpuInstance_st
-}
type ComputeInstancePlacement struct {
Start uint32
Size uint32
@@ -895,7 +1296,7 @@ type nvmlGpmMetricsGetType struct {
NumMetrics uint32
Sample1 nvmlGpmSample
Sample2 nvmlGpmSample
-Metrics [98]GpmMetric
+Metrics [210]GpmMetric
}
type GpmSupport struct {
@@ -903,6 +1304,90 @@ type GpmSupport struct {
IsSupportedDevice uint32
}
-type NvLinkPowerThres struct {
-LowPwrThreshold uint32
+type DeviceCapabilities_v1 struct {
+Version uint32
+CapMask uint32
 }
type DeviceCapabilities struct {
Version uint32
CapMask uint32
}
type Mask255 struct {
Mask [8]uint32
}
type WorkloadPowerProfileInfo_v1 struct {
Version uint32
ProfileId uint32
Priority uint32
ConflictingMask Mask255
}
type WorkloadPowerProfileInfo struct {
Version uint32
ProfileId uint32
Priority uint32
ConflictingMask Mask255
}
type WorkloadPowerProfileProfilesInfo_v1 struct {
Version uint32
PerfProfilesMask Mask255
PerfProfile [255]WorkloadPowerProfileInfo
}
type WorkloadPowerProfileProfilesInfo struct {
Version uint32
PerfProfilesMask Mask255
PerfProfile [255]WorkloadPowerProfileInfo
}
type WorkloadPowerProfileCurrentProfiles_v1 struct {
Version uint32
PerfProfilesMask Mask255
RequestedProfilesMask Mask255
EnforcedProfilesMask Mask255
}
type WorkloadPowerProfileCurrentProfiles struct {
Version uint32
PerfProfilesMask Mask255
RequestedProfilesMask Mask255
EnforcedProfilesMask Mask255
}
type WorkloadPowerProfileRequestedProfiles_v1 struct {
Version uint32
RequestedProfilesMask Mask255
}
type WorkloadPowerProfileRequestedProfiles struct {
Version uint32
RequestedProfilesMask Mask255
}
type PowerSmoothingProfile_v1 struct {
Version uint32
ProfileId uint32
ParamId uint32
Value float64
}
type PowerSmoothingProfile struct {
Version uint32
ProfileId uint32
ParamId uint32
Value float64
}
type PowerSmoothingState_v1 struct {
Version uint32
State uint32
}
type PowerSmoothingState struct {
Version uint32
State uint32
}
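The `Mask255` type used by the workload power profile structs above packs 255 capability bits into eight 32-bit words. A minimal self-contained sketch of how such a mask is indexed, assuming the usual bit-`i`-in-word-`i/32` layout (the `Set`/`Test` helpers are illustrative, not part of the generated bindings):

```go
package main

import "fmt"

// Mask255 mirrors the NVML 255-bit profile mask: bit i lives in
// word i/32 at bit position i%32.
type Mask255 struct {
	Mask [8]uint32
}

// Set turns on bit `bit` in the mask.
func (m *Mask255) Set(bit uint) {
	m.Mask[bit/32] |= 1 << (bit % 32)
}

// Test reports whether bit `bit` is set.
func (m *Mask255) Test(bit uint) bool {
	return m.Mask[bit/32]&(1<<(bit%32)) != 0
}

func main() {
	var m Mask255
	m.Set(200) // word 6, bit 8
	fmt.Println(m.Test(200), m.Test(3)) // true false
}
```

The same indexing applies whether the mask names supported profiles (`PerfProfilesMask`) or requested ones (`RequestedProfilesMask`).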


@@ -478,3 +478,32 @@ func (l *library) GetVgpuDriverCapabilities(capability VgpuDriverCapability) (bo
ret := nvmlGetVgpuDriverCapabilities(capability, &capResult)
return (capResult != 0), ret
}
// nvml.VgpuTypeGetBAR1Info()
func (l *library) VgpuTypeGetBAR1Info(vgpuTypeId VgpuTypeId) (VgpuTypeBar1Info, Return) {
return vgpuTypeId.GetBAR1Info()
}
func (vgpuTypeId nvmlVgpuTypeId) GetBAR1Info() (VgpuTypeBar1Info, Return) {
var bar1Info VgpuTypeBar1Info
bar1Info.Version = STRUCT_VERSION(bar1Info, 1)
ret := nvmlVgpuTypeGetBAR1Info(vgpuTypeId, &bar1Info)
return bar1Info, ret
}
// nvml.VgpuInstanceGetRuntimeStateSize()
func (l *library) VgpuInstanceGetRuntimeStateSize(vgpuInstance VgpuInstance) (VgpuRuntimeState, Return) {
return vgpuInstance.GetRuntimeStateSize()
}
func (vgpuInstance nvmlVgpuInstance) GetRuntimeStateSize() (VgpuRuntimeState, Return) {
var pState VgpuRuntimeState
pState.Version = STRUCT_VERSION(pState, 1)
ret := nvmlVgpuInstanceGetRuntimeStateSize(vgpuInstance, &pState)
return pState, ret
}
// nvml.VgpuTypeGetMaxInstancesPerGpuInstance()
func (l *library) VgpuTypeGetMaxInstancesPerGpuInstance(maxInstance *VgpuTypeMaxInstance) Return {
return nvmlVgpuTypeGetMaxInstancesPerGpuInstance(maxInstance)
}
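The wrappers above show the library's two-level API pattern: the `library` method delegates to a method on the typed ID, which stamps the struct's version header, calls the low-level binding, and returns the filled struct with the status code. A self-contained sketch of that shape with hypothetical stand-in names and a placeholder low-level call (the real code calls into C):

```go
package main

import "fmt"

type vgpuTypeId uint32

// bar1Info stands in for a versioned NVML result struct.
type bar1Info struct {
	Version  uint32
	Bar1Size uint64
}

type library struct{}

// getBAR1Info stands in for the generated cgo binding; the value it
// writes here is a placeholder, and 0 plays the role of SUCCESS.
func getBAR1Info(id vgpuTypeId, info *bar1Info) int {
	info.Bar1Size = uint64(id) * 256
	return 0
}

// GetBAR1Info on the typed ID stamps the version header, invokes the
// low-level binding, and returns the result by value.
func (id vgpuTypeId) GetBAR1Info() (bar1Info, int) {
	var info bar1Info
	info.Version = 1
	ret := getBAR1Info(id, &info)
	return info, ret
}

// The library method is a thin delegation, mirroring
// (l *library) VgpuTypeGetBAR1Info above.
func (l *library) VgpuTypeGetBAR1Info(id vgpuTypeId) (bar1Info, int) {
	return id.GetBAR1Info()
}

func main() {
	var l library
	info, ret := l.VgpuTypeGetBAR1Info(4)
	fmt.Println(info.Bar1Size, ret)
}
```

Keeping the logic on the typed ID lets callers use either the interface-style library entry point or the method directly, with one implementation behind both.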

File diff suppressed because it is too large.

vendor/github.com/moby/sys/reexec/LICENSE (generated, vendored, +202 lines)

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/github.com/moby/sys/reexec/reexec.go generated vendored Normal file

@@ -0,0 +1,83 @@
// Package reexec facilitates the busybox style reexec of a binary.
//
// Handlers can be registered with a name and the argv 0 of the exec of
// the binary will be used to find and execute custom init paths.
//
// It is used to work around forking limitations when using Go.
package reexec
import (
"fmt"
"os"
"os/exec"
"path/filepath"
"runtime"
)
var registeredInitializers = make(map[string]func())
// Register adds an initialization func under the specified name. It panics
// if the given name is already registered.
func Register(name string, initializer func()) {
if _, exists := registeredInitializers[name]; exists {
panic(fmt.Sprintf("reexec func already registered under name %q", name))
}
registeredInitializers[name] = initializer
}
// Init is called as the first part of the exec process and returns true if an
// initialization function was called.
func Init() bool {
if initializer, ok := registeredInitializers[os.Args[0]]; ok {
initializer()
return true
}
return false
}
// Command returns an [*exec.Cmd] with its Path set to the path of the current
// binary using the result of [Self].
//
// On Linux, the Pdeathsig of [*exec.Cmd.SysProcAttr] is set to SIGTERM.
// This signal is sent to the process when the OS thread that created
// the process dies.
//
// It is the caller's responsibility to ensure that the creating thread is
// not terminated prematurely. See https://go.dev/issue/27505 for more details.
func Command(args ...string) *exec.Cmd {
return command(args...)
}
// Self returns the path to the current process's binary.
//
// On Linux, it returns "/proc/self/exe", which provides the in-memory version
// of the current binary. This makes it safe to delete or replace the on-disk
// binary (os.Args[0]).
//
// On other platforms, it attempts to look up the absolute path for os.Args[0],
// or otherwise returns os.Args[0] as-is. For example if current binary is
// "my-binary" at "/usr/bin/" (or "my-binary.exe" at "C:\" on Windows),
// then it returns "/usr/bin/my-binary" and "C:\my-binary.exe" respectively.
func Self() string {
if runtime.GOOS == "linux" {
return "/proc/self/exe"
}
return naiveSelf()
}
func naiveSelf() string {
name := os.Args[0]
if filepath.Base(name) == name {
if lp, err := exec.LookPath(name); err == nil {
return lp
}
}
// handle conversion of relative paths to absolute
if absName, err := filepath.Abs(name); err == nil {
return absName
}
// if we couldn't get absolute name, return original
// (NOTE: Go only errors on Abs() if os.Getwd fails)
return name
}

vendor/github.com/moby/sys/reexec/reexec_linux.go generated vendored Normal file

@@ -0,0 +1,16 @@
package reexec
import (
"os/exec"
"syscall"
)
func command(args ...string) *exec.Cmd {
return &exec.Cmd{
Path: Self(),
Args: args,
SysProcAttr: &syscall.SysProcAttr{
Pdeathsig: syscall.SIGTERM,
},
}
}

vendor/github.com/moby/sys/reexec/reexec_other.go generated vendored Normal file

@@ -0,0 +1,14 @@
//go:build !linux
package reexec
import (
"os/exec"
)
func command(args ...string) *exec.Cmd {
return &exec.Cmd{
Path: Self(),
Args: args,
}
}


@@ -47,11 +47,15 @@ func sealMemfd(f **os.File) error {
// errors because they are not needed and we want to continue
// to work on older kernels.
fd := (*f).Fd()
// F_SEAL_FUTURE_WRITE -- Linux 5.1
_, _ = unix.FcntlInt(fd, unix.F_ADD_SEALS, unix.F_SEAL_FUTURE_WRITE)
// Skip F_SEAL_FUTURE_WRITE, it is not needed because we already use the
// stronger F_SEAL_WRITE (and is buggy on Linux <5.5 -- see kernel commit
// 05d351102dbe and <https://github.com/opencontainers/runc/pull/4640>).
// F_SEAL_EXEC -- Linux 6.3
const F_SEAL_EXEC = 0x20 //nolint:revive // this matches the unix.* name
_, _ = unix.FcntlInt(fd, unix.F_ADD_SEALS, F_SEAL_EXEC)
// Apply all original memfd seals.
_, err := unix.FcntlInt(fd, unix.F_ADD_SEALS, baseMemfdSeals)
return os.NewSyscallError("fcntl(F_ADD_SEALS)", err)


@@ -6,8 +6,6 @@ import (
"fmt"
"io"
"os"
"strconv"
"syscall"
"unsafe"
"github.com/sirupsen/logrus"
@@ -43,49 +41,6 @@ func Exec(cmd string, args []string, env []string) error {
}
}
func execveat(fd uintptr, pathname string, args []string, env []string, flags int) error {
pathnamep, err := syscall.BytePtrFromString(pathname)
if err != nil {
return err
}
argvp, err := syscall.SlicePtrFromStrings(args)
if err != nil {
return err
}
envp, err := syscall.SlicePtrFromStrings(env)
if err != nil {
return err
}
_, _, errno := syscall.Syscall6(
unix.SYS_EXECVEAT,
fd,
uintptr(unsafe.Pointer(pathnamep)),
uintptr(unsafe.Pointer(&argvp[0])),
uintptr(unsafe.Pointer(&envp[0])),
uintptr(flags),
0,
)
return errno
}
func Fexecve(fd uintptr, args []string, env []string) error {
var err error
for {
err = execveat(fd, "", args, env, unix.AT_EMPTY_PATH)
if err != unix.EINTR { // nolint:errorlint // unix errors are bare
break
}
}
if err == unix.ENOSYS { // nolint:errorlint // unix errors are bare
// Fallback to classic /proc/self/fd/... exec.
return Exec("/proc/self/fd/"+strconv.Itoa(int(fd)), args, env)
}
return os.NewSyscallError("execveat", err)
}
func SetParentDeathSignal(sig uintptr) error {
if err := unix.Prctl(unix.PR_SET_PDEATHSIG, sig, 0, 0, 0); err != nil {
return err


@@ -42,9 +42,20 @@ func RecvFile(socket *os.File) (_ *os.File, Err error) {
oob := make([]byte, oobSpace)
sockfd := socket.Fd()
n, oobn, _, _, err := unix.Recvmsg(int(sockfd), name, oob, unix.MSG_CMSG_CLOEXEC)
var (
n, oobn int
err error
)
for {
n, oobn, _, _, err = unix.Recvmsg(int(sockfd), name, oob, unix.MSG_CMSG_CLOEXEC)
if err != unix.EINTR { //nolint:errorlint // unix errors are bare
break
}
}
if err != nil {
return nil, err
return nil, os.NewSyscallError("recvmsg", err)
}
if n >= MaxNameLen || oobn != oobSpace {
return nil, fmt.Errorf("recvfile: incorrect number of bytes read (n=%d oobn=%d)", n, oobn)
@@ -115,5 +126,10 @@ func SendFile(socket *os.File, file *os.File) error {
// SendRawFd sends a specific file descriptor over the given AF_UNIX socket.
func SendRawFd(socket *os.File, msg string, fd uintptr) error {
oob := unix.UnixRights(int(fd))
return unix.Sendmsg(int(socket.Fd()), []byte(msg), oob, nil, 0)
for {
err := unix.Sendmsg(int(socket.Fd()), []byte(msg), oob, nil, 0)
if err != unix.EINTR { //nolint:errorlint // unix errors are bare
return os.NewSyscallError("sendmsg", err)
}
}
}


@@ -7,10 +7,13 @@ import (
"time"
)
type CompareType int
// Deprecated: CompareType has only ever been for internal use and has accidentally been published since v1.6.0. Do not use it.
type CompareType = compareResult
type compareResult int
const (
compareLess CompareType = iota - 1
compareLess compareResult = iota - 1
compareEqual
compareGreater
)
@@ -39,7 +42,7 @@ var (
bytesType = reflect.TypeOf([]byte{})
)
func compare(obj1, obj2 interface{}, kind reflect.Kind) (CompareType, bool) {
func compare(obj1, obj2 interface{}, kind reflect.Kind) (compareResult, bool) {
obj1Value := reflect.ValueOf(obj1)
obj2Value := reflect.ValueOf(obj2)
@@ -325,7 +328,13 @@ func compare(obj1, obj2 interface{}, kind reflect.Kind) (CompareType, bool) {
timeObj2 = obj2Value.Convert(timeType).Interface().(time.Time)
}
return compare(timeObj1.UnixNano(), timeObj2.UnixNano(), reflect.Int64)
if timeObj1.Before(timeObj2) {
return compareLess, true
}
if timeObj1.Equal(timeObj2) {
return compareEqual, true
}
return compareGreater, true
}
case reflect.Slice:
{
@@ -345,7 +354,7 @@ func compare(obj1, obj2 interface{}, kind reflect.Kind) (CompareType, bool) {
bytesObj2 = obj2Value.Convert(bytesType).Interface().([]byte)
}
return CompareType(bytes.Compare(bytesObj1, bytesObj2)), true
return compareResult(bytes.Compare(bytesObj1, bytesObj2)), true
}
case reflect.Uintptr:
{
@@ -381,7 +390,7 @@ func Greater(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...interface
if h, ok := t.(tHelper); ok {
h.Helper()
}
return compareTwoValues(t, e1, e2, []CompareType{compareGreater}, "\"%v\" is not greater than \"%v\"", msgAndArgs...)
return compareTwoValues(t, e1, e2, []compareResult{compareGreater}, "\"%v\" is not greater than \"%v\"", msgAndArgs...)
}
// GreaterOrEqual asserts that the first element is greater than or equal to the second
@@ -394,7 +403,7 @@ func GreaterOrEqual(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...in
if h, ok := t.(tHelper); ok {
h.Helper()
}
return compareTwoValues(t, e1, e2, []CompareType{compareGreater, compareEqual}, "\"%v\" is not greater than or equal to \"%v\"", msgAndArgs...)
return compareTwoValues(t, e1, e2, []compareResult{compareGreater, compareEqual}, "\"%v\" is not greater than or equal to \"%v\"", msgAndArgs...)
}
// Less asserts that the first element is less than the second
@@ -406,7 +415,7 @@ func Less(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...interface{})
if h, ok := t.(tHelper); ok {
h.Helper()
}
return compareTwoValues(t, e1, e2, []CompareType{compareLess}, "\"%v\" is not less than \"%v\"", msgAndArgs...)
return compareTwoValues(t, e1, e2, []compareResult{compareLess}, "\"%v\" is not less than \"%v\"", msgAndArgs...)
}
// LessOrEqual asserts that the first element is less than or equal to the second
@@ -419,7 +428,7 @@ func LessOrEqual(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...inter
if h, ok := t.(tHelper); ok {
h.Helper()
}
return compareTwoValues(t, e1, e2, []CompareType{compareLess, compareEqual}, "\"%v\" is not less than or equal to \"%v\"", msgAndArgs...)
return compareTwoValues(t, e1, e2, []compareResult{compareLess, compareEqual}, "\"%v\" is not less than or equal to \"%v\"", msgAndArgs...)
}
// Positive asserts that the specified element is positive
@@ -431,7 +440,7 @@ func Positive(t TestingT, e interface{}, msgAndArgs ...interface{}) bool {
h.Helper()
}
zero := reflect.Zero(reflect.TypeOf(e))
return compareTwoValues(t, e, zero.Interface(), []CompareType{compareGreater}, "\"%v\" is not positive", msgAndArgs...)
return compareTwoValues(t, e, zero.Interface(), []compareResult{compareGreater}, "\"%v\" is not positive", msgAndArgs...)
}
// Negative asserts that the specified element is negative
@@ -443,10 +452,10 @@ func Negative(t TestingT, e interface{}, msgAndArgs ...interface{}) bool {
h.Helper()
}
zero := reflect.Zero(reflect.TypeOf(e))
return compareTwoValues(t, e, zero.Interface(), []CompareType{compareLess}, "\"%v\" is not negative", msgAndArgs...)
return compareTwoValues(t, e, zero.Interface(), []compareResult{compareLess}, "\"%v\" is not negative", msgAndArgs...)
}
func compareTwoValues(t TestingT, e1 interface{}, e2 interface{}, allowedComparesResults []CompareType, failMessage string, msgAndArgs ...interface{}) bool {
func compareTwoValues(t TestingT, e1 interface{}, e2 interface{}, allowedComparesResults []compareResult, failMessage string, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
}
@@ -469,7 +478,7 @@ func compareTwoValues(t TestingT, e1 interface{}, e2 interface{}, allowedCompare
return true
}
func containsValue(values []CompareType, value CompareType) bool {
func containsValue(values []compareResult, value compareResult) bool {
for _, v := range values {
if v == value {
return true


@@ -104,8 +104,8 @@ func EqualExportedValuesf(t TestingT, expected interface{}, actual interface{},
return EqualExportedValues(t, expected, actual, append([]interface{}{msg}, args...)...)
}
// EqualValuesf asserts that two objects are equal or convertible to the same types
// and equal.
// EqualValuesf asserts that two objects are equal or convertible to the larger
// type and equal.
//
// assert.EqualValuesf(t, uint32(123), int32(123), "error message %s", "formatted")
func EqualValuesf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {
@@ -186,7 +186,7 @@ func Eventuallyf(t TestingT, condition func() bool, waitFor time.Duration, tick
// assert.EventuallyWithTf(t, func(c *assert.CollectT, "error message %s", "formatted") {
// // add assertions as needed; any assertion failure will fail the current tick
// assert.True(c, externalValue, "expected 'externalValue' to be true")
// }, 1*time.Second, 10*time.Second, "external state has not changed to 'true'; still false")
// }, 10*time.Second, 1*time.Second, "external state has not changed to 'true'; still false")
func EventuallyWithTf(t TestingT, condition func(collect *CollectT), waitFor time.Duration, tick time.Duration, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
@@ -568,6 +568,23 @@ func NotContainsf(t TestingT, s interface{}, contains interface{}, msg string, a
return NotContains(t, s, contains, append([]interface{}{msg}, args...)...)
}
// NotElementsMatchf asserts that the specified listA(array, slice...) is NOT equal to specified
// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements,
// the number of appearances of each of them in both lists should not match.
// This is an inverse of ElementsMatch.
//
// assert.NotElementsMatchf(t, [1, 1, 2, 3], [1, 1, 2, 3], "error message %s", "formatted") -> false
//
// assert.NotElementsMatchf(t, [1, 1, 2, 3], [1, 2, 3], "error message %s", "formatted") -> true
//
// assert.NotElementsMatchf(t, [1, 2, 3], [1, 2, 4], "error message %s", "formatted") -> true
func NotElementsMatchf(t TestingT, listA interface{}, listB interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
}
return NotElementsMatch(t, listA, listB, append([]interface{}{msg}, args...)...)
}
// NotEmptyf asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either
// a slice or a channel with len == 0.
//
@@ -604,7 +621,16 @@ func NotEqualValuesf(t TestingT, expected interface{}, actual interface{}, msg s
return NotEqualValues(t, expected, actual, append([]interface{}{msg}, args...)...)
}
// NotErrorIsf asserts that at none of the errors in err's chain matches target.
// NotErrorAsf asserts that none of the errors in err's chain matches target,
// but if so, sets target to that error value.
func NotErrorAsf(t TestingT, err error, target interface{}, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
}
return NotErrorAs(t, err, target, append([]interface{}{msg}, args...)...)
}
// NotErrorIsf asserts that none of the errors in err's chain matches target.
// This is a wrapper for errors.Is.
func NotErrorIsf(t TestingT, err error, target error, msg string, args ...interface{}) bool {
if h, ok := t.(tHelper); ok {


@@ -186,8 +186,8 @@ func (a *Assertions) EqualExportedValuesf(expected interface{}, actual interface
return EqualExportedValuesf(a.t, expected, actual, msg, args...)
}
// EqualValues asserts that two objects are equal or convertible to the same types
// and equal.
// EqualValues asserts that two objects are equal or convertible to the larger
// type and equal.
//
// a.EqualValues(uint32(123), int32(123))
func (a *Assertions) EqualValues(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool {
@@ -197,8 +197,8 @@ func (a *Assertions) EqualValues(expected interface{}, actual interface{}, msgAn
return EqualValues(a.t, expected, actual, msgAndArgs...)
}
// EqualValuesf asserts that two objects are equal or convertible to the same types
// and equal.
// EqualValuesf asserts that two objects are equal or convertible to the larger
// type and equal.
//
// a.EqualValuesf(uint32(123), int32(123), "error message %s", "formatted")
func (a *Assertions) EqualValuesf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool {
@@ -336,7 +336,7 @@ func (a *Assertions) Eventually(condition func() bool, waitFor time.Duration, ti
// a.EventuallyWithT(func(c *assert.CollectT) {
// // add assertions as needed; any assertion failure will fail the current tick
// assert.True(c, externalValue, "expected 'externalValue' to be true")
// }, 1*time.Second, 10*time.Second, "external state has not changed to 'true'; still false")
// }, 10*time.Second, 1*time.Second, "external state has not changed to 'true'; still false")
func (a *Assertions) EventuallyWithT(condition func(collect *CollectT), waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) bool {
if h, ok := a.t.(tHelper); ok {
h.Helper()
@@ -361,7 +361,7 @@ func (a *Assertions) EventuallyWithT(condition func(collect *CollectT), waitFor
// a.EventuallyWithTf(func(c *assert.CollectT, "error message %s", "formatted") {
// // add assertions as needed; any assertion failure will fail the current tick
// assert.True(c, externalValue, "expected 'externalValue' to be true")
// }, 1*time.Second, 10*time.Second, "external state has not changed to 'true'; still false")
// }, 10*time.Second, 1*time.Second, "external state has not changed to 'true'; still false")
func (a *Assertions) EventuallyWithTf(condition func(collect *CollectT), waitFor time.Duration, tick time.Duration, msg string, args ...interface{}) bool {
if h, ok := a.t.(tHelper); ok {
h.Helper()
@@ -1128,6 +1128,40 @@ func (a *Assertions) NotContainsf(s interface{}, contains interface{}, msg strin
return NotContainsf(a.t, s, contains, msg, args...)
}
// NotElementsMatch asserts that the specified listA(array, slice...) is NOT equal to specified
// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements,
// the number of appearances of each of them in both lists should not match.
// This is an inverse of ElementsMatch.
//
// a.NotElementsMatch([1, 1, 2, 3], [1, 1, 2, 3]) -> false
//
// a.NotElementsMatch([1, 1, 2, 3], [1, 2, 3]) -> true
//
// a.NotElementsMatch([1, 2, 3], [1, 2, 4]) -> true
func (a *Assertions) NotElementsMatch(listA interface{}, listB interface{}, msgAndArgs ...interface{}) bool {
if h, ok := a.t.(tHelper); ok {
h.Helper()
}
return NotElementsMatch(a.t, listA, listB, msgAndArgs...)
}
// NotElementsMatchf asserts that the specified listA(array, slice...) is NOT equal to specified
// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements,
// the number of appearances of each of them in both lists should not match.
// This is an inverse of ElementsMatch.
//
// a.NotElementsMatchf([1, 1, 2, 3], [1, 1, 2, 3], "error message %s", "formatted") -> false
//
// a.NotElementsMatchf([1, 1, 2, 3], [1, 2, 3], "error message %s", "formatted") -> true
//
// a.NotElementsMatchf([1, 2, 3], [1, 2, 4], "error message %s", "formatted") -> true
func (a *Assertions) NotElementsMatchf(listA interface{}, listB interface{}, msg string, args ...interface{}) bool {
if h, ok := a.t.(tHelper); ok {
h.Helper()
}
return NotElementsMatchf(a.t, listA, listB, msg, args...)
}
// NotEmpty asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either
// a slice or a channel with len == 0.
//
@@ -1200,7 +1234,25 @@ func (a *Assertions) NotEqualf(expected interface{}, actual interface{}, msg str
return NotEqualf(a.t, expected, actual, msg, args...)
}
// NotErrorIs asserts that at none of the errors in err's chain matches target.
// NotErrorAs asserts that none of the errors in err's chain matches target,
// but if so, sets target to that error value.
func (a *Assertions) NotErrorAs(err error, target interface{}, msgAndArgs ...interface{}) bool {
if h, ok := a.t.(tHelper); ok {
h.Helper()
}
return NotErrorAs(a.t, err, target, msgAndArgs...)
}
// NotErrorAsf asserts that none of the errors in err's chain matches target,
// but if so, sets target to that error value.
func (a *Assertions) NotErrorAsf(err error, target interface{}, msg string, args ...interface{}) bool {
if h, ok := a.t.(tHelper); ok {
h.Helper()
}
return NotErrorAsf(a.t, err, target, msg, args...)
}
// NotErrorIs asserts that none of the errors in err's chain matches target.
// This is a wrapper for errors.Is.
func (a *Assertions) NotErrorIs(err error, target error, msgAndArgs ...interface{}) bool {
if h, ok := a.t.(tHelper); ok {
@@ -1209,7 +1261,7 @@ func (a *Assertions) NotErrorIs(err error, target error, msgAndArgs ...interface
return NotErrorIs(a.t, err, target, msgAndArgs...)
}
// NotErrorIsf asserts that at none of the errors in err's chain matches target.
// NotErrorIsf asserts that none of the errors in err's chain matches target.
// This is a wrapper for errors.Is.
func (a *Assertions) NotErrorIsf(err error, target error, msg string, args ...interface{}) bool {
if h, ok := a.t.(tHelper); ok {


@@ -6,7 +6,7 @@ import (
)
// isOrdered checks that collection contains orderable elements.
func isOrdered(t TestingT, object interface{}, allowedComparesResults []CompareType, failMessage string, msgAndArgs ...interface{}) bool {
func isOrdered(t TestingT, object interface{}, allowedComparesResults []compareResult, failMessage string, msgAndArgs ...interface{}) bool {
objKind := reflect.TypeOf(object).Kind()
if objKind != reflect.Slice && objKind != reflect.Array {
return false
@@ -50,7 +50,7 @@ func isOrdered(t TestingT, object interface{}, allowedComparesResults []CompareT
// assert.IsIncreasing(t, []float{1, 2})
// assert.IsIncreasing(t, []string{"a", "b"})
func IsIncreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
return isOrdered(t, object, []CompareType{compareLess}, "\"%v\" is not less than \"%v\"", msgAndArgs...)
return isOrdered(t, object, []compareResult{compareLess}, "\"%v\" is not less than \"%v\"", msgAndArgs...)
}
// IsNonIncreasing asserts that the collection is not increasing
@@ -59,7 +59,7 @@ func IsIncreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) boo
// assert.IsNonIncreasing(t, []float{2, 1})
// assert.IsNonIncreasing(t, []string{"b", "a"})
func IsNonIncreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
return isOrdered(t, object, []CompareType{compareEqual, compareGreater}, "\"%v\" is not greater than or equal to \"%v\"", msgAndArgs...)
return isOrdered(t, object, []compareResult{compareEqual, compareGreater}, "\"%v\" is not greater than or equal to \"%v\"", msgAndArgs...)
}
// IsDecreasing asserts that the collection is decreasing
@@ -68,7 +68,7 @@ func IsNonIncreasing(t TestingT, object interface{}, msgAndArgs ...interface{})
// assert.IsDecreasing(t, []float{2, 1})
// assert.IsDecreasing(t, []string{"b", "a"})
func IsDecreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
return isOrdered(t, object, []CompareType{compareGreater}, "\"%v\" is not greater than \"%v\"", msgAndArgs...)
return isOrdered(t, object, []compareResult{compareGreater}, "\"%v\" is not greater than \"%v\"", msgAndArgs...)
}
// IsNonDecreasing asserts that the collection is not decreasing
@@ -77,5 +77,5 @@ func IsDecreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) boo
// assert.IsNonDecreasing(t, []float{1, 2})
// assert.IsNonDecreasing(t, []string{"a", "b"})
func IsNonDecreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
return isOrdered(t, object, []CompareType{compareLess, compareEqual}, "\"%v\" is not less than or equal to \"%v\"", msgAndArgs...)
return isOrdered(t, object, []compareResult{compareLess, compareEqual}, "\"%v\" is not less than or equal to \"%v\"", msgAndArgs...)
}


@@ -19,7 +19,9 @@ import (
"github.com/davecgh/go-spew/spew"
"github.com/pmezard/go-difflib/difflib"
"gopkg.in/yaml.v3"
// Wrapper around gopkg.in/yaml.v3
"github.com/stretchr/testify/assert/yaml"
)
//go:generate sh -c "cd ../_codegen && go build && cd - && ../_codegen/_codegen -output-package=assert -template=assertion_format.go.tmpl"
@@ -45,6 +47,10 @@ type BoolAssertionFunc func(TestingT, bool, ...interface{}) bool
// for table driven tests.
type ErrorAssertionFunc func(TestingT, error, ...interface{}) bool
// PanicAssertionFunc is a common function prototype when validating a panic value. Can be useful
// for table driven tests.
type PanicAssertionFunc = func(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool
// Comparison is a custom function that returns true on success and false on failure
type Comparison func() (success bool)
@@ -496,7 +502,13 @@ func Same(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) b
h.Helper()
}
if !samePointers(expected, actual) {
same, ok := samePointers(expected, actual)
if !ok {
return Fail(t, "Both arguments must be pointers", msgAndArgs...)
}
if !same {
// both are pointers but not the same type & pointing to the same address
return Fail(t, fmt.Sprintf("Not same: \n"+
"expected: %p %#v\n"+
"actual : %p %#v", expected, expected, actual, actual), msgAndArgs...)
@@ -516,7 +528,13 @@ func NotSame(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}
h.Helper()
}
if samePointers(expected, actual) {
same, ok := samePointers(expected, actual)
if !ok {
//fails when the arguments are not pointers
return !(Fail(t, "Both arguments must be pointers", msgAndArgs...))
}
if same {
return Fail(t, fmt.Sprintf(
"Expected and actual point to the same object: %p %#v",
expected, expected), msgAndArgs...)
@@ -524,21 +542,23 @@ func NotSame(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}
return true
}
// samePointers compares two generic interface objects and returns whether
// they point to the same object
func samePointers(first, second interface{}) bool {
// samePointers checks if two generic interface objects are pointers of the same
// type pointing to the same object. It returns two values: same indicating if
// they are the same type and point to the same object, and ok indicating that
// both inputs are pointers.
func samePointers(first, second interface{}) (same bool, ok bool) {
firstPtr, secondPtr := reflect.ValueOf(first), reflect.ValueOf(second)
if firstPtr.Kind() != reflect.Ptr || secondPtr.Kind() != reflect.Ptr {
return false
return false, false //not both are pointers
}
firstType, secondType := reflect.TypeOf(first), reflect.TypeOf(second)
if firstType != secondType {
return false
return false, true // both are pointers, but of different types
}
// compare pointer addresses
return first == second
return first == second, true
}
// formatUnequalValues takes two values of arbitrary types and returns string
@@ -572,8 +592,8 @@ func truncatingFormat(data interface{}) string {
return value
}
// EqualValues asserts that two objects are equal or convertible to the same types
// and equal.
// EqualValues asserts that two objects are equal or convertible to the larger
// type and equal.
//
// assert.EqualValues(t, uint32(123), int32(123))
func EqualValues(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
@@ -615,21 +635,6 @@ func EqualExportedValues(t TestingT, expected, actual interface{}, msgAndArgs ..
return Fail(t, fmt.Sprintf("Types expected to match exactly\n\t%v != %v", aType, bType), msgAndArgs...)
}
if aType.Kind() == reflect.Ptr {
aType = aType.Elem()
}
if bType.Kind() == reflect.Ptr {
bType = bType.Elem()
}
if aType.Kind() != reflect.Struct {
return Fail(t, fmt.Sprintf("Types expected to both be struct or pointer to struct \n\t%v != %v", aType.Kind(), reflect.Struct), msgAndArgs...)
}
if bType.Kind() != reflect.Struct {
return Fail(t, fmt.Sprintf("Types expected to both be struct or pointer to struct \n\t%v != %v", bType.Kind(), reflect.Struct), msgAndArgs...)
}
expected = copyExportedFields(expected)
actual = copyExportedFields(actual)
@@ -1170,6 +1175,39 @@ func formatListDiff(listA, listB interface{}, extraA, extraB []interface{}) stri
return msg.String()
}
// NotElementsMatch asserts that the specified listA(array, slice...) is NOT equal to specified
// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements,
// the number of appearances of each of them in both lists should not match.
// This is an inverse of ElementsMatch.
//
// assert.NotElementsMatch(t, [1, 1, 2, 3], [1, 1, 2, 3]) -> false
//
// assert.NotElementsMatch(t, [1, 1, 2, 3], [1, 2, 3]) -> true
//
// assert.NotElementsMatch(t, [1, 2, 3], [1, 2, 4]) -> true
func NotElementsMatch(t TestingT, listA, listB interface{}, msgAndArgs ...interface{}) (ok bool) {
if h, ok := t.(tHelper); ok {
h.Helper()
}
if isEmpty(listA) && isEmpty(listB) {
return Fail(t, "listA and listB contain the same elements", msgAndArgs)
}
if !isList(t, listA, msgAndArgs...) {
return Fail(t, "listA is not a list type", msgAndArgs...)
}
if !isList(t, listB, msgAndArgs...) {
return Fail(t, "listB is not a list type", msgAndArgs...)
}
extraA, extraB := diffLists(listA, listB)
if len(extraA) == 0 && len(extraB) == 0 {
return Fail(t, "listA and listB contain the same elements", msgAndArgs)
}
return true
}
// Condition uses a Comparison to assert a complex condition.
func Condition(t TestingT, comp Comparison, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
@@ -1488,6 +1526,9 @@ func InEpsilon(t TestingT, expected, actual interface{}, epsilon float64, msgAnd
if err != nil {
return Fail(t, err.Error(), msgAndArgs...)
}
if math.IsNaN(actualEpsilon) {
return Fail(t, "relative error is NaN", msgAndArgs...)
}
if actualEpsilon > epsilon {
return Fail(t, fmt.Sprintf("Relative error is too high: %#v (expected)\n"+
" < %#v (actual)", epsilon, actualEpsilon), msgAndArgs...)
@@ -1611,7 +1652,6 @@ func ErrorContains(t TestingT, theError error, contains string, msgAndArgs ...in
// matchRegexp return true if a specified regexp matches a string.
func matchRegexp(rx interface{}, str interface{}) bool {
var r *regexp.Regexp
if rr, ok := rx.(*regexp.Regexp); ok {
r = rr
@@ -1619,7 +1659,14 @@ func matchRegexp(rx interface{}, str interface{}) bool {
r = regexp.MustCompile(fmt.Sprint(rx))
}
return (r.FindStringIndex(fmt.Sprint(str)) != nil)
switch v := str.(type) {
case []byte:
return r.Match(v)
case string:
return r.MatchString(v)
default:
return r.MatchString(fmt.Sprint(v))
}
}
@@ -1872,7 +1919,7 @@ var spewConfigStringerEnabled = spew.ConfigState{
MaxDepth: 10,
}
type tHelper interface {
type tHelper = interface {
Helper()
}
@@ -1911,6 +1958,9 @@ func Eventually(t TestingT, condition func() bool, waitFor time.Duration, tick t
// CollectT implements the TestingT interface and collects all errors.
type CollectT struct {
// A slice of errors. Non-nil slice denotes a failure.
// If it's non-nil but len(c.errors) == 0, this is also a failure
// obtained by direct c.FailNow() call.
errors []error
}
@@ -1919,9 +1969,10 @@ func (c *CollectT) Errorf(format string, args ...interface{}) {
c.errors = append(c.errors, fmt.Errorf(format, args...))
}
// FailNow panics.
func (*CollectT) FailNow() {
panic("Assertion failed")
// FailNow stops execution by calling runtime.Goexit.
func (c *CollectT) FailNow() {
c.fail()
runtime.Goexit()
}
// Deprecated: That was a method for internal usage that should not have been published. Now just panics.
@@ -1934,6 +1985,16 @@ func (*CollectT) Copy(TestingT) {
panic("Copy() is deprecated")
}
func (c *CollectT) fail() {
if !c.failed() {
c.errors = []error{} // Make it non-nil to mark a failure.
}
}
func (c *CollectT) failed() bool {
return c.errors != nil
}
// EventuallyWithT asserts that given condition will be met in waitFor time,
// periodically checking target function each tick. In contrast to Eventually,
// it supplies a CollectT to the condition function, so that the condition
@@ -1951,14 +2012,14 @@ func (*CollectT) Copy(TestingT) {
// assert.EventuallyWithT(t, func(c *assert.CollectT) {
// // add assertions as needed; any assertion failure will fail the current tick
// assert.True(c, externalValue, "expected 'externalValue' to be true")
// }, 1*time.Second, 10*time.Second, "external state has not changed to 'true'; still false")
// }, 10*time.Second, 1*time.Second, "external state has not changed to 'true'; still false")
func EventuallyWithT(t TestingT, condition func(collect *CollectT), waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
}
var lastFinishedTickErrs []error
ch := make(chan []error, 1)
ch := make(chan *CollectT, 1)
timer := time.NewTimer(waitFor)
defer timer.Stop()
@@ -1978,16 +2039,16 @@ func EventuallyWithT(t TestingT, condition func(collect *CollectT), waitFor time
go func() {
collect := new(CollectT)
defer func() {
ch <- collect.errors
ch <- collect
}()
condition(collect)
}()
case errs := <-ch:
if len(errs) == 0 {
case collect := <-ch:
if !collect.failed() {
return true
}
// Keep the errors from the last ended condition, so that they can be copied to t if timeout is reached.
lastFinishedTickErrs = errs
lastFinishedTickErrs = collect.errors
tick = ticker.C
}
}
@@ -2049,7 +2110,7 @@ func ErrorIs(t TestingT, err, target error, msgAndArgs ...interface{}) bool {
), msgAndArgs...)
}
// NotErrorIs asserts that at none of the errors in err's chain matches target.
// NotErrorIs asserts that none of the errors in err's chain matches target.
// This is a wrapper for errors.Is.
func NotErrorIs(t TestingT, err, target error, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
@@ -2090,6 +2151,24 @@ func ErrorAs(t TestingT, err error, target interface{}, msgAndArgs ...interface{
), msgAndArgs...)
}
// NotErrorAs asserts that none of the errors in err's chain matches target,
// but if so, sets target to that error value.
func NotErrorAs(t TestingT, err error, target interface{}, msgAndArgs ...interface{}) bool {
if h, ok := t.(tHelper); ok {
h.Helper()
}
if !errors.As(err, target) {
return true
}
chain := buildErrorChainString(err)
return Fail(t, fmt.Sprintf("Target error should not be in err chain:\n"+
"found: %q\n"+
"in chain: %s", target, chain,
), msgAndArgs...)
}
func buildErrorChainString(err error) string {
if err == nil {
return ""

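The `matchRegexp` change in the diff above stops round-tripping `[]byte` input through `fmt.Sprint` and instead matches it directly with `r.Match`. A self-contained sketch of the new dispatch (the helper name mirrors the diff; the surrounding assertion plumbing is omitted):

```go
package main

import (
	"fmt"
	"regexp"
)

// matchRegexp mirrors the updated dispatch: []byte and string inputs are
// matched directly, anything else is stringified first.
func matchRegexp(rx interface{}, str interface{}) bool {
	var r *regexp.Regexp
	if rr, ok := rx.(*regexp.Regexp); ok {
		r = rr
	} else {
		r = regexp.MustCompile(fmt.Sprint(rx))
	}
	switch v := str.(type) {
	case []byte:
		return r.Match(v) // no fmt.Sprint round-trip for byte slices
	case string:
		return r.MatchString(v)
	default:
		return r.MatchString(fmt.Sprint(v))
	}
}

func main() {
	fmt.Println(matchRegexp("^start", "start of line")) // true
	fmt.Println(matchRegexp("^start", []byte("start"))) // true
	fmt.Println(matchRegexp(`^\d+$`, 12345))            // true: non-string is stringified
}
```

The practical difference is for `[]byte` values, which previously went through `fmt.Sprint` and so were matched against their `%v` rendering rather than their contents.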
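The `samePointers` rewrite above changes the helper from a single bool to a `(same, ok)` pair so that `Same` and `NotSame` can fail with "Both arguments must be pointers" when either argument is not a pointer at all. A minimal standalone sketch of the new logic (names mirror the diff; everything around them is omitted):

```go
package main

import (
	"fmt"
	"reflect"
)

// samePointers mirrors the updated helper: same reports whether both values
// are pointers of the same type to the same address; ok reports whether both
// inputs are pointers in the first place.
func samePointers(first, second interface{}) (same bool, ok bool) {
	firstPtr, secondPtr := reflect.ValueOf(first), reflect.ValueOf(second)
	if firstPtr.Kind() != reflect.Ptr || secondPtr.Kind() != reflect.Ptr {
		return false, false // not both pointers
	}
	if reflect.TypeOf(first) != reflect.TypeOf(second) {
		return false, true // both pointers, but of different types
	}
	// Comparing the interfaces compares the pointer addresses.
	return first == second, true
}

func main() {
	x := 1
	same, ok := samePointers(&x, &x)
	fmt.Println(same, ok) // true true
	same, ok = samePointers(&x, 1)
	fmt.Println(same, ok) // false false: second argument is not a pointer
}
```

The extra `ok` value is what lets `NotSame` distinguish "different objects" (pass) from "not pointers at all" (fail), which the old single-bool version could not express.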

@@ -0,0 +1,25 @@
//go:build testify_yaml_custom && !testify_yaml_fail && !testify_yaml_default
// +build testify_yaml_custom,!testify_yaml_fail,!testify_yaml_default
// Package yaml is an implementation of YAML functions that calls a pluggable implementation.
//
// This implementation is selected with the testify_yaml_custom build tag.
//
// go test -tags testify_yaml_custom
//
// This implementation can be used at build time to replace the default implementation
// to avoid linking with [gopkg.in/yaml.v3].
//
// In your test package:
//
// import assertYaml "github.com/stretchr/testify/assert/yaml"
//
// func init() {
// assertYaml.Unmarshal = func (in []byte, out interface{}) error {
// // ...
// return nil
// }
// }
package yaml
var Unmarshal func(in []byte, out interface{}) error

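The `testify_yaml_custom` variant above exposes `Unmarshal` as a nil package-level function variable that the test binary must initialize (typically from an `init()` in the test package) before any YAML assertion runs. The indirection pattern can be sketched in isolation as follows; the toy backend here is illustrative only, not part of testify:

```go
package main

import (
	"errors"
	"fmt"
)

// Unmarshal mirrors the pluggable hook: a package-level function variable
// that stays nil until a caller installs an implementation.
var Unmarshal func(in []byte, out interface{}) error

func main() {
	if Unmarshal == nil {
		fmt.Println("no YAML backend installed yet")
	}
	// An init() in the consuming test package would normally do this wiring.
	Unmarshal = func(in []byte, out interface{}) error {
		if len(in) == 0 {
			return errors.New("empty input")
		}
		return nil // a real backend would decode into out here
	}
	fmt.Println(Unmarshal([]byte("a: 1"), nil)) // <nil>
}
```

Because the hook is a variable rather than a function, the build-tagged package itself never links against `gopkg.in/yaml.v3`; the consumer decides which decoder, if any, sits behind it.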

@@ -0,0 +1,37 @@
//go:build !testify_yaml_fail && !testify_yaml_custom
// +build !testify_yaml_fail,!testify_yaml_custom
// Package yaml is just an indirection to handle YAML deserialization.
//
// This package is just an indirection that allows the builder to override the
// indirection with an alternative implementation of this package that uses
// another implementation of YAML deserialization. This allows to not either not
// use YAML deserialization at all, or to use another implementation than
// [gopkg.in/yaml.v3] (for example for license compatibility reasons, see [PR #1120]).
//
// Alternative implementations are selected using build tags:
//
// - testify_yaml_fail: [Unmarshal] always fails with an error
// - testify_yaml_custom: [Unmarshal] is a variable. Caller must initialize it
// before calling any of [github.com/stretchr/testify/assert.YAMLEq] or
// [github.com/stretchr/testify/assert.YAMLEqf].
//
// Usage:
//
// go test -tags testify_yaml_fail
//
// You can check with "go list" which implementation is linked:
//
// go list -f '{{.Imports}}' github.com/stretchr/testify/assert/yaml
// go list -tags testify_yaml_fail -f '{{.Imports}}' github.com/stretchr/testify/assert/yaml
// go list -tags testify_yaml_custom -f '{{.Imports}}' github.com/stretchr/testify/assert/yaml
//
// [PR #1120]: https://github.com/stretchr/testify/pull/1120
package yaml
import goyaml "gopkg.in/yaml.v3"
// Unmarshal is just a wrapper of [gopkg.in/yaml.v3.Unmarshal].
func Unmarshal(in []byte, out interface{}) error {
return goyaml.Unmarshal(in, out)
}


@@ -0,0 +1,18 @@
//go:build testify_yaml_fail && !testify_yaml_custom && !testify_yaml_default
// +build testify_yaml_fail,!testify_yaml_custom,!testify_yaml_default
// Package yaml is an implementation of YAML functions that always fail.
//
// This implementation can be used at build time to replace the default implementation
// to avoid linking with [gopkg.in/yaml.v3]:
//
// go test -tags testify_yaml_fail
package yaml
import "errors"
var errNotImplemented = errors.New("YAML functions are not available (see https://pkg.go.dev/github.com/stretchr/testify/assert/yaml)")
func Unmarshal([]byte, interface{}) error {
return errNotImplemented
}

File diff suppressed because it is too large


@@ -1,4 +1,4 @@
{{.Comment}}
{{ replace .Comment "assert." "require."}}
func {{.DocInfo.Name}}(t TestingT, {{.Params}}) {
if h, ok := t.(tHelper); ok { h.Helper() }
if assert.{{.DocInfo.Name}}(t, {{.ForwardedParams}}) { return }


@@ -187,8 +187,8 @@ func (a *Assertions) EqualExportedValuesf(expected interface{}, actual interface
EqualExportedValuesf(a.t, expected, actual, msg, args...)
}
// EqualValues asserts that two objects are equal or convertible to the same types
// and equal.
// EqualValues asserts that two objects are equal or convertible to the larger
// type and equal.
//
// a.EqualValues(uint32(123), int32(123))
func (a *Assertions) EqualValues(expected interface{}, actual interface{}, msgAndArgs ...interface{}) {
@@ -198,8 +198,8 @@ func (a *Assertions) EqualValues(expected interface{}, actual interface{}, msgAn
EqualValues(a.t, expected, actual, msgAndArgs...)
}
// EqualValuesf asserts that two objects are equal or convertible to the same types
// and equal.
// EqualValuesf asserts that two objects are equal or convertible to the larger
// type and equal.
//
// a.EqualValuesf(uint32(123), int32(123), "error message %s", "formatted")
func (a *Assertions) EqualValuesf(expected interface{}, actual interface{}, msg string, args ...interface{}) {
@@ -337,7 +337,7 @@ func (a *Assertions) Eventually(condition func() bool, waitFor time.Duration, ti
// a.EventuallyWithT(func(c *assert.CollectT) {
// // add assertions as needed; any assertion failure will fail the current tick
// assert.True(c, externalValue, "expected 'externalValue' to be true")
// }, 1*time.Second, 10*time.Second, "external state has not changed to 'true'; still false")
// }, 10*time.Second, 1*time.Second, "external state has not changed to 'true'; still false")
func (a *Assertions) EventuallyWithT(condition func(collect *assert.CollectT), waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) {
if h, ok := a.t.(tHelper); ok {
h.Helper()
@@ -362,7 +362,7 @@ func (a *Assertions) EventuallyWithT(condition func(collect *assert.CollectT), w
// a.EventuallyWithTf(func(c *assert.CollectT, "error message %s", "formatted") {
// // add assertions as needed; any assertion failure will fail the current tick
// assert.True(c, externalValue, "expected 'externalValue' to be true")
// }, 1*time.Second, 10*time.Second, "external state has not changed to 'true'; still false")
// }, 10*time.Second, 1*time.Second, "external state has not changed to 'true'; still false")
func (a *Assertions) EventuallyWithTf(condition func(collect *assert.CollectT), waitFor time.Duration, tick time.Duration, msg string, args ...interface{}) {
if h, ok := a.t.(tHelper); ok {
h.Helper()
@@ -1129,6 +1129,40 @@ func (a *Assertions) NotContainsf(s interface{}, contains interface{}, msg strin
NotContainsf(a.t, s, contains, msg, args...)
}
// NotElementsMatch asserts that the specified listA(array, slice...) is NOT equal to specified
// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements,
// the number of appearances of each of them in both lists should not match.
// This is an inverse of ElementsMatch.
//
// a.NotElementsMatch([1, 1, 2, 3], [1, 1, 2, 3]) -> false
//
// a.NotElementsMatch([1, 1, 2, 3], [1, 2, 3]) -> true
//
// a.NotElementsMatch([1, 2, 3], [1, 2, 4]) -> true
func (a *Assertions) NotElementsMatch(listA interface{}, listB interface{}, msgAndArgs ...interface{}) {
if h, ok := a.t.(tHelper); ok {
h.Helper()
}
NotElementsMatch(a.t, listA, listB, msgAndArgs...)
}
// NotElementsMatchf asserts that the specified listA(array, slice...) is NOT equal to specified
// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements,
// the number of appearances of each of them in both lists should not match.
// This is an inverse of ElementsMatch.
//
// a.NotElementsMatchf([1, 1, 2, 3], [1, 1, 2, 3], "error message %s", "formatted") -> false
//
// a.NotElementsMatchf([1, 1, 2, 3], [1, 2, 3], "error message %s", "formatted") -> true
//
// a.NotElementsMatchf([1, 2, 3], [1, 2, 4], "error message %s", "formatted") -> true
func (a *Assertions) NotElementsMatchf(listA interface{}, listB interface{}, msg string, args ...interface{}) {
if h, ok := a.t.(tHelper); ok {
h.Helper()
}
NotElementsMatchf(a.t, listA, listB, msg, args...)
}
// NotEmpty asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either
// a slice or a channel with len == 0.
//
@@ -1201,7 +1235,25 @@ func (a *Assertions) NotEqualf(expected interface{}, actual interface{}, msg str
NotEqualf(a.t, expected, actual, msg, args...)
}
// NotErrorIs asserts that at none of the errors in err's chain matches target.
// NotErrorAs asserts that none of the errors in err's chain matches target,
// but if so, sets target to that error value.
func (a *Assertions) NotErrorAs(err error, target interface{}, msgAndArgs ...interface{}) {
if h, ok := a.t.(tHelper); ok {
h.Helper()
}
NotErrorAs(a.t, err, target, msgAndArgs...)
}
// NotErrorAsf asserts that none of the errors in err's chain matches target,
// but if so, sets target to that error value.
func (a *Assertions) NotErrorAsf(err error, target interface{}, msg string, args ...interface{}) {
if h, ok := a.t.(tHelper); ok {
h.Helper()
}
NotErrorAsf(a.t, err, target, msg, args...)
}
// NotErrorIs asserts that none of the errors in err's chain matches target.
// This is a wrapper for errors.Is.
func (a *Assertions) NotErrorIs(err error, target error, msgAndArgs ...interface{}) {
if h, ok := a.t.(tHelper); ok {
@@ -1210,7 +1262,7 @@ func (a *Assertions) NotErrorIs(err error, target error, msgAndArgs ...interface
NotErrorIs(a.t, err, target, msgAndArgs...)
}
// NotErrorIsf asserts that at none of the errors in err's chain matches target.
// NotErrorIsf asserts that none of the errors in err's chain matches target.
// This is a wrapper for errors.Is.
func (a *Assertions) NotErrorIsf(err error, target error, msg string, args ...interface{}) {
if h, ok := a.t.(tHelper); ok {


@@ -6,7 +6,7 @@ type TestingT interface {
FailNow()
}
type tHelper interface {
type tHelper = interface {
Helper()
}

vendor/modules.txt (vendored)

@@ -1,4 +1,4 @@
# github.com/NVIDIA/go-nvlib v0.6.1
# github.com/NVIDIA/go-nvlib v0.7.3
## explicit; go 1.20
github.com/NVIDIA/go-nvlib/pkg/nvlib/device
github.com/NVIDIA/go-nvlib/pkg/nvlib/info
@@ -6,7 +6,7 @@ github.com/NVIDIA/go-nvlib/pkg/nvpci
github.com/NVIDIA/go-nvlib/pkg/nvpci/bytes
github.com/NVIDIA/go-nvlib/pkg/nvpci/mmio
github.com/NVIDIA/go-nvlib/pkg/pciids
# github.com/NVIDIA/go-nvml v0.12.4-1
# github.com/NVIDIA/go-nvml v0.12.9-0
## explicit; go 1.20
github.com/NVIDIA/go-nvml/pkg/dl
github.com/NVIDIA/go-nvml/pkg/nvml
@@ -30,10 +30,13 @@ github.com/google/uuid
## explicit
# github.com/kr/pretty v0.3.1
## explicit; go 1.12
# github.com/moby/sys/reexec v0.1.0
## explicit; go 1.18
github.com/moby/sys/reexec
# github.com/moby/sys/symlink v0.3.0
## explicit; go 1.17
github.com/moby/sys/symlink
# github.com/opencontainers/runc v1.2.5
# github.com/opencontainers/runc v1.2.6
## explicit; go 1.22
github.com/opencontainers/runc/libcontainer/dmz
github.com/opencontainers/runc/libcontainer/system
@@ -52,6 +55,8 @@ github.com/pelletier/go-toml
# github.com/pmezard/go-difflib v1.0.0
## explicit
github.com/pmezard/go-difflib/difflib
# github.com/rogpeppe/go-internal v1.11.0
## explicit; go 1.19
# github.com/russross/blackfriday/v2 v2.1.0
## explicit
github.com/russross/blackfriday/v2
@@ -59,9 +64,10 @@ github.com/russross/blackfriday/v2
## explicit; go 1.13
github.com/sirupsen/logrus
github.com/sirupsen/logrus/hooks/test
# github.com/stretchr/testify v1.9.0
# github.com/stretchr/testify v1.10.0
## explicit; go 1.17
github.com/stretchr/testify/assert
github.com/stretchr/testify/assert/yaml
github.com/stretchr/testify/require
# github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635
## explicit


@@ -13,7 +13,7 @@
# limitations under the License.
LIB_NAME := nvidia-container-toolkit
LIB_VERSION := 1.17.5
LIB_VERSION := 1.17.8
LIB_TAG :=
# The package version is the combination of the library version and tag.