Andrey Smirnov 932a49e120
feat: update releases
gvisor: 20221212.0
Linux firmware: 20221214
NVIDIA: 525.60.13

DRBD disabled, as it doesn't build with Linux 6.1.

Signed-off-by: Andrey Smirnov <andrey.smirnov@talos-systems.com>
2022-12-23 14:34:09 +04:00

NVIDIA Container Toolkit extension

Usage

Enable the extension in the machine configuration before installing Talos:

machine:
  install:
    extensions:
      - image: ghcr.io/siderolabs/nvidia-container-toolkit:<VERSION>
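After the install or upgrade completes, one way to check that the extension made it onto the node is to list the installed extensions (a sketch, assuming talosctl is already configured to reach the node):

```shell
# List system extensions installed on the node; the nvidia-container-toolkit
# extension should appear in the output. Assumes talosctl is configured
# (endpoints/nodes set) for the target node.
talosctl get extensions
```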

The following NVIDIA modules need to be loaded, so add this to the Talos machine config:

machine:
  kernel:
    modules:
      - name: nvidia
      - name: nvidia_uvm
      - name: nvidia_drm
      - name: nvidia_modeset
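Once the config is applied and the node has rebooted, the modules can be checked from outside the node (a sketch, assuming talosctl access; module names match the list above):

```shell
# Confirm the NVIDIA kernel modules are loaded on the node.
# Expect entries such as nvidia, nvidia_uvm, nvidia_drm, nvidia_modeset.
talosctl read /proc/modules | grep nvidia
```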

nvidia-container-cli loads BPF programs and requires a relaxed KSPP setting for bpf_jit_harden, so the default Talos setting must be overridden:

machine:
  sysctls:
    net.core.bpf_jit_harden: 1
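To confirm the sysctl override took effect, the value can be read back from the node (a sketch, assuming talosctl access):

```shell
# Read the effective sysctl value on the node; it should print 1
# after the machine config above is applied.
talosctl read /proc/sys/net/core/bpf_jit_harden
```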

Warning! This relaxes a KSPP best-practices setting.

Testing

Apply the following manifest to create a runtime class that uses the extension:

---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia
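One way to apply the manifest above, assuming it is saved as nvidia-runtimeclass.yaml (the file name is arbitrary):

```shell
# Apply the RuntimeClass manifest and confirm it was created.
kubectl apply -f nvidia-runtimeclass.yaml
kubectl get runtimeclass nvidia
```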

Install the NVIDIA device plugin:

helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
helm repo update
helm install nvidia-device-plugin nvdp/nvidia-device-plugin --version=0.11.0 --set=runtimeClassName=nvidia
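Once the device plugin pod is running, a sketch of how to confirm it registered the GPU with the kubelet (the node name and GPU count depend on your cluster):

```shell
# Show per-node allocatable GPU count as reported by the device plugin;
# a GPU node should show a non-empty value in the GPU column.
kubectl get nodes -o custom-columns='NODE:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
```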

Apply the following manifest to run a CUDA test pod via the nvidia runtime:

---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-operator-test
spec:
  restartPolicy: OnFailure
  runtimeClassName: nvidia
  containers:
  - name: cuda-vector-add
    image: "nvidia/samples:vectoradd-cuda11.6.0"
    resources:
      limits:
        nvidia.com/gpu: 1

The status can be viewed by running:

$ kubectl get pods
NAME                READY   STATUS      RESTARTS   AGE
gpu-operator-test   0/1     Completed   0          13s
$ kubectl logs gpu-operator-test
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done