
Questions about solutions for containers losing access to GPUs (NVML Error: https://github.com/NVIDIA/nvidia-container-toolkit/issues/48) #421

@BCJuan

Description

Hi 🖖🏼

In the issue from last year, Failed to initialize NVML: Unknown Error (https://github.com/NVIDIA/nvidia-container-toolkit/issues/48), it is described how containers randomly lose access to their GPUs.

Several solutions have been proposed there, but some of them are not clear to me:

  1. It is said that GPU Operator 22.9.2 would solve the issue. However, we are using a newer GPU Operator version and the issue is not solved. Could it be that the fix only applies if we use the GPU Device Plugin bundled with the operator?

  2. Another proposed solution consists of switching the cgroup driver to cgroupfs in the Docker configuration. However, this is not what Kubernetes recommends for systems running systemd (see the Kubernetes container runtimes documentation). How can this conflict be resolved? Why would it be a solution if it isn't recommended, and what risks would we incur? (A sketch of the configuration change is included after this list.)

  3. It is stated that the problem can be solved with the nvidia-ctk utility plus a udev rule. If one is using the NVIDIA driver daemonset from the GPU Operator, it is stated that the nvidia-ctk command must be run with the location of the containerized driver. I understand, then, that every time the NVIDIA driver daemonset is restarted, the nvidia-ctk command has to be run again. Is this right? Could this be added to the NVIDIA driver daemonset so it doesn't have to be applied manually? (See the sketch after this list.)
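
Regarding point 2, the cgroup-driver change is usually described as setting Docker's native cgroup driver to cgroupfs in /etc/docker/daemon.json. A minimal sketch of that change, assuming an otherwise standard nvidia runtime entry from the container toolkit:

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
```

As far as I understand, the kubelet's cgroup driver has to match the container runtime's, so this change either forces the kubelet onto cgroupfs as well or leaves two cgroup managers on a systemd host, which is exactly the situation the Kubernetes documentation advises against.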
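
As for point 3, the workaround from the linked issue uses nvidia-ctk to recreate the /dev/char symlinks for the NVIDIA device nodes, plus a udev rule so they are recreated when the driver is reloaded. A sketch, assuming the GPU Operator driver container installs the driver under /run/nvidia/driver (please double-check the flag and path against your deployment):

```sh
# Recreate the /dev/char symlinks once, on the host:
sudo nvidia-ctk system create-dev-char-symlinks --create-all

# With the driver provided by the GPU Operator driver daemonset
# (the --driver-root flag and the path are assumptions here):
sudo nvidia-ctk system create-dev-char-symlinks --create-all --driver-root=/run/nvidia/driver
```

And the udev rule (placed, for example, in /etc/udev/rules.d/71-nvidia-dev-char.rules):

```
ACTION=="add", DEVPATH=="/bus/pci/drivers/nvidia", RUN+="/usr/bin/nvidia-ctk system create-dev-char-symlinks --create-all"
```

If that is accurate, the udev rule covers driver reloads on the host, but a restart of the driver daemonset that replaces the driver root would still seem to require re-running the command, which is the crux of my question.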

Thank you very much.
