Nvidia driver update might prevent container startup due to outdated /etc/cdi/nvidia.yaml bind mount entries #23

Open
lauromoura opened this issue May 29, 2024 · 1 comment

@lauromoura

After upgrading the nvidia driver on the host system, the container failed to restart, leading to wkdev-enter getting stuck. When starting manually with podman start wkdev, podman failed with the following message:

Error: unable to start container "ecdb2132becbc09015d0bb7d14299693a75a462be160a2a4fc7610d60e0d3f2a": crun: error stat'ing file /lib/x86_64-linux-gnu/libEGL_nvidia.so.535.161.07: No such file or directory: OCI runtime attempted to invoke a command that was not found

On the host system, that file had been replaced by libEGL_nvidia.so.535.171.04 after the driver update.

As @TingPing pointed out on Matrix, this file was listed in /etc/cdi/nvidia.yaml as a bind mount (see below). After removing the file and re-running wkdev-setup-nvidia-gpu-for-container, the regenerated entry pointed to the newer library (though I couldn't verify whether that alone was enough, as the container had been rebuilt after updating the base image). In any case, future driver updates might trigger similar issues.

- containerPath: /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.535.171.07
  hostPath: /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.535.171.07
  options:
  - ro
  - nosuid
  - nodev
  - bind
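
For reference, the stale spec can also be refreshed by hand with the NVIDIA Container Toolkit; the commands below are an assumption based on standard nvidia-ctk usage, not necessarily what the wkdev scripts invoke:

    # Regenerate the CDI spec so its bind-mount entries match the libraries
    # currently installed by the host driver (may need sudo to write /etc/cdi).
    sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

    # Sanity-check which devices the regenerated spec exposes.
    nvidia-ctk cdi list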
@TingPing (Member) commented May 30, 2024

From Niko:

That won't work in the case of NVIDIA, since we don't directly bind-mount anything; instead we use podman's CDI support to pass on the nvidia.yaml config file generated by the NVIDIA Container Toolkit.

A potential fix is to dump the NVIDIA library paths using the nvidia-ctk tool, cache that output, and compare it against the cache every time you enter a container.

If it has changed, abort and tell the user to run wkdev-setup-nvidia-gpu and to use wkdev-update to recreate the containers. There's no way around recreating containers if the CDI definitions change.
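
A minimal sketch of that check, assuming the baseline spec is cached under ~/.cache/wkdev (a hypothetical location) and that a fresh spec is generated into a temporary file for comparison; the actual wkdev scripts may structure this differently:

    #!/usr/bin/env bash
    # Regenerate the CDI spec to a temporary file and compare it with a cached
    # copy; if they differ, the host driver has changed since the container was
    # created, so abort and ask the user to recreate it.
    set -euo pipefail

    cache_dir="${HOME}/.cache/wkdev"          # hypothetical cache location
    cached_spec="${cache_dir}/nvidia-cdi.yaml"
    current_spec="$(mktemp)"
    trap 'rm -f "${current_spec}"' EXIT

    mkdir -p "${cache_dir}"
    nvidia-ctk cdi generate --output="${current_spec}"

    if [[ ! -f "${cached_spec}" ]]; then
        cp "${current_spec}" "${cached_spec}"   # first run: remember the baseline
    elif ! cmp -s "${current_spec}" "${cached_spec}"; then
        echo "NVIDIA CDI definitions changed (likely a driver update)." >&2
        echo "Run wkdev-setup-nvidia-gpu and wkdev-update to recreate the container." >&2
        exit 1
    fi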
