The template below is mostly useful for bug reports and support questions. Feel free to remove anything which doesn't apply to you and add more information where it makes sense.
Also, before reporting a new issue, please make sure that:
1. Issue or feature description

When I use Docker, `nvidia-smi` shows a different CUDA version than the one the current driver supports. It shows 11.7 on the host machine but 11.8 inside Docker, even though the driver version is the same.
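The version comparison above can be checked mechanically by parsing the `nvidia-smi` header. A minimal sketch, assuming the header contains a `CUDA Version:` field; the header line below is a captured example and the driver numbers are illustrative, not taken from this report:

```shell
# Illustrative nvidia-smi header line as captured on the host
# (driver version numbers here are placeholders)
host_header='| NVIDIA-SMI 515.48.07    Driver Version: 515.48.07    CUDA Version: 11.7     |'

# Extract just the CUDA version the driver header reports
cuda_version=$(printf '%s\n' "$host_header" | sed -n 's/.*CUDA Version: *\([0-9.]*\).*/\1/p')
echo "$cuda_version"   # prints 11.7
```

Running the same extraction against the header printed inside the container makes the 11.7 vs. 11.8 discrepancy easy to diff in a script.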
2. Steps to reproduce the issue
3. Information to attach (optional if deemed irrelevant)
Some nvidia-container information: nvidia-container-cli -k -d /dev/tty info
Kernel version from uname -a
Any relevant kernel output lines from dmesg
Driver information from nvidia-smi -a
Docker version from docker version
NVIDIA packages version from dpkg -l '*nvidia*' or rpm -qa '*nvidia*'
NVIDIA container library version from nvidia-container-cli -V
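The checklist above can be gathered in one pass before filing the report. A hypothetical helper script (the `run` wrapper and output filename are my own, not part of the template) that skips any tool not installed on the machine:

```shell
#!/bin/sh
# Collect the diagnostics listed in the issue template into one file.
# Each command is guarded so a missing tool is skipped rather than aborting.
out=nvidia-bug-report-info.txt
: > "$out"

run() {
  if command -v "$1" >/dev/null 2>&1; then
    # Record the command that produced each section, then its output
    { echo "==== $* ===="; "$@"; } >> "$out" 2>&1
  fi
}

run nvidia-container-cli -k -d /dev/tty info
run uname -a
run dmesg
run nvidia-smi -a
run docker version
run dpkg -l '*nvidia*'
run rpm -qa '*nvidia*'
run nvidia-container-cli -V
```

Attaching the resulting file covers every item in the list in a single upload.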
On the host machine: `nvidia-smi` reports CUDA Version 11.7.

Inside Docker: `nvidia-smi` reports CUDA Version 11.8.