This is less a bug report and more of an FYI for anyone else trying to use this container with PyTorch (or any Torch variant). I was able to get nvidia-smi to work inside my matroska container, but the CUDA version showed as N/A and torch.cuda.is_available() returned False. Switching the base image from ubuntu to the CUDA base matching my AMI (11.4 in this case) fixed it.
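For anyone hitting the same thing, a minimal sketch of the base-image change (the exact tag is an assumption; pick the nvidia/cuda tag that matches your host driver / AMI CUDA version):

```dockerfile
# Before: a plain Ubuntu base, which ships no CUDA user-space libraries,
# so nvidia-smi may work (driver comes from the host) but torch sees no CUDA.
#   FROM ubuntu:20.04

# After: the matching CUDA base image (11.4 here, per the AMI in this issue).
FROM nvidia/cuda:11.4.3-base-ubuntu20.04

# ... rest of the Dockerfile unchanged ...
```

A quick way to verify inside the rebuilt container is `python3 -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"`, which should now print the CUDA version and True.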
The typical expectation is to run a container that has the CUDA libraries inside the DinD container. Are you using Torch directly in the DinD container itself?