The current Frank lab GPU servers use CUDA 11.6, but the current environment_position.yml specifies the following dependencies:
pytorch<1.12.0
torchvision
torchaudio
cudatoolkit=11.3
This resolves to pytorch 1.7.1.post2, which does not recognize any GPUs on the lab server due to CUDA incompatibility (probably because cudatoolkit is pinned to 11.3).

Bug behavior:
torch.cuda.is_available() returns False
torch.cuda.current_device() raises the following error: AssertionError: Torch not compiled with CUDA enabled
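For reference, a minimal diagnostic along these lines (nothing here is specific to the lab servers; it only assumes a standard PyTorch install) can confirm whether the installed build was compiled with CUDA support and which toolkit version it targets:

```python
import torch

# Report the installed PyTorch build and the CUDA toolkit it was compiled against.
# torch.version.cuda is None for CPU-only builds, which matches the
# "Torch not compiled with CUDA enabled" error above.
print("torch version:", torch.__version__)
print("compiled CUDA version:", torch.version.cuda)

# True only if the build has CUDA support and a compatible driver/GPU is visible.
if torch.cuda.is_available():
    print("current device:", torch.cuda.current_device())
    print("device name:", torch.cuda.get_device_name(0))
else:
    print("No usable GPU: CPU-only build or driver/toolkit mismatch.")
```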
I have not started using the DLC pipeline, so I don't know the impact of this issue. Other people seem to be using the GPUs on the lab server without any issues currently, but in the future it may be necessary to update environment_position.yml or add notes about installing the correct pytorch version.