Most ML libraries/frameworks use CUDA, which is an Nvidia technology, so to my knowledge AMD GPUs are generally unsupported. There may be a workaround somewhere, but I'm not aware of one. The standard for machine learning right now is Nvidia. Sorry to disappoint; however, Google Colaboratory lets you run the code with GPU acceleration.
I know this is pretty old, but I figured I'd still answer in case someone finds their way here.
To train AI models with AMD GPUs you first need ROCm: https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.5/page/Introduction_to_ROCm_Installation_Guide_for_Linux.html
As the URL suggests, you need a supported Linux distribution and a PyTorch build that matches your ROCm version.
ROCm supports the CUDA nomenclature, so no changes to the code are needed; PyTorch/ROCm simply "translates" it.
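For illustration, here is a minimal sketch (assuming a ROCm build of PyTorch is installed and the GPU is supported) showing that the usual `torch.cuda` calls work as-is; the device name in the comment is just an example:

```python
import torch

# On a ROCm build of PyTorch, torch.version.hip is set and torch.version.cuda is None,
# but all of the familiar torch.cuda APIs still work unchanged.
print(torch.version.hip)              # ROCm/HIP version string, None on CUDA builds
print(torch.cuda.is_available())      # True if the AMD GPU is visible to ROCm
print(torch.cuda.get_device_name(0))  # e.g. "AMD Radeon RX 6800 XT"

device = torch.device("cuda")         # "cuda" maps to the AMD GPU, no code changes needed
x = torch.randn(4, 4, device=device)  # tensor allocated in GPU memory
```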
Once all of this is installed properly, you can start training on your AMD GPU.
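As a sanity check, a toy training loop (hypothetical model and data, just for illustration) looks exactly like it would on an Nvidia card:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy model and random data, only to show that the loop runs on the AMD GPU
model = nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(32, 10, device=device)
y = torch.randn(32, 1, device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```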
With version 5.6, Windows support is supposedly coming.
Hey, I own an AMD GPU and I'm looking forward to making models, but I can't because it needs Nvidia drivers. Are there any workarounds?