
How to use AMD GPU? #5

Open
crackedpotato007 opened this issue Sep 4, 2021 · 2 comments

Comments

@crackedpotato007

Hey, I own an AMD GPU and I look forward to making models, but I can't since it needs Nvidia drivers. Are there any workarounds?

@capps1994

Most ML libraries/frameworks use CUDA, which is an Nvidia technology, so AMD GPUs are generally unsupported to my knowledge. I'm sure there could be a workaround somewhere, but I'm not certain. The standard for machine learning right now is Nvidia. Sorry to disappoint; however, Google Colaboratory lets you run the code with GPU acceleration.
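A minimal sketch of verifying that Colab's GPU runtime is actually attached (after switching it via Runtime > Change runtime type > GPU), assuming a PyTorch-based workflow; the device name shown is just an example:

```python
# Quick check inside a Colab notebook that a GPU runtime is attached.
import torch

print(torch.cuda.is_available())           # True when a GPU runtime is active
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # e.g. "Tesla T4" on a free Colab GPU
```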

@Hurri08

Hurri08 commented May 6, 2023

I know this is pretty old, but I figured I'd still answer in case someone finds their way here.
To train AI models with AMD GPUs you first need ROCm: https://docs.amd.com/bundle/ROCm-Installation-Guide-v5.5/page/Introduction_to_ROCm_Installation_Guide_for_Linux.html
As the URL suggests, you need a supported Linux distribution and a PyTorch version built for ROCm.
ROCm supports the CUDA nomenclature and therefore doesn't need any changes in the code, as PyTorch/ROCm just "translates" it.
After all of this is installed properly, you can start training on your AMD GPU.

With version 5.6, Windows support is supposedly coming.
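A minimal sketch of confirming that a ROCm build of PyTorch can see the AMD GPU; the pip index URL and ROCm version tag in the comment are illustrative and may differ for your setup:

```python
# Check that a ROCm build of PyTorch sees the AMD GPU.
# Assumes PyTorch was installed from a ROCm wheel index, e.g. (version tag may vary):
#   pip install torch --index-url https://download.pytorch.org/whl/rocm5.6
import torch

# ROCm builds reuse the torch.cuda namespace, so existing CUDA code runs unchanged.
print(torch.version.hip)            # set to the HIP version on ROCm builds, None otherwise
print(torch.cuda.is_available())    # True if the AMD GPU is visible

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(1024, 1024, device=device)
y = x @ x                           # matrix multiply runs on the AMD GPU via ROCm/HIP
print(y.device)
```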
