feat: AMD GPU support #3985

Closed
cyberpython opened this issue Jun 24, 2023 · 2 comments

Comments

cyberpython commented Jun 24, 2023

Feature request

It would be nice to have the option to use AMD GPUs that support ROCm.

PyTorch seems to support AMD GPUs via ROCm on Linux. The following was tested on Ubuntu 22.04.2 LTS with an AMD Ryzen 5825U (Radeon Vega Barcelo, 8-core, shared memory), ROCm 5.5.0, and PyTorch for ROCm 5.4.2 installed:

>>> import torch
>>> torch.cuda.is_available()
True
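Note that `torch.cuda.is_available()` returns `True` on both CUDA and ROCm builds of PyTorch, because ROCm is exposed through the same `torch.cuda` API surface. One way to tell the builds apart is `torch.version.hip`, which is a version string on ROCm wheels and `None` on CUDA wheels. A minimal sketch (the `is_rocm_build` helper and the stand-in objects are hypothetical; real usage would pass `torch.version` directly):

```python
from types import SimpleNamespace


def is_rocm_build(version_module) -> bool:
    """Return True when a torch.version-like module reports a HIP/ROCm build.

    ROCm wheels of PyTorch set ``torch.version.hip`` to a version string
    (e.g. "5.4.22803"); CUDA wheels set it to None.
    """
    return getattr(version_module, "hip", None) is not None


# Stand-in objects mimicking torch.version for each build flavour,
# so the sketch runs without a GPU or a torch install.
rocm_build = SimpleNamespace(hip="5.4.22803", cuda=None)
cuda_build = SimpleNamespace(hip=None, cuda="11.8")

print(is_rocm_build(rocm_build))  # True
print(is_rocm_build(cuda_build))  # False
```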

A cursory look seems to indicate that BentoML currently has only CPU and NVIDIA GPU `Resource` implementations:

class NvidiaGpuResource(Resource[t.List[int]], resource_id="nvidia.com/gpu"):

Motivation

This feature would enable running models with OpenLLM on AMD GPUs with ROCm support.

Other

No response

@aarnphm
Member

aarnphm commented Jun 24, 2023

Let's open an issue over there? I think the default resources and strategies for BentoML will need some more internal discussion from the team, as will how we could support AMD GPUs on K8s.

@cyberpython
Author

Issue opened under bentoml/OpenLLM, closing this one.
