[REQUEST] M1 Max support #1580

Closed
alecl opened this issue Nov 20, 2021 · 9 comments · Fixed by #3907
Labels
enhancement New feature or request

Comments

@alecl

alecl commented Nov 20, 2021

Is your feature request related to a problem? Please describe.
I'm looking to do some fine-tuning of GPT-J on a MacBook Pro M1 Max w/ 64GB RAM.

Describe the solution you'd like
Given the chip's substantial CPU, GPU, and memory, it seems a reasonable target, and it would skip the hassle and cost of budgeting for and managing cloud-based training.

@alecl added the enhancement (New feature or request) label Nov 20, 2021
@jeffra
Contributor

jeffra commented Nov 21, 2021

We’ve definitely been watching whether and when PyTorch will support M1; it sounds like it’s planned, though.

pytorch/pytorch#47702

Specifically see this comment:
pytorch/pytorch#47702 (comment)

In terms of DeepSpeed support for M1, I suspect (depending on the final design on the torch side) that many of our features will work well. However, we’ll have to reassess once torch releases its updated plan and final support here.

@alecl
Author

alecl commented Jun 11, 2022

Looks like PyTorch has released support.
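
For reference, the new backend is exposed as the `"mps"` device; a minimal availability check (a sketch, assuming PyTorch 1.12 or later):

```python
import torch

# MPS (Metal Performance Shaders) is PyTorch's Apple-silicon GPU backend.
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")  # fall back when MPS isn't built or available

x = torch.ones(4, device=device)
print(x.device)  # "mps" on an Apple-silicon machine with a supported build
```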

@amoghmishra-sl

Any updates?

@stan-kirdey

+1 on the request

@ianbstewart

+1 for this; it would be very helpful to have support from DeepSpeed in addition to torch.

@dseddah

dseddah commented Nov 6, 2022

+1 on the request; it would be very cool to have DeepSpeed working on Apple M1/M2 machines now that PyTorch supports them.

@tjruwase
Contributor

tjruwase commented Nov 8, 2022

It is exciting to see PyTorch support for M1/M2. We are very open to extending DeepSpeed support to more and more accelerators, but we currently lack the bandwidth and hardware to explore this. However, we would gladly support any PRs for this, similar to our ongoing support for the following Intel accelerator PR: #2221

@phnessu4

Any update? It's been a long time.

@tjruwase
Contributor

@phnessu4, unfortunately no update here, as we have not had the bandwidth or hardware access to drive this line. We would gladly accept any PR in this direction. Our accelerator abstraction is complete, and Intel XPU is now fully supported.
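
For anyone attempting an M1 port: DeepSpeed routes device access through the accelerator abstraction rather than hard-coding CUDA, so a new backend only has to implement that interface. A minimal sketch of the consumer side (an "mps" backend here would be new work, not an existing DeepSpeed accelerator):

```python
import torch
from deepspeed.accelerator import get_accelerator

# DeepSpeed code asks this abstraction for devices instead of calling CUDA
# directly, so an M1/MPS port would plug in as another implementation of
# the same interface.
accel = get_accelerator()

print(accel.device_name())   # backend name, e.g. "cuda" or "xpu"
print(accel.is_available())  # whether the backend's devices are usable

# Allocate a tensor on the first device of whatever backend is active.
x = torch.zeros(4, device=accel.device_name(0))  # e.g. "cuda:0"
```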
