[REQUEST] M1 Max support #1580
Comments
We’ve definitely been watching to see if/when PyTorch will support M1; it sounds like it’s planned. Specifically, see this comment: In terms of DeepSpeed support for M1, I suspect (depending on the final design on the torch side) many of our features will work well. However, we’ll have to reassess once torch releases their updated plan and final support here.
Looks like PyTorch released support.
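For context, PyTorch added an MPS (Metal Performance Shaders) backend for Apple silicon in version 1.12. A minimal sketch of how a script can detect it and fall back to CPU (the `getattr` guard is just a defensive check for older PyTorch builds that lack the `mps` backend attribute):

```python
import torch

# Pick the MPS (Apple silicon GPU) device when this PyTorch build has MPS
# support and a device is actually available; otherwise fall back to CPU.
# torch.backends.mps exists in PyTorch >= 1.12.
mps_backend = getattr(torch.backends, "mps", None)
if mps_backend is not None and mps_backend.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Allocate a tensor on whichever device was selected.
x = torch.ones(2, 2, device=device)
print(x.device.type)
```

Note that DeepSpeed would still need its own accelerator plumbing on top of this; a working torch device alone does not make DeepSpeed's features available.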
Any updates?
+1 on the request
+1 for this; it would be very helpful to have support from DeepSpeed in addition to torch.
+1 on the update; it would be very cool to have DeepSpeed working on Apple M1/M2 machines now that PyTorch supports them.
It is exciting to see PyTorch support for M1/M2. We are very open to extending DeepSpeed support to more and more accelerators, but we currently lack the bandwidth and hardware to explore this. However, we would gladly support any PRs for this, similar to our ongoing support for the following Intel accelerator PR: #2221
Any update? It's been a long time.
@phnessu4, unfortunately there is no update here, as we have not had the bandwidth or hardware access to drive this line of work. We would gladly accept any PR in this direction. Our accelerator abstraction is complete, and Intel XPU is now fully supported.
Is your feature request related to a problem? Please describe.
I'm looking to do some fine-tuning of GPT-J on a MacBook Pro M1 Max with 64GB RAM.
Describe the solution you'd like
Given the significant CPU, GPU, and memory on this chip, it seems a reasonable target, and it would avoid the hassle, cost management, and budget requests involved in cloud-based training.