🚀 Feature
Hi,
I was wondering if we could evaluate PyTorch's performance on Apple's new M1 chip. I'm also wondering how we could optimize PyTorch to take advantage of the M1's GPU and Neural Engine.
I know that supporting acceleration frameworks outside of CUDA has been discussed in previous issues like #488, but I think this is worth a revisit. In Apple's big reveal today, we learned that Apple is on a roll, with 50% of its product usage growth this year coming from new users. Given that Apple is moving to these in-house designed chips, enhanced support for them could make deep learning on personal laptops a better experience for many researchers and engineers. I think this aligns well with PyTorch's theme of facilitating deep learning from research to production.
I'm not quite sure how this should proceed, but these steps could be important:
- A study of the M1 chip
- Evaluation of PyTorch's performance on M1 chips (see the benchmark sketch after this list)
- Assessment of the M1's compatibility with acceleration frameworks supported by PyTorch (the best bet may be CUDA transpilation, from what I see in OpenCL Support #488)
- Investigation of enhancements to PyTorch that can take advantage of the M1's ML features
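
As a starting point for the performance evaluation, here is a minimal sketch of the kind of micro-benchmark I have in mind. It only times float32 matrix multiplication on the CPU, since PyTorch has no backend for the M1's GPU or Neural Engine today; the matrix size and iteration count are arbitrary placeholders, and a real study would cover more ops and dtypes.

```python
import time
import torch

def benchmark_matmul(size=2048, iters=20, device="cpu"):
    """Time float32 matrix multiplication on the given device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    # Warm-up runs so one-time allocation costs aren't measured.
    for _ in range(3):
        torch.mm(a, b)
    start = time.perf_counter()
    for _ in range(iters):
        torch.mm(a, b)
    elapsed = time.perf_counter() - start
    # 2 * n^3 floating-point ops per square matmul.
    gflops = 2 * size**3 * iters / elapsed / 1e9
    print(f"{device}: {elapsed / iters * 1e3:.1f} ms/iter, ~{gflops:.1f} GFLOP/s")

if __name__ == "__main__":
    benchmark_matmul()
```

Running something like this on an M1 (both under Rosetta 2 and with a native arm64 build, once one exists) versus a comparable Intel Mac would give a first data point on where PyTorch stands on this hardware.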