GPU acceleration for Apple's M1 chip? #47702

@chris-dare

Description

🚀 Feature

Hi,

I was wondering if we could evaluate PyTorch's performance on Apple's new M1 chip. I'm also wondering how we could optimize PyTorch's capabilities on the M1's GPU and Neural Engine.

I know the issue of supporting acceleration frameworks outside of CUDA has been discussed in previous issues like #488, but I think this is worth a revisit. In Apple's big reveal today, we learned that Apple is on a roll, with half of its product usage growth this year coming from new users. Given that Apple is moving to these in-house designed chips, enhanced support for them could make deep learning on personal laptops a better experience for many researchers and engineers. I think this really aligns with PyTorch's theme of facilitating deep learning from research to production.

I'm not quite sure how this should go down, but these steps could be important:

  1. A study of M1 chips
  2. An evaluation of PyTorch's performance on M1 chips
  3. An assessment of the M1's compatibility with acceleration frameworks that PyTorch supports (the best bet would be CUDA transpilation, based on what I see in OpenCL Support #488)
  4. Investigating enhancements to PyTorch that can take advantage of the M1's ML features
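For step 2, a minimal, framework-agnostic sketch of what such an evaluation harness could look like (stdlib only, so it runs with or without PyTorch installed). The `bench` and `is_apple_silicon` helpers and the pure-Python matrix multiply are illustrative stand-ins, not part of any PyTorch API; a real evaluation would time actual PyTorch ops (e.g. `torch.mm`) under the same harness on M1 vs. x86 machines.

```python
import platform
import timeit

def is_apple_silicon():
    """Heuristic check: CPython on an M1 Mac reports 'Darwin'/'arm64'."""
    return platform.system() == "Darwin" and platform.machine() == "arm64"

def bench(fn, repeats=5):
    """Return the best wall-clock time in seconds over several runs."""
    return min(timeit.repeat(fn, number=1, repeat=repeats))

def matmul(a, b):
    """Naive matrix multiply, standing in for a real PyTorch workload."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

if __name__ == "__main__":
    n = 64
    a = [[float(i + j) for j in range(n)] for i in range(n)]
    b = [[float(i - j) for j in range(n)] for i in range(n)]
    t = bench(lambda: matmul(a, b))
    print(f"apple_silicon={is_apple_silicon()} matmul({n}x{n}) best={t:.4f}s")
```

Recording the best-of-N time rather than the mean reduces noise from OS scheduling, which matters when comparing chips rather than tuning a single machine.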

cc @VitalyFedyunin @ngimel

Metadata


    Labels

    module: performance — Issues related to performance, either of kernel code or framework glue
    triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
