
Fix: Disable torch.autocast in RotaryEmbedding of Gemma and LLaMa for MPS device #29439

Merged
merged 3 commits into huggingface:main on Mar 6, 2024

Conversation

@currybab (Contributor) commented Mar 4, 2024

What does this PR do?

Fixes #29431

The issue on MPS devices was caused by the merge of #29285 in version 4.38.2.
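
For context, a minimal repro of the failure (my own sketch, not taken from the PR): on an Apple Silicon machine, entering a `torch.autocast` context with `device_type="mps"` raises the error reported in #29431 even with `enabled=False`, because PyTorch (as of the versions current at the time) has no autocast backend registered for MPS.

```python
import torch

# Sketch of the failure mode on Apple Silicon (assumes MPS is available).
x = torch.ones(2, 2, device="mps")

# This is the pattern #29285 introduced in the rotary embeddings:
# autocast is entered with the input tensor's device type.
try:
    with torch.autocast(device_type=x.device.type, enabled=False):
        y = x.float() @ x.float()
except RuntimeError as err:
    # RuntimeError: User specified an unsupported autocast device_type 'mps'
    print(err)
```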

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@ArthurZucker (Collaborator) left a comment


That is indeed a problem. I was not aware that autocast is not available for MPS.
We probably need to do a patch release for this!
I think we can use the cpu device even if the tensors are not on the CPU, no?
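
For reference, the merged fix follows this suggestion: derive the autocast device type from the input, but fall back to "cpu" when the input lives on MPS. A condensed sketch of the rotary-embedding forward after the change (adapted; the exact code in modeling_llama.py / modeling_gemma.py may differ slightly):

```python
import torch

class RotaryEmbedding(torch.nn.Module):
    def __init__(self, dim: int, base: float = 10000.0):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)

    @torch.no_grad()
    def forward(self, x, position_ids):
        # x: [bs, num_attention_heads, seq_len, head_size]
        inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
        position_ids_expanded = position_ids[:, None, :].float()
        # autocast has no MPS backend, so fall back to "cpu" there. The
        # device_type argument only selects which autocast state to toggle;
        # it does not move the tensors, so the math still runs on MPS.
        device_type = x.device.type
        device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
        # Force the sin/cos computation in float32 regardless of autocast.
        with torch.autocast(device_type=device_type, enabled=False):
            freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
            emb = torch.cat((freqs, freqs), dim=-1)
            cos, sin = emb.cos(), emb.sin()
        return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
```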

src/transformers/models/gemma/modeling_gemma.py — review thread (outdated, resolved)
currybab and others added 2 commits March 6, 2024 12:39
@ArthurZucker (Collaborator) left a comment


LGTM, thank you for the prompt fix!

@ArthurZucker merged commit d45f47a into huggingface:main Mar 6, 2024
18 checks passed
@ArthurZucker (Collaborator) commented:

FYI @fxmarty and @gante!

@ArthurZucker (Collaborator) commented:

I have not tested this with compile, but the dtype should be alright to check / we can always check self.dtype so the check is not input-dependent.
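
A hypothetical illustration of that point (not code from this PR): deriving the decision from module state such as self.dtype or a registered buffer, rather than from the incoming tensor, keeps the branch constant across calls, which is friendlier to torch.compile than re-deriving it from x on every forward.

```python
import torch

# Hypothetical sketch of an input-independent check, assuming a module
# with an inv_freq buffer like the rotary embedding above.
class RotarySketch(torch.nn.Module):
    def __init__(self, dim: int = 64, base: float = 10000.0):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)

    def autocast_device_type(self) -> str:
        # Module state, not forward input: constant for a given model placement.
        device_type = self.inv_freq.device.type
        return device_type if device_type != "mps" else "cpu"
```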

damithsenanayake pushed a commit to damithsenanayake/transformers that referenced this pull request Mar 7, 2024
Fix: Disable torch.autocast in RotaryEmbedding of Gemma and LLaMa for MPS device (huggingface#29439)

* Fix: Disable torch.autocast in RotaryEmbedding of Gemma and LLaMa for MPS devices

* Update src/transformers/models/gemma/modeling_gemma.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update llama and gemma rope to use cpu on mps device

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Development

Successfully merging this pull request may close these issues.

RuntimeError: User specified an unsupported autocast device_type 'mps'