
Add Flash Attention support to Marian model #36429

Closed

Conversation

mrunsung7

What does this PR do?

This PR adds Flash Attention support to the Marian model, enabling faster and more memory-efficient attention computation. By avoiding materialization of the full attention matrix, Flash Attention reduces memory usage and improves training and inference speed, especially for long sequences.

Fixes #36169
Partially addresses performance optimization discussions for the Marian model.
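
As a usage sketch (not code taken from this PR): in transformers, Flash Attention is requested at load time via attn_implementation="flash_attention_2", so once Marian supports it the feature would be enabled the same way. The checkpoint name, input text, and device below are illustrative assumptions; the flash-attn kernels require a CUDA GPU and fp16/bf16 weights.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-en-de"  # example Marian checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,              # flash-attn needs fp16 or bf16
    attn_implementation="flash_attention_2",
).to("cuda")

inputs = tokenizer(
    "Flash Attention reduces memory usage for long inputs.",
    return_tensors="pt",
).to("cuda")

with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```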

Motivation and Context

Marian models benefit from efficient attention mechanisms because they are commonly used for translation tasks involving long sequences. Flash Attention makes them faster and more scalable without sacrificing accuracy.

Changes Introduced

  • Integrated Flash Attention into the MarianAttention class in modeling_marian.py (a rough sketch of the resulting attention dispatch follows this list).
  • Refactored the attention mechanism to use optimized operations while preserving backward compatibility.
  • Removed redundant operations and improved memory efficiency.
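
The following is a rough sketch, not the PR's actual diff, of the kind of call the refactored MarianAttention would dispatch to, assuming the integration relies on the flash-attn package's fused kernel (as other transformers models do): instead of materializing the full seq_len × seq_len attention matrix, the fused kernel computes the attention output directly. Shapes and tensor names are illustrative assumptions.

```python
import torch
from flash_attn import flash_attn_func  # requires the flash-attn package and a CUDA GPU

batch, seq_len, num_heads, head_dim = 2, 1024, 8, 64

# flash_attn_func expects (batch, seq_len, num_heads, head_dim) tensors in fp16/bf16
q = torch.randn(batch, seq_len, num_heads, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Encoder self-attention is bidirectional, so causal=False;
# the decoder self-attention path would pass causal=True instead.
out = flash_attn_func(q, k, v, dropout_p=0.0, causal=False)

print(out.shape)  # (batch, seq_len, num_heads, head_dim)
```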

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case. Discussion here
  • Did you make sure to update the documentation with your changes? (Documentation guidelines)
  • Did you write any new necessary tests?

Who can review?

@ArthurZucker
@Rocketknight1

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

mrunsung7 closed this on Feb 26, 2025.
Linked issue: add Flash Attention Support for Helsinki-NLP/opus models