Add Flash Attention support to Marian model #36429
Closed · +16 −115
What does this PR do?
This PR adds Flash Attention support to the Marian model, enabling faster and more memory-efficient attention computation. Flash Attention reduces memory usage and improves training and inference speed, with the largest gains on long sequences.
Fixes #36169
Partially addresses performance optimization discussions for the Marian model.
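For context on how users would opt in: assuming this PR follows the library's standard attn_implementation pattern for Flash Attention 2, usage would look roughly like the sketch below (the checkpoint name is just an example; a supported CUDA GPU, fp16/bf16 weights, and the flash-attn package are required).

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

# Example Marian checkpoint; any Marian translation checkpoint works the same way.
name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(name)

# Flash Attention 2 requires half-precision weights and a supported CUDA device.
model = MarianMTModel.from_pretrained(
    name,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
).to("cuda")

inputs = tokenizer("Flash Attention makes long inputs cheaper.", return_tensors="pt").to("cuda")
generated = model.generate(**inputs)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```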
Motivation and Context
Marian models are commonly used for translation tasks with long sequences, so they benefit directly from efficient attention mechanisms. Flash Attention makes them faster and more scalable without sacrificing accuracy.
Changes Introduced
Flash Attention support added to the MarianAttention class in modeling_marian.py.
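The actual change is in the diff above; purely as an illustration of the technique (not the merged code), the flash path replaces the explicit softmax(QK^T)V matmuls with a single fused kernel call along these lines, where the function name, shapes, and dropout handling are assumptions:

```python
import torch
from flash_attn import flash_attn_func  # provided by the flash-attn package

def flash_attention_forward(query, key, value, dropout_p=0.0, causal=False, softmax_scale=None):
    """Fused attention over (batch, seq_len, num_heads, head_dim) tensors in fp16/bf16."""
    # flash_attn_func fuses softmax(QK^T / sqrt(d)) @ V into one memory-efficient kernel,
    # so the full (seq_len x seq_len) attention matrix is never materialized.
    return flash_attn_func(
        query, key, value,
        dropout_p=dropout_p,
        softmax_scale=softmax_scale,  # defaults to 1/sqrt(head_dim) when None
        causal=causal,                # True for decoder self-attention
    )
```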
Before submitting
Who can review?
@ArthurZucker
@Rocketknight1
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.