[XLA:GPU] disable mask in cuDNN attention #11444
Closed
Conversation
Cjkkkk (Contributor) commented on Apr 11, 2024
- The cuDNN attention mask does not mask scores with -inf; it multiplies them by the mask, which is incorrect (see the sketch below). Hence, disable the fusion patterns that take a mask.
- A follow-up PR will clean up the remaining mask-related logic.
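To illustrate the difference, here is a minimal JAX sketch (illustrative only, not the XLA/cuDNN code path; the values and variable names are hypothetical). Multiplying a score by 0 still leaves exp(0) in the softmax, so the masked position keeps non-zero attention weight, whereas setting the score to -inf drives its weight to zero.

```python
# Minimal sketch (not XLA/cuDNN code): multiplicative vs. additive -inf masking.
import jax.numpy as jnp
from jax.nn import softmax

logits = jnp.array([[2.0, -1.0, 0.5]])   # attention scores for one query row
mask   = jnp.array([[1.0,  0.0, 1.0]])   # 1 = keep, 0 = mask out

# Multiplicative "masking": the masked score becomes 0, not -inf, so after
# softmax it still receives weight exp(0) / Z > 0 -- the wrong semantics.
mult_weights = softmax(logits * mask, axis=-1)

# Additive -inf masking: the masked score is pushed to the most negative
# representable value, so softmax assigns it (effectively) zero probability.
neg_inf = jnp.finfo(logits.dtype).min
add_weights = softmax(jnp.where(mask > 0, logits, neg_inf), axis=-1)

print(mult_weights)  # masked position still gets non-zero attention
print(add_weights)   # masked position gets ~0 attention
```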
akuegel approved these changes on Apr 12, 2024
copybara-service bot pushed a commit to tensorflow/tensorflow that referenced this pull request on Apr 12, 2024
Imported from GitHub PR openxla/xla#11444

1. cuDNN attention mask is not doing masking with -inf but multiply which is not correct. Hence disable patterns with mask.
2. Follow up PR to clean up the remaining mask related logic.

Copybara import of the project:

-- acf95b6cc7e1084026eaf87c0119ba3801ba8f8c by cjkkkk <ske@nvidia.com>: disable mask

Merging this change closes #11444

FUTURE_COPYBARA_INTEGRATE_REVIEW=openxla/xla#11444 from Cjkkkk:remove_mask acf95b6cc7e1084026eaf87c0119ba3801ba8f8c
PiperOrigin-RevId: 624057479
copybara-service bot pushed a commit to tensorflow/tensorflow that referenced this pull request on Apr 12, 2024
…ensor into ifrt array.

FUTURE_COPYBARA_INTEGRATE_REVIEW=openxla/xla#11444 from Cjkkkk:remove_mask acf95b6cc7e1084026eaf87c0119ba3801ba8f8c
PiperOrigin-RevId: 623707308
copybara-service bot pushed a commit to tensorflow/tensorflow that referenced this pull request on Apr 12, 2024
Imported from GitHub PR openxla/xla#11444

1. cuDNN attention mask is not doing masking with -inf but multiply which is not correct. Hence disable patterns with mask.
2. Follow up PR to clean up the remaining mask related logic.

Copybara import of the project:

-- acf95b6cc7e1084026eaf87c0119ba3801ba8f8c by cjkkkk <ske@nvidia.com>: disable mask

Merging this change closes #11444

PiperOrigin-RevId: 624068883
copybara-service bot pushed a commit to tensorflow/tensorflow that referenced this pull request on Apr 25, 2024
… in cuDNN

Imported from GitHub PR openxla/xla#11717

* Default to flash attention as it is high performant and maintained actively by cuDNN, remove old fused attn.
* Remove lowering to fused attn in rewriter
* Remove cudnn graph generation
* Remove mask input as it is only doing multiply instead of masking with -inf, give incorrect results. Also cuDNN does not support this anymore, mask should be combined with bias. This is follow up on openxla/xla#11444.
* Remove mask logic in rewriter
* Remove mask buffer/descriptor in thunk
* Remove bmm1-bmm2 pattern as it is not support by flash attention. Modified related rewriter test to use bmm1 - softmax - bmm2. Current pattern: bmm1 - (scale) - (bias) - softmax - (dropout) - bmm2.

Copybara import of the project:

-- 552b4a3387c6d5b2b5adcf31b6f44cc858387b23 by cjkkkk <ske@nvidia.com>: remove fused attn
-- 13b683bf923e6fe344f879f913ae6ce41334eeb2 by cjkkkk <ske@nvidia.com>: remove mask and bmm1-bmm2 pattern
-- 9e843dd66c8f7d51d239b433e1b9bc329afee90d by cjkkkk <ske@nvidia.com>: rm unused vari
-- 1104df540e9196b34d9e61e679d88260151728d5 by cjkkkk <ske@nvidia.com>: remove fused attn cudnnv version check and update flash attn cudnn version check
-- b020cb8e91d8f7b834645ff815c38c4798174857 by cjkkkk <ske@nvidia.com>: remove mask related cudnnfmhakind&descriptor&buffer
-- ff1952faa460eecfca62660a5c34ea6fa3c2dfd4 by cjkkkk <ske@nvidia.com>: rename hlo_string to shorter name

Merging this change closes #11717

FUTURE_COPYBARA_INTEGRATE_REVIEW=openxla/xla#11717 from Cjkkkk:remove_fused_attn_and_mask ff1952faa460eecfca62660a5c34ea6fa3c2dfd4
PiperOrigin-RevId: 627269256
copybara-service bot pushed a commit that referenced this pull request on Apr 25, 2024
… in cuDNN

Imported from GitHub PR #11717

* Default to flash attention as it is high performant and maintained actively by cuDNN, remove old fused attn.
* Remove lowering to fused attn in rewriter
* Remove cudnn graph generation
* Remove mask input as it is only doing multiply instead of masking with -inf, give incorrect results. Also cuDNN does not support this anymore, mask should be combined with bias. This is follow up on #11444.
* Remove mask logic in rewriter
* Remove mask buffer/descriptor in thunk
* Remove bmm1-bmm2 pattern as it is not support by flash attention. Modified related rewriter test to use bmm1 - softmax - bmm2. Current pattern: bmm1 - (scale) - (bias) - softmax - (dropout) - bmm2.

Copybara import of the project:

-- 552b4a3 by cjkkkk <ske@nvidia.com>: remove fused attn
-- 13b683b by cjkkkk <ske@nvidia.com>: remove mask and bmm1-bmm2 pattern
-- 9e843dd by cjkkkk <ske@nvidia.com>: rm unused vari
-- 1104df5 by cjkkkk <ske@nvidia.com>: remove fused attn cudnnv version check and update flash attn cudnn version check
-- b020cb8 by cjkkkk <ske@nvidia.com>: remove mask related cudnnfmhakind&descriptor&buffer
-- ff1952f by cjkkkk <ske@nvidia.com>: rename hlo_string to shorter name

Merging this change closes #11717

COPYBARA_INTEGRATE_REVIEW=#11717 from Cjkkkk:remove_fused_attn_and_mask ff1952f
PiperOrigin-RevId: 628146618
copybara-service bot pushed a commit to tensorflow/tensorflow that referenced this pull request on Apr 25, 2024
… in cuDNN

Imported from GitHub PR openxla/xla#11717

* Default to flash attention as it is high performant and maintained actively by cuDNN, remove old fused attn.
* Remove lowering to fused attn in rewriter
* Remove cudnn graph generation
* Remove mask input as it is only doing multiply instead of masking with -inf, give incorrect results. Also cuDNN does not support this anymore, mask should be combined with bias. This is follow up on openxla/xla#11444.
* Remove mask logic in rewriter
* Remove mask buffer/descriptor in thunk
* Remove bmm1-bmm2 pattern as it is not support by flash attention. Modified related rewriter test to use bmm1 - softmax - bmm2. Current pattern: bmm1 - (scale) - (bias) - softmax - (dropout) - bmm2.

Copybara import of the project:

-- 552b4a3387c6d5b2b5adcf31b6f44cc858387b23 by cjkkkk <ske@nvidia.com>: remove fused attn
-- 13b683bf923e6fe344f879f913ae6ce41334eeb2 by cjkkkk <ske@nvidia.com>: remove mask and bmm1-bmm2 pattern
-- 9e843dd66c8f7d51d239b433e1b9bc329afee90d by cjkkkk <ske@nvidia.com>: rm unused vari
-- 1104df540e9196b34d9e61e679d88260151728d5 by cjkkkk <ske@nvidia.com>: remove fused attn cudnnv version check and update flash attn cudnn version check
-- b020cb8e91d8f7b834645ff815c38c4798174857 by cjkkkk <ske@nvidia.com>: remove mask related cudnnfmhakind&descriptor&buffer
-- ff1952faa460eecfca62660a5c34ea6fa3c2dfd4 by cjkkkk <ske@nvidia.com>: rename hlo_string to shorter name

Merging this change closes #11717

PiperOrigin-RevId: 628146618
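As the #11717 commit message above notes, the mask input is removed and masking is expected to be folded into the bias term of the supported pattern bmm1 - (scale) - (bias) - softmax - (dropout) - bmm2. Below is a rough JAX sketch of that folding; it is illustrative only, and the function names `mask_to_bias` and `attention_with_bias`, the shapes, and the use of a large negative constant are assumptions rather than the rewriter's actual interface.

```python
# Illustrative only: fold a boolean attention mask into the additive bias
# consumed by the bmm1 - (scale) - (bias) - softmax - (dropout) - bmm2 pattern.
import jax.numpy as jnp
from jax.nn import softmax

def mask_to_bias(mask, dtype=jnp.float32):
    # True = keep, False = mask out; masked positions get a large negative
    # additive bias (a finite stand-in for -inf, safe to add to another bias).
    return jnp.where(mask, 0.0, -1e9).astype(dtype)

def attention_with_bias(q, k, v, bias, scale=1.0):
    # bmm1 -> (scale) -> (bias) -> softmax -> bmm2 (dropout omitted).
    # q, k, v: [batch, heads, seq, head_dim]; bias broadcasts to the scores.
    scores = jnp.einsum("bhqd,bhkd->bhqk", q, k) * scale
    weights = softmax(scores + bias, axis=-1)
    return jnp.einsum("bhqk,bhkd->bhqd", weights, v)

# Usage: combine a user-supplied boolean mask with any existing bias, e.g.
# combined_bias = existing_bias + mask_to_bias(bool_mask)
# out = attention_with_bias(q, k, v, combined_bias, scale=1.0 / head_dim**0.5)
```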