[dtensor] run transformer sdpa in dtensor #122997
Conversation
🔗 Helpful links: 🧪 see artifacts and rendered test results at hud.pytorch.org/pr/122997. Note: links to docs will display an error until the docs builds have completed. ✅ No failures as of commit b336a91 with merge base a3d97f6. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
ghstack-source-id: 661b23ba34e8a05ca8eb3057fe04b82832ad64d1 Pull Request resolved: #122997
lgtm
@pytorchbot merge

Merge started: your change will be merged once all checks pass (ETA 0-4 hours).
Stack from ghstack (oldest at bottom):
Now that efficient attention is supported in dtensor, we can modify the transformer test to use dtensor in SDPA and get rid of the manual num_head adjustments.
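The head-dim sharding that makes those manual adjustments unnecessary can be illustrated with a minimal single-rank sketch (CPU + gloo, world size 1; `torch.distributed._tensor` is the DTensor namespace as of this PR — this is an illustrative sketch, not the test's actual code):

```python
import tempfile

import torch
import torch.distributed as dist
from torch.distributed._tensor import DeviceMesh, Shard, distribute_tensor

# Single-rank process group so DTensor works in one process (demo only).
store = dist.FileStore(tempfile.mkstemp()[1], 1)
dist.init_process_group("gloo", store=store, rank=0, world_size=1)
mesh = DeviceMesh("cpu", [0])

# q with layout (batch, num_heads, seq_len, head_dim), sharded on the head dim.
q = torch.randn(2, 4, 8, 16)
dt = distribute_tensor(q, mesh, [Shard(1)])

# The DTensor keeps the full logical num_heads, so the model code needs no
# manual num_heads // tp_size adjustment.
print(dt.shape)             # logical shape: torch.Size([2, 4, 8, 16])
print(dt.to_local().shape)  # local shard (world size 1, so identical here)

dist.destroy_process_group()
```

With a real tensor-parallel mesh the local shard would hold `num_heads // tp_size` heads while the logical shape stays unchanged.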
Caveat: efficient attention is supported only with bf16/fp32 (not fp64) and has other constraints. If any constraint is not satisfied, SDPA falls back to the math-decomposed attention, which breaks because it does not fully work with DTensor (it creates a plain `torch.Tensor` mask in the middle). I considered adding checks like in P1202254918, but those would need to be added everywhere this Transformer is used. Is that necessary if the current CI machines can run efficient attention?

Test files containing this Transformer:
- `test/distributed/tensor/parallel/test_tp_examples.py`
- `test/distributed/_composable/fsdp/test_fully_shard_training.py`
- `test/distributed/_composable/fsdp/test_fully_shard_clip_grad_norm_.py`
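For reference, the math-decomposed path that SDPA falls back to computes roughly the following (a simplified sketch on plain tensors; the real aten implementation also handles dropout, scaling options, and mask construction, and it is that internally created plain-Tensor mask that trips up DTensor):

```python
import torch
import torch.nn.functional as F

def math_sdpa(q, k, v, attn_mask=None):
    """Simplified math-decomposed attention on plain tensors.

    The real fallback builds an attn_mask as a plain torch.Tensor
    internally, which is what breaks when q/k/v are DTensors.
    """
    scale = q.size(-1) ** -0.5
    scores = q @ k.transpose(-2, -1) * scale
    if attn_mask is not None:
        scores = scores + attn_mask  # plain-Tensor mask: the DTensor pain point
    return scores.softmax(dim=-1) @ v

torch.manual_seed(0)
q, k, v = (torch.randn(2, 4, 8, 16) for _ in range(3))
ref = F.scaled_dot_product_attention(q, k, v)
out = math_sdpa(q, k, v)
print(torch.allclose(out, ref, atol=1e-4))  # decomposition matches SDPA
```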
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @wz337 @wconstab @yf225 @chauhang @d4l3k @rohan-varma
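A guard of the kind discussed above might look roughly like this (hypothetical helper — P1202254918 is an internal paste and is not reproduced here; the dtype list follows only the constraint stated in the description, and real backend selection has further conditions):

```python
import torch

def supports_efficient_attention(dtype: torch.dtype, device: torch.device) -> bool:
    # Hypothetical skip-check: efficient attention needs CUDA and
    # bf16/fp32 inputs (fp64 is unsupported). The real backend selection
    # has additional constraints (head dim, masks, dropout, ...).
    return device.type == "cuda" and dtype in (torch.bfloat16, torch.float32)

# Example: a test could skip itself when the fast path is unavailable.
print(supports_efficient_attention(torch.float64, torch.device("cuda")))  # False
print(supports_efficient_attention(torch.float32, torch.device("cpu")))   # False
```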