# [ET-VK] Allow specifying multiple storage types/memory layouts for an operator + register group norm operator (#11974)
Merged
Pull Request resolved: #11825

## Changes

* Introduce `permute_buffer.glsl` and `permute_texture.glsl` compute shader templates to implement the permute operator

## Motivation

The existing implementation of permute produced incorrect outputs for width-packed textures, and there was no buffer implementation of the operator. The goal of this diff is a more flexible implementation of permute that works for any tensor representation.

## Performance impact

None expected.

ghstack-source-id: 292530157
@exported-using-ghexport

Differential Revision: [D76483755](https://our.internmc.facebook.com/intern/diff/D76483755/)
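For context on what a representation-agnostic permute has to do, the sketch below is a plain-Python reference of the output-to-input index remapping that both the buffer and texture shaders have to evaluate per element. This is an illustrative reference only, not the shader code itself:

```python
import numpy as np

def permute_reference(x: np.ndarray, dims: list[int]) -> np.ndarray:
    """Reference permute: for each output coordinate, recover the input
    coordinate by routing output dimension i back to input dimension
    dims[i]. A buffer-based shader evaluates exactly this mapping for
    every output element, independent of how the tensor is packed.
    """
    out_shape = tuple(x.shape[d] for d in dims)
    out = np.empty(out_shape, dtype=x.dtype)
    for out_idx in np.ndindex(*out_shape):
        in_idx = [0] * x.ndim
        for i, d in enumerate(dims):
            in_idx[d] = out_idx[i]
        out[out_idx] = x[tuple(in_idx)]
    return out

# Sanity check against NumPy's own transpose.
x = np.arange(24).reshape(2, 3, 4)
assert np.array_equal(permute_reference(x, [2, 0, 1]), np.transpose(x, (2, 0, 1)))
```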
Pull Request resolved: #11826

## Changes

* Allow test cases to specify storage types / memory layouts for individual args
* Allow test cases to specify different data generation functions for individual args

## Motivation

> Allow test cases to specify storage types / memory layouts for individual args

Makes it possible to test operators that require specific storage types for certain input/output tensors.

> Allow test cases to specify different data generation functions for individual args

Useful for debugging operators during development.

ghstack-source-id: 292530160
@exported-using-ghexport

Differential Revision: [D77038777](https://our.internmc.facebook.com/intern/diff/D77038777/)
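To illustrate the kind of per-argument control this enables, here is a hypothetical test-case description. The `ArgSpec`, `StorageType`, and `MemoryLayout` names are invented for this sketch and are not the actual test framework API:

```python
import math
import random
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Optional

class StorageType(Enum):
    BUFFER = auto()
    TEXTURE_3D = auto()

class MemoryLayout(Enum):
    WIDTH_PACKED = auto()
    CHANNELS_PACKED = auto()

def random_data(shape: tuple[int, ...]) -> list[float]:
    return [random.uniform(-1.0, 1.0) for _ in range(math.prod(shape))]

def ones_data(shape: tuple[int, ...]) -> list[float]:
    # Deterministic data is handy when debugging a new operator.
    return [1.0] * math.prod(shape)

@dataclass
class ArgSpec:
    shape: tuple[int, ...]
    storage: Optional[StorageType] = None   # None -> use the test-case default
    layout: Optional[MemoryLayout] = None   # None -> use the test-case default
    data_gen: Callable = random_data        # per-arg data generator

# A test case where the second arg must live in a buffer and is filled
# with ones for easier debugging, while everything else uses defaults.
case = [
    ArgSpec(shape=(1, 8, 16, 16)),
    ArgSpec(shape=(8,), storage=StorageType.BUFFER, data_gen=ones_data),
]
```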
Pull Request resolved: #11827

## Changes

* Add an implementation of the group norm operator. The operator runs in two stages: first, a reduction shader computes the mean and standard deviation of each channel group; then the normalization is applied elementwise.

ghstack-source-id: 292530158
@exported-using-ghexport

Differential Revision: [D77038778](https://our.internmc.facebook.com/intern/diff/D77038778/)
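To make the two stages concrete, here is a NumPy reference of the same computation over an NCHW tensor. It mirrors the math, not the actual shader code:

```python
import numpy as np

def group_norm_reference(x, num_groups, weight, bias, eps=1e-5):
    """Two-stage group norm over an NCHW tensor.

    Stage 1 (reduction): compute the mean and variance of each of the
    num_groups channel groups, per batch element.
    Stage 2 (elementwise): normalize every element using its group's
    statistics, then apply the per-channel affine transform.
    """
    n, c, h, w = x.shape
    g = num_groups

    # Stage 1: per-(batch, group) statistics.
    grouped = x.reshape(n, g, c // g, h, w)
    mean = grouped.mean(axis=(2, 3, 4), keepdims=True)
    var = grouped.var(axis=(2, 3, 4), keepdims=True)

    # Stage 2: elementwise normalization + per-channel affine.
    normed = (grouped - mean) / np.sqrt(var + eps)
    normed = normed.reshape(n, c, h, w)
    return normed * weight.reshape(1, c, 1, 1) + bias.reshape(1, c, 1, 1)

x = np.random.randn(2, 8, 4, 4).astype(np.float32)
out = group_norm_reference(x, num_groups=4,
                           weight=np.ones(8, np.float32),
                           bias=np.zeros(8, np.float32))
```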
[ET-VK] Allow specifying multiple storage types/memory layouts for an operator + register group norm operator

Pull Request resolved: #11828

## Changes

* Handle cases where an operator needs to specify a separate storage type / memory layout for each individual output.

## Motivation

Required for the group norm operator.

## Future Work

Currently, the `tag_memory_meta_pass` graph pass assumes that all tensors participating in a computation (aside from weights) have the same storage type and memory layout. As more operators are added, there are more exceptions to this rule. The pass may need an update in the near future to make it possible to specify required storage types and memory layouts at a more granular level.

ghstack-source-id: 292530159
@exported-using-ghexport

Differential Revision: [D77038781](https://our.internmc.facebook.com/intern/diff/D77038781/)
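As a sketch of what per-output granularity could look like, consider a hypothetical registry mapping each op to one representation per output. The names `TensorRepr` and `OP_OUTPUT_REPRS`, and the specific storage/layout choices, are invented for illustration and are not the actual ET-VK API:

```python
from dataclasses import dataclass
from enum import Enum, auto

class StorageType(Enum):
    BUFFER = auto()
    TEXTURE_3D = auto()

class MemoryLayout(Enum):
    WIDTH_PACKED = auto()
    CHANNELS_PACKED = auto()

@dataclass(frozen=True)
class TensorRepr:
    storage: StorageType
    layout: MemoryLayout

# Hypothetical registry: each op maps to one TensorRepr per output
# instead of a single (storage, layout) pair for the whole op.
# native_group_norm returns (out, mean, rstd); the representation
# choices below are made up for the sake of the example.
OP_OUTPUT_REPRS: dict[str, list[TensorRepr]] = {
    "aten.native_group_norm.default": [
        TensorRepr(StorageType.TEXTURE_3D, MemoryLayout.CHANNELS_PACKED),
        TensorRepr(StorageType.BUFFER, MemoryLayout.WIDTH_PACKED),
        TensorRepr(StorageType.BUFFER, MemoryLayout.WIDTH_PACKED),
    ],
}

def output_reprs(op_name: str, default: TensorRepr) -> list[TensorRepr]:
    """Fall back to one shared representation when an op has no
    per-output entry, mirroring the pass's current all-same assumption."""
    return OP_OUTPUT_REPRS.get(op_name, [default])
```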
SS-JIA approved these changes on Jun 25, 2025.
hinriksnaer pushed a commit to hinriksnaer/executorch that referenced this pull request on Jun 26, 2025, carrying the same description as this PR.
Labels: CLA Signed
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #11828 by @SS-JIA
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/248/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/248/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/247/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/248/orig
@diff-train-skip-merge