Add faster `mask_select` method #8369
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #8369      +/-   ##
==========================================
- Coverage   88.80%   88.41%   -0.39%
==========================================
  Files         475      475
  Lines       28841    28837       -4
==========================================
- Hits        25611    25497     -114
- Misses       3230     3340     +110

☔ View full report in Codecov by Sentry.
LGTM!
Great catch, thank you!
This PR addresses a minor performance issue with the `utils.mask.mask_select` method. The previous implementation unsqueezed the mask so that it was broadcastable along all dimensions. Such broadcasted masking is significantly slower in PyTorch than applying a 1-dimensional mask directly to the first dimension of a tensor. The new implementation simply transposes the tensor so that the masked dimension comes first, applies the mask, and then undoes the transposition. This mainly affects the methods for computing subgraphs, which were needlessly slow on larger graphs with many features.
Here is a code example to illustrate the difference:
The prior version yields:
The updated version yields:
That is more than a 30-fold speedup. We thought this might be helpful for accelerating workflows that involve subgraph extraction.
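As a rough sketch of the two strategies described above (the helper names here are illustrative, not the PR's actual code):

```python
import torch

def mask_select_broadcast(src, dim, mask):
    # Old strategy: reshape the boolean mask so it broadcasts over all
    # dimensions of `src`, select, then restore the remaining shape.
    shape = [1] * src.dim()
    shape[dim] = -1
    out = src.masked_select(mask.view(shape))
    return out.view(src.shape[:dim] + (int(mask.sum()),) + src.shape[dim + 1:])

def mask_select_transpose(src, dim, mask):
    # New strategy: move the masked dimension to the front (a cheap stride
    # swap), apply the 1-D mask directly, then move the dimension back.
    out = src.transpose(0, dim)[mask]
    return out.transpose(0, dim)

src = torch.arange(24.).view(2, 3, 4)
mask = torch.tensor([True, False, True])

# Both variants agree with plain boolean indexing along dim=1:
a = mask_select_broadcast(src, 1, mask)
b = mask_select_transpose(src, 1, mask)
```

The transpose-based variant is faster because `transpose` only swaps strides, and indexing with a 1-D boolean mask along the leading dimension hits a much cheaper code path than broadcasting the mask over every element.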