Symintify baddbmm #154656
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/154656
Note: links to docs will display an error until the docs builds have been completed.
❌ 4 New Failures, 2 Unrelated Failures as of commit d4c0853 with merge base aa84c03.
NEW FAILURES - The following jobs have failed:
FLAKY - The following job failed but was likely due to flakiness present on trunk:
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
---
Review comment on torch/_meta_registrations.py (outdated):
sym_and takes any number of arguments, so you can write this as a 3-way.
Also, why are we expanding on equality instead of inequality now? Wouldn't it be guard_or_true(not sym_and(self.shape[0] == dim1, self.shape[1] == dim2, self.shape[2] == dim3))?
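(A sketch of the variadic form this suggests, assuming sym_and and guard_or_false from torch.fx.experimental.symbolic_shapes; here the not is applied to the plain bool that guard_or_false returns, which sidesteps the specialization problem worked out below:)

from torch.fx.experimental.symbolic_shapes import guard_or_false, sym_and

# Expand unless all three sizes are provably equal already; an
# undecidable comparison falls back to False, so we expand without
# installing a guard.
if not guard_or_false(
    sym_and(self.shape[0] == dim1, self.shape[1] == dim2, self.shape[2] == dim3)
):
    self = self.expand((dim1, dim2, dim3))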
---
oh um.. you're right
---
oh sorry I think I forgot the not... but guard_or_true didn't work for me
---
Probably you want sym_eq here; this should work for your test case:

if not guard_or_false(sym_eq(self.shape, (dim1, dim2, dim3))):
    self = self.expand((dim1, dim2, dim3))

Is this what you want here? I verified this works on your example, but I'm not sure whether it has other consequences. Does calling self.expand((dim1, dim2, dim3)) in theory work when self.shape == (dim1, dim2, dim3)?
What did not work with guard_or_true?
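(For context, a hedged illustration of the two helpers' contracts as I understand them: on statically known booleans both are pass-throughs, and they differ only in the fallback chosen for an undecidable SymBool:)

from torch.fx.experimental.symbolic_shapes import guard_or_false, guard_or_true

# Statically known inputs are returned unchanged:
assert guard_or_false(True) and not guard_or_false(False)
assert guard_or_true(True) and not guard_or_true(False)
# For an undecidable SymBool (e.g. u0 == 5 with u0 unbacked),
# guard_or_false falls back to False and guard_or_true to True;
# neither installs a guard, so nothing gets specialized.

(And on the expand question: Tensor.expand to a size the tensor already has is allowed and returns a view of the same shape, so taking the expand branch spuriously should be harmless.)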
---
These should also work:

if guard_or_true(sym_eq(self.shape, (dim1, dim2, dim3)) == False):
    self = self.expand((dim1, dim2, dim3))

I think we need a sym_not; using plain not will specialize.
cc @pianpwk
---
Could be:

def sym_not(a):
    return a == False

then you can do:

if guard_or_true(sym_not(sym_eq(self.shape, (dim1, dim2, dim3)))):
    self = self.expand((dim1, dim2, dim3))
---
> what did not work with guard_or_true?

I think originally doing guard_or_true(not sym_and(self.shape[0] == dim1, self.shape[1] == dim2, self.shape[2] == dim3)) didn't work because of the not in the middle of the expression: not forces the SymBool through __bool__, which specializes before guard_or_true ever sees it. Your above examples all work!
---
turns out there already exists torch.sym_not :D
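(So the built-in helper can be used directly; a minimal sketch of the pattern the thread converges on, with self, dim1, dim2, dim3 standing for the locals in the baddbmm meta function:)

import torch
from torch.fx.experimental.symbolic_shapes import guard_or_true, sym_eq

# torch.sym_not keeps the negation symbolic instead of forcing a bool,
# and guard_or_true answers True when the comparison is undecidable,
# so we expand rather than guard.
if guard_or_true(torch.sym_not(sym_eq(self.shape, (dim1, dim2, dim3)))):
    self = self.expand((dim1, dim2, dim3))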
---
Force-pushed from 8e1dcf5 to 5975843.
---
left some comments.
---
Force-pushed from 5975843 to 153ecc1.
Force-pushed from 153ecc1 to 859fc59.
Force-pushed from 859fc59 to d4c0853.
---
@pytorchbot merge -f "can repro failures on main"
Merge started: your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Previously we would specialize on the shape in this if-statement.
Pull Request resolved: pytorch#154656
Approved by: https://github.com/pianpwk
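(A hedged, condensed sketch of what the fixed meta function might look like, reconstructed from the review thread above rather than from the merged diff; the function name, signature, and surrounding code are assumptions, and only the guard pattern reflects the discussion:)

import torch
from torch.fx.experimental.symbolic_shapes import guard_or_true, sym_eq

def meta_baddbmm_sketch(self, batch1, batch2):
    # Output shape of a batched matmul: (B, M, K) @ (B, K, N) -> (B, M, N).
    dim1, dim2, dim3 = batch1.size(0), batch1.size(1), batch2.size(2)
    # Previously a plain `if self.shape != (dim1, dim2, dim3)` here would
    # guard on, i.e. specialize to, the concrete example sizes.
    if guard_or_true(torch.sym_not(sym_eq(self.shape, (dim1, dim2, dim3)))):
        self = self.expand((dim1, dim2, dim3))
    return self.new_empty(self.size())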