Convert operator.not_ to torch.logical_not #94626
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/94626
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit f963105.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
if self.fn is operator.not_:
    fn = torch.logical_not
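For context on what the hunk above is replacing: operator.not_ is the functional form of Python's not keyword; it coerces its argument to a truth value and always returns a plain Python bool. A stdlib-only illustration (no PyTorch involved):

```python
import operator

# operator.not_(x) is equivalent to `not x`: it calls bool(x) and
# returns the negated result as a Python bool, whatever x's type is.
assert operator.not_(0) is True
assert operator.not_([1, 2, 3]) is False
assert operator.not_("") is True
```

On a tensor input, the bool(x) step is what forces a scalar read of the tensor, which is what this PR is trying to avoid.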
Check out call_not_ for how to implement this.
I would also double check that every valid not type input downstream of constant prop (the only way to get to call_not_) is valid for torch.logical_not.
> Check out call_not_ for how to implement this.

I think the following code already handles what call_not_ does (creating the proxy and then creating the variable object).

> I would also double check that every valid not type input downstream of constant prop (the only way to get to call_not_) is valid for torch.logical_not

Added a test for the UnspecializedPythonVariable type, and you said we can ignore the FakeItemVariable :P
Is logical_not a good replacement for operator.not_? operator.not_ works only on 1-element tensors, while logical_not will happily work on tensors of any size. Hopefully it would error out downstream of this call site, but are we sure about that?
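The behavioral gap raised here can be checked directly (a small sketch, assuming a standard PyTorch install):

```python
import operator
import torch

one = torch.tensor([True])
# operator.not_ implicitly calls bool() on the tensor, which only
# succeeds for 1-element tensors, and it returns a Python bool:
assert operator.not_(one) is False

many = torch.tensor([True, False])
# torch.logical_not negates elementwise at any size:
assert torch.logical_not(many).tolist() == [False, True]

# ...whereas operator.not_ on a multi-element tensor raises
# "Boolean value of Tensor with more than one element is ambiguous":
try:
    operator.not_(many)
except RuntimeError:
    pass
```

So the substitution is only faithful when the input is guaranteed to be a 1-element tensor, which is the crux of this review thread.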
Is it ok to assume that the programs users pass into export are valid programs? If we assume that, then we would know that the tensor passed in is 1-element, since it passes an eager run with operator.not_. If not, then could we do an eager run before doing the actual exporting to ensure that the program is passing in valid values to the not?
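The eager pre-run being proposed could be sketched roughly like this (a hypothetical helper, not an existing export API; the Ambiguous class stands in for a multi-element tensor, whose bool() likewise raises):

```python
import operator

def check_not_inputs_eagerly(fn, *args):
    # Hypothetical pre-export validation: run the program eagerly so
    # that an invalid `not` input surfaces here, before any export.
    try:
        return fn(*args)
    except RuntimeError as e:
        raise ValueError(f"program is invalid under eager execution: {e}")

class Ambiguous:
    # Stand-in for a multi-element tensor: bool() on it is ambiguous.
    def __bool__(self):
        raise RuntimeError("Boolean value is ambiguous")

# A valid program passes the eager check; an invalid one is rejected early.
check_not_inputs_eagerly(operator.not_, 0)
```

Whether an extra eager run is acceptable for export is a separate cost question; this only illustrates the validation idea.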
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
I think this PR is wrong. When you write
I don't know what the actual use case for the proferred test case is, but if you have control over the torch.cond call site (which I am pretty sure you do, since I'm guessing this is an export use case), you should just manually negate it with the appropriate PyTorch API (
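The manual negation being suggested would look something like this at the call site (a sketch, assuming a boolean tensor predicate; the surrounding torch.cond plumbing is elided):

```python
import torch

pred = torch.tensor(True)

# Instead of `not pred` (Python-level negation, which collapses the
# tensor to a Python bool), negate on the tensor side so the predicate
# stays a tensor through tracing:
neg = torch.logical_not(pred)   # equivalently: ~pred for bool tensors
```

This keeps the negation inside the traced graph rather than forcing a scalar read during tracing.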
Ah, this all makes good sense. One question I had was: when would we get to the changes in Angela's PR with a non-tensor value? iirc this change is downstream of constant prop, so the torch/tensor assumption might be ok?
It's the other way around: not "non-tensor value as input", but "tensor downstream of cond, when it should be a Python bool".
@pytorchbot revert -c weird -m "not correct"
@pytorchbot successfully started a revert job. Check the current status here. |
@angelayi your PR has been successfully reverted. |
This reverts commit 97510c6. Reverted #94626 on behalf of https://github.com/ezyang due to not correct
If the input to operator.not_ is a tensor, I want to convert the operator to torch.logical_not. This allows the following test case to pass. Previously it resulted in the error
NotImplementedError("local_scalar_dense/item NYI for torch.bool")
cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire