
Conversation

angelayi
Contributor

@angelayi angelayi commented Feb 10, 2023

If the input to operator.not_ is a tensor, I want to convert the operator to torch.logical_not. This allows the following test case to pass. Previously it failed with NotImplementedError("local_scalar_dense/item NYI for torch.bool").

    def test_export_tensor_bool_not(self):
        def true_fn(x, y):
            return x + y

        def false_fn(x, y):
            return x - y

        def f(x, y):
            return cond(not torch.any(x), true_fn, false_fn, [x, y])

cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire

@pytorch-bot

pytorch-bot bot commented Feb 10, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/94626

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit f963105:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

Comment on lines +476 to +477
    if self.fn is operator.not_:
        fn = torch.logical_not
Collaborator

Check out call_not_ for how to implement this.

I would also double-check that every input type that is valid for not downstream of constant prop (the only way to get to call_not_) is also valid for torch.logical_not.

Contributor Author

@angelayi angelayi Feb 14, 2023


> Check out call_not_ for how to implement this.

I think the following code already covers what call_not_ does (creating the proxy and then creating the variable object).

> I would also double check that every valid not type input downstream of constant prop (the only way to get to call_not_) is valid for torch.logical_not

Added a test for the UnspecializedPythonVariable type, and you said we can ignore the FakeItemVariable :P

Collaborator

Is logical_not a good replacement for operator.not_? operator.not_ works only on 1-element tensors, while logical_not will happily work on tensors of any size. Hopefully a larger tensor would error out downstream of this call site, but are we sure about that?
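A quick eager-mode sketch of the behavioral difference being raised here (assuming a recent PyTorch; the tensor values are illustrative only):

```python
import operator

import torch

# operator.not_ goes through Tensor.__bool__, which only works for
# 1-element tensors and returns a plain Python bool.
assert operator.not_(torch.tensor(True)) is False

# On a multi-element tensor, Tensor.__bool__ raises
# "Boolean value of Tensor with more than one element is ambiguous".
try:
    operator.not_(torch.tensor([True, False]))
except RuntimeError as e:
    print(f"raised: {e}")

# torch.logical_not, by contrast, accepts any shape and returns a Tensor.
assert torch.logical_not(torch.tensor([True, False])).tolist() == [False, True]
```

So the substitution widens the set of accepted inputs, which is the concern raised above.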

Contributor Author

Is it OK to assume that the programs users pass into export are valid? If we assume that, then we know the tensor passed in is 1-element, since the program passes an eager run with operator.not_. If not, could we do an eager run before the actual export to ensure the program passes valid values to the not?

@angelayi angelayi added the topic: not user facing topic category label Feb 14, 2023
@angelayi
Contributor Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Feb 14, 2023
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@ezyang
Contributor

ezyang commented Feb 27, 2023

I think this PR is wrong.

When you write not torch.any(torch.randn(3)), here's what happens:

  1. torch.any returns a scalar bool tensor torch.tensor(True)
  2. not is called on a non-bool object. This triggers Tensor.__bool__, which converts the Tensor into a regular Python bool via item()
  3. not now does the normal thing, which is negate the boolean

I don't know what the actual use case for the proffered test case is, but if you have control over the torch.cond call site (which I am pretty sure you do, since I'm guessing this is an export use case), you should just manually negate it with the appropriate PyTorch API (torch.logical_not(torch.any(torch.randn(3)))) instead of doing the conversion here. By silently translating the Python not call into a Tensor operation, the output is a Tensor, and you are now obligated to ensure any further operations continue to be Tensor operations (while abiding by Python logical operator rules). We should strive for our internal implementation to match how a regular Python interpreter would have done it.
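The three steps described above can be checked directly in eager mode (a minimal sketch, assuming PyTorch is available):

```python
import torch

# Step 1: torch.any reduces to a 0-d bool tensor.
t = torch.any(torch.randn(3))
assert isinstance(t, torch.Tensor) and t.dtype == torch.bool and t.dim() == 0

# Step 2: `not` on a non-bool object triggers Tensor.__bool__,
# which converts the tensor to a Python bool via item().
# Step 3: `not` then negates that bool as usual.
b = not t
assert isinstance(b, bool) and b == (not t.item())
```

In other words, the eager result of `not torch.any(...)` is a plain Python bool, not a tensor, which is the interpreter behavior the comment argues tracing should preserve.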

@voznesenskym
Collaborator

> I think this PR is wrong. When you write not torch.any(torch.randn(3)), here's what happens: […] We should strive for our internal implementation to match how a regular Python interpreter would have done it.

Ah, this all makes good sense. One question I had was: when would we get to the changes in Angela's PR with a non-tensor value? IIRC this change is downstream of constant prop, so the torch tensor assumption might be OK?

@ngimel
Collaborator

ngimel commented Feb 27, 2023

It's the other way around: not "non-tensor value as input", but "tensor downstream of cond, when it should be a Python bool".
not torch.any(torch.randn(3)) is a Python bool (original behavior, eager behavior); torch.logical_not(torch.any(torch.randn(3))) is a 0-d torch tensor (post this PR).

@ezyang
Contributor

ezyang commented Feb 27, 2023

@pytorchbot revert -c weird -m "not correct"

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

@pytorchmergebot
Collaborator

@angelayi your PR has been successfully reverted.

pytorchmergebot added a commit that referenced this pull request Feb 27, 2023
This reverts commit 97510c6.

Reverted #94626 on behalf of https://github.com/ezyang due to not correct
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 1, 2023
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 2, 2023
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 5, 2023
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 5, 2023
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 27, 2023
pruthvistony added a commit to ROCm/pytorch that referenced this pull request May 2, 2023
jhavukainen pushed a commit to kulinseth/pytorch that referenced this pull request Mar 15, 2024
@github-actions github-actions bot deleted the not_any branch August 15, 2024 01:53
