
Conversation

StellarrZ (Contributor) commented Feb 29, 2024

To check for the existence of `__torch_function__`, the code intended to iterate over each element, but received a `TupleVariable` when the ordinary `has_torch_function()` was used. That case needs a further unpack.
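As a concrete illustration of the failure mode (a minimal sketch; `HasTF` and `f` are hypothetical, not the PR's test case):

    import torch
    from torch.overrides import has_torch_function

    class HasTF:
        # Any object defining __torch_function__ should make the check True.
        @classmethod
        def __torch_function__(cls, func, types, args=(), kwargs=None):
            return NotImplemented

    @torch.compile
    def f(x):
        # has_torch_function() takes a single iterable of objects, so under
        # Dynamo the argument arrives as one TupleVariable; before this fix,
        # the tuple itself was checked instead of its elements.
        return has_torch_function((x,))

    print(f(HasTF()))  # expected: True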

Fixes #120653

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @aakhundov

pytorch-bot bot commented Feb 29, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/120885

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 183c5bb with merge base 5929d4e:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

linux-foundation-easycla bot commented Feb 29, 2024

CLA Signed

The committers listed above are authorized under a signed CLA.

StellarrZ marked this pull request as ready for review February 29, 2024 04:43
@janeyx99 (Contributor) commented:

@StellarrZ remember to sign the CLA and comment "/easycla" to retrigger the CLA job

@StellarrZ (Author) commented:

/easycla

    # Fix: the single argument arrives as one TupleVariable; unpack it and
    # check each element for __torch_function__, not the tuple itself.
    assert not (kwargs or len(args) != 1)
    return ConstantVariable.create(
        any(has_torch_function(a) for a in args[0].unpack_var_sequence(tx)),
    )

A reviewer (Contributor) commented:

Can you merge it with the other place?

    elif self.value in (
        torch.overrides.has_torch_function,
        torch.overrides.has_torch_function_variadic,
        torch.overrides.has_torch_function_unary,
    ):
        assert not kwargs
        return ConstantVariable.create(
            any(has_torch_function(a) for a in args),
        )

@StellarrZ (Author) replied:

We can probably merge it like this, but IMO the original solution is stricter and cleaner. Please let me know your preference, or if there's a better way to go:

        elif self.value in (
            torch.overrides.has_torch_function,
            torch.overrides.has_torch_function_variadic,
            torch.overrides.has_torch_function_unary,
        ):
            assert not kwargs
            elems = (
                args[0].unpack_var_sequence(tx)
                if len(args) == 1 and isinstance(args[0], TupleVariable)
                else args
            )
            return ConstantVariable.create(
                any(has_torch_function(x) for x in elems),
            )
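
(For reference, these are the two calling conventions the merged branch has to handle; a minimal sketch using plain tensors, not from the PR:)

    import torch

    a, b = torch.ones(2), torch.zeros(2)  # plain tensors: every check is False

    # Iterable form: one argument that is a tuple of objects. Under Dynamo it
    # arrives as a single TupleVariable, hence the unpack_var_sequence branch.
    torch.overrides.has_torch_function((a, b))

    # Variadic/unary forms: the objects are passed directly, so each one is
    # already its own VariableTracker in args and the plain any(...) suffices.
    torch.overrides.has_torch_function_variadic(a, b)
    torch.overrides.has_torch_function_unary(a)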

The reviewer (Contributor) replied:

If you update has_torch_function like this instead, it would cover more cases and be more concise:

def has_torch_function(
    vt: "torch._dynamo.variables.base.VariableTracker",
    tx: "torch._dynamo.symbolic_convert.InstructionTranslatorBase",
) -> bool:
    from torch._dynamo.variables import UserDefinedObjectVariable
    from torch._dynamo.variables.torch_function import TensorWithTFOverrideVariable

    if vt.has_unpack_var_sequence(tx):
        return any(has_torch_function(v, tx) for v in vt.unpack_var_sequence(tx))
    else:
        return isinstance(vt, TensorWithTFOverrideVariable) or (
            isinstance(vt, UserDefinedObjectVariable)
            and hasattr(vt.value, "__torch_function__")
        )
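
With that helper, the call site could presumably collapse to a single path; a sketch of how the suggestion might be wired up (not code from the PR):

    elif self.value in (
        torch.overrides.has_torch_function,
        torch.overrides.has_torch_function_variadic,
        torch.overrides.has_torch_function_unary,
    ):
        assert not kwargs
        # The recursive helper unpacks sequences itself, so the tuple and
        # variadic call forms no longer need separate handling here.
        return ConstantVariable.create(any(has_torch_function(a, tx) for a in args))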

@lezcano lezcano removed their request for review March 1, 2024 13:07
@StellarrZ (Author) commented:

Thanks @yanboliang for the guidance! Unpacking inside `has_torch_function(vt, tx)` triggers an infinite recursion, caught by `TestEinsumOverride`. Per our offline discussion, we believe it's a separate bug and have cut a new issue, #121551.

Given this, this PR strictly unpacks in `call_function` as a quick fix for #120653.

StellarrZ requested a review from yanboliang March 8, 2024 23:12
@StellarrZ (Author) commented:

@pytorchbot merge

pytorch-bot bot added the ciflow/trunk (Trigger trunk jobs on your pull request) label Mar 10, 2024
pytorchmergebot (Collaborator) commented:

Merge failed

Reason: This PR needs a release notes: label
If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.


@StellarrZ (Author) commented:

@pytorchbot label "topic: not user facing"

pytorch-bot bot added the topic: not user facing label Mar 10, 2024
@StellarrZ (Author) commented:

@pytorchbot merge

pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

pianpwk pushed a commit that referenced this pull request Mar 11, 2024
To check for the existence of `__torch_function__`, the code intended to iterate over each element, but received a `TupleVariable` when the ordinary `has_torch_function()` was used. That case needs a further unpack.

Fixes #120653

Pull Request resolved: #120885
Approved by: https://github.com/yanboliang

Successfully merging this pull request may close this issue:

Dynamo test failing with "Object comparison failed: DiagonalTensor(N=5, value=2) != -1" (#120653)