typing: allow integer in bitwise operations #155704
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/155704
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (2 Unrelated Failures)
As of commit 5972b24 with merge base 013cf1e:
BROKEN TRUNK - The following job failed but was also present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "module: typing"
Force-pushed from efa7016 to fe4ec18
Force-pushed from fe4ec18 to 5972b24
The false negatives seem to originate from function signatures autogenerated from the C++ code.
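For context, a minimal sketch (with illustrative variable names) of the kind of false positive this PR resolves:

import torch

t = torch.tensor([1, 2, 3])

# These bitwise operations are valid at runtime for integer tensors, but the
# previously generated stubs did not accept a plain int operand, so type
# checkers could flag lines like these as unsupported operand types.
masked = t & 0b0101
combined = t | 4
flipped = t ^ 1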
^ TENSOR,
Tensor,
)
FLOAT & TENSOR  # E: Unsupported operand types for & ("float" and "Tensor")
What's the thinking here? Do we not want to check the result type because it doesn't really make sense to check the type of an unsupported op?
Yes
return [f"def {opname}(self, other: Tensor | Number | _complex) -> Tensor: ..."] | ||
elif name in logic_ops: | ||
return [f"def {opname}(self, other: Tensor | _bool) -> Tensor: ..."] | ||
return [f"def {opname}(self, other: Tensor | _int) -> Tensor: ..."] |
Should this include bool?

Suggested change:
- return [f"def {opname}(self, other: Tensor | _int) -> Tensor: ..."]
+ return [f"def {opname}(self, other: Tensor | _bool | _int) -> Tensor: ..."]
This is valid code:
bool_tensor1 = torch.tensor([True, False, True, False], dtype=torch.bool)
print(bool_tensor1 & False)
builtins.bool is a subclass of builtins.int, perhaps somewhat surprisingly. So adding bool here would only serve documentation purposes (which can be a good reason), but wouldn't change the type inference.
From a typing POV it can be somewhat unfortunate that bool is a subclass of int; here is a longer discussion on the topic: https://discuss.python.org/t/is-there-a-road-towards-making-bool-a-fully-fledged-boolean-type/62392
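A quick, runnable illustration of that point in plain Python (no torch involved):

# bool is a subclass of int, so bool values already satisfy an int annotation.
print(issubclass(bool, int))  # True
print(isinstance(True, int))  # True
print(True & 1)               # 1 -- bitwise ops mix bools and ints freely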
If you check the tests, this behavior is already covered.
pytorch/test/typing/pass/arithmetic_ops.py
Lines 74 to 76 in 5972b24
assert_type(BOOL & TENSOR, Tensor) | |
assert_type(BOOL | TENSOR, Tensor) | |
assert_type(BOOL ^ TENSOR, Tensor) |
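For readers unfamiliar with these typing tests, here is a minimal, self-contained sketch of the same pattern. It assumes typing_extensions is available; the constant definitions are illustrative, since the real test file sets them up elsewhere.

import torch
from torch import Tensor
from typing_extensions import assert_type

BOOL = True
TENSOR = torch.tensor([1, 2, 3])

# assert_type is a no-op at runtime; a type checker such as mypy verifies that
# the inferred type of the first argument matches the second argument.
assert_type(BOOL & TENSOR, Tensor)
assert_type(BOOL | TENSOR, Tensor)
assert_type(BOOL ^ TENSOR, Tensor)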
Yeah - I had just noticed that the tests covered it and was trying to understand why there wasn't an error. The subclass explains it.
return [f"def {opname}(self, other: Tensor | Number | _complex) -> Tensor: ..."] | ||
elif name in logic_ops: | ||
return [f"def {opname}(self, other: Tensor | _bool) -> Tensor: ..."] | ||
return [f"def {opname}(self, other: Tensor | _int) -> Tensor: ..."] |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yeah - I had just noticed that the tests covered and was trying to understand why there wasn't an error. The subclass explains it.
@pytorchbot merge
Merge started
Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team
Merge failed
Reason: 1 job has failed: trunk / macos-py3-arm64 / test (mps, 1, 1, macos-m1-14)
Details for Dev Infra team: Raised by workflow job
@pytorchbot merge -i
Merge started
Your change will be merged while ignoring the following 3 checks: pull / linux-jammy-py3-clang12-executorch / test (executorch, 1, 1, linux.2xlarge, unstable), pull / cuda12.8-py3.10-gcc9-sm75 / test (pr_time_benchmarks, 1, 1, linux.g4dn.metal.nvidia.gpu), trunk / macos-py3-arm64 / test (mps, 1, 1, macos-m1-14)
Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team
Fixes pytorch#155701 (false positives)
Pull Request resolved: pytorch#155704
Approved by: https://github.com/Skylion007, https://github.com/aorenste
Fixes #155701 (false positives)
cc @ezyang @malfet @xuzhao9 @gramster