Fix NestedTensor max/min operations for integer dtypes. #162273
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/162273
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 unrelated failure) As of commit 5bc1829 with merge base e1bd5b6. FLAKY: the following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "topic: not user facing"
Thanks for taking the time to work on this!
Thanks @Callidior for the suggestion. Since this directly involves additional affected functions (amin/amax/argmin/argmax), I went ahead and added a fix for them in this PR as well, along with some unit tests for verification. @Skylion007 Thank you for taking a look; please let me know if you see anything that needs attention.
@adabeyta the tests for …
@isuruf Thanks for catching that! After some investigation, I found it was an overflow issue for int64 cases only.

What was happening
The int64 tests were failing because of how the padding value gets handled internally. When we use the maximum int64 value as padding, it has to pass through a float64 conversion in the C++ code. float64 can't precisely represent such a huge number, so it gets rounded and then overflows, turning into the minimum int64 value instead of the maximum. This broke all the min operations.

Fix
I've updated the code to use a padding value for int64 that won't cause this overflow (the boundary of what an IEEE 754 double-precision float can represent exactly). I applied this fix to all 6 affected functions (min, max, amin, amax, argmin, argmax), and all tests are passing now, including the int64 ones that were failing before.

Let me know if you'd like any other changes or have questions about the approach! @jbschlosser
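To make the overflow concrete, here is a minimal repro sketch (my own illustration under assumed behavior, not code from this PR); the exact wrap-around value of the final cast is hardware dependent.

```python
import torch

# float64 cannot represent INT64_MAX exactly; it rounds up to 2**63, and casting
# that rounded value back to int64 wraps around (typically to INT64_MIN).
int64_max = torch.iinfo(torch.int64).max   # 9223372036854775807
as_double = float(int64_max)               # rounds to 9223372036854775808.0 == 2.0**63
print(as_double > int64_max)               # True: the double now exceeds INT64_MAX

wrapped = torch.tensor(as_double, dtype=torch.float64).to(torch.int64)
print(wrapped.item())                      # typically -9223372036854775808 (INT64_MIN)
```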
Since it needs a double, can we use float('inf') / float('-inf')?
torch/nested/_internal/ops.py (Outdated)
_INT64_SAFE_MAX_FLOAT64 = (1 << 53) - 1
_INT64_SAFE_MIN_FLOAT64 = -_INT64_SAFE_MAX_FLOAT64
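For context on these constants: every integer of magnitude up to 2**53 is exactly representable as an IEEE 754 double, so such values survive the float64 round trip losslessly. A tiny self-check (my own illustration, not part of the diff):

```python
# (1 << 53) - 1 round-trips through a double without loss, unlike INT64_MAX.
safe_max = (1 << 53) - 1
assert int(float(safe_max)) == safe_max
assert int(float(2**63 - 1)) != 2**63 - 1  # INT64_MAX does not survive the round trip
```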
Curious if float('inf') / float('-inf') work instead, as suggested by @isuruf?
Thanks for the feedback @jbschlosser and @isuruf. I've updated the PR to address the suggestions, and factored out the dtype logic into a _get_padding_value() helper function.

Regarding float('inf') / float('-inf'): I tested this approach, but it causes overflow errors when the padding values need to be converted back to the int64 type. The test failures showed:

RuntimeError: value cannot be converted to type int64_t without overflow

I've kept the safe int64 values (1 << 53) - 1 and -(1 << 53) for now. Let me know if you have any other suggestions to try in their place.
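For reference, a hypothetical sketch of what such a helper could look like (the name _get_padding_value comes from the comment above; the signature and body below are my assumptions, not the actual PR code):

```python
import torch

_INT64_SAFE_MAX_FLOAT64 = (1 << 53) - 1
_INT64_SAFE_MIN_FLOAT64 = -_INT64_SAFE_MAX_FLOAT64

def _get_padding_value(dtype: torch.dtype, is_max_reduction: bool):
    """Pick a padding value that acts as the reduction identity and survives
    the float64 round trip inside the C++ padding code."""
    if dtype.is_floating_point:
        info = torch.finfo(dtype)
        return info.min if is_max_reduction else info.max
    if dtype == torch.int64:
        # int64 limits overflow when routed through a double, so use float64-safe bounds.
        return _INT64_SAFE_MIN_FLOAT64 if is_max_reduction else _INT64_SAFE_MAX_FLOAT64
    # Smaller integer dtypes fit exactly in a double.
    info = torch.iinfo(dtype)
    return info.min if is_max_reduction else info.max
```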
The test failures showed:
RuntimeError: value cannot be converted to type int64_t without overflow
We could check for infinite values before
Tensor padded = values.new_full(padded_shape, padding_value);
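A possible shape of that check, sketched on the Python side under my own assumptions (the real guard would presumably sit next to the C++ new_full call):

```python
import math
import torch

# Hypothetical sketch of the suggestion above (assumed helper, not actual code):
# clamp an infinite padding value to the integer dtype's own limits before
# materializing the padded tensor, so float('inf') / float('-inf') work everywhere.
def _padded_with_safe_fill(values: torch.Tensor, padded_shape, padding_value: float):
    if math.isinf(padding_value) and not values.dtype.is_floating_point:
        limits = torch.iinfo(values.dtype)
        padding_value = limits.max if padding_value > 0 else limits.min
    return values.new_full(padded_shape, padding_value)
```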
Hm, I think it's a bit inefficient to check all values, so I'm good with the current approach, even if it's non-ideal. Thanks for looking into it!
Not all values, just the padding_value, which is a double.
My concern is that if you do

    import torch
    x = torch.nested.nested_tensor(
        [torch.arange(0, n) - 2**60 for n in (10, 20, 30)],
        layout=torch.jagged,
    )
    print(x.max(dim=1).values)

you now get wrong results because of this workaround.
Okay, I agree this is a problem: there's a large range of big int64 values for which the results will be wrong, not just a single edge-case value. I think we need a bit more exploration @adabeyta.
@adabeyta would you be able to explore that option?
I guess in a new PR, as the merge went through :p
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorchbot merge cancel
❌ 🤖 pytorchbot command failed:
Try …
Fixes: #162049
Summary
The max_dim and min_dim functions incorrectly used torch.finfo() for all dtypes, causing a TypeError for integer tensors.
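As a quick illustration of the original failure (my own snippet; the exact error text may vary across PyTorch versions):

```python
import torch

# torch.finfo() only accepts floating-point dtypes; integer dtypes need torch.iinfo().
print(torch.finfo(torch.float32).max)   # works for floating-point dtypes
print(torch.iinfo(torch.int32).max)     # integer counterpart: 2147483647
try:
    torch.finfo(torch.int32)            # what the old code effectively did
except TypeError as err:
    print(err)                          # torch.finfo() requires a floating point input type
```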
Changes
Added test_jagged_max_min_dtypes covering int8, int16, int32, int64, uint8, float16, bfloat16, float32, and float64.
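For a quick manual check of the fixed behavior, an illustrative snippet (not the test itself; the expected outputs are my assumptions based on the inputs):

```python
import torch

# After the fix, dim reductions over the ragged dimension work for integer dtypes.
nt = torch.nested.nested_tensor(
    [torch.arange(0, n, dtype=torch.int32) for n in (10, 20, 30)],
    layout=torch.jagged,
)
print(nt.max(dim=1).values)   # expected: tensor([ 9, 19, 29], dtype=torch.int32)
print(nt.min(dim=1).values)   # expected: tensor([0, 0, 0], dtype=torch.int32)
```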
Testing
Before Fix:
python -m pytest test/test_nestedtensor.py -k "test_jagged_max_min_dtypes" -v
Output:
After Fix:
python -m pytest test/test_nestedtensor.py -k "test_jagged_max_min_dtypes" -v
Output: