Address printing inconsistency between float and complex tensors #35841
Conversation
💊 CircleCI build failures summary and remediations: as of commit b83562a, 💚 looks good so far! There are no CircleCI failures yet. 💚 (This comment was automatically generated by Dr. CI and has been revised 26 times.)
@anjali411, does this address the printing inconsistency to your liking?
torch/_tensor_str.py (Outdated)
p = PRINT_OPTS.precision
ret = '({{:.{}f}} {{}} {{:.{}f}}j)'.format(p, p).format(value.real, '+-'[value.imag < 0], abs(value.imag))
# format real and imaginary values according to type
real_val = '({{:.0f}}. '.format(p).format(value.real) if value.real.is_integer() \
real_val = '({{:.0f}}.'.format(p).format(value.real) if value.real.is_integer() \
I think we should remove the extra whitespace.
Addressed in PR update.
torch/_tensor_str.py (Outdated)
# format real and imaginary values according to type
real_val = '({{:.0f}}. '.format(p).format(value.real) if value.real.is_integer() \
    else '({{:.{}f}}'.format(p).format(value.real)
imag_val = '{{:.0f}}.j )'.format(p).format(value.imag) if value.imag.is_integer() \
imag_val = '{{:.0f}}.j)'.format(p).format(value.imag) if value.imag.is_integer() \
same as above
Addressed in PR update.
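For context, here is a minimal sketch of what these format strings produce. It assumes p = 4 and a plain Python complex rather than the PR's exact code, and the non-integer imaginary branch is a guess that mirrors the real part:

```
# Sketch of the format strings under review, assuming p = 4 and a plain
# Python complex value; not the exact code from the PR.
p = 4
value = complex(3.0, 1.34)

# Integer-valued real part: drop the decimals, then append '.'.
# With the trailing space flagged above this yields '(3. '; without it, '(3.'.
real_val = '({{:.0f}}.'.format(p).format(value.real) if value.real.is_integer() \
    else '({{:.{}f}}'.format(p).format(value.real)

# Imaginary part follows the same pattern and closes the parenthesis
# (the non-integer branch here is an assumption mirroring the real part).
imag_val = '{{:.0f}}.j)'.format(p).format(value.imag) if value.imag.is_integer() \
    else '{{:.{}f}}j)'.format(p).format(value.imag)

print(real_val, imag_val)  # prints: (3. 1.3400j)
```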
I think this behavior is in line with what we do for floating point tensors:
for example, this: gives
However, the first case that you mentioned is correct, and to generalize it I think we wouldn't want to print the zeros after the decimal if none of the entries have any non-zero value after the decimal.
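For reference, a small illustration of the float-tensor behavior being referred to; the second output matches the example in the PR description, and exact spacing may vary across PyTorch versions:

```
import torch

# All entries are integer-valued: print only a trailing decimal point, no zeros.
print(torch.tensor([1., 2., 3.]))    # tensor([1., 2., 3.])

# A single truly fractional entry forces full precision for every element.
print(torch.tensor([1, 2, 3, 4.5]))  # tensor([1.0000, 2.0000, 3.0000, 4.5000])
```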
Updated the PR description with current outputs.
Based on the CircleCI build failures summary and remediations, it seems that of the six failing tests, five are due to upstream breakages. My understanding, based on other PRs I've read, is that the sixth failure (rocmdeb) is flaky. Is there anything else I should address?
@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
torch/_tensor_str.py (Outdated)
p = PRINT_OPTS.precision
ret = '({{:.{}f}} {{}} {{:.{}f}}j)'.format(p, p).format(value.real, '+-'[value.imag < 0], abs(value.imag))
# format real and imaginary values according to type
if not is_float_tensor:
What are you trying to do here?
Strictly speaking, is_floating_point() returns false for complex tensors, so it's misleading to have an is_float_tensor check and act on it under the complex branch.
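A quick check illustrating that point (assuming a PyTorch build with complex dtypes available):

```
import torch

# Complex dtypes are not considered floating point, even though their
# real and imaginary components are floats.
print(torch.zeros(2, dtype=torch.complex64).is_floating_point())  # False
print(torch.zeros(2, dtype=torch.float32).is_floating_point())    # True
```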
You said we want to truncate the zeros only in cases where all elements of the complex tensor are integers. This checks whether the complex tensor has any float components. I can change the variable name, but that's what it does.
Hmm, maybe change it to has_non_zero_decimal_val?
Oh, I changed it to complex_contains_float. I can change it to has_non_zero_decimal_val if you prefer that, though; let me know!
Yeah, let's change it to has_non_zero_decimal_val :)
Okay! Doing it right now.
Updated the PR; the values are now consistent with those shown in the PR description.
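For clarity, a rough sketch of what the renamed check expresses. This is a hypothetical helper rather than the PR's code (the PR works on scalar values inside the formatter), and it assumes a PyTorch version where .real, .imag, and .frac() are available for complex tensors:

```
import torch

def has_non_zero_decimal_val(t):
    # True if any element of the complex tensor has a non-zero fractional
    # part in either its real or imaginary component.
    real_has_frac = (t.real.frac() != 0).any()
    imag_has_frac = (t.imag.frac() != 0).any()
    return bool(real_has_frac) or bool(imag_has_frac)

print(has_non_zero_decimal_val(torch.tensor([1 + 2j, 3 + 4j])))    # False
print(has_non_zero_decimal_val(torch.tensor([1.2 + 2j, 3 + 4j])))  # True
```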
Hey @choidongyeon, thanks for the PR :D it looks good! I'm wondering if you looked into why we have the extra space in printing, for example:
is it because we are not using masked_select?
@anjali411 Thanks for merging the changes from master in for me. Regarding your question about why we have the extra spaces: I haven't investigated too deeply, but I know that it has to do with
Then, our return values will look like this:
The only thing is, I feel like there should be a more elegant solution that preserves the current form of the code (one return statement). I can create another issue and look into it separately, or just push the proposed change here. Personally, I think we should close this issue out by merging it, but let me know!
Hi @choidongyeon, I see. I agree we should figure out a more elegant solution for this. I'll merge this PR. Would you like to create a follow-up issue and work on it?
@anjali411 is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Sounds like a plan! Will do that right now. Edit: no idea if I did this correctly, but here it is: #36279
@anjali411 merged this pull request in fdf7a83.
Address printing inconsistency between float and complex tensors (pytorch#35841)

Summary:
See issue [pytorch#33494 Complex number printing inconsistent with float](pytorch#33494).

Changes introduce an optional argument in Formatter's `format` function to discern whether a tensor is a float tensor or not. This way, there is consistency between float tensors and complex tensors so that complex tensors print in the same manner as float tensors:

- Only a decimal point and no zeros for integer values.
- Trailing zeros only if the value is truly a float.
- Whitespace introduced to fill the gap so that +/- symbols and commas align.

Here are some example outputs.

```
print(torch.zeros((2,2), dtype=torch.float64))
```

yields

```
tensor([[0., 0.],
        [0., 0.]], dtype=torch.float64)
```

```
print(torch.zeros((2,2), dtype=torch.complex64))
```

previously yielded

```
tensor([[(0.0000 + 0.0000j), (0.0000 + 0.0000j)],
        [(0.0000 + 0.0000j), (0.0000 + 0.0000j)]], dtype=torch.complex64)
```

and now yields

```
tensor([[(0 + 0.j), (0 + 0.j)],
        [(0 + 0.j), (0 + 0.j)]], dtype=torch.complex64)
```

This new print version is more consistent with float tensors' pretty print. The following example mixes integer and decimal values:

```
print(torch.tensor([[1 + 1.340j, 3 + 4j], [1.2 + 1.340j, 6.5 + 7j]], dtype=torch.complex64))
```

This yields:

```
tensor([[ (1.0000 + 1.3400j), (3.0000 + 4.0000j)],
        [ (1.2000 + 1.3400j), (6.5000 + 7.0000j)]], dtype=torch.complex64)
```

The following example

```
torch.tensor([1,2,3,4.5])
```

yields

```
tensor([1.0000, 2.0000, 3.0000, 4.5000])
```

Pull Request resolved: pytorch#35841
Differential Revision: D20893848
Pulled By: anjali411
fbshipit-source-id: f84c533b8957a1563602439c07e60efbc79691bc