
Conversation

@krshrimali krshrimali (Contributor) commented Jul 1, 2021

TODOs:

  • Do not clamp user-supplied low and high when they are given and valid.
  • Devise rules for modifying low and high when extremal or invalid values are passed.
  • Test test_references_numerics_hard against the revised changes. (I've tested locally; after an offline discussion with @mruberry, those changes will land in a separate PR.)
  • Revise the comments/documentation for make_tensor.

See #61758 for tracker issue.
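
For context, the rules above roughly amount to: honor valid user-supplied bounds, clamp them only when they fall outside the dtype's representable range, and reject NaN. The sketch below is purely illustrative -- the helper name, defaults, and exact rules here are hypothetical and are not the code merged in this PR (in particular, the PR's handling of infinite bounds is more nuanced and is elided here):

```python
import math

# Hypothetical sketch only -- names, defaults, and exact rules are illustrative,
# not the helper merged in this PR.
def sketch_resolve_low_high(low, high, lowest, highest, defaults=(-9, 9)):
    def resolve(name, val, default):
        if val is None:
            return default                        # no user input: fall back to a default bound
        if isinstance(val, float) and math.isnan(val):
            raise ValueError(f"Found invalid value {val} for {name}")
        return min(max(val, lowest), highest)     # clamp to the dtype's representable range

    low = resolve("low", low, defaults[0])
    high = resolve("high", high, defaults[1])
    if low > high:
        raise ValueError(f"low ({low}) must be less than or equal to high ({high})")
    return low, high
```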

cc: @mruberry @pmeier

@krshrimali krshrimali added the "module: testing" label (Issues related to the torch.testing module, not tests) Jul 1, 2021
@krshrimali krshrimali self-assigned this Jul 1, 2021
@facebook-github-bot facebook-github-bot (Contributor) commented Jul 1, 2021


💊 CI failures summary and remediations

As of commit cd166fa (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

@krshrimali krshrimali marked this pull request as draft July 1, 2021 09:34
@mruberry mruberry self-requested a review July 8, 2021 22:53
@krshrimali krshrimali marked this pull request as ready for review July 13, 2021 04:10
@krshrimali krshrimali requested a review from pmeier July 13, 2021 04:10
@pmeier pmeier (Collaborator) left a comment

Thanks @krshrimali for the PR. Apart from my inline comments, I remember that we discussed using torch.distributions.Uniform as the base sampler rather than torch.rand. What became of that plan?

@krshrimali krshrimali (Contributor, Author) commented

> Thanks @krshrimali for the PR. Apart from my inline comments, I remember that we discussed using torch.distributions.Uniform as the base sampler rather than torch.rand. What became of that plan?

Thanks for the review, @pmeier - I have addressed a couple of your comments and will take a closer look and update this PR with the remaining suggestions. Regarding using torch.distributions.uniform.Uniform rather than torch.rand: I don't exactly remember why I reverted that commit. I'll have to take another look and see whether it broke any relevant test; I'll get back to you on this after some testing. Thanks!
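
(For readers following along, a minimal sketch of the two sampling strategies being discussed; the shape and bounds below are arbitrary and this is not code from the PR:)

```python
import torch

shape, low, high = (2, 3), -9.0, 9.0

# torch.rand samples from [0, 1), so the result has to be rescaled by hand.
via_rand = torch.rand(shape) * (high - low) + low

# torch.distributions.uniform.Uniform takes the bounds directly
# and samples from [low, high).
via_uniform = torch.distributions.uniform.Uniform(low, high).sample(shape)
```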

@krshrimali krshrimali changed the title from "[WIP] Fixing user inputs for low, high in make_tensor" to "Fixing user inputs for low, high in make_tensor" Jul 13, 2021
@H-Huang H-Huang added the "triaged" label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Jul 13, 2021
@krshrimali krshrimali (Contributor, Author) left a comment

This is ready for another round of review, barring unexpected test failures. A couple of comments still require attention; I'm looking at them. Thanks!

Leaving some sample outputs from the modified make_tensor:

>>> make_tensor((1, 2), device='cpu', dtype=torch.float32, low=-9, high=9)
tensor([[-8.4784, -1.7658]])
>>> make_tensor((1, 2), device='cpu', dtype=torch.uint8, low=-9, high=9)
tensor([[3, 2]], dtype=torch.uint8)
>>> make_tensor((1, 2), device='cpu', dtype=torch.uint8, low=-9, high=None)
tensor([[6, 4]], dtype=torch.uint8)
>>> make_tensor((1, 2), device='cpu', dtype=torch.uint8, low=None, high=None)
tensor([[2, 5]], dtype=torch.uint8)
>>> make_tensor((1, 2), device='cpu', dtype=torch.float32, low=None, high=None)
tensor([[ 3.1285, -3.0337]])
>>> make_tensor((1, 10), device='cpu', dtype=torch.float32, low=None, high=None)
tensor([[ 5.1067,  1.1351,  4.9473,  5.7744, -3.9730,  3.2708, -3.8939,  2.8211,
         -4.7024,  4.1631]])
>>> make_tensor((1, 10), device='cpu', dtype=torch.float32, low=None, high=float('inf'))
tensor([[2.0459e+38, 1.0356e+38, 8.6688e+37, 2.1416e+38, 3.2889e+38, 2.5179e+38,
         1.5371e+38, 1.6188e+38, 2.6685e+38, 5.1892e+37]])
>>> make_tensor((1, 2), device='cpu', dtype=torch.float32, low=float('inf'), high=float('inf'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 2047, in make_tensor
    low, high = _modify_low_high(ranges_floats, low, high, default_values=(-9, 9))
  File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 2028, in _modify_low_high
    low = _for_val('low')
  File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 2008, in _for_val
    raise ValueError(f"Found invalid value {val} for {inp}")
ValueError: Found invalid value inf for low
>>> make_tensor((1,2), 'cpu', torch.float32, low=float('nan'), high=float('nan'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 2048, in make_tensor
    low, high = _modify_low_high(ranges_floats, low, high, default_values=(-9, 9))
  File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 2029, in _modify_low_high
    low = _for_val('low')
  File "/home/krshrimali/pytorch/torch/testing/_internal/common_utils.py", line 2009, in _for_val
    raise ValueError(f"Found invalid value {val} for {inp}")
ValueError: Found invalid value nan for low
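
Based on the sample outputs above, valid user-supplied bounds are honored while NaN (and invalid infinite) bounds are rejected. A small illustrative usage check follows; the import path is taken from the tracebacks above, and the assertion is a sketch, not a test from this PR:

```python
import torch
from torch.testing._internal.common_utils import make_tensor

t = make_tensor((1000,), device='cpu', dtype=torch.float32, low=-9, high=9)
# Valid bounds are no longer clamped away, so the values stay inside [-9, 9].
assert t.min().item() >= -9 and t.max().item() <= 9
```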
cc: @pmeier @mruberry

@mruberry mruberry self-requested a review July 30, 2021 08:28
@mruberry mruberry (Collaborator) left a comment

Overall this looks good, @krshrimali. I just made a couple of suggestions, and then we should wait for the tests.

@krshrimali krshrimali (Contributor, Author) commented

This is ready for a final review, @mruberry - thanks for your patience and help. PTAL whenever you find the time. :)

@krshrimali krshrimali requested a review from mruberry August 4, 2021 03:30
@mruberry mruberry (Collaborator) left a comment

Cool; finally a make_tensor() that respects the user's low and high!

@facebook-github-bot facebook-github-bot (Contributor) commented

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot facebook-github-bot (Contributor) commented

@mruberry merged this pull request in fcc1f87.

alanwaketan pushed a commit that referenced this pull request Aug 17, 2021
Summary:
**TODOs:**

* [x] Do not clamp user-supplied `low` and `high` when they are given and valid.
* [x] Devise rules for modifying `low` and `high` when extremal or invalid values are passed.
* [x] Test `test_references_numerics_hard` against the revised changes. _(I've tested locally; after an offline discussion with mruberry, those changes will land in a separate PR.)_
* [x] Revise the comments/documentation for `make_tensor`.

See #61758 for tracker issue.

cc: mruberry pmeier

Pull Request resolved: #61108

Reviewed By: VitalyFedyunin

Differential Revision: D30296167

Pulled By: mruberry

fbshipit-source-id: 67e8d15b173209a9c97ca013231494a5fa99f8c7
Labels: cla signed, Merged, module: testing, open source, triaged