
[numpy] torch.sqrt : promote integer inputs to float #47293

Conversation

kshitij12345
Collaborator

Reference #42515
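
For context, a minimal sketch (my reading of the linked issue, not text from this PR) of the behaviour being added: integer inputs to torch.sqrt are promoted to a floating dtype, matching NumPy's promotion to a floating type. The exact output dtype depends on torch.get_default_dtype() (torch.float32 unless changed):

>>> import torch
>>> torch.sqrt(torch.arange(1, 5, dtype=torch.int64)).dtype
torch.float32
>>> import numpy as np
>>> np.sqrt(np.arange(1, 5, dtype=np.int64)).dtype
dtype('float64')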

@facebook-github-bot facebook-github-bot added the cla signed and oncall: jit (Add this issue/PR to JIT oncall triage queue) labels Nov 3, 2020
@dr-ci

dr-ci bot commented Nov 3, 2020

💊 CI failures summary and remediations

As of commit 91b9a3a (more details on the Dr. CI page):


None of the CI failures appear to be your fault 💚



🚧 2 ongoing upstream failures:

These were probably caused by upstream breakages that are not fixed yet:


This comment was automatically generated by Dr. CI.

This comment has been revised 18 times.

@kshitij12345
Collaborator Author

Reason for disabling bfloat16

>>> import torch
>>> torch.tensor(4980736.).sqrt()
tensor(2231.7563)
>>> torch.tensor(4980736.).sqrt().to(torch.bfloat16)
tensor(2224., dtype=torch.bfloat16) # Seems like bfloat16 can't represent tensor(2231.7563)
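
To make the rounding explicit (a quick sketch, assuming the usual bfloat16 layout with 8 significand bits): the spacing between representable bfloat16 values in [2048, 4096) is 16, so 2231.7563 rounds to the nearest representable value, 2224.

>>> torch.finfo(torch.bfloat16).eps * 2048  # ULP for values in [2048, 4096)
16.0
>>> torch.tensor(2231.7563, dtype=torch.bfloat16)
tensor(2224., dtype=torch.bfloat16)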

@kshitij12345
Collaborator Author

Reason for disabling torch.cdouble, torch.cfloat
#47358

@kshitij12345 kshitij12345 marked this pull request as ready for review November 4, 2020 17:50
@kshitij12345 kshitij12345 removed the request for review from apaszke November 4, 2020 17:50
@ejguan ejguan added the module: numpy (Related to numpy support, and also numpy compatibility of our operators) and triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) labels and removed the oncall: jit (Add this issue/PR to JIT oncall triage queue) label Nov 4, 2020
@codecov

codecov bot commented Nov 4, 2020

Codecov Report

Merging #47293 (60d35a1) into master (abae12b) will decrease coverage by 0.41%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master   #47293      +/-   ##
==========================================
- Coverage   81.23%   80.82%   -0.42%     
==========================================
  Files        1836     1806      -30     
  Lines      197684   189822    -7862     
==========================================
- Hits       160586   153416    -7170     
+ Misses      37098    36406     -692     

@kshitij12345 kshitij12345 mentioned this pull request Nov 5, 2020
@@ -419,7 +419,36 @@ def sample_inputs(self, device, dtype, requires_grad=False):
     ref=np.nan_to_num,
     dtypes=all_types_and(torch.half, torch.bool),
     dtypesIfCPU=None,
-    dtypesIfCUDA=None)
+    dtypesIfCUDA=None),
+    UnaryUfuncInfo('sqrt',
Collaborator

It's great to see another OpInfo! With this ported, this PR can also remove the TorchMathTestMeta for sqrt in test_torch.py (see torch_op_tests; I can't link the line because the file is too big to render on GitHub).

dtypesIfCUDA=None),
UnaryUfuncInfo('sqrt',
ref=np.sqrt,
domain=(0, float('inf')),
Collaborator

Metanote: it's a little unfortunate that we don't have dtype-specific domains so we can't test taking the sqrt of negative complex values easily.
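
For illustration, here is what a dtype-specific domain would let the reference tests cover (the NumPy reference behaviour for a negative complex input, which the (0, inf) domain currently excludes):

>>> import numpy as np
>>> np.sqrt(np.array([-4 + 0j]))
array([0.+2.j])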

SkipInfo('TestUnaryUfuncs', 'test_reference_numerics',
dtypes=[torch.bfloat16]),
# RuntimeError: sqrt does not support automatic differentiation for outputs with complex dtype.
SkipInfo('TestGradients', 'test_fn_grad',
Collaborator

Metanote: if this happens again we should add a property for whether a function supports complex autograd or not, and, if complex autograd isn't supported, skip the complex autograd tests.
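
A rough sketch of the kind of per-op flag this metanote suggests (hypothetical names; this is not the OpInfo API as it exists in this PR):

# Hypothetical: gradient tests consult a per-op flag instead of per-test SkipInfos.
class OpAutogradMeta:
    def __init__(self, name, supports_complex_autograd):
        self.name = name
        self.supports_complex_autograd = supports_complex_autograd

def should_skip_grad_test(op, dtype_is_complex):
    # Skip complex autograd tests for ops that do not support them yet.
    return dtype_is_complex and not op.supports_complex_autograd

sqrt_meta = OpAutogradMeta('sqrt', supports_complex_autograd=False)
assert should_skip_grad_test(sqrt_meta, dtype_is_complex=True)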

@mruberry
Collaborator

mruberry commented Nov 6, 2020

Hey @kshitij12345, this looks great, as usual! Would you update this to remove the redundant test for sqrt in test_torch.py, too?

@kshitij12345
Collaborator Author

@mruberry Have removed the redundant test. PTAL :)

@mruberry mruberry self-requested a review November 10, 2020 10:05

@mruberry mruberry left a comment


Awesome! Thanks @kshitij12345!

@facebook-github-bot facebook-github-bot left a comment


@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@kshitij12345
Collaborator Author

@mruberry Gentle Ping :)

@mruberry
Collaborator

> @mruberry Gentle Ping :)

I had to re-run some internal tests that were flaky. This should land soon.

@facebook-github-bot
Contributor

@mruberry merged this pull request in 3649a2c.

@kshitij12345 kshitij12345 deleted the develop/numpy/unary-float-op/sqrt branch November 13, 2020 03:31
tugsbayasgalan pushed a commit to tugsbayasgalan/pytorch that referenced this pull request Nov 16, 2020
Summary:
Reference pytorch#42515

Pull Request resolved: pytorch#47293

Reviewed By: malfet

Differential Revision: D24855994

Pulled By: mruberry

fbshipit-source-id: 1e6752f2eeba6d638dea0bdea0c650cf722718c9
Labels
cla signed, Merged, module: numpy (Related to numpy support, and also numpy compatibility of our operators), open source, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)