
[numpy] torch.exp: promote integer inputs to float #50093

Conversation

kshitij12345 (Collaborator)

Reference: #42515

@facebook-github-bot added the cla signed and oncall: jit (Add this issue/PR to JIT oncall triage queue) labels on Jan 5, 2021.
facebook-github-bot (Contributor) commented on Jan 5, 2021

💊 CI failures summary and remediations

As of commit 9134f8c (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


kshitij12345 (Collaborator, Author) left a comment

Reason for skipping BFloat16: torch.exp in bfloat16 loses noticeable precision relative to float32:

>>> torch.tensor(8.8750, device='cuda:0', dtype=torch.bfloat16)
tensor(8.8750, device='cuda:0', dtype=torch.bfloat16)
>>> t = torch.tensor(8.8750, device='cuda:0', dtype=torch.bfloat16)
>>> torch.exp(t)
tensor(7136., device='cuda:0', dtype=torch.bfloat16)
>>> torch.exp(t.to(torch.float32))
tensor(7150.9463, device='cuda:0')
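
For context, the change in this PR means torch.exp now promotes integer inputs and returns a floating-point result. A minimal illustrative transcript (not from the PR discussion), assuming the default dtype is float32:

>>> import torch
>>> t = torch.arange(3)  # int64 input
>>> torch.exp(t)         # promoted to the default float dtype
tensor([1.0000, 2.7183, 7.3891])
>>> torch.exp(t).dtype
torch.float32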

@kshitij12345 kshitij12345 marked this pull request as ready for review January 5, 2021 15:55
codecov bot commented on Jan 5, 2021

Codecov Report

Merging #50093 (9134f8c) into master (6e6231f) will decrease coverage by 0.01%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master   #50093      +/-   ##
==========================================
- Coverage   80.67%   80.66%   -0.02%     
==========================================
  Files        1899     1899              
  Lines      206066   206066              
==========================================
- Hits       166241   166219      -22     
- Misses      39825    39847      +22     

@smessmer added the triaged label (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) on Jan 6, 2021.
@kshitij12345 mentioned this pull request on Jan 6, 2021.
mruberry (Collaborator) left a comment

Awesome! Thanks @kshitij12345!

I don't want to interrupt your flow because the progress on these PRs has been great, but would you take a look at creating an OpInfo for torch.tile?

We need to be very careful about its performance. See #49962. A recent PR reduced the size of the tensors generated by its method_test entries to reduce the time those tests took. We should probably respect those sizes.

While creating the OpInfo for tile, would you also review the following:

  • Was torch.tile implemented correctly? That is, is it really like np.tile? Note that torch.tile is not a unary ufunc, so while we can create an OpInfo for jit and autograd testing, it will still need its own forward tests to validate its behavior. These tests exist, but are they complete?
  • Can torch.repeat be implemented as a call to torch.tile? I understand that torch.tile is actually implemented as a call to repeat currently, but from a UX standpoint, could we alias torch.repeat to torch.tile? It's true that torch.tile can accept more inputs than torch.repeat, but will every valid input to torch.repeat produce the same output when given to torch.tile? (A quick check appears after this list.)
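
As an illustrative check of that last question (an editor's REPL sketch, not from the PR):

>>> import torch
>>> t = torch.tensor([[1, 2], [3, 4]])
>>> torch.equal(t.repeat(2, 3), torch.tile(t, (2, 3)))
True
>>> # Unlike Tensor.repeat, torch.tile accepts reps with fewer entries
>>> # than the tensor has dimensions (NumPy-style, left-padded with 1s):
>>> torch.tile(t, (2,))
tensor([[1, 2, 1, 2],
        [3, 4, 3, 4]])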

I'm especially interested in this because of #50013. I'm not suggesting that we proceed with the deprecation in that issue, but I'd like more data about torch.tile and torch.repeat to make an informed decision.

Bonus points if the PR also creates an OpInfo for torch.repeat and/or merges their "forward" tests together.
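
For reference, a rough sketch of what an OpInfo entry for tile might look like, modeled on the OpInfo, SampleInput, and make_tensor helpers in torch/testing/_internal/common_methods_invocations.py; the sample shapes and dtype list here are hypothetical, and real entries should keep tensors small per the performance concerns in #49962:

# Hypothetical sketch, not the PR's code.
def sample_inputs_tile(op_info, device, dtype, requires_grad):
    # Keep tensors small to avoid the test-time blowup tracked in #49962.
    cases = (((2, 3), (2, 2)),       # reps matching the tensor's dims
             ((2, 3), (2,)),         # reps shorter than the tensor's dims
             ((2, 2, 2), (2, 1, 3)))
    return [SampleInput(make_tensor(shape, device, dtype,
                                    requires_grad=requires_grad),
                        args=(reps,))
            for shape, reps in cases]

OpInfo('tile',
       dtypes=all_types_and_complex_and(torch.bool, torch.half, torch.bfloat16),
       sample_inputs_func=sample_inputs_tile)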

facebook-github-bot (Contributor) left a comment

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

kshitij12345 (Collaborator, Author)

> (quoting mruberry's review comment above in full)

Sure. Thanks for the pointers!

facebook-github-bot (Contributor)

@mruberry merged this pull request in 9f832c8.

hwangdeyu pushed a commit to hwangdeyu/pytorch that referenced this pull request Jan 14, 2021
Summary:
Reference: pytorch#42515

Pull Request resolved: pytorch#50093

Reviewed By: H-Huang

Differential Revision: D25803549

Pulled By: mruberry

fbshipit-source-id: e6f245b5e728f2dca6072f8c359f03dff63aa14d
Labels: cla signed, Merged, oncall: jit, open source, triaged