
Conversation

kshitij12345
Collaborator

Reference: #42515

@facebook-github-bot added the cla signed and oncall: jit (Add this issue/PR to JIT oncall triage queue) labels on Dec 7, 2020
@kshitij12345
Collaborator Author

Reason for skipping bfloat16:

>>> import torch
>>> torch.tensor(6.75, dtype=torch.bfloat16)
tensor(6.7500, dtype=torch.bfloat16)
>>> t = torch.tensor(6.75, dtype=torch.bfloat16)
>>> torch.expm1(t)
tensor(852., dtype=torch.bfloat16)
>>> torch.expm1(t.to(torch.float32))
tensor(853.0588)

For other values, the difference from the float32 reference is even larger relative to the tolerance.
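
For illustration (assuming standard round-to-nearest casting): for this input the bfloat16 result is in fact the correctly rounded value, it just sits about 1.06 away from the float32 reference, which is well outside the default comparison tolerances.

>>> ref = torch.expm1(torch.tensor(6.75, dtype=torch.float32))
>>> ref
tensor(853.0588)
>>> ref.to(torch.bfloat16)
tensor(852., dtype=torch.bfloat16)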

@kshitij12345 mentioned this pull request on Dec 7, 2020
@kshitij12345 requested a review from mruberry on December 7, 2020, 09:53
@dr-ci

dr-ci bot commented Dec 7, 2020

💊 CI failures summary and remediations

As of commit ced829a (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

1 failure not recognized by patterns:

Job: CircleCI pytorch_linux_xenial_py3_clang5_asan_test2
Step: Report results


@ailzhang added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label on Dec 7, 2020
@mruberry
Collaborator

mruberry commented Dec 8, 2020

Reason for skipping bfloat16:

>>> import torch
>>> torch.tensor(6.75, dtype=torch.bfloat16)
tensor(6.7500, dtype=torch.bfloat16)
>>> t = torch.tensor(6.75, dtype=torch.bfloat16)
>>> torch.expm1(t)
tensor(852., dtype=torch.bfloat16)
>>> torch.expm1(t.to(torch.float32))
tensor(853.0588)

For other values, the difference from the float32 reference is even larger relative to the tolerance.

This makes sense. I would really like to start considering ways we can work around NumPy's lack of bfloat16 support. Maybe we can cast the values to float32, then cast them to bfloat16, then nextafter them in the direction of the PyTorch result?
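
A rough sketch of the first two steps (the helper name here is made up for illustration; the nextafter step is only indicated in a comment):

import numpy as np
import torch

def bf16_reference(np_fp32_result):
    # Hypothetical helper: NumPy has no bfloat16, so take its float32 result
    # and round it to bfloat16 before comparing with PyTorch's bfloat16 output.
    return torch.from_numpy(np.asarray(np_fp32_result, dtype=np.float32)).to(torch.bfloat16)

expected = bf16_reference(np.expm1(np.float32(6.75)))           # tensor(852., dtype=torch.bfloat16)
actual = torch.expm1(torch.tensor(6.75, dtype=torch.bfloat16))  # tensor(852., dtype=torch.bfloat16)
print(torch.equal(expected, actual))                            # True for this value
# A final step would nudge `expected` one bfloat16 ULP toward `actual`
# (the nextafter part of the suggestion) to tolerate a one-step rounding difference.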

Collaborator

@mruberry left a comment


Awesome! Thanks @kshitij12345!

Contributor

@facebook-github-bot left a comment


@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@kshitij12345
Collaborator Author

This makes sense. I would really like to start considering ways we can work around NumPy's lack of bfloat16 support. Maybe we can cast the values to float32, then cast them to bfloat16, then nextafter them in the direction of the PyTorch result?

I think that makes sense since both have the same range. Right now the catch is that nextafter is not implemented for bfloat16.
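
If it helps, a minimal bit-twiddling stand-in for the missing bfloat16 nextafter (a sketch for positive finite values only; a real kernel would also have to handle signs, zeros, infs and nans):

import torch

def bf16_nextafter_toward(x, target):
    # Step the raw bfloat16 bit pattern one ULP toward `target` by reinterpreting
    # it as int16. For positive finite values, adding or subtracting 1 from the
    # bit pattern moves to the adjacent representable bfloat16 value.
    bits = x.view(torch.int16)
    one = torch.ones_like(bits)
    step = torch.where(target.float() > x.float(), one, -one)
    return torch.where(target == x, x, (bits + step).view(torch.bfloat16))

print(bf16_nextafter_toward(torch.tensor([852.], dtype=torch.bfloat16),
                            torch.tensor([900.], dtype=torch.bfloat16)))
# tensor([856.], dtype=torch.bfloat16)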

@facebook-github-bot
Contributor

@mruberry merged this pull request in eb9516e.

@kshitij12345 deleted the develop/numpy/unary-float-op/exp2-expm1 branch on December 10, 2020, 18:08