Workaround at::scalar_tensor with BFloat16. #2536

Merged — 1 commit merged into pytorch:master from scalar_tensor_bf16 on Oct 7, 2020

Conversation

@ailzhang (Contributor) commented Oct 7, 2020

fixes #2535
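
For context, a minimal repro sketch of the underlying failure, assumed from the error message in #2535 (pre-fix PyTorch behavior; not taken verbatim from the PR):

    #include <ATen/ATen.h>
    #include <limits>

    int main() {
      // The workaround this PR applies: build the tensor as Float, then cast.
      at::Tensor ok = at::scalar_tensor(
          std::numeric_limits<float>::lowest(),
          at::TensorOptions(at::ScalarType::Float)).to(at::ScalarType::BFloat16);

      // Throws on pre-fix PyTorch (assumed from #2535):
      //   "value cannot be converted to type at::BFloat16 without overflow: -3.40282e+38"
      at::Tensor bad = at::scalar_tensor(
          std::numeric_limits<float>::lowest(),
          at::TensorOptions(at::ScalarType::BFloat16));
      return 0;
    }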

@ailzhang requested a review from JackCaoG — October 7, 2020 03:00
@cjolivier01 (Contributor) commented:

Do you think this also affects normal float16, or is it specifically related to bfloat16 using more of its bits for the exponent?

@ailzhang (Contributor, Author) commented Oct 7, 2020

@cjolivier01 It might :P But since XLA doesn't support Half, it doesn't matter much here, I think. I'll try to send a fix on the PyTorch side.

@ailzhang requested a review from davidel — October 7, 2020 16:53
@JackCaoG (Collaborator) left a comment

Thanks!

@ailzhang merged commit c095859 into pytorch:master on Oct 7, 2020
@ailzhang deleted the scalar_tensor_bf16 branch — October 7, 2020 17:27
@@ -226,8 +226,13 @@ xla::ComputationClient::DataPtr GetDeviceData(const at::Tensor& tensor,
 xla::ComputationClient::DataPtr GetDeviceData(at::Scalar value,
                                               at::ScalarType scalar_type,
                                               const Device& device) {
-  return GetDeviceData(at::scalar_tensor(value, at::TensorOptions(scalar_type)),
-                       device);
+  // Workaround since at::scalar_tensor doesn't support bfloat16 yet.
+  at::Tensor t = at::scalar_tensor(
+      value, at::TensorOptions(scalar_type == at::ScalarType::BFloat16
+                                   ? at::ScalarType::Float
+                                   : scalar_type));
+  if (scalar_type == at::ScalarType::BFloat16) t = t.to(scalar_type);
+  return GetDeviceData(t, device);

A Collaborator commented on the workaround comment:

This should be a utility function in torch_util.cpp/h, with a bug opened as a reminder to remove the hack once scalar_tensor supports bfloat16. Actually, it may be argued that this should go into PyTorch main directly.

A Collaborator commented on the single-line if statement:

This requires braces around the single statement 😉
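
Sketching that reviewer suggestion, a hypothetical helper in torch_util.cpp/h might look like the following (the name MakeScalarTensor and its placement are assumptions, not part of the merged change):

    // Hypothetical torch_util helper (assumed name): builds a scalar tensor,
    // routing through Float when BFloat16 is requested, since at::scalar_tensor
    // doesn't support bfloat16 yet.
    at::Tensor MakeScalarTensor(at::Scalar value, at::ScalarType scalar_type) {
      at::Tensor t = at::scalar_tensor(
          value, at::TensorOptions(scalar_type == at::ScalarType::BFloat16
                                       ? at::ScalarType::Float
                                       : scalar_type));
      if (scalar_type == at::ScalarType::BFloat16) {
        t = t.to(scalar_type);
      }
      return t;
    }

With such a helper, GetDeviceData would reduce to a single call:

    return GetDeviceData(MakeScalarTensor(value, scalar_type), device);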

Successfully merging this pull request may close the following issue:
[PT_BREAK] value cannot be converted to type at::BFloat16 without overflow: -3.40282e+38
4 participants