implement bool_tensor.bernoulli_ #25076
Conversation
@@ -132,7 +132,7 @@ Tensor& bernoulli_out(Tensor& result, const Tensor& self, Generator* gen) {
 }

 Tensor& bernoulli_tensor_cpu_(Tensor& self, const Tensor& p_, Generator* gen) {
-  AT_DISPATCH_ALL_TYPES(self.scalar_type(), "bernoulli_tensor_cpu_self_", [&] {
+  AT_DISPATCH_ALL_TYPES_AND(at::ScalarType::Bool, self.scalar_type(), "bernoulli_tensor_cpu_self_", [&] {
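The change swaps `AT_DISPATCH_ALL_TYPES` for `AT_DISPATCH_ALL_TYPES_AND`, which dispatches over the same set of scalar types plus one extra (here `Bool`). As a rough sketch of the idea (the names and sets below are illustrative, not ATen's actual macro machinery):

```python
# Illustrative sketch of the AT_DISPATCH_ALL_TYPES / _AND pattern.
# The type names and error text mimic ATen but are simplified.

ALL_TYPES = {"Byte", "Char", "Short", "Int", "Long", "Float", "Double"}

def dispatch_all_types(scalar_type, name, body):
    # Mirrors AT_DISPATCH_ALL_TYPES: error out on unsupported dtypes.
    if scalar_type not in ALL_TYPES:
        raise RuntimeError(f'"{name}" not implemented for \'{scalar_type}\'')
    return body(scalar_type)

def dispatch_all_types_and(extra, scalar_type, name, body):
    # Mirrors AT_DISPATCH_ALL_TYPES_AND: same set, plus one extra type.
    if scalar_type not in ALL_TYPES | {extra}:
        raise RuntimeError(f'"{name}" not implemented for \'{scalar_type}\'')
    return body(scalar_type)
```

Under this model, a `Bool` tensor reaching the old `dispatch_all_types` raises the "not implemented" error that #25072 reports, while `dispatch_all_types_and("Bool", ...)` accepts it.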
Why is there a difference between the CPU and CUDA `bernoulli_` implementations with respect to supporting half precision?
I think a lot of ops are not supported for CPU half due to inefficiency.
@izdeby could you review?
@pytorchbot merge this please
@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Fixes pytorch/pytorch#25072
Pull Request resolved: pytorch/pytorch#25076
Differential Revision: D17073453
Pulled By: ezyang
fbshipit-source-id: 42410da8c9911c1d7b3543bde740c7e66ae0cc1c
Fixes #25072
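With this PR landed, `bernoulli_` works on bool CPU tensors. Assuming a PyTorch build that includes the fix, the user-facing effect can be checked like this:

```python
import torch

# Before this PR, calling bernoulli_ on a bool CPU tensor raised
# RuntimeError: "bernoulli_tensor_cpu_self_" not implemented for 'Bool'.
t = torch.empty(8, dtype=torch.bool)

# With p=1.0 every draw is 1, so every element must come out True.
t.bernoulli_(1.0)
assert t.all()

# With p=0.0 every draw is 0, so every element must come out False.
t.bernoulli_(0.0)
assert not t.any()
```

Using the degenerate probabilities 1.0 and 0.0 makes the check deterministic despite `bernoulli_` being a random fill.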