Please implement where_cpu and where_cuda for BoolTensors #26247
Comments
Summary: Enabled "where_cuda" for bool tensors on CUDA. Fixes pytorch/pytorch#26247. Tested via unit tests.
Pull Request resolved: pytorch/pytorch#26430
Differential Revision: D17464181
Pulled By: izdeby
fbshipit-source-id: cbb09925753b2e6f35e7400da3243d4d3fc86b69
There is also the same kind of problem on CPU.
Reopening in order to fix the CPU path.
@TheodoreZhao, can you provide repro code? This should work.
Output:
@izdeby It looks like the types of
Reopening while investigating.
Any update on this?
I have to fix this. Will do in the next few days. Sorry for the delay.
Have you solved it now? I have also encountered this problem.
Fixed in #47454.
This issue still exists on
Working as expected.
🚀 Feature
Calling `mycudabooltensor.where(...)` returns:
`RuntimeError: "where_cuda" not implemented for 'Bool'`
Motivation
In order to do masked operations on bool tensors, I have to cast them to byte tensors first, which looks inelegant.
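The cast-to-byte workaround, and the direct bool behavior once `where` supports bool tensors, can be sketched as follows (the tensor values are hypothetical examples, not from the issue):

```python
import torch

# Hypothetical example tensors; all three are bool.
cond = torch.tensor([True, False, True])
a = torch.tensor([True, True, False])
b = torch.tensor([False, False, True])

# With the fix, torch.where handles bool value tensors directly
# (equivalently: a.where(cond, b)).
out = torch.where(cond, a, b)

# The workaround described above: cast the bool values to byte,
# select, then cast the result back to bool.
out_workaround = torch.where(cond, a.to(torch.uint8), b.to(torch.uint8)).bool()
assert torch.equal(out, out_workaround)
```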
cc @izdeby