Bool tensor creation (cpu) #17376
Conversation
Force-pushed from eabf001 to 26e35e6.
Test failures are real.
I most heavily reviewed the test code. The main code changes seem basically reasonable.
I took a look at the scalar dispatch macro situation, since I am partly to blame for it. Here's a summary of what the FORALL macros do:
Hopefully that makes the situation clearer. To summarize, we can roughly classify dtypes into support levels:
…ill (#17536)

Summary: For some additional context on this change, please see this [PR](#17376). As a part of the work on Bool Tensor, we will need to add support for a bool type to the _fill() and _zero() methods that currently live in THTensorMath. As we don't need anything else and those methods are not really math related, we are moving them out into a separate THTensorFill for simplicity.

Change:
- moved _fill() and _zero() from THTensorMath.h to THTensorFill
- enabled _fill() and _zero() for the HALF type

Pull Request resolved: #17536
Differential Revision: D14242130
Pulled By: izdeby
fbshipit-source-id: 1d8bd806f0f5510723b9299d360b70cc4ab96afb
Force-pushed from c9dc84e to 11e9016.
@ezyang I was actually referring to the dispatch macros here: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Dispatch.h. I don't know if we have a name for the macros you are referencing -- ScalarType macros or something? In any case, I'm less worried about those because users don't usually touch them. But they do touch the dispatch macros.
How about something like this: consider both Half and Bool special. We can keep e.g. AT_DISPATCH_FLOATING_TYPES_AND_HALF, because I doubt anyone will ever want AT_DISPATCH_FLOATING_TYPES_AND_HALF_AND_BOOL, and we can add AT_DISPATCH_INTEGRAL_TYPES_AND_BOOL as the analog of AT_DISPATCH_FLOATING_TYPES_AND_HALF.

The issue is with the AT_DISPATCH_ALL_TYPES* macros. Instead of having AT_DISPATCH_ALL_TYPES_AND_HALF, how about a generic AT_DISPATCH_ALL_TYPES_AND that takes the extra scalar types as arguments? Then AT_DISPATCH_ALL_TYPES_AND_HALF would just be AT_DISPATCH_ALL_TYPES_AND(ScalarType::HALF, ...), and what would logically have been AT_DISPATCH_ALL_TYPES_AND_HALF_AND_BOOL would be AT_DISPATCH_ALL_TYPES_AND(ScalarType::HALF, ScalarType::BOOL, ...). You can implement this statically with templates. Similarly with AT_DISPATCH_ALL_TYPES_AND_COMPLEX -- just add an AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND variant.
Force-pushed from 92129c2 to e60f856.
GO GO GO
…onal input scalar type
Added tests for a bool tensor (CPU)
Minor code cleanup and rebase on latest master
Resolved some PR comments
Added bool type to AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND as an additional input scalar type
Revert unwanted changes
Force-pushed from e3cc209 to 150373c.
@izdeby has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@izdeby is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: This PR enables bool tensor creation and some basic operations for the CPU backend. This is a part of the Bool Tensor feature implementation work. The whole plan looks like this:
1. Storage Implementation [Done]
2. Tensor Creation.
   a) CPU (this PR)
   b) CUDA
3. Tensor Conversions.
4. Tensor Indexing.
5. Tensor Operations.
6. Back compatibility related changes.

**Change**: Enable CPU tensors and these operations:
- torch.zeros
- torch.tensor
- torch.ones
- torch.randint
- torch.full
- torch.full_like
- torch.empty
- torch.empty_like

**Tested via**:
1) unit tests
2)
torch.zeros(2, 2, dtype=torch.bool)
torch.tensor([True, False], dtype=torch.bool)
torch.tensor([-1, -1.1, 0, 1, 1.1, 2], dtype=torch.bool)
torch.ones([1, 2], dtype=torch.bool)
torch.randint(10, (2, 2), dtype=torch.bool)
torch.full((2, 3), True, dtype=torch.bool)
torch.empty(4, dtype=torch.bool)
a = torch.tensor([0, 0, 1])
b = torch.full_like(a, True)

Pull Request resolved: pytorch/pytorch#17376
Reviewed By: ezyang
Differential Revision: D14375995
Pulled By: izdeby
fbshipit-source-id: a65490b5360ee0e6e3accc54ce7e32e49ad2d2a8