Add unsigned integer dtypes to PyTorch #116594
Conversation
Nit on the C++ API naming, but sounds good.
It might be good to collect on the issue a full list of TODOs for these dtypes (Python bindings, op support, testing support, serialization, etc.) so that we have a clear "current state" for them somewhere.
@@ -424,6 +454,10 @@ static inline bool isBitsType(ScalarType t) {
       t == ScalarType::Bits16;
 }

+static inline bool isBarebonesUnsignedType(ScalarType t) {
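The hunk cuts off at the new function's signature. As a rough Python-level mirror of what the predicate presumably checks (a hypothetical sketch, assuming it covers exactly the three dtypes this PR introduces and, per the discussion below, excludes torch.uint8):

```python
import torch

# Hypothetical Python mirror of the C++ isBarebonesUnsignedType predicate:
# true only for the new, minimally supported unsigned dtypes.
_BAREBONES_UNSIGNED = {torch.uint16, torch.uint32, torch.uint64}

def is_barebones_unsigned(dtype: torch.dtype) -> bool:
    return dtype in _BAREBONES_UNSIGNED
```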
Should UInt8 be in here as well?
I guess not, given how you use it for the type promotion checks. But this might be a confusing API for our C++ devs?
"Bare bones" here is defined as "we have only minimal kernel support in traditional C++ eager mode". Since uint8 is grandfathered from the Lua days to have lots of kernels, it shouldn't be included.
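To make the distinction concrete, here is a hedged sketch of the expected behavior at this point in the stack (exact failing ops and error messages may differ by commit):

```python
import torch

# uint8 is grandfathered in with broad eager kernel coverage:
a = torch.zeros(4, dtype=torch.uint8)
print(a + 1)  # arithmetic kernels exist

# The new dtypes are "bare bones": construction works, but most eager
# kernels are still missing.
b = torch.empty(4, dtype=torch.uint16)
try:
    b + 1
except RuntimeError as e:
    print("not implemented yet:", e)
```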
What is the plan for documenting that, to make sure user expectations for these dtypes are appropriate?
The most logical place to document that these are bare bones is in torch.Tensor, where we list supported dtypes. Note that the f8 dtypes are not documented right now, so we could also keep these under the radar, undocumented, until enough stuff is working. I am also half expecting a select few eager kernels to sprout arbitrary unsigned support; gather-like operations in particular are what I think people are most likely to want.
One could argue that the fp8 dtypes are not very well documented either and shouldn't be the example here. Also, there is strong alignment that we will not implement any arithmetic ops for those dtypes (no scale means no compute with the value).
These new dtypes, by contrast, can hold complete values on their own, so we most likely want some documented details about what they support?
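A small illustration of the contrast being drawn (a sketch; torch.float8_e4m3fn stands in for the fp8 dtypes):

```python
import torch

# fp8 values are normally paired with an external scale, so raw fp8
# tensors deliberately get no arithmetic ops:
f8 = torch.empty(4, dtype=torch.float8_e4m3fn)

# The new unsigned dtypes carry complete values on their own (uint16
# spans 0..65535), so users may reasonably expect arithmetic to work,
# which is why the supported surface deserves explicit documentation.
u16 = torch.empty(4, dtype=torch.uint16)
```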
I'll post a proposed doc PR and we can discuss it there.
A new era for PyTorch in 2024! :)
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours).
Will reinterpreting via int16_tensor.view(torch.uint16), and the reverse, be supported out of the box? Another useful thing to support for unsigned tensors is more bit ops (#105465), since bit ops are clearly defined for unsigned types and do not need any special behavior for negative operands.
Yeah, I didn't test it, but the mechanism is dtype-agnostic. We will probably end up negotiating some basic set of ops to provide in eager mode; I'm thinking of letting the current binary size changes settle first before moving on.
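A quick sketch of the round-trip being asked about (untested per the comment above; assumes the dtype-agnostic view mechanism covers the new dtypes):

```python
import torch

x = torch.tensor([-1, 0, 1], dtype=torch.int16)
u = x.view(torch.uint16)  # reinterpret bits: 0xFFFF becomes 65535
y = u.view(torch.int16)   # the reverse view round-trips the values
assert y.tolist() == [-1, 0, 1]
```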
Stack from ghstack (oldest at bottom):
The dtypes are largely useless right now (not even fill works), but this makes torch.uint16, torch.uint32, and torch.uint64 available as dtypes.
Towards #58734
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
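For completeness, what "available as a dtype" buys at this point in the stack (a hedged sketch; anything beyond construction and metadata may raise at this commit):

```python
import torch

print(torch.uint16, torch.uint32, torch.uint64)  # the new dtypes exist

t = torch.empty(4, dtype=torch.uint32)  # allocation works
print(t.dtype, t.element_size())        # torch.uint32 4

# Per the description above, fill is not wired up yet, so e.g.
# torch.zeros(4, dtype=torch.uint32) may still raise here.
```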