wconstab/ltc core hash #3148
Conversation
The TF update seems irrelevant. Was that made by mistake?
Yep, a mistake. Btw, this isn't ready yet. I was just seeing if I could push. I just got the right bazel version and am trying to build locally, but I'm sure some things are missing/wrong.
Sounds good, let me know when it is ready 😄
Force-pushed from dda0426 to ad9e44c
As part of the migration of core lazy tensor functionality to PyTorch core, this PR uses the newly added torch::lazy::Hash functions from PyTorch. Note: while the Hash* functions are largely identical to the original XLA ones, the underlying uint128 class comes from protobuf instead of absl, since it was a slightly smaller dependency to ingest and get building on multiple OS/platform combinations for PyTorch.
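For context, here is a minimal sketch (not code from the PR) of how the `torch::lazy` hash helpers referenced above might be used to combine node-hash components. The function name `ExampleNodeHash` is invented for illustration, and the exact set of `Hash`/`MHash` overloads can differ between PyTorch versions.

```cpp
// Sketch only: combining hash components with the torch::lazy helpers
// described in the PR description. Overload availability may vary by
// PyTorch version.
#include <torch/csrc/lazy/core/hash.h>

#include <cstdint>
#include <vector>

torch::lazy::hash_t ExampleNodeHash(int64_t opkind,
                                    const std::vector<int64_t>& sizes) {
  // MHash hashes each argument and folds the results into one hash_t.
  torch::lazy::hash_t h = torch::lazy::MHash(opkind, sizes);
  // HashCombine folds in components that were hashed separately.
  return torch::lazy::HashCombine(h, torch::lazy::Hash(int64_t{1}));
}
```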
Force-pushed from ad9e44c to 0c8bd01
@JackCaoG OK, this is ready for review now. I am in the process of landing the pytorch/pytorch PR.
Mostly LGTM, minor nits. Should we also clean up xla/third_party/xla_client/util.h?
torch_xla/csrc/ops/generic_slice.cpp (outdated)
[&]() { return NodeOutputShape(input, base_indices, sizes); },
/*num_outputs=*/1, xla::util::MHash(base_indices, sizes)),
/*num_outputs=*/1,
torch::lazy::MHash(torch::lazy::Hash(base_indices),
Why couldn't it just be `torch::lazy::MHash(base_indices, sizes);` here?
Yea, I'll take another look here. I think I had trouble getting torch/core's MHash to work natively with torch_xla/util's Hash(absl::Span) impl.
I might have to define or declare the absl one in a different way so that the torch one can find it?
I found a solution here, which is to define a new MHash template in torch_util.h that specializes on absl::Span.
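For readers following along, a hypothetical sketch of that kind of bridge is below. The `torch_xla::Hash` overload and the copy-into-vector shortcut are assumptions for illustration, not the actual torch_util.h code.

```cpp
// Hypothetical sketch of the approach described above: provide a Hash
// overload for absl::Span so span-typed operands can be fed into the
// torch::lazy hashing machinery. The real torch_util.h code may be
// structured differently.
#include <cstdint>
#include <vector>

#include <absl/types/span.h>
#include <torch/csrc/lazy/core/hash.h>

namespace torch_xla {

template <typename T>
torch::lazy::hash_t Hash(absl::Span<const T> values) {
  // Simplest possible bridge: copy the span into a vector and reuse the
  // core container hashing. An in-place element-by-element HashCombine
  // loop would avoid the copy.
  return torch::lazy::Hash(std::vector<T>(values.begin(), values.end()));
}

}  // namespace torch_xla
```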
@wconstab Could you also include your pytorch PR link in the PR description? This will make it easier for us to track this PR in the future.
Sure. It got reverted already though, due to some extra Windows builds that didn't run on the PR. I'll tag the new pytorch PR here once it's ready.
- remove from places that only used it for util::Hash
Force-pushed from 622c095 to b67be8e
Thanks!
I'm landing the pytorch changes again now. I'm not going to rush to merge the xla side in case the pytorch side gets reverted. I'll wait a few days, then if it's stable I'll press the button. Btw, what do I do about the .torch_pin file? I guess I can delete it once pytorch/pytorch is merged, and then CI will still pass by running against master.
Yea, you can delete the `.torch_pin` file.
…_data" Looks like these files are getting used by downstream xla (as of pytorch/xla#3148) so we need to include them in our package_data so that it gets included when we build `bdist_wheel` like in our GHA workflows Signed-off-by: Eli Uriegas <eliuriegasfb.com> Differential Revision: [D32622241](https://our.internmc.facebook.com/intern/diff/D32622241) [ghstack-poisoned]
Looks like these files are getting used by downstream xla (as of pytorch/xla#3148) so we need to include them in our package_data so that it gets included when we build `bdist_wheel` like in our GHA workflows Signed-off-by: Eli Uriegas <eliuriegasfb.com> Differential Revision: [D32622241](https://our.internmc.facebook.com/intern/diff/D32622241) [ghstack-poisoned]
…_data" Looks like these files are getting used by downstream xla (as of pytorch/xla#3148) so we need to include them in our package_data so that it gets included when we build `bdist_wheel` like in our GHA workflows Signed-off-by: Eli Uriegas <eliuriegasfb.com> Differential Revision: [D32622241](https://our.internmc.facebook.com/intern/diff/D32622241) [ghstack-poisoned]
Looks like these files are getting used by downstream xla (as of pytorch/xla#3148) so we need to include them in our package_data so that it gets included when we build `bdist_wheel` like in our GHA workflows Signed-off-by: Eli Uriegas <eliuriegasfb.com> Differential Revision: [D32622241](https://our.internmc.facebook.com/intern/diff/D32622241) [ghstack-poisoned]
Depends on pytorch PR pytorch/pytorch#66181