Fix breakage 0626 #785
Conversation
#786 should replace this one.
The clang-format setup is still off. We need to check whether we use a modified version internally, and eventually consider moving to a public version.
The failing tests were due to the wrong CreateFrom() API being used (without the dtype).
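For context, a minimal sketch of the pattern being described; the helper name BuildSoftmax and the exact CreateFrom() overload shapes below are assumptions, not verbatim torch_xla code:

  // Sketch: forward the optional dtype so the result reports the requested
  // element type instead of silently keeping the input's type.
  XLATensor XLATensor::softmax(const XLATensor& input, xla::int64 dim,
                               c10::optional<at::ScalarType> dtype) {
    ir::Value result = BuildSoftmax(input.GetIrValue(), dim);  // assumed helper
    // The dtype-less overload, input.CreateFrom(result), keeps the input's
    // logical element type, which is what made the dtype tests fail.
    return input.CreateFrom(result, dtype);  // assumed dtype-aware overload
  }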
torch_xla/csrc/tensor_methods.cpp (Outdated)
  }
- XLATensor XLATensor::softmax(const XLATensor& input, xla::int64 dim) {
+ XLATensor XLATensor::softmax(const XLATensor& input, xla::int64 dim, c10::optional<at::ScalarType> dtype) {
Line seems too long to me.
torch_xla/csrc/tensor_methods.cpp (Outdated)
  }
- XLATensor XLATensor::log_softmax(const XLATensor& input, xla::int64 dim) {
+ XLATensor XLATensor::log_softmax(const XLATensor& input, xla::int64 dim, c10::optional<at::ScalarType> dtype) {
Line seems too long to me.
torch_xla/csrc/tensor.h (Outdated)
      const XLATensor& buffer);
- static XLATensor log_softmax(const XLATensor& input, xla::int64 dim);
+ static XLATensor log_softmax(const XLATensor& input, xla::int64 dim, c10::optional<at::ScalarType> dtype);
Line seems too long to me. I'd recheck clang-format.
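For reference, clang-format with the usual 80-column style would be expected to wrap a declaration like that roughly as follows (an illustration, not the actual committed formatting):

  static XLATensor log_softmax(const XLATensor& input, xla::int64 dim,
                               c10::optional<at::ScalarType> dtype);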
  m.def("_xla_set_default_device",
        [](const std::string& device) { return SetCurrentDevice(device); });
  m.def("_xla_get_default_device", []() { return GetCurrentDevice(); });
  m.def(
Seems like clang-format is still not OK.
These changes should not be there.
      at::IntArrayRef padding, at::IntArrayRef dilation, bool ceil_mode,
      const at::Tensor& indices);

  static at::Tensor mean(const at::Tensor& self, at::ScalarType dtype);
So all these overrides (here and below) are really gone?
Are any new ones with similar semantics being added?
When we cover an API, we need to be careful of two things: removals, and additions of operators with similar semantics (but with new or changed args, for example). An illustration follows.
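As a hypothetical illustration of the second case (not the actual upstream diff), a required-dtype overload can be dropped while a merged signature with an optional dtype appears:

  // Removed overload: dtype as a separate, required argument.
  static at::Tensor mean(const at::Tensor& self, at::ScalarType dtype);
  // Added/merged signature (hypothetical shape): dtype folded in as optional.
  static at::Tensor mean(const at::Tensor& self,
                         c10::optional<at::ScalarType> dtype);

An override audit that only checks operator names would miss this; the argument lists have to be compared too.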
These 3 PRs landed at approximately the same time and require changes on the XLA side:
pytorch/pytorch#22237
pytorch/pytorch#22266
pytorch/pytorch#20558
This PR fixes the breakage introduced, but it has 2 TODOs:
mean/prod already handle dtype, so it might be a bug in our lowering. (I confirmed that these tests were already failing before the breakage, if added.) I'm not familiar with how to debug the dtype returned from lowering. @dlibenzi could you take a look?
I will update this PR if I can find the right clang-format command.