
Conversation

@ailzhang (Contributor) commented on Jun 27, 2019

These three PRs landed at approximately the same time and all require changes on the XLA side:
pytorch/pytorch#22237
pytorch/pytorch#22266
pytorch/pytorch#20558

This PR fixes the breakage they introduced, but it has two TODOs:

  1. I added a few tests; among them, TestMeanCast/TestProdCast/TestMeanInDimsKeepCast/TestProdInDimKeepCast are failing. But mean/prod already handle dtype, so it might be a bug in our lowering. (I confirmed that these tests also fail before the breakage when added.) I'm not familiar with how to debug the dtype returned from lowering. @dlibenzi, could you take a look?
  2. [done] I will update this PR once I find the right clang-format command.

@ailzhang requested review from asuhan and dlibenzi on Jun 27, 2019 05:17
@dlibenzi (Collaborator) left a comment

#786 should replace this one.
The clang-format thing is still off. We need to check whether we use a modified version internally, and eventually move to a public version.
The failing tests were due to the wrong CreateFrom() API being used (without the dtype).
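
For reference, a minimal sketch of that fix, assuming CreateFrom() has both a plain overload and one that takes the logical element type. This is illustrative only, not the exact pytorch/xla code; BuildMeanNode is a hypothetical stand-in for the real IR construction.

    // Illustrative sketch: the reduction lowering must forward the requested
    // dtype to CreateFrom(); the dtype-less overload keeps the input's logical
    // element type, which is what made the *Cast tests fail.
    XLATensor XLATensor::mean(const XLATensor& input,
                              std::vector<xla::int64> dimensions,
                              bool keep_reduced_dimensions,
                              c10::optional<at::ScalarType> dtype) {
      ir::Value reduced = BuildMeanNode(input.GetIrValue(), std::move(dimensions),
                                        keep_reduced_dimensions, dtype);
      // Wrong: return input.CreateFrom(reduced);  // drops the requested dtype
      return input.CreateFrom(reduced, dtype);     // propagates the requested dtype
    }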

}

-XLATensor XLATensor::softmax(const XLATensor& input, xla::int64 dim) {
+XLATensor XLATensor::softmax(const XLATensor& input, xla::int64 dim, c10::optional<at::ScalarType> dtype) {
@dlibenzi (Collaborator) commented:

Line seems too long to me.
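
For illustration, under a typical 80-column clang-format configuration (the repo's .clang-format governs the exact break points) the definition would wrap along these lines:

    XLATensor XLATensor::softmax(const XLATensor& input, xla::int64 dim,
                                 c10::optional<at::ScalarType> dtype) {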

}

-XLATensor XLATensor::log_softmax(const XLATensor& input, xla::int64 dim) {
+XLATensor XLATensor::log_softmax(const XLATensor& input, xla::int64 dim, c10::optional<at::ScalarType> dtype) {
@dlibenzi (Collaborator) commented:

Line seems too long to me.

const XLATensor& buffer);

-static XLATensor log_softmax(const XLATensor& input, xla::int64 dim);
+static XLATensor log_softmax(const XLATensor& input, xla::int64 dim, c10::optional<at::ScalarType> dtype);
@dlibenzi (Collaborator) commented:

Line seems too long to me. I'd recheck clang-format.

m.def("_xla_set_default_device",
[](const std::string& device) { return SetCurrentDevice(device); });
m.def("_xla_get_default_device", []() { return GetCurrentDevice(); });
m.def(
@dlibenzi (Collaborator) commented:

Seems like clang-format is still not OK.
These changes should not be there.

at::IntArrayRef padding, at::IntArrayRef dilation, bool ceil_mode,
const at::Tensor& indices);

static at::Tensor mean(const at::Tensor& self, at::ScalarType dtype);
@dlibenzi (Collaborator) commented:

So all these overrides (here and below) are really gone?
Is anything new with similar semantics being added?
We need to be careful about two things when we cover an API: removals, and additions of operators with similar semantics (but with new or changed arguments, for example).
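
For example (these are 2019-era ATen signatures and may have changed since), mean has sibling overloads with similar semantics, and each one needs its own override if the op is to stay covered on the XLA side:

    static at::Tensor mean(const at::Tensor& self, at::ScalarType dtype);
    static at::Tensor mean(const at::Tensor& self, at::IntArrayRef dim, bool keepdim,
                           c10::optional<at::ScalarType> dtype);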

@dlibenzi closed this on Jun 27, 2019