Conversation

@ailzhang (Contributor)

No description provided.

@ailzhang ailzhang requested a review from dlibenzi September 30, 2019 04:16
return tensor_type == type;
}

void sub_check(const at::Tensor& self, const at::Tensor& other) {
Collaborator:

CheckSubOperands()
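
The suggestion here is a naming one: helper checks in this codebase follow the CamelCase CheckXxx() convention, so sub_check would become CheckSubOperands(). A minimal sketch of what the renamed helper might look like, assuming (from the fix_bool_sub branch name) that it rejects bool operands the way upstream PyTorch does; the macro and message below are illustrative, not copied from the diff:

void CheckSubOperands(const at::Tensor& self, const at::Tensor& other) {
  // Assumed guard: mirror PyTorch's rejection of the `-` operator on bool tensors.
  XLA_CHECK(self.scalar_type() != at::kBool && other.scalar_type() != at::kBool)
      << "Subtraction, the `-` operator, with a bool tensor is not supported. "
      << "Use the `~` or `logical_not()` operator instead.";
}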

at::Scalar alpha) {
return bridge::AtenFromXlaTensor(
XLATensor::rsub(bridge::GetXlaTensor(self), other, alpha));
return rsub(self, c10::scalar_to_tensor(other, self.device()), alpha);
Collaborator:

Do not route this to the previous rsub().
We have special logic to convert scalars to tensors.
Same thing below.
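
What the reviewer appears to ask for is keeping the at::Scalar overload on the XLA-side path rather than converting the scalar with c10::scalar_to_tensor and re-dispatching to the tensor rsub(). A rough sketch under that assumption; the scalar overload of XLATensor::rsub is inferred from the comment, not taken from the diff:

at::Tensor rsub(const at::Tensor& self, at::Scalar other, at::Scalar alpha) {
  // Assumed: XLATensor::rsub has a scalar overload that applies the
  // project's own scalar-to-tensor conversion logic internally.
  return bridge::AtenFromXlaTensor(
      XLATensor::rsub(bridge::GetXlaTensor(self), other, alpha));
}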

@ailzhang ailzhang force-pushed the fix_bool_sub branch 4 times, most recently from 6cbc115 to a59a14e on September 30, 2019 22:22
} else if (scalar.isComplex()) {
return at::kComplexDouble;
} else {
TORCH_CHECK(scalar.isIntegral(/*includeBool=*/false));
Collaborator:

I'd use:

XLA_CHECK(scalar.isIntegral(/*includeBool=*/false));

Maybe you can even pipe the scalar in there:

XLA_CHECK(scalar.isIntegral(/*includeBool=*/false)) << scalar;

Collaborator:

We use XLA_CHECK*() in our code 😉
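
Putting both comments together, the checked branch of the scalar-type helper would use the project's XLA_CHECK*() macros, which stream like a std::ostream, and append the offending scalar to the failure message. A hypothetical reconstruction of the surrounding function; only the XLA_CHECK line comes from the review, the function name and fallback types are assumptions:

at::ScalarType GetScalarType(at::Scalar scalar) {  // hypothetical name
  if (scalar.isFloatingPoint()) {
    return at::kDouble;
  } else if (scalar.isBoolean()) {
    return at::kBool;
  } else if (scalar.isComplex()) {
    return at::kComplexDouble;
  } else {
    // XLA_CHECK supports operator<<, so the scalar value can be piped
    // directly into the error message.
    XLA_CHECK(scalar.isIntegral(/*includeBool=*/false)) << scalar;
    return at::kLong;
  }
}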

@ailzhang ailzhang merged commit b2c3191 into pytorch:master Sep 30, 2019