Merged
10 changes: 10 additions & 0 deletions test/cpp/test_aten_xla_tensor.cpp
@@ -910,6 +910,16 @@ TEST_F(AtenXlaTensorTest, TestDim) {
  });
}

TEST_F(AtenXlaTensorTest, TestContiguous) {
  at::Tensor input = GetTestTensor({2, 3});
  at::Tensor output = at::native::contiguous(input);
  ForEachDevice([&](const Device& device) {
    at::Tensor xla_input = bridge::CreateXlaTensor(input, device);
    at::Tensor xla_output = at::native::contiguous(xla_input);
    AllClose(output, xla_output);
  });
}

TEST_F(AtenXlaTensorTest, TestAvgPool2DBackward) {
int kernel_size = 2;
for (int stride = 1; stride <= 2; ++stride) {
6 changes: 6 additions & 0 deletions torch_xla/csrc/tensor_impl.cpp
@@ -74,6 +74,12 @@ c10::intrusive_ptr<c10::TensorImpl> XLATensorImpl::shallow_copy_and_detach()
  return impl;
}
}

bool XLATensorImpl::is_contiguous() const {
Collaborator:
When does this trigger? That is, in which situations have we actually seen it?
Asking because their TensorImpl is a moving target, so the less we fiddle with it, the lower the chances we break their CI.

Contributor (Author):

Here: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/TensorProperties.cpp#L56. If the tensor is contiguous (always the case for us), the operation is a no-op. I doubt this is the kind of method that would see API changes.

  // Only check that the storage is already contiguous.
  XLA_CHECK(is_contiguous_) << "Non-contiguous storage for XLA tensor";
  return true;
}

void XLATensorImpl::SetupSizeProperties() {
// Fill up the basic dimension data members which the base class
// implementation uses in its APIs.
2 changes: 2 additions & 0 deletions torch_xla/csrc/tensor_impl.h
@@ -20,6 +20,8 @@ class XLATensorImpl : public c10::TensorImpl {

  c10::intrusive_ptr<c10::TensorImpl> shallow_copy_and_detach() const override;

  bool is_contiguous() const override;

 private:
  void SetupSizeProperties();
