
[PyTorch][Tensor] Introduce tensor.dim_order #106835

Closed
wants to merge 1 commit

Conversation

@digantdesai digantdesai (Contributor) commented Aug 9, 2023

Summary:
This is a stride-based attribute for a tensor, available in Python.

This can help inspect tensors generated using `torch.empty_permuted(.., physical_layout, ...)`, where physical_layout should match the dim_order returned here. `empty_permuted` will be renamed to use dim_order as the parameter name in the future. It will also help the ExecuTorch export pipeline implement dim_order-based tensors.
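For intuition, here is a hypothetical torch-free sketch of the idea (not the PR's actual implementation): dim_order lists the dimensions from largest stride (outermost in memory) to smallest (innermost). Tie-breaking for ambiguous size-1 dims is discussed later in this thread.

```python
# Hypothetical pure-Python sketch of the idea behind dim_order(): sort the
# dimension indices by stride, largest (outermost in memory) first.
def dim_order(shape, strides):
    return tuple(sorted(range(len(shape)), key=lambda d: strides[d], reverse=True))

# A contiguous (2, 3, 4) tensor has strides (12, 4, 1) -> natural order.
print(dim_order((2, 3, 4), (12, 4, 1)))         # (0, 1, 2)
# A channels-last-style tensor, e.g. torch.empty_permuted((2, 3, 4, 5), (0, 2, 3, 1)),
# has strides (60, 1, 15, 3): memory order is dims 0, 2, 3, 1.
print(dim_order((2, 3, 4, 5), (60, 1, 15, 3)))  # (0, 2, 3, 1)
```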

Differential Revision: D48134476

cc @albanD

@pytorch-bot pytorch-bot bot commented Aug 9, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/106835

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 15b9e64 with merge base f6cce3c:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot (Contributor): This pull request was exported from Phabricator. Differential Revision: D48134476

@github-actions github-actions bot commented Aug 9, 2023

This PR needs a release notes: label

If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@digantdesai added the module: python frontend and topic: new features labels on Aug 9, 2023
digantdesai added a commit to digantdesai/pytorch that referenced this pull request Aug 9, 2023 (summary identical to the PR description above; fbshipit-source-id: 12855d49038e76e27714cae554d3730bcf83f5d7)
digantdesai added a commit to digantdesai/pytorch that referenced this pull request Aug 9, 2023 (same summary; fbshipit-source-id: 098f73c454703e1947114bf03add1b5b5b4301e6)

@digantdesai force-pushed the export-D48134476 branch 2 times, most recently from ed6c40e to dc182be, on August 9, 2023 16:25
digantdesai added a commit to digantdesai/pytorch that referenced this pull request Aug 9, 2023 (same summary; fbshipit-source-id: ee6077a589b870e89a54be894d208398a5f19f46)

test/test_torch.py (two outdated review threads, resolved)
torch/_tensor.py (outdated)

```python
"""
if self.layout == torch.strided:
    return tuple(
        [
```

Contributor: you don't need the list here if you have tuple() wrapping around the top.
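The reviewer's point, illustrated with a hypothetical snippet (the `strides` value is made up for the example): `tuple()` accepts any iterable, so the inner list comprehension can be a generator expression, avoiding an intermediate list.

```python
strides = (60, 1, 15, 3)
# building an intermediate list first:
via_list = tuple([d for d in sorted(range(4), key=lambda i: -strides[i])])
# equivalent without the list; tuple() consumes the generator directly:
via_gen = tuple(d for d in sorted(range(4), key=lambda i: -strides[i]))
print(via_list, via_gen)  # (0, 2, 3, 1) (0, 2, 3, 1)
```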

```python
self.assertSequenceEqual(dim_order, torch.empty_permuted(shape, dim_order).dim_order())

self.assertRaises(RuntimeError, lambda: torch.empty(shape).to_sparse().dim_order())
self.assertRaises(RuntimeError, lambda: torch.empty(shape).to_mkldnn().dim_order())
```

Contributor: There are some ambiguous cases when size=1, and it would be good to ensure that the sorting algorithm does the right thing. In fact, I do not think it does the right thing, because PrimTorch's/TensorIterator's algorithm for inferring what the contiguous layout of a tensor should be is substantially more complicated:

```python
# insertion sort with support for ambiguous comparisons
for i in range(1, ndim):
    dim1 = i
    for dim0 in reversed(range(i)):
        comparison = should_swap(perm[dim0], perm[dim1])
        if comparison > 0:
            perm[dim0], perm[dim1] = perm[dim1], perm[dim0]
            dim1 = dim0
        elif comparison < 0:
            break
```

see torch/_prims_common/__init__.py
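A self-contained, torch-free version of the snippet above can be sketched as follows. Note that `should_swap` here is an assumption modeled on the tie-breaking described in this thread (smaller stride goes inner; on equal strides, smaller size goes inner; fully equal is ambiguous), not the exact `_prims_common` code.

```python
# Hypothetical single-tensor sketch of the insertion sort quoted above.
def logical_to_physical_perm(shape, strides):
    ndim = len(shape)
    perm = list(range(ndim))

    def should_swap(d0, d1):
        # swap when the left dim has a strictly smaller stride
        if strides[d0] < strides[d1]:
            return 1
        if strides[d0] > strides[d1]:
            return -1
        # equal strides: put the larger-sized dim first (outer)
        if shape[d0] < shape[d1]:
            return 1
        return 0  # ambiguous: leave order as-is

    # insertion sort with support for ambiguous comparisons
    for i in range(1, ndim):
        dim1 = i
        for dim0 in reversed(range(i)):
            comparison = should_swap(perm[dim0], perm[dim1])
            if comparison > 0:
                perm[dim0], perm[dim1] = perm[dim1], perm[dim0]
                dim1 = dim0
            elif comparison < 0:
                break
    return perm

print(logical_to_physical_perm((2, 3, 4), (12, 4, 1)))      # [0, 1, 2]
print(logical_to_physical_perm((1, 10), (1, 1)))            # [1, 0], the ambiguous case
print(logical_to_physical_perm((2, 3, 4, 5), (60, 1, 15, 3)))  # [0, 2, 3, 1]
```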

Contributor: So I looked at the algorithm and I think I understand. The main difference is when stride[index0] == stride[index1]. The default behavior is to swap order when stride[index0] > stride[index1] (https://github.com/pytorch/pytorch/blob/main/torch/_prims_common/__init__.py#L392). However, when stride[index0] == stride[index1], it treats it the same as the case of stride[index0] > stride[index1] when shape[index0] > shape[index1]. So 1) a smaller-stride dim is swapped with a larger-stride dim, and 2) if the strides are the same, a smaller-shape dim is swapped with a larger-shape dim. Case 2 can happen when only one of the dims has size=1.

Thus, for example:

```python
t = torch.rand((10, 1))
# t.shape = (10, 1), t.stride() = (1, 1)
t_permuted = t.permute((1, 0))
# t_permuted.shape = (1, 10), t_permuted.stride() = (1, 1)
# correct answer according to this algorithm: t_permuted.dim_order() = (1, 0)
# wrong answer from not handling size=1 cases: t_permuted.dim_order() = (0, 1)
```

working example: https://gist.github.com/kimishpatel/b241207b03e7d9caeb3e27a064bd2004

@digantdesai on the executorch side we probably handled this in emit without considering the ambiguous case, because the exported model always had contiguous tensors. Let's just align with the one @ezyang pointed to.
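A quick torch-free check of the example above (a sketch only; the real fix is to follow the `_prims_common` algorithm @ezyang pointed to):

```python
shape, strides = (1, 10), (1, 1)  # t.permute((1, 0)) of a rand((10, 1)) tensor
# naive "sort by stride only": the stable sort can't see the tie,
# so dims stay in logical order -> (0, 1), the wrong answer per this thread
naive = tuple(sorted(range(2), key=lambda d: strides[d], reverse=True))
# breaking stride ties by size (larger dim goes outer) -> (1, 0), as expected
tie_aware = tuple(sorted(range(2), key=lambda d: (strides[d], shape[d]), reverse=True))
print(naive, tie_aware)  # (0, 1) (1, 0)
```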

Contributor: I think ChatGPT could have reasoned about this way faster than it took me.

@ezyang ezyang left a comment (Contributor)

I don't think the implementation of dim_order() is consistent with TensorIterator's behavior.

digantdesai added a commit to digantdesai/pytorch that referenced this pull request Aug 22, 2023 (same summary; fbshipit-source-id: 484a3b9fd4bc62d096f66b9f8a4d3179c7930c2c)

digantdesai added a commit to digantdesai/pytorch that referenced this pull request Aug 22, 2023 (same summary; fbshipit-source-id: b713c539af7be1e05a3fedd71f5ffe6a95dcf596)

@kimishpatel kimishpatel left a comment (Contributor)

For the most part looks good to me. Just check if we can directly use the prim impl.

torch/_tensor.py (outdated), comment on lines 1334 to 1336:

```
Thus, the conversion from strides is done by sorting the strides
from larger to smaller since the dimension with the largest stride
is the outermost and the dimension with the smallest stride is the innermost.
```

Contributor: Comments need to be changed to at least refer to how ambiguous cases are handled.

Contributor: Actually, I see your NB below.

torch/_tensor.py (outdated)

```python
# NB:
# - Based on the implementation in TensorIterator.cpp
# - Should have similar behavior (esp in ambiguous cases) to,
#   torch._prims_common.compute_elementwise_output_logical_to_physical_perm
```

Contributor: You can just invoke that function with a single tensor?


```python
# cases with same strides, and size == 1
t = torch.empty(shape)
self._check_dim_order_against_prim_impl(torch.empty(shape))
```

Contributor: I would prefer if we had this test self-contained, although assuming prim_impl is well tested this also seems reasonable. However, if we just directly use prim_impl, then these tests become moot or just test prim_impl.

Contributor (Author): Since we now have two impls, the goal of this test is to make sure they align.

digantdesai added a commit to digantdesai/pytorch that referenced this pull request Aug 24, 2023 (same summary as above, now adding: "Importing in the function to avoid circular dependency on torch.Tensor."; fbshipit-source-id: 9bf7b4fdbcabefd4602dd3820703891f9257e6cb)

digantdesai added a commit to digantdesai/pytorch that referenced this pull request Aug 24, 2023 (same summary; fbshipit-source-id: cb98a25fe977a8b2563bf991803c333bc520680b)
```python
t = torch.empty(shape)
self.assertSequenceEqual(t.dim_order(), (0, 1, 2, 3), seq_type=tuple)
# transpose doesn't really change the underlying physical memory
# so execpting dim_order change to reflect that (like strides)
```

Contributor: excepting
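The quoted test's point can be sketched torch-free (`dim_order` below is a hypothetical stand-in that sorts dims by stride, not the PR's implementation):

```python
def dim_order(shape, strides):
    return tuple(sorted(range(len(shape)), key=lambda d: strides[d], reverse=True))

# contiguous 4-D tensor: strides (60, 20, 5, 1) -> natural dim order
print(dim_order((2, 3, 4, 5), (60, 20, 5, 1)))  # (0, 1, 2, 3)
# t.transpose(1, 2) swaps the shape and stride entries of dims 1 and 2
# without touching memory; dim_order reflects the swap, just as strides do
print(dim_order((2, 4, 3, 5), (60, 5, 20, 1)))  # (0, 2, 1, 3)
```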

```
@@ -1319,6 +1319,32 @@ def to_sparse_coo(self):
    """
    return self.to_sparse()

def dim_order(self):
```

Contributor: You may want to add torch function override support here; look for things like has_torch_function_unary in this file.
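A torch-free sketch of the dispatch pattern the reviewer means (the real helpers are `has_torch_function_unary` / `handle_torch_function` in torch/_tensor.py; `__my_hook__` below is a made-up stand-in for `__torch_function__`):

```python
class Base:
    def dim_order(self):
        # analogous to: if has_torch_function_unary(self): return handle_torch_function(...)
        hook = getattr(type(self), "__my_hook__", None)
        if hook is not None:
            return hook(Base.dim_order, (self,))
        return (0, 1)  # stand-in for the real stride-based computation

class Subclass(Base):
    # a subclass can intercept the method without reimplementing it
    @staticmethod
    def __my_hook__(func, args):
        return "intercepted"

print(Base().dim_order())      # (0, 1)
print(Subclass().dim_order())  # intercepted
```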

digantdesai added a commit to digantdesai/pytorch that referenced this pull request Aug 24, 2023 (same summary, now with "Reviewed By: ezyang"; fbshipit-source-id: 531b930067ccf12f3b708baa5533fdd378df43ee)

digantdesai added a commit to digantdesai/pytorch that referenced this pull request Aug 24, 2023 (same summary; fbshipit-source-id: e392865c19a3ce8896a73dfe98d4595391d7cefc)

digantdesai added a commit to digantdesai/pytorch that referenced this pull request Aug 24, 2023 (same summary; fbshipit-source-id: f82c400357e3d7547d489fa20c86fce014b2e806)

Final landed commit message: same summary, reviewed by ezyang (fbshipit-source-id: 58513d445931d122af77ac1e04d35eec36cb4c3a)

@facebook-github-bot (Contributor):
@pytorchbot merge

(Initiating merge automatically since Phabricator Diff has merged)

@pytorch-bot bot added the ciflow/trunk label (Trigger trunk jobs on your pull request) on Aug 25, 2023
@pytorchmergebot (Collaborator):

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

Labels: ciflow/trunk (Trigger trunk jobs on your pull request), fb-exported, Merged, module: python frontend (For issues relating to PyTorch's Python frontend), topic: new features (topic category)
6 participants