
Conversation

kimishpatel
Contributor

Summary:
When in doubt (both is_contiguous() and is_contiguous(channels_last) return true), assume the tensor is channels first.

Differential Revision: D83998877


pytorch-bot bot commented Oct 7, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14862

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit f0abe93 with merge base 8ac6300:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label Oct 7, 2025

meta-codesync bot commented Oct 7, 2025

@kimishpatel has exported this pull request. If you are a Meta employee, you can view the originating Diff in D83998877.


github-actions bot commented Oct 7, 2025

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

def _is_nhwc_tensor(tensor: torch.Tensor) -> bool:
    nhwc = tensor.is_contiguous(memory_format=torch.channels_last)
    nchw = tensor.is_contiguous()
    # If both checks pass the layout is ambiguous; assume channels first
    # (NCHW), so report NHWC only when channels_last is the sole match.
    return nhwc and not nchw
Contributor

Should we just use Tensor.dim_order(ambiguity_check=True)?

Contributor Author

It raises a runtime error when ambiguity_check=True. That's why I had to do this.
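
For context, a minimal sketch of the behavior described above, assuming a PyTorch build where Tensor.dim_order accepts ambiguity_check (the exact exception type and message may differ):

```python
import torch

t = torch.empty(1, 1, 1, 1)   # strides are compatible with both NCHW and NHWC

print(t.dim_order())          # without the check, a single order is returned

try:
    # With the ambiguity check enabled, PyTorch refuses to guess when the
    # strides fit more than one memory format and raises instead.
    t.dim_order(ambiguity_check=True)
except RuntimeError as err:
    print("ambiguous layout:", err)
```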

@digantdesai (Contributor) left a comment

IIUC this is about ambiguous cases, right? If yes, when we have an ambiguous shape like [1,1,1,1], this may cause issues when the shape changes at runtime: our assumption about the memory format (one way or the other) breaks down and results in silently computing incorrect results.

The right solution might be to get rid of torch.memory_format from XNNPACK, switch to unambiguous dim_order (at least with the new API with the ambiguity check), and deal with it properly.
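
To make the ambiguity concrete, a small sketch using only standard tensor APIs (the [1,1,1,1] shape is the one mentioned above):

```python
import torch

t = torch.empty(1, 1, 1, 1)
# Both contiguity checks pass, so the memory format cannot be recovered from
# strides alone; a helper like _is_nhwc_tensor has to pick one by convention.
print(t.is_contiguous())                                   # True
print(t.is_contiguous(memory_format=torch.channels_last))  # True

# A shape with non-trivial dims is unambiguous: only one check passes.
u = torch.empty(2, 3, 4, 5)
print(u.is_contiguous())                                   # True
print(u.is_contiguous(memory_format=torch.channels_last))  # False
```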

@kimishpatel
Contributor Author

> IIUC this is about ambiguous cases, right? If yes, when we have an ambiguous shape like [1,1,1,1], this may cause issues when the shape changes at runtime: our assumption about the memory format (one way or the other) breaks down and results in silently computing incorrect results.

How would you determine the unambiguous cases? What I found for one of the models (which was not using channels in the conventional sense; it was just a 4D tensor) was that it resulted in failures to lower.

> The right solution might be to get rid of torch.memory_format from XNNPACK, switch to unambiguous dim_order (at least with the new API with the ambiguity check), and deal with it properly.

Yes, but the challenge is deciding what is unambiguous. The problem is that we don't have a language in PyTorch to unambiguously state what the dim order is. I think the right solution would be exactly that, although when I think about it, it feels fairly non-trivial: the user would have to be explicit in their intent within the model to tell you what the dim order is.

I think the safe thing to do would be to start out by making everything contiguous/channels first and then introduce any dim order changes explicitly, and this would have to happen within the XNNPACK backend passes.
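
A minimal sketch of that idea, using plain tensor APIs purely for illustration (the real change would live in the XNNPACK backend passes, not in eager code):

```python
import torch

def canonicalize(t: torch.Tensor) -> torch.Tensor:
    # Step 1: normalize everything to contiguous / channels-first, so the
    # layout is never inferred from ambiguous strides.
    return t.contiguous()

def to_nhwc_explicitly(t: torch.Tensor) -> torch.Tensor:
    # Step 2: introduce channels-last only where the backend explicitly
    # decides it wants NHWC, so any dim order change is a deliberate choice.
    return t.to(memory_format=torch.channels_last)
```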

AdrianLundell pushed a commit to AdrianLundell/executorch that referenced this pull request Oct 9, 2025
Differential Revision: D83998877

Pull Request resolved: pytorch#14862
@GregoryComer
Member

> IIUC this is about ambiguous cases, right? If yes, when we have an ambiguous shape like [1,1,1,1], this may cause issues when the shape changes at runtime: our assumption about the memory format (one way or the other) breaks down and results in silently computing incorrect results.
>
> The right solution might be to get rid of torch.memory_format from XNNPACK, switch to unambiguous dim_order (at least with the new API with the ambiguity check), and deal with it properly.

Are there any specific examples of this that you're aware of that yield silent correctness issues?

meta-codesync bot merged commit 64b0fd9 into pytorch:main Oct 9, 2025
208 of 236 checks passed
@digantdesai
Contributor

> How would you determine the unambiguous cases?

If a tensor of a given shape can be mapped to multiple dim_orders, it is ambiguous.

https://github.com/pytorch/pytorch/blob/85801126821d4f509f3cf5aafa24dbcd3cd11183/torch/_tensor.py#L1532
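
For reference, an illustrative sketch of that definition (a hand-rolled helper, not the implementation behind the linked PyTorch code):

```python
import itertools
import torch

def possible_dim_orders(t: torch.Tensor):
    """List every dim permutation whose dense layout matches this tensor's
    strides; strides of size-1 dims don't constrain the layout."""
    matches = []
    for order in itertools.permutations(range(t.dim())):
        expected, stride = {}, 1
        for d in reversed(order):
            expected[d] = stride
            stride *= t.size(d)
        if all(t.size(d) == 1 or t.stride(d) == expected[d] for d in range(t.dim())):
            matches.append(order)
    return matches

print(possible_dim_orders(torch.empty(2, 3, 4, 5)))       # [(0, 1, 2, 3)] -> unambiguous
print(len(possible_dim_orders(torch.empty(1, 1, 1, 1))))  # 24 -> ambiguous
```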
