[NHWC support] Revoking mutually exclusive requirement on channels last and contiguous tensor #24113
Conversation
The old implementation assumed `is_channels_last_contiguous_` to be mutually exclusive with `is_contiguous_`, which is not true. Properly set the flag by checking strides.
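For illustration (not part of the original description), a hedged example of a tensor for which both flags hold at once, assuming a PyTorch build that includes this fix: when the spatial dimensions have size 1, the NCHW and NHWC stride patterns describe the same memory layout.

```python
import torch

# With H = W = 1, the contiguous (NCHW) layout and the channels-last (NHWC)
# layout coincide in memory, so both contiguity queries can return True.
x = torch.randn(2, 3, 1, 1)
print(x.is_contiguous())                                   # True
print(x.is_contiguous(memory_format=torch.channels_last))  # True
```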
Realized that I forgot to tag @VitalyFedyunin
test/test_torch.py (outdated)

self.assertTrue(nhwc.is_contiguous(memory_format=torch.channels_last))
self.assertEqual(nhwc, x)

def test_memory_format_consistency(self):
Wait, does this test fail if we remove all the TensorImpl.h changes?
Yeah, that's the issue I was complaining about (along with the ambiguity) earlier last week.
I guess I didn't make it clear enough back then. :)
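Aside, not part of the thread: a minimal sketch of what "setting the flag by checking strides" amounts to. The helper `channels_last_contiguous` below is hypothetical and only mirrors the idea of walking the dimensions in NHWC order and comparing each stride against the expected value; it is not the PR's actual implementation.

```python
import torch

def channels_last_contiguous(sizes, strides):
    # Hypothetical helper: visit dims in NHWC order (C, W, H, N), skip
    # size-1 dims, and require each stride to equal the running product
    # of the sizes visited so far.
    expected = 1
    for d in (1, 3, 2, 0):
        if sizes[d] != 1:
            if strides[d] != expected:
                return False
            expected *= sizes[d]
    return True

x = torch.randn(2, 3, 4, 4).contiguous(memory_format=torch.channels_last)
assert x.is_contiguous(memory_format=torch.channels_last) == \
    channels_last_contiguous(x.size(), x.stride())
```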
@VitalyFedyunin has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@VitalyFedyunin has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@VitalyFedyunin has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@ifedan has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@VitalyFedyunin has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
cc: @VitalyFedyunin this PR is not landed yet, is this expected? (internal diff is in preparation)
It is expected, please do not land it yet.
Please rebase and retitle to something like: 'Revoking mutually exclusive requirement on channels last and contiguous tensor'.
@VitalyFedyunin This one should be good to go.
@VitalyFedyunin has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Fails internal caffe2 tests; trying to figure out now what needs to be fixed.
Revoking mutually exclusive requirement on channels last and contiguous tensor

Summary: The old implementation assumed `is_channels_last_contiguous_` to be mutually exclusive with `is_contiguous_`, which is not true. Properly set the flag by checking strides.

Original Pull Request resolved: pytorch#24113
Original GitHub Author: jjsjann123 <jiej@nvidia.com>
Differential Revision: D16860715
fbshipit-source-id: 866ffbee5626c65e06a186616fcf1e7d7b4ed36f
@VitalyFedyunin merged this pull request in ddeeb56.