Applied contiguous in decode_* ops #4898
Conversation
Description:
- Applied `contiguous` on the decoded output tensor in the `decode_jpeg` and `decode_png` ops
- Updated tests and docs

Related to pytorch#4880
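For context, a minimal sketch of what the change amounts to from the caller's side (illustrative only, not the actual diff; the image path is a placeholder):

```python
from torchvision.io import read_file, decode_jpeg

data = read_file("img.jpg")   # raw encoded bytes as a 1D uint8 tensor
img = decode_jpeg(data)       # decoded image, shape (C, H, W), dtype uint8

# With this PR the decoded output is returned contiguous, i.e. as if
# .contiguous() had been called on it, so downstream code can rely on
# the standard CHW memory layout.
assert img.is_contiguous()
```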
💊 CI failures summary and remediations: as of commit 0ec7ce2, 1 failure not recognized by patterns (more details on the Dr. CI page).
LGTM, thanks @vfdev-5. Nice performance boost.
Hey @datumbox! You merged this PR, but no labels were added. The list of valid labels is available at https://github.com/pytorch/vision/blob/main/.github/process_commit.py
@NicolasHug Do you think it's worth adding
I would just add enhancement + perf
This was here on purpose for speed improvements on the reading (it avoids a copy). I would revert this PR and instead work on improving the performance of the transforms so that they better handle the channels_last format.
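To illustrate the trade-off raised in this comment, here is a hedged sketch (not the decoder's actual code): the decoding libraries produce pixels in HWC order, so exposing the result as CHW via a permute avoids a copy but leaves the tensor non-contiguous, whereas `.contiguous()` pays for an extra copy.

```python
import torch

# Stand-in for the raw decoder output: pixels laid out as H x W x C.
hwc = torch.randint(0, 256, (224, 224, 3), dtype=torch.uint8)

chw_view = hwc.permute(2, 0, 1)   # CHW view over the HWC buffer, no copy
print(chw_view.is_contiguous())   # False: strides still follow the HWC layout

chw_copy = chw_view.contiguous()  # extra copy into a standard CHW layout
print(chw_copy.is_contiguous())   # True
```

The non-contiguous view is effectively a channels-last-style layout, which is why the alternative suggested here is to make the transforms handle that layout well rather than paying for the copy at decode time.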
Summary: Applied `contiguous` on the decoded output tensor in the `decode_jpeg` and `decode_png` ops; updated tests and docs. Related to #4880. Reviewed By: datumbox. Differential Revision: D32470473. fbshipit-source-id: 83ba2e1fccbfb414c66c1c6da7e516990aa7225f
Performance improvement on the affected transforms; see #4880 (comment).
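A rough, hypothetical way to reproduce this kind of comparison (this is not the benchmark from #4880; the image size and the choice of `resize` are assumptions):

```python
import timeit
import torch
from torchvision.transforms import functional as F

hwc = torch.randint(0, 256, (2000, 2000, 3), dtype=torch.uint8)
non_contig = hwc.permute(2, 0, 1)  # CHW view, roughly what the decoders returned before
contig = non_contig.contiguous()   # CHW copy, as returned after this PR

for name, img in [("non-contiguous", non_contig), ("contiguous", contig)]:
    t = timeit.timeit(lambda: F.resize(img, [224, 224]), number=20)
    print(f"{name}: {t / 20 * 1000:.1f} ms per resize")
```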