
make padding layer converter more efficient #1470

Merged

Conversation

nvpohanh
Contributor

Description

Copy of #1466 to bypass CLA issue.

In the current padding layer converter for versions newer than 8.2, three padding layers are used: pre_pad + mid_pad + post_pad. Consider padding a tensor from (2048, 628, 20) to (2048, 628, 32): pre_pad and mid_pad perform opposite operations, cancel each other out, and only waste time.
Also, starting from version 8.2, the slice layer's start can be negative, so a single padding layer is enough to do this.
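The single-layer trick above relies on a slice with fill semantics: a negative start pads before the data, and an output extent larger than the input pads after it, so no pre_pad/post_pad pair is needed. A minimal NumPy sketch of those semantics (the `fill_slice` helper is hypothetical, written only to illustrate the idea; it is not the converter's actual code or the TensorRT API):

```python
import numpy as np

def fill_slice(x, start, shape, fill=0.0):
    """Emulate a slice layer in fill mode: reads outside the input
    (negative start, or extent past the end) produce `fill` instead
    of an error, which turns one slice into a padding operation."""
    out = np.full(shape, fill, dtype=x.dtype)
    # region of the input that is actually visible in the output window
    src_lo = [max(0, s) for s in start]
    src_hi = [min(dim, s + n) for s, n, dim in zip(start, shape, x.shape)]
    # where that region lands inside the output
    dst_lo = [lo - s for lo, s in zip(src_lo, start)]
    dst_hi = [d + (hi - lo) for d, lo, hi in zip(dst_lo, src_lo, src_hi)]
    src = tuple(slice(lo, hi) for lo, hi in zip(src_lo, src_hi))
    dst = tuple(slice(lo, hi) for lo, hi in zip(dst_lo, dst_hi))
    out[dst] = x[src]
    return out

x = np.arange(6, dtype=np.float32).reshape(2, 3)
# pad the last axis with 1 zero before and 2 after, in ONE slice:
# start = -1 puts one fill element in front; shape 6 > 3 + 1 adds two behind
y = fill_slice(x, start=(0, -1), shape=(2, 6))
```

With this, padding (2048, 628, 20) to (2048, 628, 32) becomes a single slice with start (0, 0, 0) and output shape (2048, 628, 32), instead of three stacked layers.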

Fixes # (issue)

As described above, this change can improve performance significantly.

Type of change

  • Performance improvement; no additional functional unit tests are needed.

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@nvpohanh
Contributor Author

nvpohanh commented Nov 22, 2022

Test failure not related to this PR:
https://app.circleci.com/pipelines/github/pytorch/TensorRT/1431/workflows/304ce5e5-966a-4c38-9ffa-58cc2ff6fa5c/jobs/6857

Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/nightly/cu116
ERROR: Could not find a version that satisfies the requirement torch==1.13.0.dev20220921+cu116 (from versions: 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0.dev20220924+cu116, 1.13.0.dev20220925+cu116, 1.13.0.dev20220926+cu116, 1.13.0.dev20220927+cu116, 1.13.0.dev20220928+cu116, 1.13.0.dev20220929+cu116, 1.13.0.dev20220930+cu116, 1.13.0.dev20221001+cu116, 1.13.0.dev20221002+cu116, 1.13.0.dev20221003+cu116, 1.13.0.dev20221004+cu116, 1.13.0.dev20221005+cu116, 1.13.0.dev20221006+cu116, 1.13.0, 1.14.0.dev20221007+cu116, 1.14.0.dev20221008+cu116, 1.14.0.dev20221009+cu116, 1.14.0.dev20221010+cu116, 1.14.0.dev20221011+cu116, 1.14.0.dev20221012+cu116, 1.14.0.dev20221013+cu116, 1.14.0.dev20221014+cu116, 1.14.0.dev20221015+cu116, 1.14.0.dev20221016+cu116, 1.14.0.dev20221017+cu116, 1.14.0.dev20221018+cu116, 1.14.0.dev20221019+cu116, 1.14.0.dev20221020+cu116, 1.14.0.dev20221021+cu116, 1.14.0.dev20221022+cu116, 1.14.0.dev20221023+cu116, 1.14.0.dev20221024+cu116, 1.14.0.dev20221025+cu116, 1.14.0.dev20221026+cu116, 1.14.0.dev20221027+cu116, 1.14.0.dev20221028+cu116, 1.14.0.dev20221029+cu116, 1.14.0.dev20221030+cu116, 1.14.0.dev20221031+cu116, 1.14.0.dev20221101+cu116, 1.14.0.dev20221102+cu116, 1.14.0.dev20221103+cu116, 1.14.0.dev20221104+cu116, 1.14.0.dev20221105+cu116, 1.14.0.dev20221106+cu116, 1.14.0.dev20221107+cu116, 1.14.0.dev20221108+cu116, 1.14.0.dev20221109+cu116, 1.14.0.dev20221110+cu116, 1.14.0.dev20221111+cu116, 1.14.0.dev20221112+cu116, 1.14.0.dev20221113+cu116, 1.14.0.dev20221114+cu116, 1.14.0.dev20221115+cu116, 1.14.0.dev20221116+cu116, 1.14.0.dev20221117+cu116, 1.14.0.dev20221118+cu116, 1.14.0.dev20221119+cu116, 1.14.0.dev20221120+cu116, 1.14.0.dev20221121+cu116)
ERROR: No matching distribution found for torch==1.13.0.dev20220921+cu116

@frank-wei
Contributor

Thanks @nvpohanh for the improvement. Let's wait for #1479 to land and then run the CI tests again.

@nvpohanh force-pushed the moreEfficientPadConverter-nvpohanh branch from 055e3f5 to b7f0f7d on December 5, 2022 07:21
@nvpohanh
Contributor Author

nvpohanh commented Dec 5, 2022

@frank-wei Could you approve this?


@yinghai left a comment


LG. Could you fix the lint issue?

@nvpohanh force-pushed the moreEfficientPadConverter-nvpohanh branch from b7f0f7d to 426fd04 on December 6, 2022 04:54
@nvpohanh
Contributor Author

nvpohanh commented Dec 6, 2022

Reformatted the code

@nvpohanh
Contributor Author

nvpohanh commented Dec 6, 2022

@yinghai @frank-wei FYI

@nvpohanh force-pushed the moreEfficientPadConverter-nvpohanh branch from 426fd04 to f4fe98b on December 6, 2022 04:56
Contributor

@frank-wei left a comment


LGTM! thanks!

@nvpohanh requested review from yinghai and removed the review requests for narendasan, wushirong and 842974287 on December 6, 2022 05:53
@frank-wei merged commit 88fed13 into pytorch:master on Dec 7, 2022

4 participants