make padding layer converter more efficient #1470
Conversation
Test failure not related to this PR.
Force-pushed from 055e3f5 to b7f0f7d.
@frank-wei Could you approve this?
LG. Could you fix the lint issue?
Force-pushed from b7f0f7d to 426fd04.
Reformatted the code.
@yinghai @frank-wei FYI
Force-pushed from 426fd04 to f4fe98b.
LGTM! thanks!
Description
Copy of #1466 to bypass CLA issue.
In the current padding layer converter for TensorRT version > 8.2, the conversion is built from three layers: pre_pad + mid_pad + post_pad. But consider this case: padding a tensor from (2048, 628, 20) to (2048, 628, 32). Here pre_pad and mid_pad can be erased, because they perform opposite operations and only waste time.
Also, since version 8.2 the slice layer supports a negative start, so a single layer is enough to do the padding.
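To illustrate the single-layer idea without pulling in TensorRT, here is a small NumPy sketch of a slice-with-fill operation (the behavior TensorRT's ISliceLayer provides in fill mode): a negative start index and an output shape larger than the input make one op both pre-pad and post-pad with the fill value. The helper name `slice_with_fill` is hypothetical, for illustration only; it is not the converter's actual code.

```python
import numpy as np

def slice_with_fill(x, start, shape, fill=0.0):
    """Emulate a slice op with fill semantics: out-of-range reads
    (negative start, or end past the input) yield `fill`, so one op
    can pad on both sides of any dimension."""
    out = np.full(shape, fill, dtype=x.dtype)
    src_idx, dst_idx = [], []
    for s, n, dim in zip(start, shape, x.shape):
        lo = max(s, 0)          # first valid input index in this dim
        hi = min(s + n, dim)    # one past the last valid input index
        if lo >= hi:
            return out          # window misses the input entirely
        src_idx.append(slice(lo, hi))
        dst_idx.append(slice(lo - s, hi - s))
    out[tuple(dst_idx)] = x[tuple(src_idx)]
    return out

# Pad the last dim with 1 zero before and 2 zeros after, in one op:
x = np.arange(6, dtype=np.float32).reshape(2, 3)
padded = slice_with_fill(x, start=(0, -1), shape=(2, 6))
```

The same pattern covers the (2048, 628, 20) → (2048, 628, 32) case from the description with `start=(0, 0, 0)` and `shape=(2048, 628, 32)`, replacing the pre_pad/mid_pad/post_pad chain with a single layer.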
Fixes # (issue)
As described above, this can improve performance significantly.