NXP backend: Improve cifarnet speed by removing the initial padding. #13279
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13279
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (3 Unrelated Failures) As of commit 18cdd96 with merge base 72580d2:
FLAKY - The following job failed but was likely due to flakiness present on trunk.
BROKEN TRUNK - The following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
# Neutron Backend does not yet have passses for automated padding if number of channels does not
# fit to Neutron constrains (#channels == #MAC units). So define the model explicitly tailored for Neutron-C-64.
x = F.pad(x, (2, 2, 2, 2, 0, 5))
x = F.pad(x, (2, 2, 2, 2))
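For readers skimming the diff, here is a small, runnable illustration of what the old and new pad calls do (a standard CIFAR-style input is assumed; the rest of the model is omitted). F.pad applies its padding pairs starting from the last dimension, so the old call also padded the channel dimension.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)  # NCHW CIFAR-style input

# Old call: pairs apply from the last dim backwards -> W by (2, 2), H by (2, 2),
# C by (0, 5), i.e. 5 extra zero channels to reach the 8 channels the
# Neutron-tailored model expected.
old = F.pad(x, (2, 2, 2, 2, 0, 5))
print(old.shape)  # torch.Size([1, 8, 36, 36])

# New call: only the spatial padding remains; channel padding is left to the
# Neutron converter's autopad support.
new = F.pad(x, (2, 2, 2, 2))
print(new.shape)  # torch.Size([1, 3, 36, 36])
```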
Why is there remaining padding?
The remaining padding ensures that the output of convolution has the same size as the original x before padding. It has the same effect as using padding="same" in the convolutions.
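A minimal sketch of that equivalence; the 3→16 channel, 5×5 convolution is illustrative, not the actual CifarNet configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)

# Explicit symmetric padding of 2 followed by a "valid" (padding=0) convolution ...
conv = nn.Conv2d(3, 16, kernel_size=5, padding=0, bias=False)
y_pad = conv(F.pad(x, (2, 2, 2, 2)))

# ... preserves the spatial size, exactly like padding="same" would.
conv_same = nn.Conv2d(3, 16, kernel_size=5, padding="same", bias=False)
conv_same.weight.data.copy_(conv.weight.data)
y_same = conv_same(x)

print(y_pad.shape, y_same.shape)      # both torch.Size([1, 16, 32, 32])
print(torch.allclose(y_pad, y_same))  # True
```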
This type of padding is later fused into the convolution.
is this part of the original model definition?
Yes. The previous padding to 8 channels was due to a Neutron NPU constraint. In the meantime the Neutron converter gained the capability to autopad, so it is no longer necessary.
For the remaining padding, the "same" padding option of the convolution could have been used instead: https://docs.pytorch.org/docs/stable/generated/torch.nn.Conv2d.html.
Filed a ticket here: #13470
assert delegation_info.num_delegated_subgraphs == 1
assert delegation_info.num_non_delegated_nodes == 17
assert delegation_info.num_delegated_nodes == 42
assert delegation_info.num_non_delegated_nodes == 11
Nice!
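For context on where these counts come from, a minimal sketch of how such delegation stats are typically read. The `edge_program` below is a hypothetical edge program already partitioned for the Neutron backend, and the helper is assumed to be ExecuTorch's devtools delegation inspector:

```python
from executorch.devtools.backend_debug import get_delegation_info

# `edge_program` is a hypothetical EdgeProgramManager that has already been
# lowered via to_backend(); the helper counts which nodes ended up inside
# delegated subgraphs and which stayed on the CPU.
delegation_info = get_delegation_info(edge_program.exported_program().graph_module)

print(delegation_info.num_delegated_subgraphs)
print(delegation_info.num_delegated_nodes)
print(delegation_info.num_non_delegated_nodes)
```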
Failing tests are flagged as flaky and known broken on trunk, merging.
Summary
NXP backend: Improve cifarnet speed by removing the initial padding.
Test plan
Update to test_remove_io_quant_ops_pass__cifarnet() is part of the diff.