
Conversation

@jirioc (Collaborator) commented Aug 11, 2025

Summary

NXP backend: Improve cifarnet speed by removing the initial padding.

Test plan

Update to `test_remove_io_quant_ops_pass__cifarnet()` is part of the diff.

@pytorch-bot bot commented Aug 11, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13279

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (3 Unrelated Failures)

As of commit 18cdd96 with merge base 72580d2:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the `CLA Signed` label Aug 11, 2025
@jirioc requested review from digantdesai and robert-kalmar and removed the review request for digantdesai August 11, 2025 13:19
@jirioc added the `ciflow/trunk` and `release notes: nxp` labels Aug 11, 2025
@jirioc requested a review from JakeStevens August 11, 2025 13:21
```diff
 # Neutron Backend does not yet have passes for automated padding if the number of channels
 # does not fit the Neutron constraints (#channels == #MAC units). So define the model
 # explicitly tailored for Neutron-C-64.
-x = F.pad(x, (2, 2, 2, 2, 0, 5))
+x = F.pad(x, (2, 2, 2, 2))
```
Contributor: Why is there remaining padding?

@jirioc (Collaborator, Author): The remaining padding ensures that the output of the convolution has the same size as the original x before padding. It has the same effect as using padding="same" in the convolutions.
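A minimal sketch of this size-preservation claim (the 5x5 kernel and the channel counts are assumed for illustration, not taken from the model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)  # NCHW input; sizes chosen for illustration

# An explicit 2-pixel spatial pad followed by an unpadded 5x5 convolution ...
conv = nn.Conv2d(3, 16, kernel_size=5, padding=0)
y1 = conv(F.pad(x, (2, 2, 2, 2)))

# ... yields the same spatial size as padding="same" (stride 1).
conv_same = nn.Conv2d(3, 16, kernel_size=5, padding="same")
y2 = conv_same(x)

print(y1.shape)  # torch.Size([1, 16, 32, 32])
print(y2.shape)  # torch.Size([1, 16, 32, 32])
```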

@jirioc (Collaborator, Author): This type of padding is later fused into the convolution.
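The equivalence such a pad-into-conv fusion relies on can be checked directly; a minimal sketch with random weights (illustrating the rewrite, not the backend pass itself):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)
w = torch.randn(16, 3, 5, 5)

# A zero-pad followed by an unpadded convolution ...
y_pad_then_conv = F.conv2d(F.pad(x, (2, 2, 2, 2)), w)

# ... matches a single convolution with the pad folded into `padding`,
# which is the rewrite such a fusion performs.
y_fused = F.conv2d(x, w, padding=2)

print(torch.allclose(y_pad_then_conv, y_fused))  # True
```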

Contributor: Is this part of the original model definition?

@robert-kalmar (Collaborator) commented Aug 18, 2025:

Yes. The previous padding to 8 channels was due to a Neutron NPU constraint. In the meantime, the Neutron converter gained the ability to autopad, so the explicit channel padding is no longer necessary.

For the remaining padding, padding="same" could have been used in the convolution instead (https://docs.pytorch.org/docs/stable/generated/torch.nn.Conv2d.html).

Filed a ticket here: #13470
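For reference, a minimal sketch of what the removed channel padding did; F.pad applies its (left, right) pairs to dimensions from the last backwards (input shape assumed for illustration):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)  # NCHW, CIFAR-sized input assumed

# (W_left, W_right, H_top, H_bottom, C_front, C_back): a 2-pixel spatial
# border plus 5 trailing zero channels, growing 3 channels to 8 to satisfy
# the former Neutron constraint.
padded = F.pad(x, (2, 2, 2, 2, 0, 5))
print(padded.shape)  # torch.Size([1, 8, 36, 36])
```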

```diff
 assert delegation_info.num_delegated_subgraphs == 1
-assert delegation_info.num_non_delegated_nodes == 17
 assert delegation_info.num_delegated_nodes == 42
+assert delegation_info.num_non_delegated_nodes == 11
```
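A sketch of how such counts are typically gathered, assuming ExecuTorch's get_delegation_info devtools helper; the test's lowering pipeline (`edge_program`) is assumed, as it is not shown in this thread:

```python
from executorch.devtools.backend_debug import get_delegation_info

# `edge_program` is assumed to be the EdgeProgramManager produced by the
# test's to_edge(...) / to_backend(...) lowering with the Neutron partitioner.
graph_module = edge_program.exported_program().graph_module
delegation_info = get_delegation_info(graph_module)

# Values as asserted in the updated test after this PR.
assert delegation_info.num_delegated_subgraphs == 1
assert delegation_info.num_delegated_nodes == 42
assert delegation_info.num_non_delegated_nodes == 11
```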
Contributor: Nice!

@JakeStevens (Contributor) commented:

The failing tests are flagged as flaky or known broken on trunk; merging.

@JakeStevens JakeStevens merged commit 9cfb684 into pytorch:main Aug 14, 2025
234 of 238 checks passed
@robert-kalmar deleted the upstream/main-nxp/EIEX-458-upstream-cifarnet-model-speed-up branch August 18, 2025 09:07
agrima1304 pushed a commit to agrima1304/executorch that referenced this pull request Aug 26, 2025: …ytorch#13279)