NXP backend: Improve cifarnet speed by removing the initial padding. #13279
Changes from all commits
@@ -1,4 +1,4 @@
-# Copyright 2024 NXP
+# Copyright 2024-2025 NXP
 #
 # This source code is licensed under the BSD-style license found in the
 # LICENSE file in the root directory of this source tree.
@@ -57,7 +57,7 @@ class CifarNetModel(nn.Module):
     def __init__(self):
         super().__init__()

-        self.conv1 = nn.Conv2d(8, 32, 5)
+        self.conv1 = nn.Conv2d(3, 32, 5)
         self.conv2 = nn.Conv2d(32, 32, 5)
         self.conv3 = nn.Conv2d(32, 64, 5)
         self.pool1 = nn.MaxPool2d(2, 2)
@@ -66,10 +66,7 @@ def __init__(self):
         self.softmax = nn.Softmax(1)

     def forward(self, x):

-        # Neutron Backend does not yet have passses for automated padding if number of channels does not
-        # fit to Neutron constrains (#channels == #MAC units). So define the model explicitly tailored for Neutron-C-64.
-        x = F.pad(x, (2, 2, 2, 2, 0, 5))
+        x = F.pad(x, (2, 2, 2, 2))
         x = self.conv1(x)
         x = self.pool1(x)

Review discussion on the remaining x = F.pad(x, (2, 2, 2, 2)) call:

Q: Why is there remaining padding?

A: The remaining padding ensures that the output of the convolution has the same size as the original x before padding. It has the same effect as using padding="same" in the convolutions.

A: This type of padding is fused into the convolution later.

Q: Is this part of the original model definition?

A: Yes. The previous padding to 8 channels was due to a Neutron NPU constraint. In the meantime, the Neutron converter gained the capability to autopad, so that padding is no longer necessary. The remaining spatial padding could instead use "same" padding in the convolution (https://docs.pytorch.org/docs/stable/generated/torch.nn.Conv2d.html). A ticket has been filed for that: #13470
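For context, here is a minimal sketch (not part of the PR) illustrating the effect of the two F.pad calls and the padding="same" equivalence discussed above. It assumes a standard 32x32 RGB CIFAR input; the tensor and module names are purely illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed CIFAR-sized input: (batch, channels, height, width).
x = torch.randn(1, 3, 32, 32)

# Old approach: the last pair (0, 5) of the pad tuple zero-pads the channel
# dimension from 3 to 8 channels (the Neutron MAC-unit constraint), in
# addition to the 2-pixel spatial padding on each side.
x_old = F.pad(x, (2, 2, 2, 2, 0, 5))
print(x_old.shape)  # torch.Size([1, 8, 36, 36])

# New approach in this PR: only the spatial padding remains.
x_new = F.pad(x, (2, 2, 2, 2))
print(x_new.shape)  # torch.Size([1, 3, 36, 36])

# Explicit 2-pixel zero padding followed by a 5x5 convolution produces the
# same result as padding="same", which is what ticket #13470 suggests using.
conv = nn.Conv2d(3, 32, 5)
conv_same = nn.Conv2d(3, 32, 5, padding="same")
with torch.no_grad():
    conv_same.weight.copy_(conv.weight)
    conv_same.bias.copy_(conv.bias)

print(conv(x_new).shape)   # torch.Size([1, 32, 32, 32])
print(conv_same(x).shape)  # torch.Size([1, 32, 32, 32])
print(torch.allclose(conv(x_new), conv_same(x)))  # True
```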
Reviewer comment on the change: Nice!