
Add support for non-zero padding #438

Merged: 8 commits into master on Mar 11, 2020
Conversation

@lgeiger (Member) commented on Feb 28, 2020:

This adds support for non-zero padding to QuantConv1D, QuantConv2D, QuantConv3D, QuantDepthwiseConv2D, QuantSeparableConv1D, and QuantSeparableConv2D, in order to better support larq/compute-engine#252.

We can add support for locally connected layers and transposed convolutions in the future.
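A minimal usage sketch of the new argument (assuming the larq API at the time of this PR; the quantizer settings below are illustrative and not part of this diff):

```python
import larq as lq

# Pad with +1 instead of the default 0.0, so "same" padding stays in the
# binary domain; the quantizer arguments are illustrative only.
layer = lq.layers.QuantConv2D(
    filters=32,
    kernel_size=3,
    padding="same",
    pad_values=1.0,
    input_quantizer="ste_sign",
    kernel_quantizer="ste_sign",
)
```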

@lgeiger added the feature label and requested a review on Feb 28, 2020.
```diff
@@ -181,6 +182,7 @@ def __init__(
     kernel_size,
     strides=1,
     padding="valid",
+    pad_values=0.0,
```
@lgeiger (Member, Author) commented on Feb 28, 2020:

@larq/core how do you feel about pad_values vs padding_values?

@AdamHillier (Contributor) replied on Feb 28, 2020:

I'm fine with either, no particular preference.
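As background for the argument under discussion, non-zero padding for a "same" convolution can be emulated by padding explicitly with tf.pad and then convolving with "valid" padding. The following is a minimal sketch of that idea (stride 1, NHWC layout); `same_pad_conv2d` is a hypothetical helper, not the implementation in this PR:

```python
import tensorflow as tf

def same_pad_conv2d(x, kernel, pad_values=1.0):
    # Hypothetical helper: emulate padding="same" (stride 1, NHWC) with a
    # configurable pad value by padding explicitly, then convolving "VALID".
    k_h, k_w = int(kernel.shape[0]), int(kernel.shape[1])
    pad_top, pad_bottom = (k_h - 1) // 2, k_h // 2
    pad_left, pad_right = (k_w - 1) // 2, k_w // 2
    x = tf.pad(
        x,
        [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]],
        constant_values=pad_values,
    )
    return tf.nn.conv2d(x, kernel, strides=1, padding="VALID")
```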

@leonoverweel (Contributor) left a comment:

From an initial look this LGTM, but given how complex it is I think another few sets of eyes would be useful. CC @Tombana, given your experience with padding, can you take a look?

(An outdated review thread on larq/layers_base.py was resolved.)
@lgeiger marked this pull request as ready for review on Feb 28, 2020.
@lgeiger force-pushed the add-non-zero-padding branch 2 times, most recently from a1f1522 to 2057892, on Mar 2, 2020.
@Tombana (Contributor) approved these changes on Mar 2, 2020:

Looks good to me. I'd say pad_value (singular) makes slightly more sense, but since the TF op also uses the plural form, I think this is good.

@koenhelwegen (Contributor) left a comment:

Looks good! The only thing is I would prefer to test whether the layer actually uses non-zero padding (if I'm correct, the test would currently pass even if _is_native_padding() always returned True).

@lgeiger (Member, Author) commented on Mar 11, 2020:

> The only thing is I would prefer to test whether the layer actually uses non-zero padding (if I'm correct, the test would currently pass even if _is_native_padding() always returned True).

Good point. I added a check to make sure tf.pad gets called with the correct values in f03c714
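For illustration, a check along those lines could spy on tf.pad and assert it receives the configured value. This is a hedged sketch of the idea rather than the actual test added in f03c714; it assumes eager execution and that the layer passes `constant_values` to tf.pad as a keyword argument:

```python
from unittest import mock

import tensorflow as tf
import larq as lq

def test_non_zero_padding_uses_tf_pad():
    layer = lq.layers.QuantConv2D(8, 3, padding="same", pad_values=1.0)
    # Wrap tf.pad so the real op still runs while we record its arguments.
    with mock.patch("tensorflow.pad", wraps=tf.pad) as spy:
        layer(tf.zeros((1, 8, 8, 3)))
    assert spy.called
    _, kwargs = spy.call_args
    assert kwargs.get("constant_values") == 1.0
```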

@lgeiger merged commit 1748ef3 into master on Mar 11, 2020 (8 of 9 checks passed) and deleted the add-non-zero-padding branch.