Add support for non-zero padding #438

Merged: lgeiger merged 8 commits into master from add-non-zero-padding on Mar 11, 2020

Conversation

lgeiger (Member) commented on Feb 28, 2020

This adds support for non-zero padding for QuantConv1D, QuantConv2D, QuantConv3D, QuantDepthwiseConv2D, QuantSeparableConv1D, and QuantSeparableConv2D, in order to better support larq/compute-engine#252.

We can add support for locally connected layers and transposed convolutions in the future.
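
For illustration, a minimal usage sketch (the pad_values argument and its 0.0 default come from this PR's diff; the quantizer settings are just plausible example values, not mandated by the change):

```python
import tensorflow as tf
import larq as lq

# A "same"-padded quantized convolution whose border is filled with
# +1.0 instead of zeros (pad_values defaults to 0.0, i.e. the usual
# Keras zero padding). For binarized layers a +/-1 fill keeps the
# padded region inside the binary domain, which is the motivation
# behind larq/compute-engine#252.
layer = lq.layers.QuantConv2D(
    filters=32,
    kernel_size=3,
    padding="same",
    pad_values=1.0,  # the new argument added by this PR
    input_quantizer="ste_sign",
    kernel_quantizer="ste_sign",
)
outputs = layer(tf.ones((1, 32, 32, 3)))  # -> shape (1, 32, 32, 32)
```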

lgeiger added the "feature" label (New feature or request) on Feb 28, 2020
lgeiger requested a review from a team on Feb 28, 2020, 15:16
@@ -181,6 +182,7 @@ def __init__(
kernel_size,
strides=1,
padding="valid",
pad_values=0.0,
lgeiger (Member, Author) commented:
@larq/core how do you feel about pad_values vs padding_values?

A contributor replied:
I'm fine with either, no particular preference.

leonoverweel (Contributor) left a comment:
From an initial look this LGTM, but given how complex it is, I think another few sets of eyes would be useful. CC @Tombana: given your experience with padding, can you take a look?

larq/layers_base.py: review thread (outdated, resolved)
lgeiger marked this pull request as ready for review on Feb 28, 2020, 15:59
lgeiger force-pushed the add-non-zero-padding branch 2 times, most recently from a1f1522 to 2057892 on March 2, 2020, 12:15
Tombana (Contributor) left a comment:
Looks good to me. I'd say pad_value (singular) makes slightly more sense, but since the tf op also uses the plural form, I think this is good.
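
For reference, the TensorFlow op in question is tf.pad, whose fill argument is the plural constant_values. Below is a minimal sketch, not the actual larq implementation, of how non-zero "same" padding can be emulated by padding explicitly and then running a VALID convolution:

```python
import tensorflow as tf

def pad_then_conv2d(inputs, kernel, pad_values=1.0):
    """Emulate a 'same' convolution with a custom border fill value.

    tf.nn.conv2d only supports zero padding natively, so the input is
    padded explicitly with tf.pad (note the plural constant_values
    argument) before convolving with padding="VALID". Assumes NHWC
    inputs, stride 1, and an odd kernel size for simplicity.
    """
    kh, kw = int(kernel.shape[0]), int(kernel.shape[1])
    paddings = [
        [0, 0],              # batch: no padding
        [kh // 2, kh // 2],  # height
        [kw // 2, kw // 2],  # width
        [0, 0],              # channels: no padding
    ]
    padded = tf.pad(inputs, paddings, constant_values=pad_values)
    return tf.nn.conv2d(padded, kernel, strides=1, padding="VALID")
```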

lgeiger force-pushed the add-non-zero-padding branch from 2057892 to c00da9b on March 10, 2020, 11:22
koenhelwegen (Contributor) left a comment:
Looks good! Only thing is I would prefer to test whether the layer actually uses non-zero padding (if I'm correct, the test would currently pass even if _is_native_padding() always returned True).

lgeiger (Member, Author) commented on Mar 11, 2020

> Only thing is I would prefer to test whether the layer actually uses non-zero padding (if I'm correct, the test would currently pass even if _is_native_padding() always returned True).

Good point. I added a check to make sure tf.pad gets called with the correct values in f03c714.
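
For illustration, a hedged sketch of what such a check could look like, using unittest.mock to spy on tf.pad (the committed test in f03c714 may differ; the keyword-argument assumption is flagged in the comments):

```python
from unittest import mock

import numpy as np
import tensorflow as tf
import larq as lq

def test_pad_values_are_forwarded_to_tf_pad():
    # Hypothetical test, not the committed one: spy on tf.pad and
    # check that the layer forwards its pad_values argument.
    layer = lq.layers.QuantConv2D(
        filters=4, kernel_size=3, padding="same", pad_values=1.0
    )
    inputs = tf.ones((1, 8, 8, 3))
    # Wrap tf.pad so the real op still runs but every call is recorded.
    with mock.patch("tensorflow.pad", wraps=tf.pad) as spy:
        layer(inputs)
    assert spy.called
    # Assumes the layer passes the fill value via the constant_values
    # keyword argument of tf.pad.
    _, kwargs = spy.call_args
    np.testing.assert_allclose(kwargs["constant_values"], 1.0)
```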

lgeiger merged commit 1748ef3 into master on Mar 11, 2020
lgeiger deleted the add-non-zero-padding branch on March 11, 2020, 18:12