
[AVG POOL] Asymmetric padding (SAME) support. #1346

Merged (7 commits, Jul 4, 2018)

Conversation

srkreddy1238
Contributor

@srkreddy1238 force-pushed the master branch 3 times, most recently from 5015a53 to 9d94df6 on June 28, 2018
@srkreddy1238
Contributor Author

@tqchen & @Huyuwei Please have a look at this patch, which supports asymmetric padding with the count_include_pad option. The padding attribute is extended to accept 4 values, keeping the 2-value logic intact.

Now the SAME padding option works correctly for all operations:
Convolution : a pad operator is inserted before it, with pad_value = 0.
Maxpool : a pad operator is inserted before it, with pad_value = -inf.
Avgpool : padding accepts (pad_top, pad_left, pad_bottom, pad_right).
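
For reference, here is a minimal sketch (not part of the PR) of how TF-style SAME padding splits into four values that are asymmetric in general; the helper name same_pad_1d is illustrative:

    # Compute TF-style SAME padding for one spatial axis.
    def same_pad_1d(in_size, kernel, stride):
        out_size = (in_size + stride - 1) // stride                 # ceil(in/stride)
        pad_total = max((out_size - 1) * stride + kernel - in_size, 0)
        pad_before = pad_total // 2                                 # smaller half first
        pad_after = pad_total - pad_before                          # odd remainder at the end
        return pad_before, pad_after

    pad_top, pad_bottom = same_pad_1d(10, 3, 2)   # -> (0, 1): asymmetric
    pad_left, pad_right = same_pad_1d(9, 3, 2)    # -> (1, 1)
    padding = (pad_top, pad_left, pad_bottom, pad_right)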

@FrozenGene
Member

Do you want to go on to make Convolution / Maxpool and so on accept 4 values for SAME padding as well? I have implemented this in my codebase, but my company has its own open-source rules and contributing takes a long time, so if you want to continue the implementation, I am glad to see it. You will also need to update the WORKLOADS, i.e.:

_Workload = namedtuple('Workload',
                       ['in_dtype', 'out_dtype', 'height', 'width', 'channel', 'multiplier',
                        'hkernel', 'wkernel', 'hpad', 'wpad', 'hstride', 'wstride'])

Here it only has hpad / wpad; you would need to extend this to 4 elements. It also affects the schedules. For example:

    HPAD, WPAD = wkl.hpad, wkl.wpad  # You should update
    HSTR, WSTR = wkl.hstride, wkl.wstride

    VH, VW = sch.vh, sch.vw
    BC = sch.bc
    VC = sch.vc

    # You should update
    TH = H + 2*HPAD
    TW = W + 2*WPAD

If you would like to do it, I would be glad to review the code, since I have implemented this myself and it works well.
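
A minimal sketch of one way the Workload tuple could be extended (the four per-side pad field names below are illustrative, not from this PR):

    from collections import namedtuple

    # Replace the combined hpad / wpad with one field per side.
    Workload = namedtuple('Workload',
                          ['in_dtype', 'out_dtype', 'height', 'width', 'channel',
                           'multiplier', 'hkernel', 'wkernel',
                           'tpad', 'lpad', 'bpad', 'rpad',
                           'hstride', 'wstride'])

    # Schedules then pad each side independently instead of doubling:
    # TH = H + wkl.tpad + wkl.bpad
    # TW = W + wkl.lpad + wkl.rpad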

@srkreddy1238
Contributor Author

@FrozenGene BTW, Convolution and Maxpool do not require these changes, as there is an alternative way to achieve the same result.

@FrozenGene
Member

You mean inserting the pad operator? I think that is not an elegant solution. As @Huyuwei commented in dmlc/nnvm#472, the pad operator brings overhead. Extending the ops will make conversion much easier and clearer.

@srkreddy1238
Contributor Author

srkreddy1238 commented Jun 28, 2018

I am open to extending the internal padding to Conv and Maxpool as well.

In any case, there is no extra padding in the SAME option. We either insert a pad operator at the frontend (the pad values passed to the actual operation are then [0, 0]), or we pass the pad values to the operation and its compute adds the pad operation accordingly.

This PR just accommodates the count_include_pad effect with asymmetric (SAME) padding.
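
A rough illustration of the two alternatives, assuming the nnvm symbol API of the time (the shapes, pad values, and pooling parameters here are placeholders, not from the PR):

    import nnvm.symbol as sym

    data = sym.Variable("data")                               # NCHW input
    pad_top, pad_left, pad_bottom, pad_right = 0, 1, 1, 1     # e.g. from SAME

    # Alternative 1: pad explicitly at the frontend, then pool with zero padding.
    # pad_value = -inf so the padded cells never win the max.
    padded = sym.pad(data,
                     pad_width=((0, 0), (0, 0),
                                (pad_top, pad_bottom), (pad_left, pad_right)),
                     pad_value=float('-inf'))
    out1 = sym.max_pool2d(padded, pool_size=(3, 3), strides=(2, 2), padding=(0, 0))

    # Alternative 2: hand the four pad values to the op and let its compute
    # insert the padding internally.
    out2 = sym.max_pool2d(data, pool_size=(3, 3), strides=(2, 2),
                          padding=(pad_top, pad_left, pad_bottom, pad_right))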

@srkreddy1238
Contributor Author

@Huyuwei & @tqchen, any comments here?

@Huyuwei
Contributor

Huyuwei commented Jun 29, 2018

I think it's good to support 4 pad values for pooling, both max and avg. conv2d may be left for a next step after further discussion, since it requires a lot of code changes, as @FrozenGene commented.

@srkreddy1238 I will give detailed comments on this PR later this weekend.

@srkreddy1238
Contributor Author

@Huyuwei, can you have a look at this?

@Huyuwei
Contributor

Huyuwei commented Jul 2, 2018

Here is my suggestion:

  • On the topi implementation level, always pass 4 pad values, either in the format [top, left, bottom, right] or [[top, bottom], [left, right]].
  • On the nnvm symbol level, support an API like Keras's ZeroPadding2D: https://keras.io/layers/convolutional/#zeropadding2d
  • Support asymmetric padding for both avg and max pooling, to keep the two kinds of pooling consistent.

@tqchen added the status: review in progress and status: need update labels Jul 3, 2018
     * topi : always accept 4 padding values as (top, left, bottom, right)
     * nnvm accepts 1 / 2 / 4 padding inputs, similar to keras.
     * Support both Max and Avg.
@srkreddy1238
Contributor Author

@Huyuwei & @FrozenGene, please have a look at the new changes (a sketch of the padding normalization follows the list).

  • topi pad format fixed as [top, left, bottom, right]
  • nnvm accepts padding as (int) or (int, int) or (int, int, int, int)
  • both maxpool and avgpool carry the same syntax for padding.
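
A minimal sketch (not the PR's code) of normalizing the padding attribute to the fixed [top, left, bottom, right] order that topi expects:

    def normalize_padding(padding):
        if isinstance(padding, int):
            padding = (padding,)
        if len(padding) == 1:        # one int: same padding on all sides
            top = left = bottom = right = padding[0]
        elif len(padding) == 2:      # two ints: bottom/right mirror top/left
            top, left = padding
            bottom, right = padding
        elif len(padding) == 4:      # four ints: taken as given
            top, left, bottom, right = padding
        else:
            raise ValueError("padding must have 1, 2, or 4 values")
        return (top, left, bottom, right)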

"Padding support both symmetric and asymmetric as"
"one int : same padding used on all sides"
"two int : bottom, right will use same padding as top, left"
"four int : padding just as given");
Contributor

four int : padding width in the order of top, left, bottom, right

"Padding support both symmetric and asymmetric as"
"one int : same padding used on all sides"
"two int : bottom, right will use same padding as top, left"
"four int : padding just as given");
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

four int : padding width in the order of top, left, bottom, right

  pad[3] = pad[0];
} else if (param.padding.ndim() == 2) {
  pad[0] *= 2;
  pad[1] *= 2;
Contributor

better to use pad_h / pad_w instead of pad[0] / pad[1]:

pad_h = pad[0] * 2
pad_w = pad[1] * 2

if (param.padding.ndim() == 1) {
  pad[1] = pad[0];
  pad[2] = pad[0];
  pad[3] = pad[0];
Contributor

the logic here seems incorrect; it should be:

pad_h = pad[0] * 2
pad_w = pad[0] * 2

Contributor Author

Good catch, thanks.
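
Putting the two review comments together, a Python rendering (the PR itself is C++) of the intended total-padding computation per axis, assuming the (top, left, bottom, right) order for the 4-value case:

    def total_pad(pad):
        if len(pad) == 1:          # one value for all four sides
            pad_h = pad[0] * 2
            pad_w = pad[0] * 2
        elif len(pad) == 2:        # (top/bottom, left/right)
            pad_h = pad[0] * 2
            pad_w = pad[1] * 2
        else:                      # (top, left, bottom, right)
            pad_h = pad[0] + pad[2]
            pad_w = pad[1] + pad[3]
        return pad_h, pad_w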

@srkreddy1238
Contributor Author

@Huyuwei, the review comments have been addressed.

@@ -52,28 +52,25 @@ inline Tensor pool_impl(const Tensor& x,
   CHECK(x->shape.size() >= 2) << "Pooling input must >= 2-D (H, W)";
   CHECK_EQ(kernel_size.size(), 2) << "Pooling kernel_size must have 2 elements";
   CHECK_EQ(stride_size.size(), 2) << "Pooling stride_size must have 2 elements";
-  CHECK_EQ(padding_size.size(), 2) << "Pooling padding_size must have 2 elements";
+  CHECK((padding_size.size() == 4)) << "Pooling padding_size must have 4 elements";
Contributor

use CHECK_EQ

for pooling_type in ["MAX", "AVG"]:
    for input_shape in [[2, 9, 10, 2], [2, 10, 9, 2]]:
        for window_shape in [[1, 1], [2, 1], [2, 3]]:
            if padding != "SAME":
Contributor

the testing script is not clean; I think it's OK to test just a few important cases using the old style.

@srkreddy1238
Contributor Author

@Huyuwei patched as per review. Please check.

@srkreddy1238
Contributor Author

@Huyuwei thanks for the review.

@tqchen added the status: accepted label and removed the status: need update and status: review in progress labels Jul 4, 2018
@tqchen merged commit 5f9c63a into apache:master Jul 4, 2018
tqchen pushed a commit to tqchen/tvm that referenced this pull request Jul 6, 2018
mnuyens pushed a commit to mnuyens/tvm that referenced this pull request Jul 10, 2018
sergei-mironov pushed a commit to sergei-mironov/tvm that referenced this pull request Aug 8, 2018