
[API2.0]Unify pooling function and add adaptive max pooling function #26483

Merged

Conversation

@shippingwang (Member) commented Aug 20, 2020

PR types

New features

PR changes

APIs

Describe

This PR

  • add adaptive max pooling 2d/3d functions
  • unify Paddle pooling functions by extracting two common helpers: 1. _update_padding_nd, which is consistent with the conv functions; 2. _expand_low_nd_padding, which implements the low-dimension functions by calling the high-dimension ones (unsqueezing nd input to n+1 d; see the sketch after this list)
  • resolve problems from [API 2.0] add pool2d3d API #26331 and [2.0API] Add adaptive_avg_pool_2/3d #26369:
    • fix typos
    • unify naming and ordering
    • mark the corresponding fluid functions as deprecated
    • add a Shape field in layer/pooling.py
    • add reference links in functional/pooling.py
    • use x instead of input, and out instead of output
  • unify the padding description so it is consistent with the conv functions (see the usage sketch after this list).
    Example:
padding (string|int|list|tuple): The padding size. Padding could be in one of the following forms:
1. A string in ['valid', 'same'].
2. An int, which means the feature map is zero-padded by the size of `padding` on every side.
3. A list[int] or tuple(int) whose length is 1, which means the feature map is zero-padded by the size of `padding[0]` on every side.
4. A list[int] or tuple(int) whose length is 2. It has the form [pad_before, pad_after].
5. A list or tuple of pairs of integers. It has the form [[pad_before, pad_after], [pad_before, pad_after], ...]. Note that the batch dimension and channel dimension should be [0,0] or (0,0).
The default value is 0.
  • support padding both front and back in the 1d functions; although this is NOT consistent with PyTorch, it is the same as the PaddlePaddle conv1d function and the other Nd pooling functions
  • separate the adaptive 1d/2d/3d max/avg unittests
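
For illustration, the nd-to-(n+1)d trick can be sketched in plain numpy (a minimal sketch with hypothetical helper names, not the actual Paddle implementation): the 1d input is unsqueezed to 2d with a dummy height axis, the 1d padding pair is expanded with zeros for that axis, and the pooled result is squeezed back.

import numpy as np

def expand_low_nd_padding_sketch(padding):
    # A 1d padding pair [pad_left, pad_right] gains a zero pair for the
    # dummy height axis, so the 2d pooling path can be reused unchanged.
    return [0, 0] + list(padding)

def avg_pool1d_via_2d(x, kernel_size, stride, padding):
    # x has shape [N, C, L]; unsqueeze to [N, C, 1, L] (nd -> n+1 d).
    x2d = x[:, :, np.newaxis, :]
    pad_top, pad_bottom, pad_left, pad_right = expand_low_nd_padding_sketch(padding)
    x2d = np.pad(x2d, ((0, 0), (0, 0), (pad_top, pad_bottom), (pad_left, pad_right)))
    n, c, _, w = x2d.shape
    out_w = (w - kernel_size) // stride + 1
    out = np.empty((n, c, 1, out_w), dtype=x2d.dtype)
    for i in range(out_w):
        window = x2d[:, :, :, i * stride:i * stride + kernel_size]
        out[:, :, :, i] = window.mean(axis=-1)
    # Squeeze the dummy axis back out: [N, C, 1, L_out] -> [N, C, L_out].
    return out[:, :, 0, :]

x = np.arange(8, dtype=np.float32).reshape(1, 1, 8)
print(avg_pool1d_via_2d(x, kernel_size=2, stride=2, padding=[0, 0]))
# -> [[[0.5 2.5 4.5 6.5]]]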
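
And the padding forms listed above map onto calls along these lines (a hedged usage sketch against paddle.nn.functional.avg_pool1d; exact argument validation may differ across Paddle versions):

import paddle
import paddle.nn.functional as F

x = paddle.rand([1, 3, 32])  # NCL layout

F.avg_pool1d(x, kernel_size=2, padding="SAME")  # 1. padding algorithm string
F.avg_pool1d(x, kernel_size=2, padding=1)       # 2. zero-pad by 1 on both sides
F.avg_pool1d(x, kernel_size=2, padding=[1])     # 3. length-1 list
F.avg_pool1d(x, kernel_size=2, padding=[1, 1])  # 4. [pad_before, pad_after]
# 5. per-dimension pairs such as [[0, 0], [0, 0], [1, 1], [1, 1]] apply to
#    the 2d/3d variants, with [0, 0] for the batch and channel dimensions.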

@paddle-bot-old commented:
Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@shippingwang shippingwang changed the title [API2.0][WIP]add adaptive max pooling [API2.0][WIP]Uniform pooling function and add adaptive max pooling function Aug 25, 2020
@shippingwang shippingwang changed the title [API2.0][WIP]Uniform pooling function and add adaptive max pooling function [API2.0]Uniform pooling function and add adaptive max pooling function Aug 26, 2020
@WuHaobo (Contributor) previously approved these changes Aug 27, 2020 and left a comment:

LGTM

@willthefrog (Contributor) left a comment:

  • double check data_format
  • double check use_cudnn; also, document the reasoning behind the use_cudnn default value choice
  • also, for future reference, it would be nice to have a preliminary benchmark on 1d pooling in the PR description

python/paddle/nn/functional/pooling.py (resolved thread)
'SAME' which is the padding algorithm. If pool padding size is a tuple or list,
it could be the following forms: `[pad_left, pad_right]`. If padding is non-zero,
then the input is implicitly zero-padded on both sides for padding number of points.
it must contain an integer.
Contributor:

integers?

Member Author:

For 1d it is a single integer.

python/paddle/nn/functional/pooling.py (resolved thread)

padding = update_padding1d(padding, "avg")
# using 2d to implement 1d requires expanding the padding in advance
padding = _expand_low_nd_padding(padding)

if in_dygraph_mode():
output = core.ops.pool2d(
Contributor:

use_cudnn is different? Also, document the reasoning behind the use_cudnn default value.

Member Author:

It looks like a bug introduced during a merge: https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/nn/functional/pooling.py#L247 (someone mixed up the use_cudnn and exclusive params). I have already corrected it in the next commit.

ceil_mode=self.ceil_mode,
count_include_pad=self.count_include_pad,
divisor_override=self.divisor,
data_format=self.data_format,
Contributor:

data_format works?

Member Author:

yes

@@ -146,99 +168,69 @@ def avg_pool1d(x,
count_include_pad=True,
ceil_mode=False,
name=None):
Contributor:

missing data format?

Member Author:

It was supposed to be added in PR #26331; I suggest refining it in the next PR.

@shippingwang shippingwang changed the title [API2.0]Uniform pooling function and add adaptive max pooling function [API2.0]Unify pooling function and add adaptive max pooling function Aug 27, 2020
@willthefrog (Contributor) left a comment:

Check whether the custom pooling implementation is faster than cuDNN for adaptive/global pooling:

  • if both are faster, set use_cudnn to false for adaptive and global pooling
  • if only global is faster, set global_pooling to true and use_cudnn to false for adaptive pooling with output size 1

@shippingwang (Member Author) commented Aug 28, 2020:

Check whether the custom pooling implementation is faster than cuDNN for adaptive/global pooling:

  • if both are faster, set use_cudnn to false for adaptive and global pooling
  • if only global is faster, set global_pooling to true and use_cudnn to false for adaptive pooling with output size 1

In the next PR I will compare the performance of core.ops.[max/avg]_pool[2/3]d_with_index(), core.ops.pool[2/3]d(use_cudnn=True, global_pooling=True), and core.ops.pool[2/3]d(use_cudnn=False, global_pooling=True) when the output size is 1 in the adaptive [max/avg] pool[1/2/3]d functions.

Please feel free to review it again once CI passes.
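
For reference, such a comparison could be run at the public-API level along these lines (a rough timing sketch, not the planned core.ops-level benchmark; paddle.device.cuda.synchronize is assumed available, as in recent Paddle releases). Adaptive pooling with output size 1 is mathematically global average pooling, so the two calls below should agree and only their speed should differ:

import time
import paddle
import paddle.nn.functional as F

def bench(fn, warmup=10, iters=100):
    # Simple wall-clock benchmark of a zero-argument callable.
    for _ in range(warmup):
        fn()
    paddle.device.cuda.synchronize()  # assumes a GPU build; omit on CPU
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    paddle.device.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1e3  # ms per call

x = paddle.rand([32, 64, 56, 56])
t_adaptive = bench(lambda: F.adaptive_avg_pool2d(x, output_size=1))
t_global = bench(lambda: F.avg_pool2d(x, kernel_size=x.shape[2:]))
print(f"adaptive(1): {t_adaptive:.3f} ms  global kernel: {t_global:.3f} ms")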

@willthefrog (Contributor) left a comment:

Needs follow-up for data_format and cuDNN; otherwise LGTM.

@XiaoguangHu01 (Contributor) left a comment:

LG API

return int(np.ceil((index + 1) * input_size / output_size))


def avg_pool1D_forward_naive(x,
Contributor:

1D -> 1d
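
For context, the end-index helper above, together with its floor-based start counterpart, defines the bins of the naive adaptive pooling reference: output position i averages the input over [floor(i*L/out), ceil((i+1)*L/out)), so the bins tile the whole input. A condensed 1d sketch of that reference (names here are illustrative):

import numpy as np

def adaptive_start_index(index, input_size, output_size):
    return int(np.floor(index * input_size / output_size))

def adaptive_end_index(index, input_size, output_size):
    return int(np.ceil((index + 1) * input_size / output_size))

def adaptive_avg_pool1d_naive(x, output_size):
    # x has shape [N, C, L]; each output bin averages its slice of the input.
    n, c, l = x.shape
    out = np.zeros((n, c, output_size), dtype=x.dtype)
    for i in range(output_size):
        start = adaptive_start_index(i, l, output_size)
        end = adaptive_end_index(i, l, output_size)
        out[:, :, i] = x[:, :, start:end].mean(axis=-1)
    return out

x = np.arange(6, dtype=np.float32).reshape(1, 1, 6)
print(adaptive_avg_pool1d_naive(x, 4))
# bins [0,2) [1,3) [3,5) [4,6) -> [[[0.5 1.5 3.5 4.5]]]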

@phlrain (Collaborator) left a comment:

LGTM for core.ops

@shippingwang shippingwang merged commit db68e08 into PaddlePaddle:develop Aug 29, 2020