
[Torch] Add support for static uneven divisible AdaptiveAvgPool2d #3566

Merged
21 commits merged into llvm:main on Aug 1, 2024

Conversation

yyp0
Contributor

@yyp0 yyp0 commented Jul 27, 2024

Static uneven divisible AdaptiveAvgPool2d means that although the input size is not an integer multiple of the output size, the kernel and stride sizes can still be fixed (not dynamic). The derivation of the kernel and stride sizes is consistent with torch/_decomp/decompositions.py:adaptive_avg_pool2d, as follows:

  1. Stride Size
    First, derive the start index of each reduce window from the output size (n): start_index = ([0, 1, ..., n - 1] * input_size) // output_size. For each index k, if k * (input_size % output_size) < output_size, then the start index advances by exactly input_size // output_size from the previous one. So provided (n - 1) * (input_size % output_size) < output_size, the stride stays static throughout the whole AdaptiveAvgPool2d, at input_size // output_size.

  2. Kernel Size
    torch/_decomp/decompositions.py:adaptive_avg_pool2d yields a static kernel size when the input/output sizes satisfy either of two conditions: input_size % output_size == 0 or output_size % (input_size % output_size) == 0. If input_size % output_size == 0, the kernel size equals input_size // output_size; otherwise it is input_size // output_size + 1.
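The two conditions above can be checked with a short sketch. The helper names below are illustrative (not from the PR or from PyTorch); the window computation mirrors the standard start/end-index formulas of the adaptive-pool decomposition, restricted to one spatial dimension:

```python
import math

def adaptive_windows(input_size: int, output_size: int):
    """Start/end indices of each adaptive-average-pool reduce window."""
    return [
        ((k * input_size) // output_size,
         math.ceil((k + 1) * input_size / output_size))
        for k in range(output_size)
    ]

def static_pool_params(input_size: int, output_size: int):
    """Return (kernel, stride) when both are static, else None."""
    rem = input_size % output_size
    # Stride stays at input_size // output_size iff every start index
    # advances uniformly, i.e. (output_size - 1) * rem < output_size.
    stride_is_static = (output_size - 1) * rem < output_size
    # Kernel is static iff rem == 0 or output_size % rem == 0.
    kernel_is_static = rem == 0 or output_size % rem == 0
    if not (stride_is_static and kernel_is_static):
        return None
    stride = input_size // output_size
    kernel = stride if rem == 0 else stride + 1
    return kernel, stride
```

For example, input_size = 5 with output_size = 4 is unevenly divisible yet static: the windows are (0, 2), (1, 3), (2, 4), (3, 5), so every kernel has size 2 and the stride is 1. With input_size = 10 and output_size = 4, the start indices 0, 2, 5, 7 advance unevenly, so no static stride exists.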

Collaborator

@qingyunqu qingyunqu left a comment


LGTM. Thanks for the efforts!

@qingyunqu qingyunqu merged commit 22cd444 into llvm:main Aug 1, 2024
3 checks passed