
Max pool gradient fix #5506

Open · wants to merge 8 commits into master
Conversation

@aam-at (Contributor) commented Feb 7, 2017

Fixes #4761

Updates the CPU and GPU versions of max pooling to match cuDNN behavior. The behavior is updated only for the new gpuarray backend. The definition of max pool grad grad is also updated: it now propagates only the first value corresponding to the maximum (which matches the behavior of MaxPoolRop in #5323).

Note:
The previous implementation used an input-to-input mapping to compute the gradient (num_kernels equals the size of the input). To match cuDNN behavior, an output-to-input mapping is needed (num_kernels equals the size of the output). In the latter case, zero initialization of the gradient buffer is necessary, since the mapping from output to input is one-to-many. Additionally, CUDA atomics are needed in the case of overlapping pooling regions.
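The output-to-input scheme above can be sketched in NumPy (a hypothetical illustration, not the PR's actual CUDA kernel): one "kernel" per output element, a zero-initialized gradient buffer, and `+=` accumulation standing in for the atomic adds needed on the GPU when pooling regions overlap.

```python
import numpy as np

def max_pool_grad(x, gz, ws, stride):
    """Sketch of a 2D max pooling gradient with an output-to-input
    mapping: iterate over output elements (num_kernels = output size),
    and for each one, route its upstream gradient to the FIRST maximum
    in its window (cuDNN-style)."""
    H, W = x.shape
    out_h = (H - ws) // stride + 1
    out_w = (W - ws) // stride + 1
    gx = np.zeros_like(x)  # zero init: the output-to-input mapping is one-to-many
    for i in range(out_h):
        for j in range(out_w):
            window = x[i*stride:i*stride+ws, j*stride:j*stride+ws]
            # index of the first maximum in the window
            r, c = np.unravel_index(np.argmax(window), window.shape)
            # += plays the role of atomicAdd for overlapping regions
            gx[i*stride + r, j*stride + c] += gz[i, j]
    return gx
```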

Propagate only the one value corresponding to the maximum in the input for both max
pool grad and max pool grad grad. Summing is still used since, when the stride is
smaller than the pooling window, one input value can be the maximum for more than one output.
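A minimal 1D illustration of why summing is still needed (the values here are made up for the example): with a window of 2 and stride 1, the center element below is the maximum of both windows, so its gradient is the sum of both upstream gradients.

```python
import numpy as np

# Element 5. is the maximum of both windows [1, 5] and [5, 2],
# so both output gradients accumulate into the same input position.
x = np.array([1., 5., 2.])
gz = np.array([10., 20.])   # upstream gradients for the two outputs
gx = np.zeros_like(x)       # zero-initialized gradient buffer
for o in range(2):          # one kernel per output element
    w = x[o:o + 2]
    gx[o + np.argmax(w)] += gz[o]
# gx is [0., 30., 0.]: the shared maximum receives 10 + 20
```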