Maxpool bwd #750
Merged
Conversation
Prevent error in bf16
…able_kernel into max-pool-bwd
Collaborator
Author
Similar to DeviceElementwiseImpl,
qianfengz
reviewed
Jun 15, 2023
qianfengz
reviewed
Jun 16, 2023
Contributor
qianfengz
left a comment
For reference_pool_fwd.hpp and the class name, I suggest including "nhwc", since the code inside shows that the supported layout is NHWC.
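For context, NHWC ("channels-last") means the channel index varies fastest in memory, so a reference kernel hard-coded to that offset formula only supports NHWC. A minimal sketch of the layout assumption (the function name is hypothetical, not from this PR):

```cpp
#include <cstddef>

// NHWC layout: for a packed tensor of shape (N, H, W, C), the linear
// offset of element (n, h, w, c) is ((n*H + h)*W + w)*C + c, i.e. the
// channel index c varies fastest. A reference kernel written against
// this formula supports NHWC only, hence the naming suggestion.
inline std::size_t nhwc_offset(std::size_t n, std::size_t h,
                               std::size_t w, std::size_t c,
                               std::size_t H, std::size_t W, std::size_t C)
{
    return ((n * H + h) * W + w) * C + c;
}
```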
Remove useless header
qianfengz
previously approved these changes
Jun 16, 2023
qianfengz
approved these changes
Jun 19, 2023
Maxpool backward is
dx[index] = dy, which is the same as the Put operation from
https://pytorch.org/docs/stable/generated/torch.Tensor.put_.html
https://numpy.org/doc/stable/reference/generated/numpy.put.html
Hence, I implemented a Put kernel and used it to implement maxpool backward.
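For illustration only, here is a minimal host-side C++ sketch of this Put formulation, assuming packed tensors viewed as flat arrays and the argmax indices saved by the forward pass (the names are illustrative, not the actual kernel API):

```cpp
#include <cstddef>
#include <vector>

// Reference sketch of maxpool backward as a Put/scatter: every output
// gradient dy[i] is written into dx at the flat input offset index[i]
// that the forward pass recorded as the argmax of window i.
void maxpool_bwd_put(std::vector<float>& dx,                 // zero-initialized, input-sized
                     const std::vector<float>& dy,           // output-sized
                     const std::vector<std::size_t>& index)  // output-sized argmax offsets
{
    for(std::size_t i = 0; i < dy.size(); ++i)
        dx[index[i]] = dy[i]; // dx[index] = dy (non-overlapping windows)
}
```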
However, if the sliding windows overlap (e.g. window size = 3, window stride = 1), several windows can select the same input element, so we need atomicAdd() to accumulate the gradients. In this case, maxpool backward is
dx[index] += dy
So, my Put kernel lets the user specify a memory operation other than the usual one of setting the value:
MemOp(Dx, Dy, index)
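A sketch of what such a configurable memory operation could look like (a hypothetical illustration, not the actual composable_kernel interface); on the device, the accumulate case would use atomicAdd() rather than a plain +=:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical memory-operation functors selecting between "set" and
// "accumulate" behaviour when scattering dy into dx.
struct MemSet { void operator()(float& dst, float v) const { dst = v; } };
struct MemAdd
{
    // Host-side stand-in; the GPU kernel would call atomicAdd(&dst, v)
    // because overlapping windows make concurrent writes possible.
    void operator()(float& dst, float v) const { dst += v; }
};

template <typename MemOp>
void put(std::vector<float>& dx,
         const std::vector<float>& dy,
         const std::vector<std::size_t>& index,
         MemOp mem_op)
{
    for(std::size_t i = 0; i < dy.size(); ++i)
        mem_op(dx[index[i]], dy[i]);
}

// Usage: put(dx, dy, index, MemSet{}) for non-overlapping windows,
//        put(dx, dy, index, MemAdd{}) when windows overlap.
```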
However, scalar fp16 (fp16x1) and bf16 do not support atomicAdd(). In that case, the Put kernel outputs fp32, and a separate casting kernel is used to cast the fp32 result to fp16 or bf16.
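A sketch of the fallback path for bf16 (the names and the bf16 stand-in type are illustrative, not the library's types): the Put kernel accumulates into an fp32 workspace where atomicAdd is available, then an element-wise kernel casts the result down:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Minimal bf16 stand-in: the upper 16 bits of an IEEE-754 fp32 value.
struct bf16_t { std::uint16_t bits; };

// fp32 -> bf16 by truncation (round-to-nearest-even would add a small
// rounding term before the shift).
inline bf16_t float_to_bf16(float v)
{
    std::uint32_t u;
    std::memcpy(&u, &v, sizeof(u)); // type-pun safely via memcpy
    return bf16_t{static_cast<std::uint16_t>(u >> 16)};
}

// Element-wise casting kernel: converts the fp32 workspace written by
// the Put kernel into the final bf16 output tensor.
void cast_fp32_to_bf16(std::vector<bf16_t>& dx16,
                       const std::vector<float>& dx32)
{
    for(std::size_t i = 0; i < dx32.size(); ++i)
        dx16[i] = float_to_bf16(dx32[i]);
}
```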
Note: this PR assumes that all input and output tensors are packed in memory.