
Add decomposition for aten.native_batch_norm_backward op #675

Open
wants to merge 1 commit into main from gaurav/native_batch_norm_backward

Conversation

Shukla-Gaurav

This commit adds a decomposition for the `aten.native_batch_norm_backward`
op.

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
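For context, the math such a decomposition has to implement can be sketched in plain NumPy (a hypothetical standalone illustration, not the PR's actual code; it assumes training mode, a 2-D `(N, C)` input, and biased variance, matching how batch norm statistics are computed during training):

```python
import numpy as np

def batch_norm(x, weight, bias, eps=1e-5):
    # Forward pass: normalize over the batch dimension, then scale and shift.
    mean = x.mean(axis=0)
    var = x.var(axis=0)  # biased variance, as used in training mode
    x_hat = (x - mean) / np.sqrt(var + eps)
    return x_hat * weight + bias

def batch_norm_backward(grad_out, x, weight, eps=1e-5):
    # Decomposed backward pass for the forward above.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    inv_std = 1.0 / np.sqrt(var + eps)
    x_hat = (x - mean) * inv_std

    grad_bias = grad_out.sum(axis=0)
    grad_weight = (grad_out * x_hat).sum(axis=0)
    # grad_input folds in the dependence of mean and var on x:
    # inv_std * (dy - mean(dy) - x_hat * mean(dy * x_hat)), scaled by weight.
    grad_input = weight * inv_std * (
        grad_out - grad_out.mean(axis=0) - x_hat * (grad_out * x_hat).mean(axis=0)
    )
    return grad_input, grad_weight, grad_bias
```

A real decomposition would additionally handle the eval-mode path (running stats), optional weight/bias, and reduction over all non-channel dims, but the core formulas are the ones above.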

@facebook-github-bot

Hi @Shukla-Gaurav!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@Chillee
Contributor

Chillee commented Apr 8, 2022

I think there are still some tests failing. For example, I don't believe push_back is a method on Python lists :) https://github.com/pytorch/functorch/pull/675/files#diff-8e71d16136c7deeb8104d8d5bb0dd794c208f38856f9ced3448ab02f3e34c43aR522

To run the tests, run

pytest test/test_ops.py::TestDecompositionOpInfoCUDA  -k batch_norm

and

pytest test/test_ops.py::TestDecompositionOpInfoCPU  -k batch_norm
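The push_back slip is the kind of thing that creeps in when porting C++ reduction logic to Python; a minimal illustration of the fix (hypothetical variable names, not the PR's actual code):

```python
# Building the list of dims to reduce over for a 4-D NCHW input,
# skipping the channel dim (dim 1).
reduction_dims = []
for d in range(4):
    if d != 1:
        reduction_dims.append(d)  # Python lists have append, not push_back
```

Here `reduction_dims` ends up as `[0, 2, 3]`; calling `push_back` instead would raise `AttributeError` at runtime, which is exactly what a failing decomposition test would surface.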

@Shukla-Gaurav Shukla-Gaurav force-pushed the gaurav/native_batch_norm_backward branch 4 times, most recently from 5d06c10 to cbeaf7a Compare April 11, 2022 17:18
@Shukla-Gaurav
Author

@Chillee `pytest test/test_ops.py::TestDecompositionOpInfoCPU -k batch_norm` is always passing for me locally.
I follow these steps to test:

  1. pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html --upgrade
  2. python setup.py install
  3. pytest test/test_ops.py::TestDecompositionOpInfoCPU -k batch_norm

Am I doing something wrong?

@Chillee
Contributor

Chillee commented Apr 12, 2022

@Shukla-Gaurav Do you mean that it always passes, regardless of whether your decomposition is correct? Or are you asking why the decompositions are passing even though the CI fails?

If it's the second one, then there are some failures that are unrelated to you :)

@Shukla-Gaurav Shukla-Gaurav force-pushed the gaurav/native_batch_norm_backward branch from cbeaf7a to bbb2eac Compare April 13, 2022 16:10
@Chillee
Contributor

Chillee commented Apr 29, 2022

@Shukla-Gaurav btw, is this mergeable?

@Shukla-Gaurav
Author

@Chillee Apologies for the delay. I will rebase and push. Thanks!

@Shukla-Gaurav
Author

@Chillee It looks like all the decompositions have been moved/removed?

@Chillee
Contributor

Chillee commented May 13, 2022

@Shukla-Gaurav Yes, can you re-open this PR on upstream PyTorch?

@Shukla-Gaurav
Author

> @Shukla-Gaurav Yes, can you re-open this PR on upstream PyTorch?

Sure, will do that :)
