
Better testing plan for batching rule of backward ops #28

Closed
zou3519 opened this issue May 14, 2021 · 2 comments

zou3519 (Contributor) commented May 14, 2021

There's no systematic way right now to test the batching rules for backward ops across all of the edge cases.

Chillee (Contributor) commented May 18, 2021

hmmmm..... does testing grad + our current opinfo vmap tests suffice?

zou3519 self-assigned this May 26, 2021
zou3519 (Contributor, Author) commented May 26, 2021

I'm working on this. I'll probably add testing for all pairs in { vmap, vjp, None } x { vmap, vjp } (grad is really just a special case of vjp, but we didn't implement it that way, so maybe grad needs to go in there too).

This design isn't great because it implies that with N transforms each OpInfo will generate N ** 2 tests, which seems bad for runtime. Maybe in the future we'll test fewer pairs once we have more confidence in the dynamic layering mechanism, or add some "structure" to our transforms so that it's impossible for them not to compose.
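The quadratic growth described above can be sketched with a small pure-Python illustration. This is not the actual functorch test generator, just a sketch of how enumerating all (outer, inner) transform pairs per OpInfo scales; the transform names mirror the sets mentioned in the comment, with None meaning "no outer transform":

```python
from itertools import product

# Transform sets from the proposed plan: { vmap, vjp, None } x { vmap, vjp }.
# None as the outer transform means the inner transform is applied directly.
outer_transforms = ["vmap", "vjp", None]
inner_transforms = ["vmap", "vjp"]

# One generated test per (outer, inner) pair, per OpInfo. With N transforms
# in each set, this grows roughly as N ** 2 composition tests per op.
test_pairs = list(product(outer_transforms, inner_transforms))

for outer, inner in test_pairs:
    name = f"test_{outer or 'plain'}_of_{inner}"
    print(name)

print(len(test_pairs))  # 3 outer x 2 inner = 6 generated tests per OpInfo
```

With a few hundred OpInfos, even this modest 6x multiplier adds up, which is the runtime concern raised in the comment.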

@zou3519 zou3519 closed this as completed in fbcf8ef Jun 3, 2021
zou3519 added a commit to zou3519/pytorch that referenced this issue Jul 20, 2022
Plus some refactoring of the vmap testing to reuse functions between all
of the mentioned tests.

Fixes pytorch/functorch#28.
mikekgfb pushed a commit to mikekgfb/pytorch that referenced this issue Jul 21, 2022
Plus some refactoring of the vmap testing to reuse functions between all
of the mentioned tests.

Fixes pytorch/functorch#28.