[generate_vmap_rule] Delete unused output_shapes #92024
Conversation
We don't actually need `output_shapes` to implement `generate_vmap_rule=True` support for autograd.Function.

- We need this in the vjp (backward) case because autograd automatically reduces grad_inputs to inputs and we need to replicate that behavior. In order to replicate that behavior, we recorded the original input shapes so we know how to reduce the grad_input.
- There is no such behavior for forward-mode AD, so we don't need to pass an `output_shapes` to reductify.

This PR simplifies the API of `reductify` and `reductify_leaf`. Instead of accepting `input_shape_without_bdim` and `allow_expanded_grad`, we now combine these into a single argument, `reduce_to_input_shape_without_bdim`:

- if it is None, then we don't do anything
- if it is not None and a shape, then we will reduce the grad to the provided shape

Test Plan:
- updated original unittests
- wait for test suite
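For context, a minimal sketch of what "reducing the grad_input to the input's shape" means here: dimensions that broadcasting added or expanded in the forward pass get summed away so the gradient matches the original input shape. The helper name `reduce_grad_to_shape` is hypothetical and not the functorch implementation; PyTorch's built-in `Tensor.sum_to_size` provides comparable behavior.

```python
import torch

def reduce_grad_to_shape(grad, input_shape):
    # Hypothetical helper (illustration only): reduce a broadcast-expanded
    # gradient back down to the original input shape, the same kind of
    # reduction autograd performs on grad_inputs in the backward pass.
    while grad.dim() > len(input_shape):
        grad = grad.sum(dim=0)          # sum away dims added by broadcasting
    for dim, size in enumerate(input_shape):
        if size == 1 and grad.shape[dim] != 1:
            grad = grad.sum(dim=dim, keepdim=True)  # sum dims expanded from size 1
    return grad

# Example: an input of shape (3, 1) was broadcast to (2, 3, 4) in forward,
# so the incoming grad of shape (2, 3, 4) must be reduced back to (3, 1).
grad = torch.ones(2, 3, 4)
assert reduce_grad_to_shape(grad, (3, 1)).shape == (3, 1)
```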
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/92024
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit d281147.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Good catch! Have a minor comment on the naming of the flag.
```diff
-def reductify(grad_input, grad_input_bdim, input_bdim, input_shape_without_bdim, batch_size,
-              allow_expanded_grad=True):
+def reductify(grad_input, grad_input_bdim, input_bdim, batch_size,
+              reduce_to_input_shape_without_bdim=None):
```
nit: "reduce_to_input_shape_without_bdim" sounds like a bool because it begins with a verb (when it is supposed to be a shape). Just to shuffle that around, maybe "{target,input}_shape_without_bdim{_to_reduce_to,}" is better?