Added batch rules for _upsample_bi*2d_aa and _upsample_bi*2d_aa_backward #110172
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/110172
Note: Links to docs will display an error until the docs builds have been completed.
⏳ No Failures, 1 Pending, as of commit 4f4dc11 with merge base 0511df0.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from 494d2b3 to a4584cc (Compare)
Description:
- Added batch rules for _upsample_bi*2d_aa and _upsample_bi*2d_aa_backward
- Added a few more test cases to sample_inputs_upsample_aten
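A minimal usage sketch of what the new batch rules enable (not part of the PR; the shapes and the downscale helper are illustrative, and it assumes F.interpolate with antialias=True dispatches to aten::_upsample_bilinear2d_aa):

```python
import torch
import torch.nn.functional as F
from torch.func import vmap

def downscale(img):
    # img: (C, H, W); interpolate expects a batch dim, so add and remove one.
    out = F.interpolate(img.unsqueeze(0), size=(32, 32), mode="bilinear",
                        align_corners=False, antialias=True)
    return out.squeeze(0)

images = torch.rand(8, 3, 64, 64)   # a batch of images (illustrative shapes)
out = vmap(downscale)(images)       # vectorized over the leading dim by the batch rule
print(out.shape)                    # torch.Size([8, 3, 32, 32])
```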
Force-pushed from a4584cc to 4f4dc11 (Compare)
| "aten::to.dtype_layout", | ||
| "aten::to.other", | ||
| "aten::upsample_bicubic2d.vec", | ||
| "aten::_upsample_bicubic2d_aa.vec", |
I see that this is consistent with the batching rules for other upsampling ops, but given that these ops are composite, I think it would make sense to decompose them instead of adding a batching rule (i.e. adding them to BatchRulesDecomposition). I am curious to know whether decomposing works.
If it works, we should decompose the existing ones as well.
cc: @zou3519
Thanks!
Ref: pytorch/aten/src/ATen/native/native_functions.yaml, lines 12191 to 12193 in c71a64c:

- func: _upsample_bicubic2d_aa.vec(Tensor input, SymInt[]? output_size, bool align_corners, float[]? scale_factors) -> Tensor
  python_module: nn
  autogen: _upsample_bicubic2d_aa.vec_out
Thanks Kshiteej, if I understand correctly the idea is to do

// in BatchRulesModules.cpp
EXISTING_BDIM(_upsample_bilinear2d_aa);
// and in BatchRulesDecompositions.cpp
OP_DECOMPOSE2(_upsample_bilinear2d_aa, vec);

However, this leads to a compilation problem coming from a SymInt vs long type incompatibility: the native function declares one argument as SymInt[]?, but the implementation uses OptionalIntArrayRef instead of OptionalSymIntArrayRef. Adapting the signature everywhere would require significant changes in this function and also in underlying functions like compute_output_size, which is also used in quantized upsampling. I propose to explore that and apply it in a follow-up PR.
I see, thanks for looking into it.
@vfdev-5, are you planning on doing this follow-up? This PR means that a single vmap on this operator works correctly, but compositions of vmap and grad won't
yes, let me try to do that.
LGTM, thanks @vfdev-5
@pytorchbot merge
Merge failed. Reason: Approval needed from one of the following:
OK, but we should really decompose it; otherwise it doesn't support compositions of vmap+grad.
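As a hedged illustration (not from the PR; names and shapes are made up), this is the kind of vmap+grad composition the comment refers to:

```python
import torch
import torch.nn.functional as F
from torch.func import grad, vmap

def loss(img):
    # img: (C, H, W) -> scalar, going through the anti-aliased bilinear op.
    out = F.interpolate(img.unsqueeze(0), size=(16, 16), mode="bilinear",
                        align_corners=False, antialias=True)
    return out.sum()

images = torch.rand(4, 3, 32, 32)
# Per the discussion above, the batch rule covers a single vmap; decomposing the
# composite .vec overload is what makes compositions like this one work as well.
per_sample_grads = vmap(grad(loss))(images)
print(per_sample_grads.shape)       # torch.Size([4, 3, 32, 32])
```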
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Follow-up PR to #110172. Pull Request resolved: #110333. Approved by: https://github.com/zou3519