[inductor] Added smooth_l1_loss refs #102077
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/102077
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit c6ade94.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from a11e917 to 9a728ab
Just a minor point, otherwise this LGTM. Approved contingent on that and the tests passing.
Force-pushed from 827078e to 46b3cb8
```python
reduction = _get_string_reduction_arg(size_average=size_average, reduce=reduce)
_check_reduction_value(reduction)

if beta == 0.0:
```
why do you need this conditional?
Python nn functional API does the same:
pytorch/torch/nn/functional.py, lines 3242 to 3245 in 76af221:
```python
if beta == 0.0:
    return torch._C._nn.l1_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
else:
    return torch._C._nn.smooth_l1_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction), beta)
```
I don't think we have enough logic to optimise this out in practice, as it would mean that we need to prove that `(input - target).abs()` is non-negative. The conditional is alright for now.
wdym? `.abs()` is non-negative. The functional API does this due to some numeric discrepancies in backward; that doesn't apply here.
Ok, my point simply comes from a perf perspective, where we would be computing both branches of the `where` and just using one, but LLVM should probably be able to catch the `< 0` after `.abs()` and optimise it out.

That being said, I still think that keeping this closer to core is better, as we could think of eventually registering this operation and simply differentiating through it to get its backward. This `beta == 0` specialisation would make sure that this works in that case, as it does in master.
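For readers following the thread, here is a minimal sketch of the kind of decomposition being discussed. The function name `smooth_l1_loss_ref` and its reduction handling are illustrative assumptions, not the exact code added in this PR: with `beta == 0` the op specialises to an L1 loss, otherwise it is a `where` between the quadratic and linear branches.

```python
import torch

def smooth_l1_loss_ref(input, target, reduction="mean", beta=1.0):
    # Illustrative decomposition only; the actual ref added by the PR may differ.
    if beta == 0.0:
        # Specialise to plain L1, mirroring torch.nn.functional.smooth_l1_loss.
        loss = torch.abs(input - target)
    else:
        diff = torch.abs(input - target)
        # Quadratic branch below beta, linear branch at or above it.
        loss = torch.where(diff < beta, 0.5 * diff * diff / beta, diff - 0.5 * beta)
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss  # reduction == "none"

# Quick consistency check against the eager implementation.
x, y = torch.randn(8), torch.randn(8)
for beta in (0.0, 1.0):
    expected = torch.nn.functional.smooth_l1_loss(x, y, beta=beta)
    assert torch.allclose(smooth_l1_loss_ref(x, y, beta=beta), expected)
```

One reason the eager functional API special-cases `beta == 0.0` is that the quadratic branch divides by `beta`, so evaluating both sides of the `where` (and differentiating through them) would otherwise produce inf/nan; the specialisation sidesteps that entirely.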
Force-pushed from 46b3cb8 to c6ade94
@pytorchbot merge

Merge failed. Reason: This PR needs a label before it can be merged; to add a label, you can comment to pytorchbot. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Added `smooth_l1_loss` to refs + tests

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @lezcano