Add complex support for torch.nn.L1Loss #46640
Conversation
ghstack-source-id: d3a1d45b86cff29790b5aeb0b590cb3b54f49a5b Pull Request resolved: #46640
  return apply_loss_reduction(loss, reduction);
}

Tensor& l1_loss_out(Tensor& result, const Tensor& input, const Tensor& target, int64_t reduction) {
  auto diff = input.sub(target);
This is interesting. In the previous code, `input - target` is out of place if `Reduction::None`, but in place otherwise. With this change it will always be out of place. I'm not sure why that distinction existed previously, but it seems like this change could preserve the in-place subtraction in the `else` clause?
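The distinction the review raises can be sketched in numpy rather than ATen (an illustrative assumption; the real code operates on `Tensor`s): only the un-reduced path hands the intermediate back to the caller, so only there must the ops stay out of place.

```python
import numpy as np

def l1_loss(input, target, reduction="mean"):
    # Out-of-place subtraction: leaves `input` and `target` untouched.
    diff = input - target
    if reduction == "none":
        # The un-reduced result is returned to the caller, so it must not
        # alias the inputs; out of place is required on this path.
        return np.abs(diff)
    # For "mean"/"sum" the intermediate is private to this function, so the
    # absolute value can safely be taken in place -- this is the kind of
    # in-place step the review suggests keeping in the else branch.
    np.abs(diff, out=diff)
    return diff.mean() if reduction == "mean" else diff.sum()

a = np.array([1.0, -2.0, 3.0])
b = np.array([0.0, 1.0, 1.0])
print(l1_loss(a, b, "none"))  # [1. 3. 2.]
print(l1_loss(a, b, "sum"))   # 6.0
```

Whether the in-place variant is worth the branch is a performance question; functionally both paths produce the same values.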
@@ -7056,11 +7056,12 @@ def test_pointwise_loss_broadcast(self):
    # https://github.com/pytorch/pytorch/issues/27692 reports
    # that l1_loss gets a wrong result for big batch sizes
    def test_l1_loss_correct(self):
Is there no other L1 loss test that should be updated? This is making me think we should (eventually) create a `test_losses.py`.
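A standalone version of the kind of regression check `test_l1_loss_correct` performs might look like the following numpy sketch: for a large batch, the reduced L1 loss computed in float32 should agree with a float64 reference (issue #27692 reported a mismatch at large batch sizes). The shapes and tolerance here are illustrative assumptions, not the values from the actual test.

```python
import numpy as np

rng = np.random.default_rng(0)
# A deliberately large batch, mimicking the conditions of issue #27692.
x = rng.standard_normal((60000, 3)).astype(np.float32)
y = np.zeros_like(x)

# Mean-reduced L1 loss in float32 vs. a float64 reference.
loss32 = np.abs(x - y).mean()
loss64 = np.abs(x.astype(np.float64) - y.astype(np.float64)).mean()
assert abs(float(loss32) - float(loss64)) < 1e-4
```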
Summary: Building on top of the work of anjali411 (#46640). Things added in this PR:
1. Modify backward and double-backward formulas
2. Add complex support for `new module tests` and criterion tests (and add complex tests for L1)
3. Modify some existing tests to support complex

Pull Request resolved: #49912
Reviewed By: zhangguanheng66
Differential Revision: D25853036
Pulled By: soulitzer
fbshipit-source-id: df619f1b71c450ab2818eb17804e0c55990aa8ad
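For intuition about what the complex formulas compute, here is a hedged numpy sketch (not the PR's actual kernels): for complex inputs the elementwise loss is the complex modulus `|input - target|`, and the backward formula below assumes the common convention that the derivative of `|z|` is `sgn(z) = z/|z|` with `sgn(0) = 0`.

```python
import numpy as np

def l1_loss_forward(input, target):
    # Complex L1 loss: elementwise modulus of the difference, mean-reduced.
    return np.abs(input - target).mean()

def l1_loss_backward(input, target):
    # Gradient w.r.t. `input` for the mean-reduced loss, taking
    # d|z|/dz to be sgn(z) = z/|z| (and 0 where the modulus is 0).
    diff = input - target
    mod = np.abs(diff)
    sgn = np.divide(diff, mod, out=np.zeros_like(diff), where=mod != 0)
    return sgn / diff.size

z = np.array([3 + 4j, 1 - 1j])
t = np.zeros(2, dtype=complex)
print(l1_loss_forward(z, t))   # (5 + sqrt(2)) / 2
print(l1_loss_backward(z, t))  # [0.3+0.4j, ...]
```

Note that for purely real inputs `sgn` degenerates to the usual `sign`, so this reduces to the familiar real-valued L1 gradient.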
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as Stale.
Stack from ghstack:

TODO:
- `l1_loss_backward`