swapped uses of torch.norm with torch.linalg.norm as per deprecation #44796
Conversation
The `p` kwargs need to be updated to `ord`. Otherwise, looks good.
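A minimal sketch of the rename the review asks for (the tensor `x` is just a stand-in, not from the PR):

```python
import torch

x = torch.randn(5)

# torch.norm spells the order argument as `p`...
old = torch.norm(x, p=3)

# ...while torch.linalg.norm spells it as `ord`.
new = torch.linalg.norm(x, ord=3)

assert torch.allclose(old, new)
```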
Still lots of failed tests. Surely this couldn't have come from changing `p` to `ord`, as the underlying functionality is unchanged?
Sorry for the late reply, I wasn't getting notifications on this PR for some reason. The functionality had to be changed in some cases because `torch.norm` flattens its input and computes a vector norm by default, whereas `torch.linalg.norm` computes a matrix norm when given a 2-D input.
Also, the matrix norm can only be calculated with a few possible values of `ord`. It looks like most of the test failures are due to the added matrix norm behavior; comparing the documentation for `torch.norm` and `torch.linalg.norm` makes the difference clear.
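To make the behavioral difference concrete, a small illustration (mine, not from the PR); `torch.linalg.svdvals` is used only to verify the result:

```python
import torch

A = torch.randn(3, 4)

# torch.norm flattens the input and computes a vector 2-norm
# (numerically the Frobenius norm when p=2).
vec = torch.norm(A, p=2)
assert torch.allclose(vec, A.flatten().norm())

# torch.linalg.norm treats a 2-D input as a matrix: ord=2 is the
# spectral norm, i.e. the largest singular value.
mat = torch.linalg.norm(A, ord=2)
assert torch.allclose(mat, torch.linalg.svdvals(A)[0])
```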
Perhaps it would be helpful if I add a short guide to the docs on migrating from `torch.norm` to `torch.linalg.norm`.
When the 2-D norm differs, I suppose the reference value should be changed to match the matrix norm, not the test one, right? I suppose using flatten on the test value would be testing the vector norm instead of the matrix norm and therefore not be a useful test? Also, when the matrix norm fails because the order doesn't make any sense (e.g. ord=0.5), would we then flatten just so that it works, or do we remove the test case? e.g.: https://github.com/pytorch/pytorch/blob/master/test/test_nn.py#L1715
Very short, yes please.
No, I think we really do want vector norms in these cases. These test failures are coming from operations that were implemented with `torch.norm`'s flattened vector-norm behavior in mind. For instance, in the case of the test you linked, the norm is meant to be taken over all elements of the tensor, so flattening preserves the original intent.
Yes, flattening is the right way to go here. I'll submit an issue and start writing a migration guide.
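A sketch of the flattening approach being agreed on here; the helper name is made up for illustration:

```python
import torch

def flat_norm(t, ord):
    # Reproduce torch.norm's old default behavior under torch.linalg.norm:
    # treat the whole tensor as one long vector.
    return torch.linalg.norm(t.flatten(), ord=ord)

t = torch.randn(4, 5)

# Works even for orders that are invalid as matrix norms, e.g. ord=0.5.
assert torch.allclose(flat_norm(t, 0.5), torch.norm(t, p=0.5))
```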
Thanks! BTW, regarding the difference between the two functions and which norm they compute on 2 dims, I think the docs say that they basically do the exact same thing, am I wrong? From: https://pytorch.org/docs/master/linalg.html?highlight=norm#torch.linalg.norm
From: https://pytorch.org/docs/master/generated/torch.norm.html?highlight=norm#torch.norm
Am I losing it? Or are the docs wrong?
You're right, the documentation is misleading. I think the "matrix norm" that `torch.norm`'s documentation mentions really only applies to the 'fro' and 'nuc' orders; for numeric values of `p` it actually computes a vector norm over the flattened input.
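A quick check of that reading (my own, not from the thread), again using `torch.linalg.svdvals` only for verification:

```python
import torch

A = torch.randn(3, 3)

# For numeric p, torch.norm ignores the matrix structure entirely:
assert torch.allclose(torch.norm(A, p=2), A.flatten().norm(p=2))

# Only the string orders act as genuine matrix norms, e.g. the
# nuclear norm is the sum of the singular values:
assert torch.allclose(torch.norm(A, p='nuc'), torch.linalg.svdvals(A).sum())
```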
Yes, I can see that now, thanks! Regardless of what the docs say, it's clear what the behaviour is, so I need to fix the tests.
What about the definition of torch.Tensor.norm? https://pytorch.org/docs/master/tensors.html?highlight=norm#torch.Tensor.norm
It will still be `torch.norm` for the foreseeable future.
Closing due to lack of updates.
Fixes #{issue number}