
Conversation

@kurtamohler
Collaborator

@kurtamohler commented Jan 19, 2021

Also upgrades linalg.norm's autograd and jit tests to OpInfo

Fixes #48842
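
Roughly, the migration means linalg.norm gets an entry in the OpInfo database and the generic autograd/JIT test suites pick it up automatically. Below is a minimal sketch of what such an entry can look like, assuming the OpInfo and SampleInput helpers from torch.testing._internal.common_methods_invocations; the constructor arguments shown are illustrative, not this PR's exact code.

```python
import torch
from torch.testing._internal.common_methods_invocations import OpInfo, SampleInput

def sample_inputs_linalg_norm(op_info, device, dtype, requires_grad):
    # Cover both vector and matrix inputs with representative ord values.
    def make(*shape):
        return torch.randn(*shape, device=device, dtype=dtype,
                           requires_grad=requires_grad)
    return [
        SampleInput(make(4)),                                   # vector, default 2-norm
        SampleInput(make(3, 3), kwargs={'ord': 'fro'}),         # Frobenius norm
        SampleInput(make(3, 3), kwargs={'ord': float('inf')}),  # max absolute row sum
    ]

# The generic suites (gradcheck/gradgradcheck, JIT tracing and scripting)
# iterate over every entry in the OpInfo database, so adding an entry like
# this replaces the hand-written linalg.norm autograd and jit tests.
linalg_norm_opinfo = OpInfo(
    'linalg.norm',
    op=torch.linalg.norm,
    dtypes=(torch.float32, torch.float64),
    sample_inputs_func=sample_inputs_linalg_norm,
)
```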

@kurtamohler force-pushed the pytorch-matrix-norm-autograd-tests branch 2 times, most recently from f4b6f9e to bc8d118 on January 19, 2021 at 19:41
@codecov

codecov bot commented Jan 19, 2021

Codecov Report

Merging #50746 (bc8d118) into master (8b501df) will decrease coverage by 0.14%.
The diff coverage is 88.02%.

@@            Coverage Diff             @@
##           master   #50746      +/-   ##
==========================================
- Coverage   80.66%   80.51%   -0.15%     
==========================================
  Files        1913     1913              
  Lines      208091   208151      +60     
==========================================
- Hits       167849   167586     -263     
- Misses      40242    40565     +323     

@kurtamohler force-pushed the pytorch-matrix-norm-autograd-tests branch 2 times, most recently from df54937 to 2a9cfed on January 19, 2021 at 23:57
@kurtamohler
Collaborator Author

I've replaced linalg.norm's autograd and jit tests with an OpInfo entry. It should be ready for review.
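
For reference, the autograd coverage the OpInfo entry drives boils down to running gradcheck over each sampled input. A standalone sketch of the same check (not the PR's test code), using double precision so gradcheck's numeric comparisons are reliable:

```python
import torch

# Approximate what the OpInfo-driven autograd tests verify: every
# matrix-norm ord should have a working backward formula.
a = torch.randn(3, 3, dtype=torch.double, requires_grad=True)
for ord in ('fro', 'nuc', float('inf'), float('-inf'), 1, -1, 2, -2):
    torch.autograd.gradcheck(lambda x: torch.linalg.norm(x, ord=ord), (a,))
```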

Contributor

@anjali411 left a comment

Left some minor comments, but it looks good otherwise :D Thank you, and let me know once you link the issue for pow inline.

@kurtamohler force-pushed the pytorch-matrix-norm-autograd-tests branch from 2a9cfed to 2e52e1d on January 20, 2021 at 03:58
@kurtamohler
Collaborator Author

Thanks @anjali411, I've made those changes. I made an issue for pow and for max.

CI was showing a failing mypy test related to my changes. I'm not sure yet whether it's fixed; we'll see.

Contributor

@facebook-github-bot left a comment

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@anjali411
Contributor

Hey @kurtamohler, could you rebase? I think there might be merge conflicts due to #50667, which landed earlier today.

@kurtamohler
Collaborator Author

@anjali411, done. If there were conflicts, git handled them automatically for me.

Contributor

@facebook-github-bot left a comment

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@anjali411 merged this pull request in c082e21.

facebook-github-bot pushed a commit that referenced this pull request on Jan 28, 2021:
Summary:
Follow-up to #50746

I accidentally missed the `ord=-inf` case in the OpInfo for `torch.linalg.norm` when I wrote it.

Pull Request resolved: #51233

Reviewed By: malfet

Differential Revision: D26117160

Pulled By: anjali411

fbshipit-source-id: af921c1d8004783612b3a477ae2025a82860ff4e
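
For a matrix input, ord=-inf is the minimum absolute row sum. A quick illustration of the case the follow-up adds (illustrative code, not the actual test):

```python
import torch

a = torch.randn(3, 3, dtype=torch.double, requires_grad=True)

# For matrices, ord=-inf is the minimum over rows of each row's L1 norm.
norm = torch.linalg.norm(a, ord=float('-inf'))
assert torch.isclose(norm, a.abs().sum(dim=1).min())

# The follow-up makes sure this ord value is exercised by the generic
# autograd tests as well.
torch.autograd.gradcheck(
    lambda x: torch.linalg.norm(x, ord=float('-inf')), (a,))
```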