Enable torch.autograd typechecks #44451
Conversation
@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
💊 CI failures summary and remediations
As of commit 20e76c5 (more details on the Dr. CI page):
2 failures confirmed as flaky and can be ignored (ci.pytorch.org: 1 failed).
Codecov Report

@@           Coverage Diff           @@
##           master    #44451  +/-  ##
=========================================
  Coverage        ?    68.05%
=========================================
  Files           ?       382
  Lines           ?     49468
  Branches        ?         0
=========================================
  Hits            ?     33666
  Misses          ?     15802
  Partials        ?         0
torch/autograd/__init__.py
(outdated diff)
-    grad_tensors = _make_grads(tensors, grad_tensors)
+    grad_tensors__ = _make_grads(tensors, grad_tensors_)
I'm not convinced this helps code readability... Maybe just name one `grad_tensors` and the other `grad_tensors_filled`/`populated`?
Also, the result type of `_make_grads` is the same, no? Why can't you just re-use the same variable as before?
That's a good suggestion, let me rename just that.
And the types are not exactly the same: `grad_tensors` is constructed as a list of nullable tensors, but `_make_grads` returns a tuple. But let me change the code a bit to actually construct a tuple here as well.
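To illustrate the typing point being discussed, here is a minimal sketch. `_make_grads_sketch` and `FakeTensor` are hypothetical stand-ins for the real torch internals; the point is that building the incoming gradients as a tuple up front keeps the variable's type consistent before and after the call, so a type checker accepts the reassignment:

```python
from typing import Optional, Sequence, Tuple


class FakeTensor:
    """Stand-in for torch.Tensor in this sketch."""
    pass


def _make_grads_sketch(
    outputs: Sequence[FakeTensor],
    grads: Sequence[Optional[FakeTensor]],
) -> Tuple[Optional[FakeTensor], ...]:
    # Fill in a default gradient wherever None was passed, and always
    # return a tuple (the type the real _make_grads is described as
    # returning in the discussion above).
    return tuple(g if g is not None else FakeTensor() for g in grads)


outputs = [FakeTensor(), FakeTensor()]
# Construct the incoming grads as a tuple of Optional tensors, so the
# variable holds the same container type before and after the call:
grad_tensors_: Tuple[Optional[FakeTensor], ...] = (None, FakeTensor())
grad_tensors_ = _make_grads_sketch(outputs, grad_tensors_)

assert isinstance(grad_tensors_, tuple)
assert all(isinstance(g, FakeTensor) for g in grad_tensors_)
```

Had `grad_tensors_` started life as a `List[Optional[FakeTensor]]`, reassigning the tuple result to the same name would change its inferred type, which is exactly the kind of mismatch mypy flags.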
Force-pushed from 8ee2da2 to 2aa7cb7 (compare)
It solves the missing-types problem when one tries to type-annotate autograd
Force-pushed from 2aa7cb7 to 20e76c5 (compare)
@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
To help with further typing, move dynamically added native contributions from `torch.autograd` to `torch._C._autograd`, and import all native methods/types in `torch/autograd/__init__.py` after `torch._C._init_autograd()` has been called.

Fix an invalid error handling pattern in `pytorch/torch/csrc/autograd/init.cpp` (lines 13 to 15 in 89ac30a): `PyImport_ImportModule` already raises a Python exception, and `nullptr` should be returned to properly propagate it to the Python runtime.

Use f-strings instead of `.format` in test_type_hints.py.

Fixes #44450
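The `.format`-to-f-string change mentioned above is mechanical; a generic example (the message text here is made up for illustration, not taken from test_type_hints.py):

```python
lineno, message = 42, "missing annotation"

# Old style, using str.format:
old = "{}: {}".format(lineno, message)

# New style, using an f-string — same output, names inline with the text:
new = f"{lineno}: {message}"

assert old == new == "42: missing annotation"
```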