
Enable torch.autograd typechecks #44451

Closed

Conversation

malfet
Contributor

@malfet malfet commented Sep 10, 2020

To help with further typing, move dynamically added native contributions from torch.autograd to torch._C._autograd
Fix an invalid error-handling pattern in:

auto tensor_module = THPObjectPtr(PyImport_ImportModule("torch.tensor"));
if (!tensor_module)
throw python_error();

PyImport_ImportModule already raises a Python exception, so nullptr should be returned instead to properly propagate the error to the Python runtime.
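The corrected pattern can be sketched as follows. This is a generic C++ model, not the actual PyTorch code: `fake_import`, `Module`, and the `pending_error` flag are stand-ins for `PyImport_ImportModule`, a Python module object, and the CPython error indicator.

```cpp
#include <cstddef>

// Stand-ins for the CPython pieces involved (assumptions, not the real API):
// pending_error models the Python error indicator, and fake_import models
// PyImport_ImportModule, which sets the indicator and returns nullptr on failure.
static bool pending_error = false;
struct Module {};
static Module the_module;

Module* fake_import(const char* /*name*/, bool fail) {
    if (fail) {
        pending_error = true;  // like PyErr_SetString inside the importer
        return nullptr;
    }
    return &the_module;
}

// Corrected pattern from the PR description: since the importer has already
// raised the Python exception, propagate nullptr instead of throwing a fresh
// C++ exception, which would obscure the original Python error.
Module* load_tensor_module(bool fail) {
    Module* mod = fake_import("torch.tensor", fail);
    if (!mod)
        return nullptr;  // error indicator already set; let it propagate
    return mod;
}
```

The key point is that the failure path adds no new error of its own; it only forwards the nullptr so the caller (ultimately the Python runtime) sees the exception the importer already set.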

Alias all native methods/types in torch/autograd/__init__.py after torch._C._init_autograd() has been called
Use f-strings instead of .format in test_type_hints.py
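The f-string change is a mechanical rewrite; as an illustration (the variable names here are made up, not taken from test_type_hints.py):

```python
stub_path = "torch/_C/__init__.pyi"
line_no = 42

# Before: str.format-style interpolation
msg = "Error in {} at line {}".format(stub_path, line_no)

# After: the equivalent f-string (PEP 498), which inlines the names
msg_f = f"Error in {stub_path} at line {line_no}"

# Both produce the same string
assert msg == msg_f
```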
Fixes #44450

@malfet malfet added module: typing Related to mypy type annotations triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Sep 10, 2020
@malfet malfet changed the title Malfet/move autograd native contributions Enable torch.autograd typechecks Sep 10, 2020
Contributor

@facebook-github-bot facebook-github-bot left a comment


@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@dr-ci

dr-ci bot commented Sep 10, 2020

💊 CI failures summary and remediations

As of commit 20e76c5 (more details on the Dr. CI page):


  • 1/1 failures possibly introduced in this PR
    • 1/1 non-CircleCI failure(s)

2 failures confirmed as flaky and can be ignored:

  • pytorch_windows_vs2019_py36_cuda11.0_build
  • pytorch_windows_vs2019_py36_cuda10.1_build

ci.pytorch.org: 1 failed



@codecov

codecov bot commented Sep 10, 2020

Codecov Report

❗ No coverage uploaded for pull request base (master@f9a0d0c).
The diff coverage is n/a.


@@            Coverage Diff            @@
##             master   #44451   +/-   ##
=========================================
  Coverage          ?   68.05%           
=========================================
  Files             ?      382           
  Lines             ?    49468           
  Branches          ?        0           
=========================================
  Hits              ?    33666           
  Misses            ?    15802           
  Partials          ?        0           

Powered by Codecov. Last update f9a0d0c...20e76c5. Read the comment docs.


-    grad_tensors = _make_grads(tensors, grad_tensors)
+    grad_tensors__ = _make_grads(tensors, grad_tensors_)
Collaborator


I'm not convinced this helps for code readability...
Maybe just name one grad_tensors and the other grad_tensors_filled/populated ?

Also the result type of _make_grads is the same no? Why can't you just re-use the same variable as before?

Contributor Author


That's a good suggestion, let me rename just that.
And the types are not exactly the same:
grad_tensors is constructed as a list of nullable tensors, but _make_grads returns a tuple.
But let me change the code a bit to actually construct a tuple here as well.

torch/csrc/autograd/init.cpp Show resolved Hide resolved
@malfet malfet force-pushed the malfet/move-autograd-native-contributions branch from 8ee2da2 to 2aa7cb7 Compare September 10, 2020 15:03
@malfet malfet force-pushed the malfet/move-autograd-native-contributions branch from 2aa7cb7 to 20e76c5 Compare September 10, 2020 17:01
Contributor

@facebook-github-bot facebook-github-bot left a comment


@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@malfet malfet deleted the malfet/move-autograd-native-contributions branch September 10, 2020 20:45
@facebook-github-bot
Contributor

@malfet merged this pull request in 4bead64.

Labels
Merged module: typing Related to mypy type annotations triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Development

Successfully merging this pull request may close these issues.

Enable torch.autograd typechecks during CI
6 participants