
Remove hacky_wrapper from VariableType and TraceType #44005

Closed
wants to merge 5 commits

Conversation

smessmer
Contributor

@smessmer smessmer commented Sep 2, 2020

Stack from ghstack:

Previously, VariableType and TraceType kernels were still written in the legacy style: they took a single TensorOptions argument instead of scattered dtype, layout, device, and pin_memory arguments, and relied on hacky_wrapper to be callable. This caused a re-wrapping step: calling into a variable or tracing kernel required taking the individual scattered arguments and packing them into a TensorOptions, and the kernel itself then likely re-dispatched, scattering those again into individual arguments.

With this PR, variable and tracing kernels are written in the new style, and neither hacky_wrapper nor the re-wrapping step is needed for them.

This only affects ops with `use_c10_dispatcher: full`.

Differential Revision: D23466042

smessmer added a commit that referenced this pull request Sep 2, 2020
ghstack-source-id: 111210090
Pull Request resolved: #44005
@codecov

codecov bot commented Sep 2, 2020

Codecov Report

Merging #44005 into gh/smessmer/252/base will increase coverage by 0.00%.
The diff coverage is n/a.


@@                  Coverage Diff                  @@
##           gh/smessmer/252/base   #44005   +/-   ##
=====================================================
  Coverage                 68.06%   68.06%           
=====================================================
  Files                       393      393           
  Lines                     50918    50918           
=====================================================
+ Hits                      34655    34657    +2     
+ Misses                    16263    16261    -2     
Impacted Files Coverage Δ
torch/utils/_benchmark/utils/common.py 79.33% <0.00%> (+1.65%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update dc67b47...b68b323.

smessmer added a commit that referenced this pull request Sep 2, 2020
Pull Request resolved: #44005

ghstack-source-id: 111266814

Differential Revision: [D23466042](https://our.internmc.facebook.com/intern/diff/D23466042/)
@dr-ci

dr-ci bot commented Sep 2, 2020

💊 CI failures summary and remediations

As of commit b68b323 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Contributor

@bhosmer bhosmer left a comment


A few minor things inline, plus one thing about tracing TensorOptions (I think this relates to a semantics issue you mentioned in chat, but I thought the answer was that we could scatter them into addInput()).

Also I'd add to the PR description that the modification is specifically for use_c10_dispatcher: full.

Note: I think both build failures come down to

Sep 02 19:19:41 -- Build files have been written to: /var/lib/jenkins/workspace/build_test_custom_build/predictor
Sep 02 19:19:41 + make
Sep 02 19:19:41 Scanning dependencies of target Predictor
Sep 02 19:19:41 [ 50%] Building CXX object CMakeFiles/Predictor.dir/predictor.cpp.o
Sep 02 19:19:41 [100%] Linking CXX executable Predictor
Sep 02 19:19:44 [100%] Built target Predictor
Sep 02 19:19:44 + run_predictor
Sep 02 19:19:44 + cd /var/lib/jenkins/workspace/build_test_custom_build/predictor
Sep 02 19:19:44 + ./Predictor /var/lib/jenkins/workspace/build_test_custom_build/MobileNetV2.pt
Sep 02 19:19:44 terminate called after throwing an instance of 'torch::jit::ErrorReport'
Sep 02 19:19:44   what():  
Sep 02 19:19:44 
Sep 02 19:19:44 aten::_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled) -> (Tensor):
Sep 02 19:19:44 Expected at most 12 arguments but found 13 positional arguments.

tools/autograd/gen_variable_type.py — 5 review threads (resolved)
@facebook-github-bot
Contributor

This pull request has been merged in 043bd51.
