
[quant][fx] Update name of packed weight attributes #51259

Closed
wants to merge 4 commits into gh/supriyar/217/base from gh/supriyar/217/head

Conversation

supriyar (Contributor) commented Jan 28, 2021

Stack from ghstack:

Summary:

Store the FQN of the module that uses the packed weights (i.e., the module the quantized op is created from), so that the packed weight attributes can be named after the module path; a minimal sketch of the mapping update follows this description.

In the case of fusion, we update the scope mapping to store the module path of the fused node.

Test Plan:
python test/test_quantization.py test_packed_weight_fused_op

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D26117964](https://our.internmc.facebook.com/intern/diff/D26117964)
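
For illustration, here is a minimal, self-contained sketch of the idea; the helper name `record_fused_node`, the example module paths, and the `_packed_weight_0` suffix are assumptions for illustration, not the actual torch.quantization internals. The scope map keeps the FQN of the module that owns the packed weights, and a fused node inherits the path of its root (conv) module:

    from typing import Dict, Tuple

    # node name -> (module FQN, module type name)
    node_name_to_scope: Dict[str, Tuple[str, str]] = {
        "conv1": ("features.conv1", "Conv2d"),
        "relu1": ("features.relu1", "ReLU"),
    }

    def record_fused_node(fused_name: str, root_name: str) -> None:
        # On fusion, the fused node inherits the module path of the node
        # whose packed weights it uses (here: the conv).
        node_name_to_scope[fused_name] = node_name_to_scope[root_name]

    record_fused_node("conv_relu1", "conv1")

    # The packed weight attribute of the quantized fused op can then be
    # named after the owning module's FQN (suffix is illustrative only).
    fqn, _ = node_name_to_scope["conv_relu1"]
    print(fqn + "._packed_weight_0")  # -> features.conv1._packed_weight_0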

facebook-github-bot (Contributor) commented Jan 28, 2021

💊 CI failures summary and remediations

As of commit 3d85bc5 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



supriyar added a commit that referenced this pull request Jan 28, 2021
ghstack-source-id: 9e8ff959d4b16ef44285005454b6029020e3be00
Pull Request resolved: #51259
codecov bot commented Jan 28, 2021

Codecov Report

Merging #51259 (70515f7) into gh/supriyar/217/base (0fb1ee6) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@                  Coverage Diff                  @@
##           gh/supriyar/217/base   #51259   +/-   ##
=====================================================
  Coverage                 80.56%   80.57%           
=====================================================
  Files                      1931     1931           
  Lines                    210722   210729    +7     
=====================================================
+ Hits                     169775   169791   +16     
+ Misses                    40947    40938    -9     

Review comments from a Contributor on these diff lines:

    'call_function', qconv_op, qconv_args, kwargs)
    quantizer.node_name_to_scope[op.name] = quantizer.node_name_to_scope[self.conv_node.name]

can you add a comment here? same for other occurrences


Actually can you also add a TODO here:

"TODO: may need to change the key to Node regenerate the map in each transformation since we might not be able to rely on the name"

supriyar added a commit that referenced this pull request Jan 29, 2021
ghstack-source-id: ef2ad948fe8e5c58067004ec93de01d9fe4ce688
Pull Request resolved: #51259
facebook-github-bot (Contributor) commented:
This pull request has been merged in 916af89.

facebook-github-bot deleted the gh/supriyar/217/head branch on February 1, 2021 at 15:19.