
fx quant: do not observe bias on F.conv #49623

Closed · wants to merge 7 commits

Conversation

@vkuzo (Contributor) commented Dec 19, 2020

Stack from ghstack:

Summary:

Ensures that conv bias is not observed in a `F.conv{n}d` call. This should be a small speedup in PTQ, and will change numerics (in a good way) for QAT if someone is using `F.conv`.

Test Plan:

```
python test/test_quantization.py TestQuantizeFxOps.test_conv_functional_bias_not_observed
```

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: D25652856
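The mechanism visible in the diff is a lookup table from functional ops to the positions of their bias arguments. A minimal sketch of that idea follows; note the real code keys the table on the `torch.nn.functional` callables and operates on FX `Node` objects, while the string keys and the `node_arg_is_bias(op_name, arg_index)` signature here are simplifying assumptions so the sketch has no torch dependency:

```python
# Sketch: map each functional conv to the positions of its non-observable
# bias arguments. In the actual PR the keys are torch.nn.functional
# callables; plain strings are used here for illustration only.
BIAS_ARG_INDICES = {
    "conv1d": [2],  # F.conv1d(input, weight, bias, ...) -> bias is arg 2
    "conv2d": [2],
    "conv3d": [2],
}

def node_arg_is_bias(op_name: str, arg_index: int) -> bool:
    """Return True if arg_index of a call to op_name is a bias argument,
    i.e. an argument that should not get an observer attached."""
    return arg_index in BIAS_ARG_INDICES.get(op_name, [])
```

During observer insertion, a check like this lets the prepare pass skip attaching an observer to the bias input while still observing the input and weight.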

@facebook-github-bot (Contributor) commented Dec 19, 2020

💊 CI failures summary and remediations

As of commit 9a9304c (more details on the Dr. CI page):



❄️ 1 failure tentatively classified as flaky, but reruns have not yet been triggered to confirm:

See CircleCI build pytorch_ios_12_0_0_x86_64_build (1/1)

Step: "Update Homebrew" ❄️

```
curl: (56) LibreSSL SSL_read: SSL_ERROR_SYSCALL, errno 54
 * [new branch]          dependabot/bundler/Library/Homebrew/sorbet-0.5.6173 -> origin/dependabot/bundler/Library/Homebrew/sorbet-0.5.6173
 + 844476160...53ef74f67 master          -> origin/master  (forced update)
 * [new branch]          zsh-fpath-fixes -> origin/zsh-fpath-fixes
+ git reset --hard origin/master
HEAD is now at 53ef74f67 Merge pull request #10068 from reitermarkus/bitbucket-strategy
+ export HOMEBREW_NO_AUTO_UPDATE=1
+ HOMEBREW_NO_AUTO_UPDATE=1
+ brew unlink parallel
==> Downloading https://homebrew.bintray.com/bottles-portable-ruby/portable-ruby-2.6.3_2.yosemite.bottle.tar.gz
curl: (56) LibreSSL SSL_read: SSL_ERROR_SYSCALL, errno 54
Error: Checksum mismatch.
Expected: b065e5e3783954f3e65d8d3a6377ca51649bfcfa21b356b0dd70490f74c6bd86
  Actual: bcaf672c77935530c82ca95813d502e7d57ffb88d332f3ff7e7fab785b324804
 Archive: /Users/distiller/Library/Caches/Homebrew/portable-ruby-2.6.3_2.yosemite.bottle.tar.gz
To retry an incomplete download, remove the file above.
Error: Failed to install Homebrew Portable Ruby (and your system version is too old)!

Exited with code exit status 1
```


vkuzo added a commit that referenced this pull request Dec 19, 2020
ghstack-source-id: ecaed70854cde1df3edf0e1f0c309b40f4763cd5
Pull Request resolved: #49623
```python
torch.nn.functional.conv3d : [2],
}

def node_arg_is_bias(node: Node, arg: Any) -> bool:
```
Contributor:

nit: shall we share the code of this function with `node_arg_is_weight` as well?

Contributor Author:

I don't feel too strongly. The definition diverges in the next PR though so not sure if worth it.

Contributor Author:

lmk if a change is needed here given the context from the next PR, I'm flexible

Contributor:

sounds good, we can keep the current approach
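The sharing the reviewer floated could look like the following sketch: both checks delegate to one helper over separate index tables. The string keys, the `linear` entry, and the function signatures are illustrative assumptions, not the actual PyTorch internals (which key on `torch.nn.functional` callables and take FX nodes):

```python
# Hypothetical argument-index tables; the real code keys these on the
# torch.nn.functional callables themselves.
WEIGHT_ARG_INDICES = {"conv2d": [1], "linear": [1]}  # F.conv2d(input, weight, ...)
BIAS_ARG_INDICES = {"conv2d": [2], "linear": [2]}    # bias follows weight

def _node_arg_has_index(table, op_name, arg_index):
    # Shared lookup used by both the weight check and the bias check.
    return arg_index in table.get(op_name, [])

def node_arg_is_weight(op_name, arg_index):
    return _node_arg_has_index(WEIGHT_ARG_INDICES, op_name, arg_index)

def node_arg_is_bias(op_name, arg_index):
    return _node_arg_has_index(BIAS_ARG_INDICES, op_name, arg_index)
```

As discussed in the thread, the two definitions diverge in the next PR, which is why keeping them separate was the approach taken here.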

```python
quantized_nodes[dim])

@skipIfNoFBGEMM
def test_conv_functional_bias_not_observed(self):
```
Contributor:

optional: can we merge this test with the previous test and call it "test_conv"? we have "test_linear" that checks all combinations of is_module/is_not_module, has_relu/doesn't have relu, and relu module/functional. I think that would be more exhaustive testing.

Contributor Author:

hmm, that was intentional, there are benefits when each test case is separate. This allows for faster debugging when chasing down an individual issue (so there is no need to comment out the unrelated test cases in the large loop). Perhaps we can align on a way to both share code and also test one thing at a time, lmk your preference.

Contributor:

I think we should do exhaustive tests on what we want to support first. In this particular case, the problem is that we did not test functional conv before. Testing for special cases that we encounter in models might be too ad hoc, and it will be hard to know what is covered and what is not in the future.

I think we can first make sure our unit tests cover all possible modules/ops etc. that we want to support, and then cover any remaining corner cases with these ad hoc tests (typically it should be some interaction between ops + qconfig instead of one op). One such example is here: https://github.com/pytorch/pytorch/blob/master/test/quantization/test_quantize_fx.py#L538

Contributor:

regarding separating test cases I think as long as there is no code duplication it should be OK.

Contributor Author:

sure, I made the functional test more comprehensive.

Regarding no code duplication, I think it's good to optimize for in general, but code readability and ease of use are something to consider as well. Some of the current test utils seem to lean toward the "minimize code duplication" side while sacrificing readability, at least for people who were not the code authors. Maybe in the future we can consider ways to do both.

Contributor:

sure sounds good
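The exhaustive combination testing discussed in this thread can be enumerated with `itertools.product`; the option names below are illustrative, mirroring the is_module / relu variants the reviewer mentioned for "test_linear":

```python
from itertools import product

# Dimensions suggested in the review: conv as a module vs a functional
# call, and no relu vs an nn.ReLU module vs functional F.relu.
is_module_opts = (True, False)
relu_opts = (None, "relu_module", "relu_functional")

# One sub-case per combination, so a single test covers all of them
# instead of hand-written special cases.
cases = list(product(is_module_opts, relu_opts))
```

The trade-off raised above still applies: a single loop over `cases` is exhaustive, but debugging one failing combination may require filtering the loop, whereas separate test methods can be run individually.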

@facebook-github-bot (Contributor):
This pull request has been merged in c3a7591.

@facebook-github-bot facebook-github-bot deleted the gh/vkuzo/195/head branch December 26, 2020 15:18