Update on "[reland][quant][fix] Add bias once in conv_fused (#48593)"
Summary:
Previously, _conv_forward would add self.bias to the result, so the bias was added twice in the QAT ConvBn module.
This PR adds a bias argument to _conv_forward, and the ConvBn module now calls _conv_forward with a zero bias.

fixes: #48514
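
A minimal standalone sketch of the double-bias behavior this fixes (a hypothetical repro, not the module code; tensor shapes are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
weight = torch.randn(4, 3, 3, 3)
bias = torch.randn(4)

# Buggy pattern: the conv call already adds `bias`, and the caller
# (standing in for ConvBn here) adds it again afterwards.
out_buggy = F.conv2d(x, weight, bias) + bias.reshape(1, -1, 1, 1)

# Fixed pattern: pass a zero bias to the conv and add the real bias once.
out_fixed = F.conv2d(x, weight, torch.zeros_like(bias)) + bias.reshape(1, -1, 1, 1)

# The buggy output differs by exactly one extra copy of the bias.
print(torch.allclose(out_buggy, out_fixed + bias.reshape(1, -1, 1, 1)))  # True
```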

Test Plan: Imported from OSS

Reviewed By: raghuramank100

Differential Revision: [D25249175](https://our.internmc.facebook.com/intern/diff/D25249175)

[ghstack-poisoned]
jerryzh168 committed Dec 1, 2020
1 parent 6731ea7 commit 118c96f
Showing 2 changed files with 8 additions and 2 deletions.
5 changes: 4 additions & 1 deletion test/quantization/test_qat_module.py
@@ -110,7 +110,10 @@ def _forward(self, input):
         running_std = torch.sqrt(self.running_var + self.eps)
         scale_factor = self.gamma / running_std
         scaled_weight = self.weight * scale_factor.reshape([-1, 1, 1, 1])
-        zero_bias = torch.zeros_like(self.bias)
+        if self.bias is not None:
+            zero_bias = torch.zeros_like(self.bias)
+        else:
+            zero_bias = torch.zeros(self.out_channels, device=scaled_weight.device)
         conv = self._conv_forward(input, self.weight_fake_quant(scaled_weight), zero_bias)

         if self.training and not self.freeze_bn:
5 changes: 4 additions & 1 deletion torch/nn/intrinsic/qat/modules/conv_fused.py
@@ -94,7 +94,10 @@ def _forward(self, input):
         bias_shape[1] = -1
         scaled_weight = self.weight_fake_quant(self.weight * scale_factor.reshape(weight_shape))
         # this does not include the conv bias
-        zero_bias = torch.zeros_like(self.bias)
+        if self.bias is not None:
+            zero_bias = torch.zeros_like(self.bias)
+        else:
+            zero_bias = torch.zeros(self.out_channels, device=scaled_weight.device)
         conv = self._conv_forward(input, scaled_weight, zero_bias)
         conv_orig = conv / scale_factor.reshape(bias_shape)
         if self.bias is not None:
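
For context, a quick sketch of the batch-norm folding identity the hunk above relies on (illustrative standalone code, not the module itself): convolving with the scaled weight and a zero bias, then dividing by the per-channel scale factor, recovers the unscaled conv output, so self.bias is added exactly once afterwards.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
w = torch.randn(4, 3, 3, 3)
scale_factor = torch.rand(4) + 0.1  # stands in for gamma / running_std

scaled_weight = w * scale_factor.reshape(-1, 1, 1, 1)
conv = F.conv2d(x, scaled_weight, torch.zeros(4))     # zero bias, as in the fix
conv_orig = conv / scale_factor.reshape(1, -1, 1, 1)  # undo per-channel scaling

# conv_orig matches the plain convolution; the bias can now be added once.
print(torch.allclose(conv_orig, F.conv2d(x, w), atol=1e-5))  # True
```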
