Fix formula bug in comments
Grasshlw committed Sep 16, 2020
1 parent a1b9bbf commit e5d6e63
Showing 1 changed file with 4 additions and 3 deletions.
7 changes: 4 additions & 3 deletions spikingjelly/clock_driven/ann2snn/parser.py
@@ -65,6 +65,7 @@ def parse(self, model, log_dir):
2. For Softmax, ReLU is used as a substitute. Softmax is monotonically increasing with respect to each input variable, so replacing it with ReLU does not greatly affect the correctness of the output.
3. For BatchNorm, its parameters are absorbed into the corresponding parameterized module, where BatchNorm1d assumes its preceding module is Linear and BatchNorm2d assumes its preceding module is Conv2d.
Assume the BatchNorm parameters are :math:`\\gamma` (BatchNorm.weight), :math:`\\beta` (BatchNorm.bias), :math:`\\mu` (BatchNorm.running_mean), and :math:`\\sigma` (the square root of BatchNorm.running_var). See ``torch.nn.batchnorm`` for the detailed parameter definitions. A parameterized module (e.g. Linear) has parameters :math:`W` and :math:`b`. BatchNorm parameter absorption transfers the BatchNorm parameters into the :math:`W` and :math:`b` of the parameterized module, so that data passed through the new module produces the same output as it would with BatchNorm present.
Accordingly, the new model's :math:`\\bar{W}` and :math:`\\bar{b}` are given by:
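The expressions themselves fall in the collapsed portion of this hunk. For reference, a sketch of the standard BatchNorm absorption, assuming :math:`\sigma` is the square root of running_var as defined above and ignoring the eps term:

.. math::
    \bar{W} = \frac{\gamma}{\sigma} W, \qquad \bar{b} = \frac{\gamma}{\sigma} (b - \mu) + \beta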
@@ -241,7 +242,7 @@ def normalize_model(self,norm_tensor,log_dir,robust=False):
.. math::
\\hat{W} = W * \\frac{\\lambda_{pre}}{\\lambda}
The normalized bias :math:`\\hat{b}` is:
The normalized bias :math:`\\hat{b}` is:
.. math::
\\hat{b} = b / \\lambda
@@ -270,7 +271,7 @@ def normalize_model(self,norm_tensor,log_dir,robust=False):
.. math::
\\hat{W} = W * \\frac{\\lambda_{pre}}{\\lambda}
The normalized bias :math:`\hat{b}` is:
The normalized bias :math:`\\hat{b}` is:
.. math::
\\hat{b} = b / \\lambda
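The two formulas above express the model normalization step: weights are scaled by the ratio of the previous layer's activation range :math:`\lambda_{pre}` to the current layer's range :math:`\lambda`, and biases are divided by the current range. A minimal Python sketch of that scaling, assuming hypothetical names lambda_pre and lambda_cur for the recorded maximum activations (not the library's exact implementation):

import torch.nn as nn

def normalize_layer(layer: nn.Module, lambda_pre: float, lambda_cur: float) -> nn.Module:
    # Weights: W_hat = W * lambda_pre / lambda_cur
    if hasattr(layer, 'weight') and layer.weight is not None:
        layer.weight.data = layer.weight.data * lambda_pre / lambda_cur
    # Biases: b_hat = b / lambda_cur
    if hasattr(layer, 'bias') and layer.bias is not None:
        layer.bias.data = layer.bias.data / lambda_cur
    return layer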
@@ -317,4 +318,4 @@ def normalize_model(self,norm_tensor,log_dir,robust=False):
if hasattr(m, 'bias') and m.bias is not None:
    m.bias.data = m.bias.data / self.activation_range[relu_output_layer]
last_lambda = self.activation_range[relu_output_layer]
i += 1
i += 1
