Is it reasonable to get a threshold for all bn layers? #35

Open
zihaozhang9 opened this issue Dec 23, 2019 · 1 comment

Comments

@zihaozhang9

When computing the threshold, the weights of all BN layers are sorted together. Is this reasonable?

Could the following be happening:
① The activations at the very beginning of the network are close to raw image pixel values, while the last layer is close to class probabilities, so the BN weights are not necessarily distributed the same way across layers.
② There are shortcut connections in the middle of the network; after the outputs of two convolutions are added together, the values grow larger, which may affect the BN weights.

Looking forward to your reply. Thank you very much.

@Eric-mingjie
Owner

We use the original network slimming method; the code implements the original slimming paper https://arxiv.org/abs/1708.06519

Globally sorting the BN scaling factors, as done in slimming, has been shown experimentally to be a simple and effective approach. As for your question, we have not compared the scaling factors of each BN layer individually, but it is a meaningful question worth studying; more fine-grained pruning might give better results.
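For reference, here is a minimal PyTorch sketch (not the repo's exact code) of what the global-threshold step looks like: the scaling factors (gamma) of all BN layers are concatenated, sorted globally, and a single cut-off is taken at a given percentile. The names `model` and `prune_ratio` are hypothetical and used only for illustration.

```python
import torch
import torch.nn as nn

def global_bn_threshold(model: nn.Module, prune_ratio: float) -> float:
    # Collect the absolute BN scaling factors (gamma) from every BatchNorm2d layer.
    gammas = [m.weight.data.abs().clone()
              for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
    all_gammas = torch.cat(gammas)

    # Sort globally and take the value at the prune_ratio percentile as the threshold;
    # channels whose gamma falls below this value are candidates for pruning.
    sorted_gammas, _ = torch.sort(all_gammas)
    index = int(len(sorted_gammas) * prune_ratio)
    return sorted_gammas[index].item()
```

In this scheme the threshold is shared across all layers, which is exactly why the per-layer distribution differences raised in the question could matter.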
