
Replacing Conv2d with GhostModule, the loss decreases very slowly? #81

Closed
yc-cui opened this issue Feb 4, 2022 · 8 comments
@yc-cui

yc-cui commented Feb 4, 2022

I directly replaced the Conv2d inside efficientnet's MBConvBlock with GhostModule:
Conv2d(in_channels=inp, out_channels=oup, kernel_size=1, bias=False)
became
GhostModule(inp, oup)
with all other parameters unchanged. Why does the loss now converge more slowly than before and never come down? Do I need to change any other parameters?
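For context, the GhostModule under discussion replaces a dense convolution with a thin primary convolution plus a cheap depthwise "ghost" branch whose outputs are concatenated. The sketch below follows the GhostNet reference implementation's structure (reproduced here for illustration; check the repository for the authoritative version):

```python
import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, inp, oup, kernel_size=1, ratio=2, dw_size=3,
                 stride=1, relu=True):
        super().__init__()
        self.oup = oup
        init_channels = math.ceil(oup / ratio)           # primary-conv channels
        new_channels = init_channels * (ratio - 1)       # "ghost" channels

        # Ordinary (but thin) convolution producing the intrinsic features.
        self.primary_conv = nn.Sequential(
            nn.Conv2d(inp, init_channels, kernel_size, stride,
                      kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True) if relu else nn.Sequential(),
        )
        # Cheap depthwise operation generating the ghost features.
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_channels, new_channels, dw_size, 1,
                      dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(new_channels),
            nn.ReLU(inplace=True) if relu else nn.Sequential(),
        )

    def forward(self, x):
        x1 = self.primary_conv(x)
        x2 = self.cheap_operation(x1)
        out = torch.cat([x1, x2], dim=1)
        return out[:, :self.oup, :, :]   # trim to the requested width
```

The output has the same shape as the Conv2d it replaces, which is why it can be swapped in without touching the surrounding block.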

@iamhankai
Member

Try GhostModule(inp, oup, kernel_size=1, ratio=2, dw_size=3, stride=1, relu=False).

Also, in EfficientNet replace only the PWConv with GhostModule; don't replace the DWConv as well.

@yc-cui
Author

yc-cui commented Feb 8, 2022

> Try GhostModule(inp, oup, kernel_size=1, ratio=2, dw_size=3, stride=1, relu=False).
>
> Also, in EfficientNet replace only the PWConv with GhostModule; don't replace the DWConv as well.

Yes, I only replaced the PWConv with GhostModule. With the parameters you suggested it still doesn't work; the result is very different from Conv2d.

@iamhankai
Member

Don't replace the Conv layers at the very beginning and end of the network; that has a large impact. Also, could you post the training log?

@yc-cui
Author

yc-cui commented Feb 8, 2022

> Don't replace the Conv layers at the very beginning and end of the network; that has a large impact. Also, could you post the training log?

The 1x1 convolutions at the head and tail are fairly expensive, which is why I wanted to replace them with GhostModule. I'm willing to trade some accuracy to make EfficientNet more efficient — if not the first and last layers, which ones should I replace?

@yc-cui
Author

yc-cui commented Feb 8, 2022

This is before the replacement:

loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
[Warning] Ignoring Error(s) in loading state_dict for EfficientDetBackbone:
	size mismatch for classifier.header.pointwise_conv.conv.weight: copying a param with shape torch.Size([810, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([63, 64, 1, 1]).
	size mismatch for classifier.header.pointwise_conv.conv.bias: copying a param with shape torch.Size([810]) from checkpoint, the shape in current model is torch.Size([63]).
[Warning] Don't panic if you see this, this might be because you load a pretrained weights with different number of classes. The rest of the weights should be loaded already.
[Info] loaded weights: efficientdet-d0.pth, resuming checkpoint from step: 0
[Info] freezed backbone
Step: 55. Epoch: 0/10. Iteration: 56/56. Cls loss: 1.52058. Reg loss: 2.57542. T
Val. Epoch: 0/10. Classification loss: 2.12950. Regression loss: 3.11589. Total loss: 5.24539
Step: 99. Epoch: 1/10. Iteration: 44/56. Cls loss: 1.35428. Reg loss: 3.01937. Tcheckpoint...
Step: 111. Epoch: 1/10. Iteration: 56/56. Cls loss: 1.15300. Reg loss: 3.10199.
Val. Epoch: 1/10. Classification loss: 1.50583. Regression loss: 2.84335. Total loss: 4.34918
Step: 167. Epoch: 2/10. Iteration: 56/56. Cls loss: 0.96853. Reg loss: 2.74884.
Val. Epoch: 2/10. Classification loss: 1.06936. Regression loss: 2.83312. Total loss: 3.90248
Step: 199. Epoch: 3/10. Iteration: 32/56. Cls loss: 0.98485. Reg loss: 2.56581. checkpoint...
Step: 223. Epoch: 3/10. Iteration: 56/56. Cls loss: 0.86978. Reg loss: 2.86461.
Val. Epoch: 3/10. Classification loss: 0.87215. Regression loss: 2.57996. Total loss: 3.45211
Step: 279. Epoch: 4/10. Iteration: 56/56. Cls loss: 0.88644. Reg loss: 2.45166.
Val. Epoch: 4/10. Classification loss: 0.81143. Regression loss: 2.66739. Total loss: 3.47882
Step: 299. Epoch: 5/10. Iteration: 20/56. Cls loss: 0.77803. Reg loss: 2.22479. checkpoint...
Step: 335. Epoch: 5/10. Iteration: 56/56. Cls loss: 0.77692. Reg loss: 2.67725.
Val. Epoch: 5/10. Classification loss: 0.77923. Regression loss: 2.72656. Total loss: 3.50579
Step: 391. Epoch: 6/10. Iteration: 56/56. Cls loss: 0.77824. Reg loss: 1.80291.
Val. Epoch: 6/10. Classification loss: 0.75823. Regression loss: 2.68391. Total loss: 3.44214
Step: 399. Epoch: 7/10. Iteration: 8/56. Cls loss: 0.83059. Reg loss: 2.75574. Tcheckpoint...
Step: 447. Epoch: 7/10. Iteration: 56/56. Cls loss: 0.72287. Reg loss: 2.84142.
Val. Epoch: 7/10. Classification loss: 0.72159. Regression loss: 2.79952. Total loss: 3.52111
Step: 499. Epoch: 8/10. Iteration: 52/56. Cls loss: 0.76601. Reg loss: 2.46491. checkpoint...
Step: 503. Epoch: 8/10. Iteration: 56/56. Cls loss: 0.73658. Reg loss: 2.11717.
Val. Epoch: 8/10. Classification loss: 0.70479. Regression loss: 2.62323. Total loss: 3.32802
Step: 559. Epoch: 9/10. Iteration: 56/56. Cls loss: 0.65757. Reg loss: 2.30679.
Val. Epoch: 9/10. Classification loss: 0.69079. Regression loss: 2.78529. Total loss: 3.47609

And after the replacement:

loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
[Warning] Ignoring Error(s) in loading state_dict for EfficientDetBackbone:
	size mismatch for classifier.header.pointwise_conv.conv.weight: copying a param with shape torch.Size([810, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([63, 64, 1, 1]).
	size mismatch for classifier.header.pointwise_conv.conv.bias: copying a param with shape torch.Size([810]) from checkpoint, the shape in current model is torch.Size([63]).
[Warning] Don't panic if you see this, this might be because you load a pretrained weights with different number of classes. The rest of the weights should be loaded already.
[Info] loaded weights: efficientdet-d0.pth, resuming checkpoint from step: 0
[Info] freezed backbone
Step: 55. Epoch: 0/500. Iteration: 56/56. Cls loss: 1.55586. Reg loss: 2.86230.
Val. Epoch: 0/500. Classification loss: 2.05592. Regression loss: 56.78297. Total loss: 58.83890
Step: 99. Epoch: 1/500. Iteration: 44/56. Cls loss: 1.39613. Reg loss: 2.59735. checkpoint...
Step: 111. Epoch: 1/500. Iteration: 56/56. Cls loss: 1.35521. Reg loss: 2.11275.
Val. Epoch: 1/500. Classification loss: 1.70027. Regression loss: 12.24190. Total loss: 13.94217
Step: 167. Epoch: 2/500. Iteration: 56/56. Cls loss: 1.39917. Reg loss: 3.25899.
Val. Epoch: 2/500. Classification loss: 1.40038. Regression loss: 5.59421. Total loss: 6.99460
Step: 199. Epoch: 3/500. Iteration: 32/56. Cls loss: 1.14606. Reg loss: 2.25805.checkpoint...
Step: 223. Epoch: 3/500. Iteration: 56/56. Cls loss: 0.98865. Reg loss: 2.36145.
Val. Epoch: 3/500. Classification loss: 1.18235. Regression loss: 3.75098. Total loss: 4.93333
Step: 279. Epoch: 4/500. Iteration: 56/56. Cls loss: 1.20481. Reg loss: 2.65963.
Val. Epoch: 4/500. Classification loss: 1.11369. Regression loss: 3.21690. Total loss: 4.33059
Step: 299. Epoch: 5/500. Iteration: 20/56. Cls loss: 1.13203. Reg loss: 2.45407.checkpoint...
Step: 335. Epoch: 5/500. Iteration: 56/56. Cls loss: 1.07970. Reg loss: 2.34846.
Val. Epoch: 5/500. Classification loss: 1.06054. Regression loss: 2.98431. Total loss: 4.04484
Step: 391. Epoch: 6/500. Iteration: 56/56. Cls loss: 1.38627. Reg loss: 2.23130.
Val. Epoch: 6/500. Classification loss: 1.00038. Regression loss: 2.97780. Total loss: 3.97818
Step: 399. Epoch: 7/500. Iteration: 8/56. Cls loss: 1.18486. Reg loss: 3.14593. checkpoint...
Step: 447. Epoch: 7/500. Iteration: 56/56. Cls loss: 0.94545. Reg loss: 2.42121.
Val. Epoch: 7/500. Classification loss: 0.96258. Regression loss: 3.05248. Total loss: 4.01506
Step: 499. Epoch: 8/500. Iteration: 52/56. Cls loss: 0.97633. Reg loss: 2.60206.checkpoint...
Step: 503. Epoch: 8/500. Iteration: 56/56. Cls loss: 0.91663. Reg loss: 2.54093.
Val. Epoch: 8/500. Classification loss: 0.93287. Regression loss: 3.02960. Total loss: 3.96247
Step: 559. Epoch: 9/500. Iteration: 56/56. Cls loss: 0.92465. Reg loss: 2.67289.
Val. Epoch: 9/500. Classification loss: 0.93854. Regression loss: 2.97227. Total loss: 3.91081
Step: 599. Epoch: 10/500. Iteration: 40/56. Cls loss: 0.99782. Reg loss: 2.43815checkpoint...

I ran 500 epochs both before and after the replacement. Before, training worked well; after, the loss barely comes down at all.

@iamhankai
Member

After the replacement, the backbone also needs to be pretrained on ImageNet first, and only then fine-tuned on detection.
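That two-stage recipe can be sketched as follows — a minimal illustration with tiny dummy modules standing in for the real Ghost-ified backbone and the detection model (all module names here are hypothetical, not the repository's API):

```python
import torch
import torch.nn as nn

def make_backbone():
    # Tiny stand-in for the Ghost-ified EfficientNet backbone (hypothetical).
    return nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8))

class Detector(nn.Module):
    # Tiny stand-in for the detection model that wraps the backbone.
    def __init__(self):
        super().__init__()
        self.backbone = make_backbone()
        self.head = nn.Conv2d(8, 4, 1)   # detection head, trained from scratch

# Stage 1: train the modified backbone as an ImageNet classifier
# (training loop not shown), then keep its weights.
pretrained = make_backbone()
state = pretrained.state_dict()

# Stage 2: copy the pretrained backbone weights into the detector,
# then fine-tune the whole model on the detection task.
det = Detector()
det.backbone.load_state_dict(state)
```

The point is that the randomly initialized GhostModules never see ImageNet if you drop them straight into a detector whose checkpoint was trained with plain Conv2d, so the detection loss starts from a much worse place.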

@yc-cui
Copy link
Author

yc-cui commented Feb 8, 2022

> After the replacement, the backbone also needs to be pretrained on ImageNet first, and only then fine-tuned on detection.

I see, OK. So replacing the original 1x1 convolutions with GhostModule should be feasible in principle, right?

@iamhankai
Member

I think it's feasible; the accuracy drop shouldn't be that large.
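A rough back-of-the-envelope count shows why the swap is attractive: with ratio=2 and dw_size=3, a GhostModule needs roughly half the weights of the 1x1 Conv2d it replaces. The estimate below counts only conv weights (bias-free, BatchNorm parameters ignored); the channel sizes are just an illustrative example, not measured from the repository:

```python
import math

def conv1x1_params(inp, oup):
    # A plain pointwise convolution: one 1x1 weight per (in, out) pair.
    return inp * oup

def ghost_module_params(inp, oup, ratio=2, dw_size=3):
    init = math.ceil(oup / ratio)      # primary 1x1 conv output channels
    new = init * (ratio - 1)           # channels from the cheap depthwise op
    primary = inp * init               # 1x1 conv weights
    cheap = new * dw_size * dw_size    # depthwise dw_size x dw_size weights
    return primary + cheap

# Example: an expansion PWConv with 192 -> 1152 channels.
inp, oup = 192, 1152
print(conv1x1_params(inp, oup))       # 221184
print(ghost_module_params(inp, oup))  # 110592 + 5184 = 115776
```

So most of the pointwise cost is halved, with only a small depthwise overhead added — consistent with the expectation that accuracy should not drop dramatically once the backbone is properly pretrained.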
