Thanks for sharing the work. Do you use the same parameter settings as https://github.com/liuzhuang13/DenseNetCaffe for optimization? I ask because the learning rate in the reference is quite high (about 0.1). In addition, did you reproduce the reported performance with your prototxt on the CIFAR dataset? Thanks
Thanks. I think the learning rate depends on dataset and solver optimization method.
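For context, the DenseNet paper trains with SGD, a base learning rate of 0.1, momentum 0.9, and weight decay 1e-4, dividing the rate by 10 at 50% and 75% of training. A hypothetical Caffe solver sketch along those lines (iteration counts and file names are assumptions, not taken from DenseNetCaffe) might look like:

```
# Hypothetical solver.prototxt sketch following the schedule described
# in the DenseNet paper; values are illustrative, not from the repo.
net: "train_val.prototxt"
base_lr: 0.1             # high lr is workable with batch normalization
momentum: 0.9
weight_decay: 0.0001
lr_policy: "multistep"   # drop lr by gamma at each stepvalue
gamma: 0.1
stepvalue: 32000         # ~50% of training on CIFAR-10
stepvalue: 48000         # ~75% of training
max_iter: 64000
solver_mode: GPU
```

The high starting rate is usually tolerable because every convolution is preceded by batch normalization, which keeps activations well-scaled early in training.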
None of the prototxts use dropout. Did you try adding dropout with a ratio of 0.2 in the BC model? If so, how much did it improve the result?
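For anyone who wants to try it, dropout in Caffe is a standalone layer inserted after a convolution. A hypothetical fragment (layer and blob names are illustrative, not from these prototxts):

```
# Hypothetical snippet: dropout (ratio 0.2) after one conv layer
# in a dense block; "conv2_1" is an assumed layer name.
layer {
  name: "conv2_1_dropout"
  type: "Dropout"
  bottom: "conv2_1"
  top: "conv2_1"        # in-place to save memory
  dropout_param { dropout_ratio: 0.2 }
}
```

The DenseNet paper only uses dropout when training without data augmentation, so on augmented CIFAR it may not help.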
I have trained DenseNet on CIFAR10 and ImageNet, and lr=0.1 works for me.
I did not try to use dropout for DenseNet.
You'd better ask the author. @liuzhuang13