
why there are only two softmax layers in the prototxt #6

Closed
ZhiweiYan-96 opened this issue Dec 14, 2017 · 5 comments

Comments

@ZhiweiYan-96

Hi~

I have read your paper, but I can only find two SoftmaxWithLoss layers in 'res_e1_train_val.prototxt', while there are three softmax loss layers in the paper. Maybe this is due to the 'iterative training'? Thanks for your help.
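For reference, here is a minimal sketch of what two SoftmaxWithLoss layers in a single Caffe train_val prototxt could look like, one per sub-network branch. All layer and blob names below are illustrative assumptions, not copied from res_e1_train_val.prototxt:

```
# Hypothetical sketch: one softmax loss per sub-network branch.
layer {
  name: "loss_a"
  type: "SoftmaxWithLoss"
  bottom: "fc_a"    # classifier output of the first sub-network (assumed name)
  bottom: "label"
  top: "loss_a"
}
layer {
  name: "loss_b"
  type: "SoftmaxWithLoss"
  bottom: "fc_b"    # classifier output of the second sub-network (assumed name)
  bottom: "label"
  top: "loss_b"
}
```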

@hshustc
Contributor

hshustc commented Dec 15, 2017

In the iterative training, there are two losses involved, and in most cases it is enough to set max_iter=1 to obtain the performance gain. Training with max_iter=2 works in a similar way.

@ZhiweiYan-96
Author

So if max_iter=2, do I need three prototxt files: one used for iteration 0 and the other two used for iterations 2k and 2k+1?

@ZhiweiYan-96
Author

The other question is: when are three loss layers needed? The visualization of DualNet in the supplementary material shows three loss layers. Thanks.

@hshustc
Contributor

hshustc commented Dec 15, 2017

  1. In each training phase, only one prototxt is needed in my implementation.
  2. Three loss layers are needed in the joint finetuning; the loss function is presented in the paper (see the sketch below).
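As an illustration of the joint-finetuning setup, here is a hedged prototxt sketch with three weighted SoftmaxWithLoss layers: two on the sub-network classifiers and one on the fused classifier. The layer/blob names and loss_weight values are assumptions; the actual weighting should follow the loss function in the paper:

```
# Hypothetical sketch: joint finetuning with three weighted losses.
layer {
  name: "loss_a"
  type: "SoftmaxWithLoss"
  bottom: "fc_a"        # first sub-network classifier (assumed name)
  bottom: "label"
  top: "loss_a"
  loss_weight: 0.5      # assumed value; see the paper's loss function
}
layer {
  name: "loss_b"
  type: "SoftmaxWithLoss"
  bottom: "fc_b"        # second sub-network classifier (assumed name)
  bottom: "label"
  top: "loss_b"
  loss_weight: 0.5      # assumed value
}
layer {
  name: "loss_fused"
  type: "SoftmaxWithLoss"
  bottom: "fc_fused"    # classifier on the fused features (assumed name)
  bottom: "label"
  top: "loss_fused"
  loss_weight: 1.0      # assumed value
}
```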

@ZhiweiYan-96
Author

Thanks for your response! I will read it again.
