A question about the pre-trained model #40
Comments
@davheld Following your code (tracker.prototxt and solver.prototxt, 500,000 iterations), I used train.cpp to train the network on the ILSVRC2014_DET and ALOV300+ datasets. The training loss does not converge; it ends up oscillating between about 20 and 50, and the resulting tracking performance is very poor. Can you give me some advice? Thanks.
It sounds like you are overfitting. Just to be sure: I don't train the conv layers at all; those are pre-trained using CaffeNet.
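For reference, a conv layer is frozen in Caffe by setting lr_mult (and usually decay_mult) to 0 in its param blocks. Below is a minimal sketch in prototxt; the layer name and shape use CaffeNet-style conv1 values for illustration and may differ from what tracker.prototxt actually contains:

```
# Hypothetical frozen conv layer: lr_mult: 0 means the solver never updates
# these blobs, so the CaffeNet-pretrained weights stay fixed during training.
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 0 decay_mult: 0 }  # weights: frozen
  param { lr_mult: 0 decay_mult: 0 }  # biases: frozen
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}
```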
Yes, I only use your code and prototxt files (I run train.cpp and keep the parameters of solver.prototxt and tracker.prototxt unchanged) to re-train the network; I don't change anything else. The convolutional layers come from CaffeNet, and lr_mult is set to 0 so they are not updated.
How did you create the pre-trained network?
The pre-trained parameters come from the file you provide at http://cs.stanford.edu/people/davheld/public/GOTURN/weights_init/tracker_init.caffemodel. I don't change the prototxt; I only want to run the train.cpp code to get tracker_iter_500000.caffemodel and then test the tracker.
That's odd, not sure. |
I have the same problem. Changing val_ratio from 0.2 to 0 in loader/loader_alov.cpp may help, but the model I trained still doesn't perform as well as the pre-trained model.
Not converging? Oscillating?
The oscillation is normal and simply occurs because the training loss is evaluated on mini-batches that are randomly sampled at each iteration. However, the numbers you listed seem lower than what I remember, so I believe you are overfitting, although I am not sure why.
Try reducing the learning rate? The oscillations in that graph look fairly large, although if you are using the default learning rate it is unusual that you should need to change it. Also, make sure that all convolutional layers are fixed (i.e. in both streams of the network).
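To make the learning-rate suggestion concrete, here is a sketch of what lowering base_lr in solver.prototxt might look like. All of the values here are assumptions chosen to resemble a typical GOTURN-style setup, not the exact shipped configuration:

```
# Sketch only: every value below is an illustrative assumption.
net: "nets/tracker.prototxt"
base_lr: 0.000001      # try reducing this if the loss oscillates widely
lr_policy: "step"      # multiply the rate by gamma every stepsize iterations
gamma: 0.1
stepsize: 100000
momentum: 0.9
weight_decay: 0.0005
max_iter: 500000       # matches the 500,000 iterations discussed above
snapshot: 50000
snapshot_prefix: "nets/models/tracker"
solver_mode: GPU
```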
@OuYag Regarding your words "lr_mult is set 0 no change": I don't think that is right; lr_mult set to 0 means a zero learning rate for that layer.
@Jiangfeng-Xiong @OuYag Did you solve the issue? I have the same problem; the test loss stays between 10 and 20. I guess it is overfitting, but changing the learning rate or batch size does not reduce the loss.
Hi, I want to know how you evaluated the tracker's performance.
Dear davheld:
I have a question about how you obtained the pre-trained model; can you give some details? My questions are as follows:
(1) Which dataset (e.g. ILSVRC2014_DET or ILSVRC2014_CLS) did you use to pre-train the convolutional layers?
(2) If I only use the ALOV300+ and ILSVRC2014_DET datasets to train the network's regression output, without pre-training the convolutional layers, will the tracking performance decrease?
(3) When you train the Siamese network, do the two branches share layer parameters, or do they have independent parameters? (A sketch of what sharing would look like follows below.)
Looking forward to your reply, best wishes!
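Regarding question (3): in Caffe, two layers share parameters when their param blocks are given the same name. A hypothetical sketch of shared conv weights across the two Siamese streams; the layer, blob, and param names here are illustrative assumptions, not taken from tracker.prototxt:

```
# Stream 1: processes the current-frame search region.
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "image"
  top: "conv1"
  param { name: "conv1_w" lr_mult: 0 }  # named params can be shared
  param { name: "conv1_b" lr_mult: 0 }
  convolution_param { num_output: 96 kernel_size: 11 stride: 4 }
}
# Stream 2: processes the previous-frame target crop.
layer {
  name: "conv1_p"
  type: "Convolution"
  bottom: "target"
  top: "conv1_p"
  param { name: "conv1_w" lr_mult: 0 }  # same name -> same weights as conv1
  param { name: "conv1_b" lr_mult: 0 }
  convolution_param { num_output: 96 kernel_size: 11 stride: 4 }
}
```

Note that with lr_mult: 0 on both streams the distinction matters less in practice, since frozen layers initialized from the same CaffeNet weights stay identical either way.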