In training, why is the loss decreasing while the val loss is increasing? #90
I'm observing the same phenomenon. Is there a fix for this? @kitterive, what initial weights are you using? Also, how are you normalizing the coordinates?
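For concreteness, here is a minimal sketch of the kind of coordinate normalization being asked about, assuming VOC-style absolute pixel boxes; the function and argument names are illustrative, not from this repo:

```python
import numpy as np

def normalize_boxes(boxes, img_width, img_height):
    """Scale absolute pixel coordinates (xmin, ymin, xmax, ymax) to [0, 1]."""
    boxes = boxes.astype("float32")
    boxes[:, [0, 2]] /= img_width   # x coordinates
    boxes[:, [1, 3]] /= img_height  # y coordinates
    return boxes
```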
I also observe the same behavior. However, I was able to get a val loss of 1.4 after 20 epochs, after which the val loss started increasing.
@oarriaga - In that case, which model weights did you use to fine-tune: the VGG16 weights (with the top removed) or the Caffe-converted SSD weights?
I used the pre-trained weights provided in the README file, which I believe were converted from an older original Caffe implementation.
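For anyone following along, loading those pre-trained weights typically looks like the snippet below. This is a sketch that assumes the SSD300 definition and the weight file name referenced in this repo's README and training notebook:

```python
from ssd import SSD300  # model definition from this repo

input_shape = (300, 300, 3)
model = SSD300(input_shape, num_classes=21)  # 20 VOC classes + background
# by_name=True maps the converted Caffe weights onto layers with matching names
model.load_weights('weights_SSD300.hdf5', by_name=True)
```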
Hi @meetshah1995, I am seeing the same issue when training on the MS COCO dataset. I was following the training example from SSD_training.ipynb.
@meetshah1995 I have trained SSD with only the VGG16 weights and it was overfitting after ~20 epochs; my lowest validation loss was 1.4. I believe that better results can be obtained from the correct implementation of the …
Hi @oarriaga, can you share your training log? I want to know the loss after 120k iterations.
I am seeing the same issue while training on my own dataset. My minimum loss is also around 1.39 to 1.4.
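Since several people report the val loss turning around after roughly 20 epochs, one standard mitigation is to checkpoint on val loss and stop early. Here is a minimal Keras sketch; the patience value and checkpoint file name are assumptions, not settings taken from SSD_training.ipynb:

```python
from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    # stop once val_loss has not improved for 5 epochs
    EarlyStopping(monitor='val_loss', patience=5),
    # keep only the weights from the best validation epoch
    ModelCheckpoint('ssd300_best.hdf5', monitor='val_loss',
                    save_best_only=True, save_weights_only=True),
]
# model.fit(..., validation_data=val_data, callbacks=callbacks)
```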
I used the VOCtest_06-Nov-2007 dataset. First I used get_data_from_XML.py to convert the XML ground truth to VOC2007.pkl, then used it to train the network. During training, I found that the training loss is decreasing while the val loss is increasing. Is it overfitting?
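For context, that conversion step boils down to parsing each VOC annotation file and pickling normalized boxes per image. Below is a rough, simplified sketch of that kind of converter, assuming VOC-style XML; the actual get_data_from_XML.py also encodes the class labels, which this sketch skips:

```python
import os
import pickle
from xml.etree import ElementTree

def parse_voc_annotations(ann_dir):
    """Collect normalized [xmin, ymin, xmax, ymax, class_name] boxes per image."""
    data = {}
    for fname in os.listdir(ann_dir):
        if not fname.endswith('.xml'):
            continue
        root = ElementTree.parse(os.path.join(ann_dir, fname)).getroot()
        size = root.find('size')
        width = float(size.find('width').text)
        height = float(size.find('height').text)
        boxes = []
        for obj in root.findall('object'):
            bndbox = obj.find('bndbox')
            boxes.append([float(bndbox.find('xmin').text) / width,
                          float(bndbox.find('ymin').text) / height,
                          float(bndbox.find('xmax').text) / width,
                          float(bndbox.find('ymax').text) / height,
                          obj.find('name').text])
        data[root.find('filename').text] = boxes
    return data

# hypothetical path; adjust to wherever VOC2007 is unpacked
with open('VOC2007.pkl', 'wb') as f:
    pickle.dump(parse_voc_annotations('VOCdevkit/VOC2007/Annotations'), f)
```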