High loss when fine-tuning a pre-trained detection model #4944
Comments
When fine-tuning, we only load the lower layers in the feature extractor. The higher layers (box predictors) don't have their parameters copied over, and thus will give wrong predictions.
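(A quick way to see this for yourself is to list the variables stored in the model-zoo checkpoint and compare the variable scopes. The sketch below assumes TF 1.x and the SSD scope naming used by the object detection API; the checkpoint path is a placeholder.)

```python
import tensorflow as tf

# Placeholder path to an extracted model-zoo checkpoint (e.g. ssdlite_mobilenet_v2_coco).
CKPT = '/path/to/ssdlite_mobilenet_v2_coco/model.ckpt'

# Print every variable saved in the checkpoint, with its shape.
for name, shape in tf.train.list_variables(CKPT):
    print(name, shape)

# Variables under the feature-extractor scope (e.g. 'FeatureExtractor/MobilenetV2/...')
# are the ones restored for fine-tuning by default; box-predictor variables
# (e.g. 'BoxPredictor/...') are left at their fresh initialization unless all
# detection checkpoint variables are loaded.
```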
Thank you for the quick answer. I find this information very helpful. Is there any simple way to copy over all parameters? Also, isn't this behavior unwanted when people use the checkpoint files to resume a started training process?
+1 Thanks in advance.
The way to copy over all saved variables (not just the lower layers) is to insert
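(The parameter referred to here is presumably `load_all_detection_checkpoint_vars` in the `train_config` block of the pipeline config; treat the exact field name as an assumption. A minimal sketch:)

```
train_config {
  # Placeholder path: point this at the extracted model-zoo checkpoint.
  fine_tune_checkpoint: "/path/to/model.ckpt"
  from_detection_checkpoint: true
  # Restore every variable saved in the checkpoint, not only the
  # feature-extractor layers (box predictors included).
  load_all_detection_checkpoint_vars: true
}
```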
Thank you, that information was very helpful to me. I would consider this issue closed. I think one might want to put that parameter into a comment in the example config file.
lewfish, I too am very grateful for your informative answer!
The global_step was non-zero for me as described above when using TF 1.8, https://github.com/tensorflow/models/tree/3a624c, this config file backend.config.txt, and a
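(One way to confirm whether a restored `global_step` is involved is to read it straight out of the checkpoint; a sketch, assuming TF 1.x and a placeholder checkpoint path:)

```python
import tensorflow as tf

# Placeholder path to the fine-tune checkpoint being restored from.
CKPT = '/path/to/model.ckpt'

# If the checkpoint stores a global_step, restoring it makes fine-tuning
# appear to "start" at that step instead of 0.
if any(name == 'global_step' for name, _ in tf.train.list_variables(CKPT)):
    print('global_step in checkpoint:', tf.train.load_variable(CKPT, 'global_step'))
else:
    print('checkpoint has no global_step variable')
```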
@netanel-s I had the same issue. I had to change a line in model_lib.py to work around it, but I don't think this is a good solution in the long term, as it would undoubtedly have side effects for other scenarios. Let me know if there's a better way to properly handle this sort of situation.
Hi, were you running from object_detection/legacy/train.py or object_detection/model_main.py?
System information
Hi,
I'm trying to refine a model from the model detection zoo (I tried different ones, most recently ssdlite_mobilenet_v2_coco). I am not changing any of the categories; I'm simply testing whether the model's performance can be improved by introducing more/new data. But when I start training, the losses start out very high (around 300) and drop drastically within only a few epochs. This happens even if I use the COCO training data without any alterations, which is surprising, since the model should perform well on that data from the beginning. I tried training without any fine-tuning checkpoint at all and got the same numbers.
It seems to me that the training pipeline as described in the tutorial (https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_locally.md) does not actually load the pre-trained parameters.
What am I doing wrong?
Kind regards
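(For reference, the part of the pipeline config that controls checkpoint loading in the tutorial setup looks roughly like the sketch below; the path is a placeholder. If `fine_tune_checkpoint` is left unset, training starts from randomly initialized weights, which would be consistent with an initial loss around 300.)

```
train_config {
  # Placeholder path to the extracted model-zoo checkpoint.
  fine_tune_checkpoint: "/path/to/ssdlite_mobilenet_v2_coco/model.ckpt"
  from_detection_checkpoint: true
}
```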