High loss when fine-tuning a pre-trained detection model #4944

Closed
bsaendig opened this issue Jul 30, 2018 · 9 comments
@bsaendig

System information

  • What is the top-level directory of the model you are using: research/object_detection
  • Have I written custom code: no
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 1.8.0 (tensorflow-gpu 1.3.0)
  • Bazel version (if compiling from source):
  • CUDA/cuDNN version: 8.0/7.1
  • GPU model and memory: GeForce GT 740
  • Exact command to reproduce: using the object_detection/model_main.py as described in tutorials

Hi,
I'm trying to fine-tune a model from the detection model zoo (I have tried several, most recently ssdlite_mobilenet_v2_coco). I am not changing any of the categories; I am simply testing whether the model's performance can be improved by introducing more/new data. But when I start training, the loss starts out very high (around 300) and drops drastically within only a few epochs. This happens even if I use the COCO training data without any alterations, which is surprising, since the model should perform well from the start on that kind of data. I also tried training without any fine-tune checkpoint at all and got the same numbers.

It seems to me that the training pipeline as described in the tutorial (https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_locally.md) does not actually load the pre-trained parameters.

What am I doing wrong?

Kind regards

@derekjchow
Contributor

When fine-tuning, we only load the lower layers in the feature extractor. The higher layers (box predictors) don't have their parameters copied over, and thus will give wrong predictions.
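For context, one quick way to see which variables a zoo checkpoint actually contains (feature extractor layers vs. box predictor layers) is to list them. A minimal sketch, assuming TF 1.x and a locally extracted zoo checkpoint; the path is a placeholder:

```python
import tensorflow as tf

# Placeholder path to an extracted model zoo checkpoint.
CKPT = "ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt"

# Print every variable stored in the checkpoint together with its shape.
# The box predictor variables are present in the file; they are simply not
# restored by default when fine-tuning.
for name, shape in tf.train.list_variables(CKPT):
    print(name, shape)
```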

@bsaendig
Author

bsaendig commented Jul 31, 2018

Thank you for the quick answer; that information is very helpful. Is there a simple way to copy over all parameters?
I am generating rendered training data and trying to figure out whether it is possible to improve precision with it. I keep all of COCO's object categories, so the trained box predictors should still be viable.

Also, isn't this behavior unwanted when people use the checkpoint files to resume a started training process?
I think this might be related to #2952

@netanel-s

+1
IMO, loading the box predictor parameters should be the default (or at least possible) when the categories are untouched or are a subset of the original category set.

Thanks in advance.

@lewfish

lewfish commented Aug 1, 2018

> When fine-tuning, we only load the lower layers in the feature extractor. The higher layers (box predictors) don't have their parameters copied over, and thus will give wrong predictions.

The way to copy over all saved variables (not just the lower layers) is to insert load_all_detection_checkpoint_vars: true into the training config file (beneath the fine_tune_checkpoint field). However, this also has the undesired (for this use case) effect of copying over the step number. This means that if you train a model for 1000 steps, create the pretrained model zip file, and then use it in a new training job, the new training job will start at step 1000.
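In the pipeline config, that addition would look roughly like this (the checkpoint path is a placeholder and the remaining train_config fields are omitted):

```
train_config {
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  # Restore all detection variables from the checkpoint, including the box
  # predictor, instead of only the feature extractor.
  load_all_detection_checkpoint_vars: true
  # ... remaining training settings ...
}
```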

@bsaendig
Author

bsaendig commented Aug 2, 2018

Thank you, that information was very helpful to me. I would consider this issue closed. I think one might want to mention that parameter in a comment in the example config file.

@bsaendig closed this as completed Aug 2, 2018
@netanel-s

lewfish, I too am very grateful for your informative answer!
I tried your solution, and it did load the remaining variables; however, it did not load global_step. It says:
WARNING:root:Variable [global_step] is not available in checkpoint
and then the global_step count evidently starts from 0.
Is there anything else to be aware of?

@lewfish

lewfish commented Aug 8, 2018

> lewfish, I too am very grateful for your informative answer!
> I tried your solution, and it did load the remaining variables; however, it did not load global_step. It says:
> WARNING:root:Variable [global_step] is not available in checkpoint
> and then the global_step count evidently starts from 0.
> Is there anything else to be aware of?

The global_step was non-zero for me as described above when using TF 1.8, https://github.com/tensorflow/models/tree/3a624c, this config file backend.config.txt, and a model.tar.gz file containing only model.ckpt.data-00000-of-00001, model.ckpt.index and model.ckpt.meta. Other setups might result in the situation you describe, but I'm not sure.

@yycho0108

yycho0108 commented Apr 23, 2019

@netanel-s I had the same issue: my global_step was not restored, with the message WARNING:root:Variable [global_step] is not available in checkpoint, despite specifying load_all_detection_checkpoint_vars: true in pipeline.config. Since the learning rate schedule depends on global_step, this meant it was not possible to properly resume training.

I had to change a line in model_lib.py so that the include_global_step argument was set to True, after which I was able to resume training normally.

[screenshot: resume]
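For reference, the change described here amounts to flipping one argument in the checkpoint-restore call in model_lib.py. A rough sketch, assuming the TF 1.x object_detection code path; the surrounding variables come from the enclosing function and the exact line differs between versions:

```python
# Inside object_detection/model_lib.py, where the variables to restore from the
# fine-tune checkpoint are collected (asg_map and train_config are defined by
# the surrounding model_fn code):
available_var_map = (
    variables_helper.get_variables_available_in_checkpoint(
        asg_map,
        train_config.fine_tune_checkpoint,
        include_global_step=True))  # upstream passes False, which drops global_step
```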

I don't think this is a good solution in the long term, as it would undoubtedly have side-effects for other scenarios. Let me know if there's a better way to properly handle this sort of situation.

@Praveenk8051

> @netanel-s I had the same issue: my global_step was not restored, with the message WARNING:root:Variable [global_step] is not available in checkpoint, despite specifying load_all_detection_checkpoint_vars: true in pipeline.config. Since the learning rate schedule depends on global_step, this meant it was not possible to properly resume training.
>
> I had to change a line in model_lib.py so that the include_global_step argument was set to True, after which I was able to resume training normally.
>
> [screenshot: resume]
>
> I don't think this is a good solution in the long term, as it would undoubtedly have side-effects for other scenarios. Let me know if there's a better way to properly handle this sort of situation.

Hi,

Were you running from object_detection/legacy/train.py or object_detection/model_main.py?
