
Detection precision is worse after fine tune with pre-trained model #3384

@rogercw

Description


System information

  • What is the top-level directory of the model you are using: object_detection
  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, but not much
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu MATE 16.04
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 1.4.1
  • Bazel version (if compiling from source):
  • CUDA/cuDNN version: V8.0.61
  • GPU model and memory: GeForce GTX 1080 Ti / 10.91GiB
  • Exact command to reproduce:

Describe the problem

I downloaded the pre-trained ssd_inception_v2_coco model from the detection model zoo and pointed "fine_tune_checkpoint" in "ssd_inception_v2_coco.config" to it. I then trained the network with the "train.py" script, following the instructions in the tutorial. After 200,000 training steps, I exported the model with "export_inference_graph.py". However, in evaluation the detection precision with the re-trained model is noticeably worse (not just slightly different) than when using the pre-trained model directly. Is this expected behavior, or a bug? If it is not a bug, how can I re-train a model so that it reaches better, or at least similar, detection precision compared to the pre-trained model? Thanks in advance.
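For context, the setup follows the standard Object Detection API workflow described above. The sketch below is only illustrative: the paths are placeholders, and the config excerpt shows just the fields relevant to fine-tuning, with everything else left at the downloaded defaults.

```
# ssd_inception_v2_coco.config (excerpt) -- paths are placeholders
train_config: {
  fine_tune_checkpoint: "/path/to/ssd_inception_v2_coco/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 200000
}
```

```
# training
python object_detection/train.py \
  --logtostderr \
  --pipeline_config_path=/path/to/ssd_inception_v2_coco.config \
  --train_dir=/path/to/train_dir

# export after 200000 steps
python object_detection/export_inference_graph.py \
  --input_type image_tensor \
  --pipeline_config_path=/path/to/ssd_inception_v2_coco.config \
  --trained_checkpoint_prefix=/path/to/train_dir/model.ckpt-200000 \
  --output_directory=/path/to/exported_graph
```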
