
evaluating model with eager_eval_loop #9150

@DanilKonon

Description

I am trying to evaluate my ResNet model with the eager_eval_loop function from model_lib_v2.py.

My TF record contains about 11,000 images, yet when I call this function, it never finishes. As far as I can tell, the problem is at line 790 of model_lib_v2.py: for i, (features, labels) in enumerate(eval_dataset):.

The eval_dataset is infinite; its iterator never exhausts.
Here is the eval part of my config. I tried setting max_evals and sample_1_of_n_examples, but neither helps.

eval_config {
  metrics_set: "coco_detection_metrics"
  include_metrics_per_category: false
  batch_size: 1
  max_evals: 1
  use_moving_averages: false
}

eval_input_reader {
  label_map_path: "./logo_label_map.pbtxt"
  shuffle: true
  sample_1_of_n_examples: 5
  tf_record_input_reader {
    input_path: "./test.record"
  }
}
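To convince myself this is just how enumerate behaves over an endless iterator, I reproduced the pattern in plain Python (no TF involved; infinite_dataset here is a made-up stand-in for a dataset built with repeat()):

```python
from itertools import islice

# Hypothetical stand-in for an infinite tf.data.Dataset: a generator
# that cycles over the records forever, like repeat() with no count.
def infinite_dataset(records):
    while True:
        for r in records:
            yield r

records = ["img_0", "img_1", "img_2"]

# `for i, x in enumerate(infinite_dataset(records)):` would never terminate,
# just like the loop at line 790 of model_lib_v2.py over a repeated dataset.
# Bounding the iterator (analogous to dataset.take(n)) restores termination:
bounded = list(islice(infinite_dataset(records), 7))
print(len(bounded))  # 7 -- the loop stops only because we sliced it
```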

Here is the code I am trying to run in the notebook, following the fine-tuning tutorial.
pipeline_config_path and model_dir are paths.
The detection_model, eval_input, and num_steps variables come from the fine-tuning notebook.

import os
import tensorflow as tf

from object_detection import model_lib, model_lib_v2

MODEL_BUILD_UTIL_MAP = model_lib.MODEL_BUILD_UTIL_MAP

get_configs_from_pipeline_file = MODEL_BUILD_UTIL_MAP[
      'get_configs_from_pipeline_file']

configs = get_configs_from_pipeline_file(
      pipeline_config_path, config_override=None)

eval_config = configs['eval_config']

summary_writer = tf.compat.v2.summary.create_file_writer(
    os.path.join(model_dir, 'eval', 'some_new_eval_2'))

with summary_writer.as_default():
    model_lib_v2.eager_eval_loop(
        detection_model,
        configs,
        eval_input,
        use_tpu=False,
        postprocess_on_cpu=True,
        global_step=num_steps
    )
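One thing I noticed while reading input_reader.proto: the InputReader message has a num_epochs field that defaults to 0, which I understand to mean "repeat the dataset forever". If that is right, a bounded eval reader might look like the following (an untested guess on my part):

eval_input_reader {
  label_map_path: "./logo_label_map.pbtxt"
  shuffle: true
  num_epochs: 1
  sample_1_of_n_examples: 5
  tf_record_input_reader {
    input_path: "./test.record"
  }
}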

So, why doesn't my eval_dataset end?
