The following classes have no ground truth #39879

Closed

TekayaNidham opened this issue May 26, 2020 · 9 comments
Labels
comp:model (Model related issues), stat:awaiting response (Status - Awaiting response from author), TF 1.15 (for issues seen on TF 1.15), type:bug (Bug)

Comments

@TekayaNidham

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):

  • OS Platform and Distribution: Linux Ubuntu 18.04

  • TensorFlow installed from (source or binary): binary

  • TensorFlow version (use command below): 1.15.0

  • Python version: 3.7.4

  • CUDA/cuDNN version: 10.2

  • GPU model and memory: GeForce GTX 1050

Describe the current behavior
Hey guys, I'm trying to train a 3-class object detection model (Faster R-CNN with ResNet-101) using train.py from the legacy folder of the Object Detection API. The losses look very good, but when running eval.py I get a very low mAP, reported for the 3rd class only, with this warning:

object_detection_evaluation.py:1279] The following classes have no ground truth examples: [1 2]

Label map:

item {
  id: 1
  name: 'ooredoo'
  id: 2
  name: 'tt'
  id: 3
  name: 'orange'
}
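
For reference, the label map format read by the Object Detection API normally uses one item block per class; as pasted above, the three ids share a single item. A sketch of the usual layout with the same names (the ids are assumed to match the class_text_to_int mapping in the TFRecord script further down):

item {
  id: 1
  name: 'ooredoo'
}
item {
  id: 2
  name: 'tt'
}
item {
  id: 3
  name: 'orange'
}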

Config file:

# Faster R-CNN with Resnet-101 (v1), configuration for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.

model {
  faster_rcnn {
    num_classes: 3
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }
    feature_extractor {
      type: 'faster_rcnn_resnet101'
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}

train_config: {
  batch_size: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0003
          schedule {
            step: 300
            learning_rate: .00003
          }
          schedule {
            step: 600
            learning_rate: .000003
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "faster_rcnn_resnet101_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "data/train.record"
  }
  label_map_path: "data/comm.pbtxt"
}

eval_config: {
  num_examples: 22
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "data/test.record"
  }
  label_map_path: "data/comm.pbtxt"
  shuffle: false
  num_readers: 1
}
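
For context, eval.py from the legacy folder is usually invoked along these lines (a sketch only; the paths below are placeholders, not the ones actually used here):

# From tensorflow/models/research/
python object_detection/legacy/eval.py \
    --logtostderr \
    --pipeline_config_path=path/to/faster_rcnn_resnet101.config \
    --checkpoint_dir=path/to/train_dir/ \
    --eval_dir=path/to/eval_dir/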

Already checked tensorflow/models#1936 and tensorflow/models#1696.

PS: I'm using labelImg, then converting the XML annotations to CSV and then to TFRecords.

This is the code I'm using for the XML-to-CSV step:

import os
import glob
import pandas as pd
import xml.etree.ElementTree as ET


def xml_to_csv(path):
    """Collects the labelImg (Pascal VOC style) XML annotations under `path` into a DataFrame."""
    xml_list = []
    for xml_file in glob.glob(path + '/*.xml'):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall('object'):
            # member[0] is <name>; member[4] is <bndbox> with xmin, ymin, xmax, ymax
            value = (root.find('filename').text,
                     int(root.find('size')[0].text),
                     int(root.find('size')[1].text),
                     member[0].text,
                     int(member[4][0].text),
                     int(member[4][1].text),
                     int(member[4][2].text),
                     int(member[4][3].text)
                     )
            xml_list.append(value)
    column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
    xml_df = pd.DataFrame(xml_list, columns=column_name)
    return xml_df


def main():
    for directory in ['train', 'test']:
        image_path = os.path.join(os.getcwd(), 'images/{}'.format(directory))
        xml_df = xml_to_csv(image_path)
        xml_df.to_csv('data/{}_labels.csv'.format(directory), index=None)
        print('Successfully converted xml to csv.')


main()
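
A quick sanity check that classes 1 and 2 actually appear in the eval annotations (a minimal sketch, assuming the CSVs are written to data/ as in the script above):

import pandas as pd

# Count how many boxes of each class ended up in the eval CSV.
test_labels = pd.read_csv('data/test_labels.csv')
print(test_labels['class'].value_counts())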
@Saduf2019 Saduf2019 added the TF 1.15 for issues seen on TF 1.15 label May 26, 2020
@Saduf2019
Contributor

@TekayaNidham
Can you please share complete, properly indented code so that I can replicate the issue you are facing, or share a Colab gist so that we can see the error?

@Saduf2019 Saduf2019 added the stat:awaiting response Status - Awaiting response from author label May 26, 2020
@TekayaNidham
Author

Code to generate the TFRecords:


"""
Usage:
  # From tensorflow/models/
  # Create train data:
  python generate_tfrecord.py --csv_input=data/train_labels.csv  --output_path=train.record

  # Create test data:
  python generate_tfrecord.py --csv_input=data/test_labels.csv  --output_path=test.record
"""
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import

import os
import io
import pandas as pd
import tensorflow as tf

from PIL import Image
from object_detection.utils import dataset_util
from collections import namedtuple, OrderedDict

flags = tf.app.flags
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
flags.DEFINE_string('image_dir', '', 'Path to images')
FLAGS = flags.FLAGS


# TO-DO replace this with label map
def class_text_to_int(row_label):
    # These ids must match the ids in data/comm.pbtxt.
    if row_label == 'ooredoo':
        return 1
    elif row_label == 'tt':
        return 2
    elif row_label == 'orange':
        return 3
    else:
        return None


def split(df, group):
    data = namedtuple('data', ['filename', 'object'])
    gb = df.groupby(group)
    return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]


def create_tf_example(group, path):
    with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
        encoded_jpg = fid.read()
    encoded_jpg_io = io.BytesIO(encoded_jpg)
    image = Image.open(encoded_jpg_io)
    width, height = image.size

    filename = group.filename.encode('utf8')
    image_format = b'jpg'
    xmins = []
    xmaxs = []
    ymins = []
    ymaxs = []
    classes_text = []
    classes = []

    for index, row in group.object.iterrows():
        xmins.append(row['xmin'] / width)
        xmaxs.append(row['xmax'] / width)
        ymins.append(row['ymin'] / height)
        ymaxs.append(row['ymax'] / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(class_text_to_int(row['class']))

    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename),
        'image/source_id': dataset_util.bytes_feature(filename),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
    return tf_example


def main(_):
    writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
    path = os.path.join(FLAGS.image_dir)
    examples = pd.read_csv(FLAGS.csv_input)
    grouped = split(examples, 'filename')
    for group in grouped:
        tf_example = create_tf_example(group, path)
        writer.write(tf_example.SerializeToString())

    writer.close()
    output_path = os.path.join(os.getcwd(), FLAGS.output_path)
    print('Successfully created the TFRecords: {}'.format(output_path))


if __name__ == '__main__':
    tf.app.run()
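
To check which class labels actually ended up in the eval record, something like the following can be run against data/test.record (a minimal sketch using the TF 1.x record iterator; the path comes from the config above):

import collections

import tensorflow as tf

# Tally the ground-truth class ids stored in the eval TFRecord.
counts = collections.Counter()
for record in tf.python_io.tf_record_iterator('data/test.record'):
    example = tf.train.Example()
    example.ParseFromString(record)
    counts.update(example.features.feature['image/object/class/label'].int64_list.value)

# If ids 1 and 2 never show up here, the "no ground truth" warning is expected.
print(counts)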

@TekayaNidham
Author

This is what I get when running eval.py:

INFO:tensorflow:Detection visualizations written to summary with tag image-0.
I0526 18:22:36.651726 140527797831488 eval_util.py:237] Detection visualizations written to summary with tag image-0.
W0526 18:22:36.653803 140527797831488 object_detection_evaluation.py:335] image b'1.jpg' does not have groundtruth difficult flag specified
INFO:tensorflow:Creating detection visualizations.
I0526 18:22:41.508312 140527797831488 eval_util.py:172] Creating detection visualizations.
INFO:tensorflow:Detection visualizations written to summary with tag image-1.
I0526 18:22:41.540313 140527797831488 eval_util.py:237] Detection visualizations written to summary with tag image-1.
INFO:tensorflow:Creating detection visualizations.
I0526 18:22:44.626757 140527797831488 eval_util.py:172] Creating detection visualizations.
INFO:tensorflow:Detection visualizations written to summary with tag image-2.
I0526 18:22:44.701496 140527797831488 eval_util.py:237] Detection visualizations written to summary with tag image-2.
INFO:tensorflow:Creating detection visualizations.
I0526 18:22:47.727808 140527797831488 eval_util.py:172] Creating detection visualizations.
INFO:tensorflow:Detection visualizations written to summary with tag image-3.
I0526 18:22:47.795775 140527797831488 eval_util.py:237] Detection visualizations written to summary with tag image-3.
INFO:tensorflow:Creating detection visualizations.
I0526 18:22:50.807934 140527797831488 eval_util.py:172] Creating detection visualizations.
INFO:tensorflow:Detection visualizations written to summary with tag image-4.
I0526 18:22:50.882759 140527797831488 eval_util.py:237] Detection visualizations written to summary with tag image-4.
INFO:tensorflow:Creating detection visualizations.
I0526 18:22:53.847764 140527797831488 eval_util.py:172] Creating detection visualizations.
INFO:tensorflow:Detection visualizations written to summary with tag image-5.
I0526 18:22:53.957002 140527797831488 eval_util.py:237] Detection visualizations written to summary with tag image-5.
INFO:tensorflow:Creating detection visualizations.
I0526 18:22:57.017548 140527797831488 eval_util.py:172] Creating detection visualizations.
INFO:tensorflow:Detection visualizations written to summary with tag image-6.
I0526 18:22:57.141085 140527797831488 eval_util.py:237] Detection visualizations written to summary with tag image-6.
INFO:tensorflow:Creating detection visualizations.
I0526 18:23:00.066134 140527797831488 eval_util.py:172] Creating detection visualizations.
INFO:tensorflow:Detection visualizations written to summary with tag image-7.
I0526 18:23:00.121052 140527797831488 eval_util.py:237] Detection visualizations written to summary with tag image-7.
INFO:tensorflow:Creating detection visualizations.
I0526 18:23:03.224129 140527797831488 eval_util.py:172] Creating detection visualizations.
INFO:tensorflow:Detection visualizations written to summary with tag image-8.
I0526 18:23:03.330128 140527797831488 eval_util.py:237] Detection visualizations written to summary with tag image-8.
INFO:tensorflow:Creating detection visualizations.
I0526 18:23:07.910117 140527797831488 eval_util.py:172] Creating detection visualizations.
INFO:tensorflow:Detection visualizations written to summary with tag image-9.
I0526 18:23:08.325047 140527797831488 eval_util.py:237] Detection visualizations written to summary with tag image-9.
INFO:tensorflow:Running eval batches done.
I0526 18:23:53.889015 140527797831488 eval_util.py:370] Running eval batches done.
INFO:tensorflow:# success: 22
I0526 18:23:53.889818 140527797831488 eval_util.py:375] # success: 22
INFO:tensorflow:# skipped: 0
I0526 18:23:53.890330 140527797831488 eval_util.py:376] # skipped: 0
W0526 18:23:53.890964 140527797831488 object_detection_evaluation.py:1279] The following classes have no ground truth examples: [1 2]
I0526 18:23:53.913546 140527797831488 object_detection_evaluation.py:1311] average_precision: 0.285256
/home/milos/Documents/research/object_detection/object_detection/utils/metrics.py:145: RuntimeWarning: invalid value encountered in true_divide
num_images_correctly_detected_per_class / num_gt_imgs_per_class)
INFO:tensorflow:Writing metrics to tf summary.
I0526 18:23:54.238987 140527797831488 eval_util.py:84] Writing metrics to tf summary.
INFO:tensorflow:Losses/Loss/BoxClassifierLoss/classification_loss: 0.022590
I0526 18:23:54.239214 140527797831488 eval_util.py:91] Losses/Loss/BoxClassifierLoss/classification_loss: 0.022590
INFO:tensorflow:Losses/Loss/BoxClassifierLoss/localization_loss: 0.065883
I0526 18:23:54.239452 140527797831488 eval_util.py:91] Losses/Loss/BoxClassifierLoss/localization_loss: 0.065883
INFO:tensorflow:Losses/Loss/RPNLoss/localization_loss: 0.086048
I0526 18:23:54.239606 140527797831488 eval_util.py:91] Losses/Loss/RPNLoss/localization_loss: 0.086048
INFO:tensorflow:Losses/Loss/RPNLoss/objectness_loss: 0.201416
I0526 18:23:54.239904 140527797831488 eval_util.py:91] Losses/Loss/RPNLoss/objectness_loss: 0.201416
INFO:tensorflow:PascalBoxes_PerformanceByCategory/AP@0.5IOU/orange: 0.285256
I0526 18:23:54.240060 140527797831488 eval_util.py:91] PascalBoxes_PerformanceByCategory/AP@0.5IOU/orange: 0.285256
INFO:tensorflow:PascalBoxes_Precision/mAP@0.5IOU: 0.285256
I0526 18:23:54.240170 140527797831488 eval_util.py:91] PascalBoxes_Precision/mAP@0.5IOU: 0.285256

@TekayaNidham
Author

@Saduf2019 All the issues posted with this error say it's data related. Here's how I prepared my dataset; I can't figure out what I've done wrong. I did it this way before and it worked fine.

@Saduf2019
Contributor

@TekayaNidham
The gist you shared does not replicate the issue; please share it again after you have run the code and hit the error.

@TekayaNidham
Author

@Saduf2019 While putting the gist together I ran into another error, one I had previously solved by copying the object_detection folder into my workspace, but that didn't seem to work in the gist. Could you please check it?

@Saduf2019 Saduf2019 added comp:model Model related issues and removed stat:awaiting response Status - Awaiting response from author labels May 28, 2020
@Saduf2019 Saduf2019 assigned gowthamkpr and unassigned Saduf2019 May 28, 2020
@gowthamkpr

@TekayaNidham Please post this issue in tensorflow/models or on Stack Overflow, as this is primarily a models issue. Thanks!

@gowthamkpr gowthamkpr added the stat:awaiting response Status - Awaiting response from author label May 31, 2020