the metrics are too small #18

Open
Crispinli opened this issue Jun 15, 2019 · 35 comments

Comments

@Crispinli

Crispinli commented Jun 15, 2019

When I train the TBPP model with my own data, the metrics are too small, such as the following:

loss: 17.0426 - conf_loss: 0.0283 - loc_loss: 16.7599 - precision: 4.2830e-04 - recall: 0.0217 - accuracy: 3.8194e-04 - fmeasure: 7.5787e-04 - num_pos: 5828.8542 - num_neg: 3663963.1458

How can I fix it?

@Crispinli
Author

Please help me.

@Crispinli
Author

I trained the model with two GPUs.

@mvoelk
Owner

mvoelk commented Jun 16, 2019

What does your data look like and how large is your dataset?

@Crispinli
Author

Crispinli commented Jun 17, 2019

What does your data look like and how large is your dataset?

[sample image from the dataset]

The image above is a sample from my dataset. When I train the TBPP model, the metrics get smaller and smaller.

@Crispinli
Author

Crispinli commented Jun 17, 2019

What does your data look like and how large is your dataset?

My training set has 8000+ images.

@Crispinli
Author

I modified TBPP_train.ipynb; the following is the modified code:

#!/usr/bin/env python
# coding: utf-8

import numpy as np
import keras
import time
import os
import pickle
import os.path as osp
from keras.callbacks import ModelCheckpoint

from tbpp_model import TBPP_SSD, TBPP_DenseNet
from tbpp_utils import PriorUtil
from ssd_data import InputGenerator
from tbpp_training import TBPPFocalLoss
from utils.model import load_weights
from utils.training import Logger
from keras.utils import multi_gpu_model

import tensorflow as tf
from keras import backend as K


os.environ['CUDA_VISIBLE_DEVICES'] = '1'
# config = tf.ConfigProto()
# config.gpu_options.per_process_gpu_memory_fraction = 1 # adjust this fraction according to image_size and batch_size
# session = tf.Session(config=config)
# K.set_session(session)


def train():
    '''
    train the tbpp_ssd model
    :return:
    '''
    model_backbone = 'SSD'  # DenseNet

    # get dataset
    with open('../data/gt_train_util_fangben_8782.pkl', 'rb') as f:
        gt_util = pickle.load(f, encoding='utf-8')
    # split dataset
    gt_util_train, gt_util_val = gt_util.split(split=0.9)
    if model_backbone == 'SSD':
        # tbpp + ssd
        model = TBPP_SSD(input_shape=(1024, 1024, 3), softmax=False)
        weights_path = '../saved_model/ssd512_coco_weights_fixed.hdf5'
        freeze = ['conv1_1', 'conv1_2',
                  'conv2_1', 'conv2_2',
                  'conv3_1', 'conv3_2', 'conv3_3']
        batch_size = 12
        experiment = 'tbpp_ssd_1024_fangben'
    else:
        model = TBPP_DenseNet(input_shape=(1024, 1024, 3), softmax=False)
        weights_path = None
        freeze = []
        batch_size = 8
        experiment = 'tbpp_densenet_1024_fangben'
    # utils of prior boxes
    prior_util = PriorUtil(model)
    # load the pre-trained weights
    if weights_path is not None:
        load_weights(model, weights_path)
    # set epoch
    epochs = 100
    initial_epoch = 0
    # data generator
    gen_train = InputGenerator(gt_util_train, prior_util, batch_size, model.image_size)
    gen_val = InputGenerator(gt_util_val, prior_util, batch_size, model.image_size)
    # frozen layers
    for layer in model.layers:
        layer.trainable = layer.name not in freeze
    # checkpoint directory
    checkdir = '../model/' + time.strftime('%Y%m%d%H%M') + '_' + experiment
    if not os.path.exists(checkdir):
        os.makedirs(checkdir)
    # optimizer
    optim = keras.optimizers.Adam(lr=1e-3, beta_1=0.9, beta_2=0.999, epsilon=0.001, decay=0.0)
    # weight decay L2
    regularizer = keras.regularizers.l2(5e-4)
    for l in model.layers:
        if l.__class__.__name__.startswith('Conv'):
            l.kernel_regularizer = regularizer
    # loss function
    loss = TBPPFocalLoss()
    # compile the model
    # model = multi_gpu_model(model, gpus=2)
    model.compile(optimizer=optim, loss=loss.compute, metrics=loss.metrics)
    model.summary()
    # training iterations
    history = model.fit_generator(
        gen_train.generate(),
        steps_per_epoch=int(gen_train.num_batches),
        epochs=epochs,
        verbose=1,
        callbacks=[
            ModelCheckpoint(osp.join(checkdir, 'weights_' + experiment + '_{epoch:04d}_{val_loss:.4f}.h5'), verbose=1),
            Logger(checkdir)],
        validation_data=gen_val.generate(),
        validation_steps=gen_val.num_batches,
        class_weight=None,
        max_queue_size=1,
        workers=1,
        initial_epoch=initial_epoch)

    from utils.model import calc_memory_usage, count_parameters

    count_parameters(model)
    calc_memory_usage(model)

    # frequency of class instances in the local ground truth, used for weighting the focal loss
    s = np.zeros(gt_util.num_classes)
    for i in range(1000):  # range(gt_util.num_samples):
        egt = prior_util.encode(gt_util.data[i])
        s += np.sum(egt[:, -gt_util.num_classes:], axis=0)
    sn = np.asarray(np.sum(s)) / s
    print(np.array(sn, dtype=np.int32))
    print(sn / np.sum(sn))


if __name__ == "__main__":
    from dataset_generator import GTUtility
    train()

@kapitsa2811

Hi @Crispinli, can you please upload your code to git and share it? I am trying to reproduce your issue but am getting a lot of errors with the current implementation.

@Crispinli
Author

Crispinli commented Jun 17, 2019

Hi @Crispinli, can you please upload your code to git and share it? I am trying to reproduce your issue but am getting a lot of errors with the current implementation.

Hello @kapitsa2811, my dataset cannot be uploaded for some reasons, but I can tell you what I modified. I used the code below to generate my training set and trained the model with the code posted above, and then I got this issue.

The data_generator.py:

import os.path as osp
import numpy as np
import os
from thirdparty.get_image_size import get_image_size
from ssd_data import BaseGTUtility


class GTUtility(BaseGTUtility):
    """
    Utility for ICDAR2015 (International Conference on Document Analysis and Recognition) Focused Scene Text dataset.
    # Arguments
        data_path: Path to ground truth and image data.
        is_train: Boolean for using the training or the test set.
    """

    def __init__(self, data_path, is_train=True):
        super(GTUtility, self).__init__()
        self.data_path = data_path
        if is_train:
            gt_path = osp.join(self.data_path, 'txt')
            image_path = osp.join(self.data_path, 'image')
        else:
            gt_path = osp.join(self.data_path, 'txt')
            image_path = osp.join(self.data_path, 'image')
        self.gt_path = gt_path
        self.image_path = image_path
        self.classes = ['Background', 'Text']
        self.image_names = []
        self.data = []
        self.text = []
        names = os.listdir(self.image_path)
        for image_name in names:
            img_width, img_height = get_image_size(osp.join(image_path, image_name))
            boxes = []
            text = []
            gt_file_name = osp.splitext(image_name)[0] + '.txt'
            with open(osp.join(gt_path, gt_file_name), 'r', encoding='utf-8') as f:
                for line in f:
                    line_split = line.strip().split(',')
                    box = [float(_) for _ in line_split[:8]]
                    box[0] /= img_width
                    box[1] /= img_height
                    box[2] /= img_width
                    box[3] /= img_height
                    box[4] /= img_width
                    box[5] /= img_height
                    box[6] /= img_width
                    box[7] /= img_height
                    box = box + [1]
                    boxes.append(box)
                    text.append(line_split[9])
            boxes = np.asarray(boxes)
            self.image_names.append(image_name)
            self.data.append(boxes)
            self.text.append(text)
        self.init()


if __name__ == '__main__':
    import pickle

    is_train = False
    data_path = '../data/fangben/train' if is_train else '../data/fangben/test'
    file_name = '../data/gt_train_util_fangben_8782.pkl' if is_train else '../data/gt_test_util_fangben_900.pkl'

    gt_util = GTUtility(data_path, is_train=is_train)
    print('dataset numbers:', len(gt_util.image_names))

    print('save to %s...' % file_name)
    pickle.dump(gt_util, open(file_name, 'wb'))
    print('done!')

@Crispinli
Author

Crispinli commented Jun 17, 2019

Hi @Crispinli, can you please upload your code to git and share it? I am trying to reproduce your issue but am getting a lot of errors with the current implementation.

By the way, my dataset is like 'icdar15', and I didn't modify the other files.
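For illustration, a minimal sketch of a ground-truth line that the parser in data_generator.py above would accept; the concrete values and the meaning of the skipped field at index 8 are assumptions, not something confirmed in this thread:

# Hypothetical example line for the parser above: at least 10 comma-separated
# fields -- 8 corner coordinates, one extra field the code ignores, then the
# transcription at index 9. The values here are made up.
line = "377,117,463,117,465,130,378,130,0,sample"
fields = line.strip().split(',')
coords = [float(v) for v in fields[:8]]  # pixel coordinates of the 4 corners
label = fields[8]                        # skipped by the GTUtility above
text = fields[9]                         # transcription, as read by GTUtility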

@mvoelk
Owner

mvoelk commented Jun 18, 2019

Try the following

regularizer = keras.regularizers.l2(5e-4)

loss = TBPPFocalLoss(lambda_conf=1000.0, lambda_offsets=1.0)

and maybe you could give feedback if you find better values for the lambdas.

I will also change that in the notebook.
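For reference, a minimal sketch (not from the notebook itself) of where this plugs into the training script posted earlier in the thread, reusing its variable names:

# Sketch only -- `model` and `optim` come from the training script above.
loss = TBPPFocalLoss(lambda_conf=1000.0, lambda_offsets=1.0)
model.compile(optimizer=optim, loss=loss.compute, metrics=loss.metrics)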

@Crispinli
Author

OK, I will try these lambdas and give you feedback. Thank you very much.

@Crispinli
Author

Try the following

regularizer = keras.regularizers.l2(5e-4)
loss = TBPPFocalLoss(lambda_conf=1000.0, lambda_offsets=1.0)

and maybe you could give feedback if you find better values for the lambdas.

I will also change that in the notebook.

With your suggested lambdas, the metrics are always 0. I can't solve it.

@mvoelk
Owner

mvoelk commented Jun 19, 2019

I probably trained the model with lambda_conf=100.0 and later changed the value to 10.0 based on some intuition.

Yesterday I tried to train a TBPP-DenseNet model with 10.0 and got a low f-measure. At the moment I am training a model with 10000.0, which should give a higher recall compared to the published one.

In general, it seems that the focal loss demands higher values.

@mvoelk
Owner

mvoelk commented Jun 19, 2019

@Crispinli I would visualize some samples with the plotting methods in GTUtility to see whether they make sense or not. What have you changed in tbpp_model.py?

I would also perform the experiments with a lower input size and only train a final version with 1024x1024. Training with 512x512 is four times faster. See also #10...
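As a rough sanity check (a sketch using matplotlib rather than the repo's own plotting helpers; it assumes gt_util is the GTUtility instance loaded from the pickle), the stored boxes can be drawn on top of their image like this:

# Sketch: draw the normalized quadrilaterals stored in gt_util.data on the image.
import os.path as osp
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from matplotlib.patches import Polygon

idx = 0  # index of the sample to inspect
img = mpimg.imread(osp.join(gt_util.image_path, gt_util.image_names[idx]))
h, w = img.shape[:2]
plt.imshow(img)
ax = plt.gca()
for box in gt_util.data[idx]:
    pts = box[:8].reshape(4, 2) * [w, h]  # undo the relative normalization
    ax.add_patch(Polygon(pts, fill=False, edgecolor='r'))
plt.show()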

@Crispinli
Author

@Crispinli I would visualize some samples with the plotting methods in GTUtility to see whether they make sense or not. What have you changed in tbpp_model.py?

I would also perform the experiments with a lower input size and only train a final version with 1024x1024. Training with 512x512 is four times faster. See also #10...

I didn't change tbpp_model.py. Besides, I train the TBPP model with 1024x1024 images and the backbone is SSD512.

@par93vin

Hi, when I want to train the TBPP model with my own data I get this error: "missing layer max_pooling9", and the metrics are also too small. Do you have any idea about this problem?

@mvoelk
Owner

mvoelk commented Jul 20, 2019

"missing layer max_pooling9" should be no problem since it has no parameters... In which context?

#2?

@par93vin

par93vin commented Jul 20, 2019

I used the TBPP (TextBoxes++ + DenseNet) model with the weights you provided for text detection on Persian text images. It detects text perfectly, except that it ignores dots. I just want to fine-tune this model with my own data, which is generated in the SynthText format, using your weights to initialize the model.
The problem is that at the first steps precision, recall and the other metrics are zero!
Thank you in advance.

@mvoelk
Owner

mvoelk commented Jul 22, 2019

@par93vin By context I meant some piece of code...

@maozezhong

@Crispinli Hi, I have the same problem. Have you solved this issue?

@Crispinli
Author

@Crispinli Hi, I have the same problem. Have you solved this issue?

Sorry, no...

@mvoelk
Owner

mvoelk commented Jul 28, 2019

@Crispinli Did you try an input of 512x512? I never trained with 1024x1024...
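For reference, in the training script posted above that would only require changing the input shape; a sketch, keeping everything else the same:

# Sketch: 512x512 input instead of 1024x1024; the rest of the script stays unchanged.
model = TBPP_SSD(input_shape=(512, 512, 3), softmax=False)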

@Crispinli
Author

@Crispinli Did you try an input of 512x512? I never trained with 1024x1024...

Yes, but nothing changed.

@maozezhong

With the pretrained model provided by @mvoelk, I got high recall but very, very low precision...
like precision=0.0001, recall=0.98+. And using this trained model, I got many boxes in one image, which doesn't make sense...

@mvoelk
Owner

mvoelk commented Aug 1, 2019

@maozezhong prior_util.decode(..., confidence_threshold=0.35)?
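A minimal sketch of what that would look like, assuming preds is the raw network output for a single image (e.g. preds = model.predict(images)[0]):

# Sketch: raise the confidence threshold at decode time to suppress spurious boxes.
results = prior_util.decode(preds, confidence_threshold=0.35)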

@maozezhong

@mvoelk Yes, I mean during training, I got a situation like precision=0.0001, recall=0.98+

And by the way, to achieve the performance below:
trained and tested on subsets of SynthText
threshold 0.35
precision 0.984
recall 0.890
f-measure 0.934

  1. how many epochs did you train?
  2. how much data did you use?
  3. what was lambda_conf when you trained?

@mvoelk
Owner

mvoelk commented Aug 1, 2019

@maozezhong See code and log provided with the weights.

@maozezhong

@mvoelk Thanks. I have some other questions.

  1. What does model.scale mean in ssd_detectors/tbpp_model.py line 110?
  2. Why does box_shift need to be multiplied by 0.5 in ssd_detectors/ssd_utils.py line 219?

@mvoelk
Owner

mvoelk commented Aug 5, 2019

@maozezhong

  1. does not conform with the paper. I found that, due to the large aspect ratios, smaller prior boxes fit the text instances and the receptive fields better.
  2. is only a question of definition. I changed this to avoid confusion. 58e7cdc

@maozezhong

@mvoelk thanks!
By the way, in your code the anchor density is 3, right? Due to averaging, why not set 0.25 -> 0.33 in model.shifts = [[(0.0, -0.25)] * 6 + [(0.0, 0.25)] * 6] * num_maps?

@mvoelk
Owner

mvoelk commented Aug 6, 2019

@maozezhong I'm not completely sure what you mean by anchor density... In this case, there are two sets of prior boxes per location, each with 6 different aspect ratios; one set is shifted up and the other down.
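To make that concrete, here is a small sketch of how the shifts expression quoted above expands (num_maps is just a placeholder value here):

# Sketch: two sets of 6 priors per location, one shifted up and one shifted down.
num_maps = 6  # placeholder; the actual number of feature maps depends on the model
shifts_per_map = [(0.0, -0.25)] * 6 + [(0.0, 0.25)] * 6
shifts = [shifts_per_map] * num_maps
print(len(shifts_per_map))  # 12 shift entries per location (2 sets x 6 aspect ratios)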

@maozezhong

@mvoelk OK, thanks. By anchor density I mean how many sets of prior boxes there are per location. In your case it's 2; I was wrong before.

@maozezhong

@mvoelk What is equation (4) in ssd_detectors/ssd_utils.py line 299? Any reference paper? Thanks.

@mvoelk
Owner

mvoelk commented Aug 9, 2019

@maozezhong SSD paper?!

@maozezhong

@mvoelk my bad.. lol
