
Training with new database #12

Closed
DonghyunK opened this issue Feb 10, 2017 · 7 comments

Comments

DonghyunK commented Feb 10, 2017

Hi,

I want to train the pre-trained model on other datasets.

I just want to train a model with the script below.

python issegm/voc.py --gpus 0,1,2,3 --split train --data-root ${New_database} --output output --model ${New_database}_rna-a1_cls19 --batch-images 16 --crop-size 500 --origin-size 2048 --scale-rate-range 0.7,1.3 --weights models/ilsvrc-cls_rna-a1_cls1000_ep-0001.params --lr-type fixed --base-lr 0.0016 --to-epoch 140 --kvstore local --prefetch-threads 4 --prefetcher thread --backward-do-mirror

Which parts of the code do I have to modify? It is not easy to understand the whole codebase.

Also, do I have to use the same crop and origin sizes (--crop-size 500 --origin-size 2048) in order to use the pretrained weights?

Could you please explain this for me?

Thanks.

itijyou (Owner) commented Feb 10, 2017

Thanks. I will make this part clearer, with examples.

python issegm/voc.py --gpus 0,1,2,3 --split train --data-root ${New_database} --output output --model ${New_database}_rna-a1_cls${Number_of_classes} --batch-images 16 --crop-size 500 --origin-size 2048 --scale-rate-range 0.7,1.3 --weights models/ilsvrc-cls_rna-a1_cls1000_ep-0001.params --lr-type fixed --base-lr 0.0016 --to-epoch 140 --kvstore local --prefetch-threads 4 --prefetcher thread --backward-do-mirror

Changing --crop-size and/or --origin-size as you like is supposed to be OK.
Besides,

  1. prepare split files and save them into issegm/data/${New_database} (a hypothetical example is sketched after this list);
  2. specify --cache-images 0 if your new data are too large to hold in host memory.
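
A minimal sketch of preparing a split file, assuming it simply lists one image/label path pair per line; the exact file name and delimiter that issegm/voc.py expects are assumptions here, so adjust to whatever the existing split files under issegm/data/ use:

    # Hypothetical helper (not from the repo): writes a split file with one
    # "relative_image_path<TAB>relative_label_path" pair per line.
    import os

    def write_split(data_root, image_dir, label_dir, out_path):
        names = sorted(os.listdir(os.path.join(data_root, image_dir)))
        with open(out_path, 'w') as f:
            for name in names:
                stem = os.path.splitext(name)[0]
                f.write('{}\t{}\n'.format(os.path.join(image_dir, name),
                                          os.path.join(label_dir, stem + '.png')))

    # Example call (assuming these directories exist under your data root):
    # write_split('/path/to/New_database', 'images/train', 'labels/train',
    #             'issegm/data/New_database/train.lst')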

DonghyunK (Author) commented

@itijyou Thank you for the reply.

May I ask a few more questions?

1. Dataset definition

There is a part like this around line 250 in voc.py:
    if dataset == 'ade20k':
        num_classes = model_specs.get('classes', 150)
        label_2_id = np.arange(-1, 150)
        label_2_id[0] = 255
        id_2_label = np.arange(1, 256+1)
        id_2_label[255] = 0
        valid_labels = range(1, 150+1)
        #
        if args.split == 'test':
            cmap_path = None
        # max_shape = np.array((2100, 2100))
        if model_specs.get('balanced', False) and args.split == 'trainval':
            meta['image_classes']['trainval'] = meta['image_classes']['train'] + meta['image_classes']['val']

I think I need to define something here for a new dataset, but I don't know exactly what I have to do.

2. Label format

The label format differs between datasets, so which format is expected? For example, w x h with intensity = label, or w x h x c with intensity = [label, label, label]?

Thank you!

itijyou (Owner) commented Feb 10, 2017

@DonghyunK

  1. Yes. I thought there was a default setting in this function (but apparently not).
     In most cases, adding something like this should suffice:

        elif dataset == 'pascal-context':
            num_classes = model_specs.get('classes', 60)
            valid_labels = range(num_classes)
            #
            max_shape = np.array((500, 500))

  2. w x h with intensity = label (a minimal sketch of writing such a label map follows below).
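
A minimal sketch of that label format, assuming labels are stored as single-channel images whose pixel intensity is the class id (the 255 "ignore" value and file name here are illustrative assumptions):

    # Hypothetical example: build and save a w x h label map where each
    # pixel's intensity is the class id.
    import numpy as np
    from PIL import Image

    label = np.full((500, 500), 255, dtype=np.uint8)  # assumed "ignore" value
    label[100:200, 100:200] = 3                        # pixels of class 3

    # Mode 'L' keeps a single 8-bit channel, so intensity == label id.
    Image.fromarray(label, mode='L').save('example_label.png')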

DonghyunK (Author) commented

@itijyou

Thank you, I can train it now.

Could you please tell me how I can check the accuracy on a validation set after each epoch?

MXNet is totally new to me, so I am sorry for asking so many questions.

Thanks.

itijyou (Owner) commented Feb 10, 2017

Specify the `eval_data` parameter when calling `mod.fit`.
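
A minimal, self-contained sketch of what that looks like with the standard MXNet Module API (toy symbol and data, not the repo's actual call site):

    import mxnet as mx
    import numpy as np

    # Toy network and data just to keep the sketch runnable; in voc.py the real
    # symbol and iterators are built by the training script.
    data = mx.sym.Variable('data')
    net = mx.sym.FullyConnected(data, num_hidden=2)
    net = mx.sym.SoftmaxOutput(net, name='softmax')

    x_train = np.random.rand(100, 8).astype(np.float32)
    y_train = np.random.randint(0, 2, 100)
    x_val = np.random.rand(20, 8).astype(np.float32)
    y_val = np.random.randint(0, 2, 20)
    train_iter = mx.io.NDArrayIter(x_train, y_train, batch_size=10)
    val_iter = mx.io.NDArrayIter(x_val, y_val, batch_size=10)

    mod = mx.mod.Module(symbol=net, context=[mx.cpu()])
    mod.fit(train_iter,
            eval_data=val_iter,        # validation metric reported each epoch
            eval_metric='acc',
            optimizer='sgd',
            optimizer_params={'learning_rate': 0.01},
            num_epoch=5)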

For MXNet related questions, it'd be better to ask at the MXNet project.

DonghyunK (Author) commented

@itijyou

Thank you very much for the answers.

Since this is resolved, I am closing the issue.

czzerone commented

Hello, I followed the command above to train the model on the PASCAL VOC dataset, but when I test the model after training, the prediction is all black. Do you know how to deal with it?
Thanks.
