
Very low testing mIoU #4

Closed
Fansiee opened this issue Feb 26, 2017 · 10 comments


Fansiee commented Feb 26, 2017

I trained the model on VOC2012 segmentation for 250 epochs with a batch size of 20.
The training accuracy is ~0.985 (that should be the pixel accuracy, right?),
but the testing mIoU is just 0.560625, which is far below the result in the paper.
What result do you get?
Maybe the hyperparameters are not optimal?

[ 0.9077976 0.75288631 0.47242709 0.58043887 0.50712961 0.47588975
  0.72664924 0.68814923 0.66110743 0.20972129 0.53752346 0.31540707
  0.5731418 0.50105971 0.64968748 0.76978092 0.43357291 0.5765887
  0.32090376 0.61638412 0.49687383]
meanIOU: 0.560625
pixel acc: 0.898940
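For reference, per-class IoU, mean IoU, and pixel accuracy like the numbers above can be computed from a confusion matrix; a minimal NumPy sketch (the function name is mine, not the repo's own evaluation code):

```python
import numpy as np

def segmentation_scores(conf):
    """Per-class IoU, mean IoU and pixel accuracy from a
    (num_classes x num_classes) confusion matrix
    (rows = ground truth, columns = prediction)."""
    conf = np.asarray(conf, dtype=np.float64)
    tp = np.diag(conf)                 # correctly labelled pixels per class
    fp = conf.sum(axis=0) - tp         # predicted as the class, wrongly
    fn = conf.sum(axis=1) - tp         # pixels of the class that were missed
    iou = tp / (tp + fp + fn)          # intersection over union per class
    return iou, iou.mean(), tp.sum() / conf.sum()

# Toy 3-class example
iou, miou, acc = segmentation_scores([[50, 2, 3],
                                      [4, 40, 1],
                                      [2, 2, 30]])
```

On VOC2012 the matrix is 21x21 (20 object classes plus background), which gives a 21-entry IoU vector like the one shown above.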
@aurora95 (Owner)

Which model did you use? And can you list your hyperparameters? The hyperparameters I left in the code were tuned for ResNet.
Also, how many training images are in your dataset? The dataset used in those papers includes about 10k training images.


Fansiee commented Feb 27, 2017

My hyperparameters are the same as yours, except that the batch size is 20.
The model is ResNet-50, the same as in your code;
I only changed the file path and the batch size.
I have 1,400+ training images; I didn't use SegmentationClassAUG.
I'm training on the SegAug labels right now and will update my results later.
What are your results on the original Seg labels and the SegAug labels?


Fansiee commented Feb 27, 2017

Update:
I ran the model on the SegmentationAug data for 25 epochs with a batch size of 32, keeping the other hyperparameters unchanged.
The testing mIoU is 0.661076.

That seems acceptable, and maybe I can boost it by training for more epochs.

Could you please tell me your results?
I'd like to compare with yours.
Thanks.

@aurora95 (Owner)

Hmm... Actually, I got 65.75 mIoU with 25 epochs and batch size 32. Your result is slightly better than mine...


Fansiee commented Feb 28, 2017

I ran 25 more epochs starting from the 0.661076 model, but the result is almost the same.
BTW, thank you for your Keras FCN code; it's been very helpful for me!

@aurora95 (Owner)

@Fansiee Glad to help :)

ahundt (Collaborator) commented Mar 27, 2017

@Fansiee Aren't the classes a bit different in the augmented Pascal VOC?

I assume you're discussing these:

    # original PASCAL VOC 2012
    # wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar # 2 GB
    # berkeley augmented Pascal VOC
    # wget http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz # 1.3 GB

Could you perhaps create a pull request with your code changes to run what you used?

@aurora95 (Owner)

@ahundt I think your link is correct and the classes should be the same. Just add all the augmented data to your training set, change the hyperparameters, and you should be able to reproduce the result.
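"Adding all augmented data to the training set" amounts to merging the Berkeley SBD id lists into the VOC train split while keeping the VOC val split held out. A sketch of that merge (file paths and both helper names are my assumptions, not the repo's own list-building code):

```python
import os

def read_ids(path):
    """One image id per line in a split file."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def build_train_aug(voc_train, voc_val, sbd_train, sbd_val):
    """Union of the VOC train ids and all Berkeley SBD ids, minus the
    VOC val ids so the evaluation split stays unseen during training."""
    return sorted((set(voc_train) | set(sbd_train) | set(sbd_val)) - set(voc_val))

# Hypothetical layout after extracting the two tarballs above, e.g.:
#   voc = 'VOCdevkit/VOC2012/ImageSets/Segmentation'
#   sbd = 'benchmark_RELEASE/dataset'
#   train_aug = build_train_aug(read_ids(os.path.join(voc, 'train.txt')),
#                               read_ids(os.path.join(voc, 'val.txt')),
#                               read_ids(os.path.join(sbd, 'train.txt')),
#                               read_ids(os.path.join(sbd, 'val.txt')))
```

With the augmented data merged in, the training list grows to roughly the 10k images mentioned earlier in the thread.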

@JeremyFxx

Hi, @aurora95 @ahundt
I also got a low testing mIoU with all the default settings in train.py using AtrousFCN_Resnet50_16s.

IOU:
[0.88228786 0.80926575 0.35438381 0.65577184 0.4825115 0.37554799
0.70274595 0.68739528 0.65441523 0.25122642 0.55061205 0.29527007
0.58391853 0.56140901 0.68124345 0.70453396 0.37578266 0.61919163
0.36438029 0.70442827 0.61359128]
meanIOU: 0.567139
pixel acc: 0.885961

I noticed that two issues have been discussed:

  1. SegmentationClassAUG, namely the Berkeley augmented Pascal VOC, should be used;
  2. the batch size should be changed to 32 instead of 16, and the number of training epochs to 25 instead of 250.

Now I have the following questions:

  1. Following the instructions, I used @ahundt 's tf-image-segmentation to prepare the datasets by running python data_pascal_voc.py pascal_voc_setup. In train.py, the default dataset setting dataset = 'VOC2012_BERKELEY' is retained. Do these settings ensure that I am using the Berkeley augmented Pascal VOC?

  2. I tried to increase the batch size to 32, but an out-of-memory error occurred. What type of GPU did you use for training? Did you use multiple GPUs?

  3. The default learning rate scheduler is 'power_decay', with lr_base = 0.01 * (float(batch_size) / 16). Should I change any of these settings?
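The 'power_decay' scheduler mentioned in question 3 is commonly the "poly" schedule from the DeepLab line of work; a hedged sketch under that assumption (the exponent 0.9 is a typical default, not necessarily the repo's; check lr_scheduler in train.py):

```python
def power_decay_lr(epoch, epochs, lr_base, lr_power=0.9):
    """Poly / 'power_decay' schedule: the learning rate shrinks from
    lr_base down to 0 as (1 - epoch/epochs) ** lr_power over the run."""
    return lr_base * ((1.0 - float(epoch) / epochs) ** lr_power)

# Base rate scaled with batch size, as in the question above.
batch_size = 32
lr_base = 0.01 * (float(batch_size) / 16)   # 0.02 for batch size 32
```

Because the base rate already scales linearly with batch size, increasing the batch size to 32 should not by itself require changing lr_base.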


Tyler-D commented Jan 23, 2019

@JeremyFxx I'm seeing a similarly low mean IoU here (~58%). I've also adapted the repo to Keras 2.2.4; not sure if that is the reason for the lower mIoU.
