
Unfair experimental settings. (DeepLab v3+ vs. DeepLab v2) #3

Open
zhijiew opened this issue Oct 4, 2021 · 10 comments
zhijiew commented Oct 4, 2021

In your experiments, your method uses DeepLab v3+ as the backbone while the methods it is compared against use DeepLab v2, which is unfair. Can you report results based on DeepLab v2?

@zhijiew zhijiew changed the title Unfair experimental settings. Unfair experimental settings. (DeepLab v3+ vs. DeepLab v2) Oct 4, 2021
@sharat29ag

Yes, please update the results.

@sharat29ag

For the warmup, you used AdaptNet, which uses DeeplabV2 as the segmentation model. Was that also changed to DeeplabV3+? If so, can you please report the initial result after warmup?

@munanning
Owner

munanning commented Oct 27, 2021

Sorry for the delay. @zhijiew @sharat29ag
We have now run the experiment with DeeplabV2. Unexpectedly, the final result is 65.16 mIoU, which is better than the 64.89 mIoU obtained with DeeplabV3+.
Here is the original training log:
2020-11-01 12:14:55,783 INFO Iter 15000 Loss: 0.1928
2020-11-01 12:14:55,784 INFO Iter 15000 Source Loss: 0.0000
2020-11-01 12:14:55,785 INFO Overall Acc: : 0.9418029923712912
2020-11-01 12:14:55,785 INFO Mean Acc : : 0.7246892190272104
2020-11-01 12:14:55,785 INFO FreqW Acc : : 0.893493677898871
2020-11-01 12:14:55,786 INFO Mean IoU : : 0.6516297024999936
2020-11-01 12:14:55,786 INFO 0: 0.9693999962471279
2020-11-01 12:14:55,786 INFO 1: 0.7685028486723834
2020-11-01 12:14:55,787 INFO 2: 0.8895210625419824
2020-11-01 12:14:55,787 INFO 3: 0.33887047797652403
2020-11-01 12:14:55,787 INFO 4: 0.47570431561588006
2020-11-01 12:14:55,787 INFO 5: 0.5278077449295415
2020-11-01 12:14:55,788 INFO 6: 0.5979519384158949
2020-11-01 12:14:55,788 INFO 7: 0.6525470266354098
2020-11-01 12:14:55,788 INFO 8: 0.9034188835393306
2020-11-01 12:14:55,788 INFO 9: 0.506204355591972
2020-11-01 12:14:55,789 INFO 10: 0.9236311508393851
2020-11-01 12:14:55,789 INFO 11: 0.7771101286799015
2020-11-01 12:14:55,789 INFO 12: 0.5364837525760412
2020-11-01 12:14:55,789 INFO 13: 0.9200637629029278
2020-11-01 12:14:55,790 INFO 14: 0.5212897908120947
2020-11-01 12:14:55,790 INFO 15: 0.5616047800928549
2020-11-01 12:14:55,790 INFO 16: 0.27918147195439025
2020-11-01 12:14:55,790 INFO 17: 0.5421158030330913
2020-11-01 12:14:55,790 INFO 18: 0.6895550564431461
2020-11-01 12:14:57,555 INFO Best iou until now is 0.6516297024999936
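For reference, the Mean IoU reported in logs like the one above is the unweighted average of the per-class IoUs, each derivable from a confusion matrix. A minimal sketch (not the repository's code; the function name and toy matrix are illustrative):

```python
import numpy as np

def compute_iou(conf):
    """Per-class IoU from a confusion matrix (rows: ground truth, cols: prediction)."""
    tp = np.diag(conf)                 # true positives per class
    fp = conf.sum(axis=0) - tp         # predicted as this class but wrong
    fn = conf.sum(axis=1) - tp         # this class missed by the prediction
    return tp / (tp + fp + fn)

# Toy 2-class example
conf = np.array([[3, 1],
                 [0, 4]])
iou = compute_iou(conf)
print(iou)         # per-class IoU: [0.75, 0.8]
print(iou.mean())  # mean IoU: 0.775
```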

Our training settings are based on CAG_UDA (NeurIPS 2019, https://github.com/RogerZhangzz/CAG_UDA). However, that author had written the DeeplabV3+ model as DeeplabV2. All I can do is report my results and framework honestly.
Here are the original code and files for DeeplabV2: https://drive.google.com/drive/folders/1DnwzCabQnUbDeg6xOvkAeJZFG9V95g6n?usp=sharing

@munanning

For the warmup, you used AdaptNet, which uses DeeplabV2 as the segmentation model. Was that also changed to DeeplabV3+? If so, can you please report the initial result after warmup?

The reported warmup model is a DeeplabV3+ model. For a fair comparison, I reused the warmup model from CAG_UDA (NeurIPS 2019, https://github.com/RogerZhangzz/CAG_UDA).

The original report is:
INFO:ptsemseg:Mean IoU : : 0.4253894928867592
INFO:ptsemseg:0: 0.882774709553075
INFO:ptsemseg:1: 0.44709892359884384
INFO:ptsemseg:2: 0.8203015163263928
INFO:ptsemseg:3: 0.314210933498157
INFO:ptsemseg:4: 0.24375954741277261
INFO:ptsemseg:5: 0.37366047858246787
INFO:ptsemseg:6: 0.3780563158565676
INFO:ptsemseg:7: 0.2466602204997033
INFO:ptsemseg:8: 0.8171751060949118
INFO:ptsemseg:9: 0.29433954349025376
INFO:ptsemseg:10: 0.7082770339997182
INFO:ptsemseg:11: 0.5259153279163223
INFO:ptsemseg:12: 0.2275386269806841
INFO:ptsemseg:13: 0.8395305067066231
INFO:ptsemseg:14: 0.1847198116100887
INFO:ptsemseg:15: 0.28772643642333734
INFO:ptsemseg:16: 0.1779052576284909
INFO:ptsemseg:17: 0.1595831186206701
INFO:ptsemseg:18: 0.15316695004934425

I hope my reply dispels your doubts, @sharat29ag.

@sharat29ag


Thank you for the response; I will look into it.

@luyvlei

luyvlei commented Nov 24, 2021


Is there a mistake? I trained on Cityscapes using the DeepLabV3+ model from https://github.com/RogerZhangzz/CAG_UDA and got the following result:
2021-11-19 08:53:38,453 INFO Mean IoU : : 0.7482995820400281
2021-11-19 08:53:38,453 INFO Mean IoU (16) : : 0.752068355974944
2021-11-19 08:53:38,453 INFO Mean IoU (13) : : 0.7987122101405965
2021-11-19 08:53:38,454 INFO 0: 0.9765658041915719
2021-11-19 08:53:38,454 INFO 1: 0.8224862674039519
2021-11-19 08:53:38,455 INFO 2: 0.9112979273421086
2021-11-19 08:53:38,455 INFO 3: 0.5291329863290484
2021-11-19 08:53:38,455 INFO 4: 0.5701830669298439
2021-11-19 08:53:38,456 INFO 5: 0.5505189105124582
2021-11-19 08:53:38,456 INFO 6: 0.6062819018186969
2021-11-19 08:53:38,456 INFO 7: 0.7005523425411562
2021-11-19 08:53:38,456 INFO 8: 0.9159354501634949
2021-11-19 08:53:38,457 INFO 9: 0.6386477854657843
2021-11-19 08:53:38,457 INFO 10: 0.940211061631527
2021-11-19 08:53:38,457 INFO 11: 0.778340032763567
2021-11-19 08:53:38,458 INFO 12: 0.5697296981527584
2021-11-19 08:53:38,458 INFO 13: 0.9381809896493891
2021-11-19 08:53:38,458 INFO 14: 0.7726736052796462
2021-11-19 08:53:38,459 INFO 15: 0.8678234591230659
2021-11-19 08:53:38,459 INFO 16: 0.7732769724159989
2021-11-19 08:53:38,459 INFO 17: 0.6281585092931333
2021-11-19 08:53:38,460 INFO 18: 0.7276952877533335
2021-11-19 08:53:40,287 INFO Best iou until now is 0.7482995820400281
I trained the model (using an ImageNet-pretrained backbone) with the following hyperparameters:

train_iters: 90000

optimizer:
    name: 'SGD'
    lr: 0.001
    weight_decay: 5.0e-4
    momentum: 0.9

lr_schedule:
    name: 'poly_lr'
    T_max: 90000

dataset:
    name: cityscapes
    rootpath: dataset/CityScape
    split: train
    img_rows: 1024
    img_cols: 2048
    batch_size: 5
    img_norm: True
    mean: [0.485, 0.456, 0.406]
    std: [0.229, 0.224, 0.225]
    n_class: 19

augmentations:
    gamma: 0.2
    brightness: 0.5
    saturation: 0.5
    contrast: 0.5
    rcrop: [1024, 512]
    hflip: 0.5
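For context, a poly_lr schedule like the one in this config typically decays the learning rate as base_lr * (1 - iter / T_max) ** power. A minimal sketch, assuming the common default power of 0.9 (the exponent is not shown in the config above):

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Polynomial decay: lr = base_lr * (1 - iter/max_iter)**power."""
    return base_lr * (1.0 - float(cur_iter) / max_iter) ** power

# With the config's lr=0.001 and T_max=90000:
print(poly_lr(0.001, 0, 90000))      # 0.001 (starts at the base lr)
print(poly_lr(0.001, 45000, 90000))  # ~0.000536 (halfway)
print(poly_lr(0.001, 90000, 90000))  # 0.0 (decays to zero at T_max)
```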

I used AMP (mixed precision) to save memory so that I could train the model on a single 2080 Ti.

@sharat29ag


Hi @munanning, please share the initial weights for V2. Also, the selection list contains 297 images; please share the list of 150 images for a fair comparison.

@munanning

munanning commented Jan 17, 2022

@sharat29ag
The answer is 'the same'.
The weights can be found in AdaptSegNet (https://github.com/wasidennis/AdaptSegNet).
The list is the same as in the V3+ version.

@sharat29ag

Thanks.

@sharat29ag

@munanning were the warmup weights at stage 1 for GTA->City and Synthia->City the same?
