Fast_SCNN based on the original config yaml cannot reproduce the mIoU? #18

Closed · nemonameless opened this issue Feb 11, 2020 · 11 comments

@nemonameless

Without any change, I trained with 8 GPUs but only got 59.0 mIoU on Cityscapes, while evaluating your released fast_scnn_segmentron.pth reproduces 68.9. Did you change anything in the config yaml afterwards?

Thanks for your nice work!

@LikeLy-Journey (Owner) commented Feb 11, 2020

So your batch size is 12 * 8? I only use batch size 12 with 1 GPU, which is consistent with the paper.

@pawopawo

> so your batch size is 32? I only use batch 12 with 3 gpus which is consistent with paper.

Does BATCH_SIZE in the config file refer to the total batch_size or to the batch_size on each GPU? My understanding is that this parameter is the per-GPU batch_size.

@LikeLy-Journey (Owner) commented Feb 12, 2020

> Does BATCH_SIZE in the config file refer to the total batch_size or to the batch_size on each GPU? My understanding is that this parameter is the per-GPU batch_size.

You are right, it is the batch_size on each GPU; I updated my reply. So you only need 1 GPU to make the total batch size 12.
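For illustration, a minimal sketch of the arithmetic being clarified here, assuming BATCH_SIZE in the config yaml is the per-GPU value as stated above (the field name is taken from this discussion; the exact config key in the repo may differ):

```python
# The config's batch size is per GPU, so the effective (total) batch size
# grows with the number of GPUs used for training.
per_gpu_batch_size = 12   # value in the released Fast_SCNN config yaml
num_gpus = 8              # the original poster's setup

total_batch_size = per_gpu_batch_size * num_gpus
print(total_batch_size)   # 96 -- far larger than the paper's total batch size of 12
```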

@pawopawo

> You are right, it is the batch_size on each GPU; I updated my reply. So you only need 1 GPU to make the total batch size 12.

Thank you for your reply. Are all the configs in this repo trained with 1 GPU by default?

@nemonameless (Author)

> You are right, it is the batch_size on each GPU; I updated my reply. So you only need 1 GPU to make the total batch size 12.

Training on only one GPU with bs=12 is very slow. If I train with 4 GPUs, the total batch size becomes 48. Is there anything else I should change to reproduce 68.9 when training with 4 GPUs and bs=48?

@LikeLy-Journey (Owner)

> Training on only one GPU with bs=12 is very slow. If I train with 4 GPUs, the total batch size becomes 48. Is there anything else I should change to reproduce 68.9 when training with 4 GPUs and bs=48?

You can set batch_size=3 in the config file and use 4 GPUs to keep the total batch size at 12.
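A quick sanity check of this suggested setting, using only the numbers from the reply above:

```python
# Keep the paper's total batch size of 12 when spreading training over 4 GPUs.
per_gpu_batch_size = 3
num_gpus = 4
assert per_gpu_batch_size * num_gpus == 12  # same total as the single-GPU config
```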

@LikeLy-Journey (Owner)

> Thank you for your reply. Are all the configs in this repo trained with 1 GPU by default?

No, I only trained 3 or 4 models. I think keeping the total batch size at 12~16 will be OK.

@nemonameless (Author) commented Feb 12, 2020

> You can set batch_size=3 in the config file and use 4 GPUs to keep the total batch size at 12.

I mean that training with bs=12 is very slow; I just want to train with a larger batch size. If I use bs=48 (12*4) and change the lr to 0.045x4, can I reproduce 68.9?

@LikeLy-Journey (Owner)

> If I use bs=48 (12*4) and change the lr to 0.045x4, can I reproduce 68.9?

I have not tried these hyperparameters.
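For reference, the change proposed above follows the common linear learning-rate scaling heuristic (scale the base lr by the ratio of total batch sizes); this is a sketch of the arithmetic only, not a setting the maintainer reports having verified for this repo:

```python
# Linear LR scaling heuristic: scale the learning rate with the total batch size.
base_lr = 0.045          # lr used with total batch size 12, per the discussion above
base_batch_size = 12
new_batch_size = 48      # 12 per GPU on 4 GPUs

scaled_lr = base_lr * new_batch_size / base_batch_size
print(scaled_lr)         # 0.18, i.e. 0.045 x 4 as proposed above
```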

@y-kl8 commented Mar 8, 2020

Hi, I see you got the trained model. Do you know where I can obtain the trained cityscapes_deeplabv3_plus_mobilenet model?

@LikeLy-Journey (Owner)

> Hi, I see you got the trained model. Do you know where I can obtain the trained cityscapes_deeplabv3_plus_mobilenet model?

https://github.com/LikeLy-Journey/SegmenTron#real-time-models
https://github.com/LikeLy-Journey/SegmenTron/releases
