Reproducing Cityscapes #30
I found this sentence in the paper: "The base learning rate is set to 0.01 for Cityscapes dataset and 0.001 for others." But in the repo, the lr is recommended as 0.03. So I'm confused as well. Looking forward to a reply from the author.
When training, the reported result is not the real mIoU; you need to run test.py to obtain the real mIoU. In addition, we
@junfu1115 Why are the training-stage mIoU results not the real mIoU? Can you explain this? Thank you.
@LiuPearl1 I think it may be because the images evaluated during training are from the val dataset, while the reported results are on the test dataset.
@LiuPearl1 In the training phase, we evaluate the model on only a part of each image, while the whole image is used in the test phase.
Thank you. I get it.
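For anyone else confused by what the "real" mIoU means here: mIoU is conventionally computed from a confusion matrix accumulated over every pixel of the whole dataset, with ignored labels skipped. This is a minimal sketch of that standard computation, not the repo's test.py code; the function names are mine, and I'm assuming integer label maps with 255 as the ignore index, as Cityscapes uses.

```python
import numpy as np

def confusion_matrix(pred, target, num_classes, ignore_index=255):
    """Accumulate a num_classes x num_classes confusion matrix
    (rows = ground truth, cols = prediction), skipping ignored pixels."""
    mask = target != ignore_index
    idx = num_classes * target[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_iou(conf):
    """Per-class IoU = TP / (TP + FP + FN), averaged over classes
    that actually occur in prediction or ground truth."""
    tp = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = tp / np.maximum(union, 1)
    return iou[union > 0].mean()
```

Evaluating on crops of val images during training changes both the pixels that enter this matrix and the context the network sees, so the number printed during training will generally differ from the whole-image test-time score.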
Running the command supplied by the repo,
CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --dataset cityscapes --model danet --backbone resnet101 --checkname danet101 --base-size 1024 --crop-size 768 --epochs 240 --batch-size 8 --lr 0.003 --workers 2 --multi-grid --multi-dilation 4 8 16
returned an mIoU of 0.735 at the end of 240 epochs. There is probably some randomness in running through the dataset, but I'm surprised it was that much lower than anticipated. Any advice on how to get closer to the reported scores?
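Since the gap may partly come from evaluating on crops rather than whole images, it's worth making sure the final number comes from whole-image inference. The repo's test.py protocol isn't reproduced here; this is a generic sliding-window sketch of whole-image evaluation, with illustrative function names and stride choice that are not from the repo.

```python
import numpy as np

def slide_inference(image, predict_fn, num_classes, crop=768, stride=512):
    """Cover the whole image with overlapping crops, average the
    per-pixel logits, and return the argmax class map.
    image: (H, W, C) array; predict_fn: patch -> (h, w, num_classes) logits."""
    H, W = image.shape[:2]
    logits = np.zeros((H, W, num_classes))
    counts = np.zeros((H, W, 1))
    for y in range(0, max(H - crop, 0) + 1, stride):
        for x in range(0, max(W - crop, 0) + 1, stride):
            y1, x1 = min(y + crop, H), min(x + crop, W)
            patch = image[y:y1, x:x1]
            logits[y:y1, x:x1] += predict_fn(patch)  # accumulate overlapping logits
            counts[y:y1, x:x1] += 1
    return (logits / np.maximum(counts, 1)).argmax(axis=-1)
```

With stride < crop, overlapping regions get averaged predictions, which usually smooths tile-boundary artifacts compared with non-overlapping tiling.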