
Reproducing Cityscapes #30

Closed
erikgaas opened this issue Nov 30, 2018 · 7 comments
@erikgaas

Running the command supplied by the repo,

CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --dataset cityscapes --model danet --backbone resnet101 --checkname danet101 --base-size 1024 --crop-size 768 --epochs 240 --batch-size 8 --lr 0.003 --workers 2 --multi-grid --multi-dilation 4 8 16

returned an mIoU of 0.735 at the end of 240 epochs. There is probably some randomness from run to run, but I'm surprised it came out that much lower than expected. Any advice on how to get closer to the reported scores?
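For reference, the mIoU being discussed is the standard per-class intersection-over-union averaged over Cityscapes' 19 evaluation classes. A minimal sketch of that metric in generic NumPy (not this repo's evaluation code):

```python
import numpy as np

def confusion_matrix(pred, target, num_classes=19, ignore_index=255):
    # Accumulate a num_classes x num_classes histogram of (target, pred) pairs,
    # skipping ignored pixels (255 marks void labels in Cityscapes).
    mask = target != ignore_index
    return np.bincount(
        num_classes * target[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def mean_iou(hist):
    # Per-class IoU = TP / (TP + FP + FN); mIoU is the mean over classes.
    tp = np.diag(hist)
    iou = tp / (hist.sum(axis=1) + hist.sum(axis=0) - tp + 1e-10)
    return iou.mean()

# Accumulate one histogram over the whole split, then average once at the end:
# hist = sum(confusion_matrix(p, t) for p, t in zip(preds, labels))
# print(mean_iou(hist))
```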

@XiaLiPKU

XiaLiPKU commented Jan 7, 2019

I found this sentence in the paper: "The base learning rate is set to 0.01 for Cityscapes dataset and 0.001 for others." But in the repo, the recommended lr is 0.003. So I'm as confused as you are. Looking forward to a reply from the author.


@junfu1115
Owner

junfu1115 commented Jan 7, 2019

When training, the reported result is not the real mIoU; you need to run test.py to obtain the real mIoU. In addition, we train FCN and DANet (PAM-only / CAM-only) with a single loss and set the lr to 0.01, while for DANet (PAM+CAM) we adopt three losses (one main loss + two auxiliary losses) and use 0.003 as the final lr. @XiaLiPKU
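A rough sketch of the three-loss setup described above (one main loss plus two auxiliary losses for the PAM and CAM branches). The head names and the auxiliary weight are illustrative assumptions, not the exact values used in this repo:

```python
import torch.nn.functional as F

def danet_style_loss(outputs, target, aux_weight=1.0, ignore_index=255):
    # outputs: (fused_logits, pam_logits, cam_logits) from three prediction heads.
    # aux_weight is an assumed hyperparameter; the repo may weight the terms differently.
    fused, pam, cam = outputs
    loss_main = F.cross_entropy(fused, target, ignore_index=ignore_index)
    loss_pam = F.cross_entropy(pam, target, ignore_index=ignore_index)
    loss_cam = F.cross_entropy(cam, target, ignore_index=ignore_index)
    return loss_main + aux_weight * (loss_pam + loss_cam)
```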

@LiuPearl1

@junfu1115 Why is the mIoU from the training stage not the real result? Can you explain this? Thank you.

@emma-sjwang

@LiuPearl1 I think it may be because the images evaluated during training come from the val set, while the reported results are on the test set.

@junfu1115
Owner

@LiuPearl1 In the training phase, we evaluate the model on only a part of each image, while in the test phase we use the whole image.
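In other words, the validation score printed during training is computed on fixed-size crops, while test.py scores the full-resolution images. A schematic sketch of the difference (function and model names are illustrative, not the repo's actual API):

```python
import torch

@torch.no_grad()
def predict_on_crop(model, image, crop_size=768):
    # Training-time style: score only a crop of the 1024x2048 Cityscapes frame.
    crop = image[:, :, :crop_size, :crop_size]
    return model(crop).argmax(dim=1)

@torch.no_grad()
def predict_whole_image(model, image):
    # Test-time style: score the entire frame (in practice often done with
    # sliding-window and/or multi-scale inference to fit in memory).
    return model(image).argmax(dim=1)
```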

@LiuPearl1


Thank you. I get it.

@lyn-rgb

lyn-rgb commented Aug 15, 2019

@erikgaas
Have you reproduced the result?
