performance on deeplab_jpu #18

Closed
alphaccw opened this issue Apr 29, 2019 · 1 comment

Comments

@alphaccw

alphaccw commented Apr 29, 2019

Hi,
Thank you for the awesome code.
I tested DeepLab + JPU without changing anything, on 4x GeForce 1080 GPUs with CUDA 9.0 and PyTorch 1.0.0:
#train
CUDA_VISIBLE_DEVICES=4,5,6,7 python train.py --dataset pcontext --model deeplab --jpu --aux --backbone resnet50 --checkname deeplab_res50_pcontext_deeplabv3
#test
CUDA_VISIBLE_DEVICES=4,5,6,7 python test.py --dataset pcontext --model deeplab --jpu --aux --backbone resnet50 --resume ./runs/pcontext/deeplab/deeplab_res50_pcontext_deeplabv3/model_best.pth.tar --checkname deeplab_res50_pcontext_deeplabv3 --split val --mode testval
The model_best.pth.tar is the same as checkpoint.pth.tar in my case.
The performance I get is pixAcc: 0.7868, mIoU: 0.4904. Compared to Table 1 (50.07 mIoU), that is about a 1% drop.

Am I missing something, or is this normal?

Thank you

@wuhuikai
Owner

The training and testing process is the same as ours.

wuhuikai closed this as completed on May 5, 2019.