
Fail to reproduce your result: EMANet(512)80.05%? #22

Closed
Euphoria16 opened this issue Oct 26, 2019 · 3 comments

Comments

@Euphoria16

Hi @XiaLiPKU! This work is wonderful, and thanks so much for releasing the code.

May I ask a question? I used your pretrained model to evaluate on the val set and got 80.50% mIoU with single-scale testing, but when I trained the model from scratch I only reached 79.44%, whereas it is supposed to be 80.05%.

I just followed your default settings (pretrained ResNet weights, batch size 16, 4 GPUs, 30k iterations, and so on).

Did you adopt any other special techniques to get this final model?

Looking forward to your reply!

@XiaLiPKU
Owner


Hi! Sorry for the late reply.
For the pretrained model, the single-scale (SS) result on the val set is 80.51, not 80.05.
The 80.05 score is the one reported in my ICCV paper; the model in this repo performs significantly better than that, because I refined the details to help my followers make full use of the repo.

Therefore, I guess the problem may lie in the image-reading process.
In my experience, different versions of cv2's imread behave differently, and the same may be true of PIL, so this could be the source of the discrepancy. I recommend checking your cv2 or PIL version and doing a careful ablation study of the library. I used to work on image-processing tasks, so I know how important this factor can be.
Moreover, if you use someone else's code for inference, I also can't guarantee the final performance. In that case, carefully check the differences between the code you run and my eval.py.
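One quick way to sanity-check the decoding pipeline is a minimal sketch like the following (the synthetic image and JPEG round-trip here are illustrative assumptions, not part of the original training code): log the library version and measure how much pixel values shift through the decoder, then compare the numbers across environments.

```python
# Sketch: check whether the image-decoding step is a source of
# run-to-run differences. Logs the Pillow version and measures the
# pixel shift introduced by a JPEG encode/decode round-trip.
import io

import numpy as np
import PIL
from PIL import Image

print("Pillow version:", PIL.__version__)

# Synthetic 64x64 RGB image (illustrative stand-in for a dataset image).
rng = np.random.default_rng(0)
arr = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Round-trip through an in-memory JPEG, as dataset loaders often do.
buf = io.BytesIO()
Image.fromarray(arr).save(buf, format="JPEG", quality=95)
buf.seek(0)
decoded = np.asarray(Image.open(buf).convert("RGB"))

# JPEG is lossy, so decoded pixels differ from the source; the exact
# values can also vary across decoder versions.
diff = np.abs(decoded.astype(int) - arr.astype(int))
print("max abs pixel difference after JPEG round-trip:", diff.max())
```

If the printed difference varies between two machines, the decoders disagree. Note also that `cv2.imread` returns channels in BGR order while PIL returns RGB, another common source of silent evaluation mismatches when mixing the two libraries.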

To be honest, I have shown all the training and inference details in this repo; there are no other so-called 'tricks' that I am hiding. For the 80.99 mIoU reported at the top of the repo I did adopt some tricks, which I would never use in my paper.

@Euphoria16
Author

Euphoria16 commented Oct 30, 2019

Thanks so much for your patient reply! @XiaLiPKU
I never thought the PIL version could make a difference; thanks for pointing that out. Could you tell me which PIL version you used?
For reproduction I used your eval.py script directly, so that shouldn't be the problem.
By the way, regarding "For the 80.99 mIoU I report in the top of the repo, I really adopt some tricks, which I would never use in my paper" -- are these tricks included in your code? This repo seems to save a checkpoint every 2000 iterations, so I guess 80.99 is the highest mIoU on the val set among those checkpoints, rather than the final checkpoint?

@XiaLiPKU
Owner

XiaLiPKU commented Nov 1, 2019


I just used the final checkpoint.
The trick behind the 80.99 was just for fun.
I will never use it for anything public,
and it is not in the repo either.
