How to evaluate the provided pre-trained models to get the same results as in the paper #27

Hi, I am sorry to disturb you again. I was trying to evaluate the pre-trained models provided by this project, but I ran into some difficulties. Can you give me some suggestions? Thanks in advance!
You provide the pre-trained models in the README file: GTA5_deeplab, GTA5_VGG, SYNTHIA_deeplab, and SYNTHIA_VGG. In my understanding, I can reproduce the results reported in the paper by running evaluation.py on the test dataset. However, the results I got are as follows:
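For reference, the evaluation being described boils down to loading a released checkpoint and computing mean IoU over the validation set. Below is a minimal sketch of such a loop, assuming PyTorch; the model's forward signature, the data loader, and the checkpoint filename are assumptions for illustration, not the repository's actual evaluation.py.

```python
import torch
import torch.nn.functional as F

def evaluate(model, loader, num_classes=19, device="cpu"):
    """Compute mean IoU over a loader yielding (image, label) batches,
    where label holds per-pixel class indices at full resolution."""
    inter = torch.zeros(num_classes)
    union = torch.zeros(num_classes)
    model.to(device).eval()
    with torch.no_grad():
        for image, label in loader:
            logits = model(image.to(device))   # (N, C, h, w) raw scores; assumed output
            # Upsample to label resolution before taking the per-pixel argmax.
            logits = F.interpolate(logits, size=label.shape[-2:],
                                   mode="bilinear", align_corners=True)
            pred = logits.argmax(dim=1).cpu()
            for c in range(num_classes):
                p, t = pred == c, label == c
                inter[c] += (p & t).sum()
                union[c] += (p | t).sum()
    iou = inter / union.clamp(min=1)
    return iou.mean().item()

# Loading one of the released checkpoints (filename is an assumption):
# model.load_state_dict(torch.load("GTA5_deeplab.pth", map_location="cpu"))
```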
Does a difference of ~0.1 really matter?
I just want to confirm that the released pre-trained models are correct and that my evaluation is right. Also, may I report the test results obtained with the released models in my manuscript?
I think the model I uploaded is correct. Since I repeated the same experiments several times, the one I uploaded to GitHub may be slightly different from the one used to report the results in the paper. I still suggest using the results from the paper if the difference is only ~0.1. By the way, the results can also be influenced by the order of softmax and upsampling in the evaluation code. Try swapping their order; I'm sure it will give you a slightly different result (see the sketch below).
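The point about ordering is that bilinear interpolation and softmax do not commute, so applying softmax before versus after upsampling yields slightly different probability maps and, occasionally, different per-pixel predictions. A quick way to see this, assuming PyTorch (the tensor shapes here are arbitrary examples):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(1, 19, 65, 129)   # low-resolution network output (N, C, h, w)
size = (512, 1024)                     # target label resolution

# Order A: softmax at low resolution, then bilinear upsample.
prob_a = F.interpolate(F.softmax(logits, dim=1), size=size,
                       mode="bilinear", align_corners=True)

# Order B: bilinear upsample the logits, then softmax.
prob_b = F.softmax(F.interpolate(logits, size=size,
                                 mode="bilinear", align_corners=True), dim=1)

# The probability maps differ, and so can the argmax predictions,
# which is enough to shift mIoU by a small amount.
print((prob_a - prob_b).abs().max())
print((prob_a.argmax(1) != prob_b.argmax(1)).float().mean())
```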
Got it. Thank you so much for your reply and suggestions!