Evaluation Metrics #26
Comments
@ArtemisiaW: Your paper uses three metrics: AP, mIoU, and Recall. I wonder whether the best results for these three metrics are achieved by the same pretrained LGSS model, or by different pretrained models. Also, could you provide the pretrained models?

@AnyiRao: Hi @ArtemisiaW, thanks for your interest. You may refer to this: https://drive.google.com/open?id=1F-uqCKnhtSdQKcDUiL3dRcLOrAxHargz

@ArtemisiaW: Are the metrics (Recall, AP, mIoU) achieved by the same checkpoint with the same post-processing, or do you select the best checkpoint for each metric?

@AnyiRao: What the paper reports is achieved by a single checkpoint. But I am sure that you could train different models to achieve the best result for each metric.

@ArtemisiaW: OK, thank you very much! @AnyiRao
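For readers wondering how one checkpoint can be scored on all three metrics at once, here is a minimal sketch of how Recall, AP, and mIoU could each be computed from a single set of boundary scores and scene intervals. This is an illustrative reimplementation of the standard definitions, not the LGSS repository's actual evaluation code, and all function names and toy inputs are hypothetical.

```python
# Hedged sketch: standard definitions of Recall, AP, and mIoU applied to
# one checkpoint's outputs. Not the repo's actual evaluation script.

def recall_at_threshold(scores, labels, thr=0.5):
    """Fraction of true boundaries (label == 1) whose score clears thr."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= thr)
    pos = sum(labels)
    return tp / pos if pos else 0.0

def average_precision(scores, labels):
    """AP: mean of precision values taken at the rank of each true boundary."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, ap, pos = 0, 0.0, sum(labels)
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            ap += tp / rank  # precision at this rank
    return ap / pos if pos else 0.0

def miou(pred_scenes, gt_scenes):
    """Mean IoU between predicted and ground-truth scene intervals:
    match each interval to its best-overlapping counterpart in the
    other set, then average the two directions."""
    def iou(a, b):
        inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union else 0.0
    def one_way(src, dst):
        return sum(max(iou(s, d) for d in dst) for s in src) / len(src)
    return 0.5 * (one_way(gt_scenes, pred_scenes)
                  + one_way(pred_scenes, gt_scenes))
```

Because all three functions consume the same predictions, a single checkpoint naturally yields one value per metric; selecting a different checkpoint per metric would only change which predictions are fed in.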