
Ground truth and prediction labels mismatch #39

Open
steve3nto opened this issue Apr 13, 2017 · 4 comments

Comments

@steve3nto

I am using the ground truth data downloaded from the Cityscapes webpage

https://www.cityscapes-dataset.com/

The filenames of the ground-truth annotations from Cityscapes end in "gtFine_labelIds", but I noticed that the evaluation file for the validation set looks for annotations ending in "gtFine_labelTrainIds". Where did you get those for the validation set?

It seems you were using different class indices than the ones that can currently be downloaded from Cityscapes. For example, the sky class has the value 23 in the Cityscapes ground truth, while the PSP-predicted label for sky is 10. The same holds for all classes.

When running the eval_acc function, the computed performance metrics are therefore wrong.
Do you have a remapping from the current Cityscapes labels to the ones you were using?
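For reference, a minimal sketch of such a remapping, assuming the standard labelId → trainId assignment from cityscapesScripts (e.g. sky: 23 → 10, with all non-evaluation classes mapped to the ignore index 255); a NumPy lookup table lets whole label images be remapped in one indexing operation:

```python
import numpy as np

# labelId -> trainId for the 19 Cityscapes evaluation classes
# (taken from cityscapesScripts labels.py); everything else is
# mapped to the ignore index 255.
ID_TO_TRAINID = {
    7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 19: 6, 20: 7, 21: 8,
    22: 9, 23: 10, 24: 11, 25: 12, 26: 13, 27: 14, 28: 15, 31: 16,
    32: 17, 33: 18,
}

# 256-entry lookup table, default = ignore index 255.
lut = np.full(256, 255, dtype=np.uint8)
for label_id, train_id in ID_TO_TRAINID.items():
    lut[label_id] = train_id

def labelids_to_trainids(gt):
    """Remap a gtFine_labelIds image (uint8 array) to trainIds."""
    return lut[gt]
```

With this, a pixel labelled 23 (sky) in gtFine_labelIds becomes trainId 10, and unlisted labels such as 0 (unlabeled) become 255.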

@steve3nto
Author

OK, I found the answer in the Cityscapes repo; this file contains the mappings between class labels:

https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/helpers/labels.py

It would be useful to have a flag that converts the grayscale predictions to labelIds instead of trainIds before saving them.
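In the meantime, the predictions can be converted before saving; a minimal sketch, assuming the standard 19-class mapping from cityscapesScripts (the inverse of its labelId → trainId table):

```python
import numpy as np

# trainId -> labelId for the 19 Cityscapes evaluation classes
# (inverse of the mapping in cityscapesScripts labels.py).
TRAINID_TO_ID = [7, 8, 11, 12, 13, 17, 19, 20, 21, 22, 23,
                 24, 25, 26, 27, 28, 31, 32, 33]

# trainIds outside 0..18 (e.g. the ignore index 255) map to 0 (unlabeled).
lut = np.zeros(256, dtype=np.uint8)
for train_id, label_id in enumerate(TRAINID_TO_ID):
    lut[train_id] = label_id

def trainids_to_labelids(pred):
    """Convert a grayscale trainId prediction image to labelIds."""
    return lut[pred]
```

Applied to a predicted trainId image, sky (10) becomes labelId 23, so the saved predictions line up with the gtFine_labelIds ground truth expected by the official evaluation.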

@pjohh

pjohh commented May 29, 2017

Hey,

I stumbled upon the same problem and found a handy script that converts the ground-truth annotations to the needed "gtFine_labelTrainIds" files:
https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/preparation/createTrainIdLabelImgs.py

Just export CITYSCAPES_DATASET=<path to dataset root> and execute the script.

@chaotaklon

pjohh's solution works, thanks.

@Kewenjing1020

I have the same problem with the ADE20K dataset, but I couldn't find a file explaining the relation between train_id and id. Does anybody know how to solve this?
