
What will be the input `labels` for the function 'get_metrics_dict_all_labels'? #28

Closed
xi11 opened this issue May 28, 2024 · 2 comments



xi11 commented May 28, 2024

Hi,

Awesome work!
Just wondering: is `labels` in `def get_metrics_dict_all_labels(labels: Sequence, ...)` the set of ground-truth labels in an image, the set of predicted labels, or the full label set?
For example, I have 5 classes to segment, [1, 2, 3, 4, 5], excluding background. Some images may only contain 3 of those classes, i.e. [1, 2, 0, 0, 5]. What should `labels` be for these images, [1, 2, 3, 4, 5] or [1, 2, 0, 0, 5]? If [1, 2, 3, 4, 5], then the Dice for class 3 and class 4 will be 0 even if the prediction perfectly matches the ground truth, which would unfairly lower the reported performance for class 3 and class 4, right?

Jingnan-Jia (owner) commented Jun 18, 2024

@xi11 Sorry for the late reply. `labels` here means the full label set. We assume that the ground truth and the prediction use the same labels for the same objects. For instance, in a furniture segmentation task, if tables and beds in the ground-truth image are labeled 1 and 2, respectively, the prediction image should also assign 1 to tables and 2 to beds. Otherwise, we consider the prediction wrong.

Therefore, suppose you have 5 classes to segment, [1, 2, 3, 4, 5], excluding background, some images only contain 3 classes in the ground truth, [1, 2, 0, 0, 5], and your perfect prediction also contains only those 3 classes. The Dice for labels 1, 2, and 5 will clearly be (almost) 1. Then what is the Dice for labels 3 and 4? The answer is:

If labels 3 and 4 appear in neither the ground truth nor the prediction image, we consider the prediction correct for those labels. Therefore, their Dice and Jaccard scores should be 1.

In the previous version (1.1.*), I did not consider such cases. After seeing your question, I updated the package so that these cases now also produce the correct metrics.

Note: Please make sure you install the latest version (>=1.2.6) to get the correct output.

Therefore, the problem you describe will not occur. In fact, this convention will increase the reported performance for class 3 and class 4.
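The convention above can be sketched in plain NumPy. This is an illustrative sketch of the described behavior, not the package's actual implementation, and `dice_per_label` is a hypothetical helper name:

```python
import numpy as np

def dice_per_label(gdth, pred, labels):
    """Per-label Dice over a fixed, full label list.

    Convention: if a label is absent from both the ground truth
    and the prediction, the prediction is considered correct for
    that label, so its Dice is 1.0.
    """
    scores = {}
    for lab in labels:
        g = (gdth == lab)
        p = (pred == lab)
        if not g.any() and not p.any():
            scores[lab] = 1.0  # label absent everywhere -> perfect
            continue
        inter = np.logical_and(g, p).sum()
        scores[lab] = 2.0 * inter / (g.sum() + p.sum())
    return scores

# Ground truth containing only labels 1, 2, and 5, and a
# perfect prediction; labels 3 and 4 are absent from both.
gt = np.array([[1, 1, 2], [2, 0, 5], [5, 5, 0]])
pred = gt.copy()
print(dice_per_label(gt, pred, labels=[1, 2, 3, 4, 5]))
# every label, including the absent 3 and 4, scores 1.0
```

With this convention, a perfect prediction scores Dice 1 on every label in the full list, whether or not the label actually appears in the image.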

If you have more questions, please let me know.


xi11 commented Jun 18, 2024

@Jingnan-Jia Thanks for the detailed explanation, really appreciate it!
