
possible bug in the way that mIoU is computed #950

Open
seyeeet opened this issue May 28, 2021 · 0 comments
seyeeet commented May 28, 2021

I noticed that the mIoU result does not match the mIoU I compute manually.
Here is an example. Let's say preds and labels are two lists containing the predictions and the ground-truth data.
I can compute the confusion matrix via chainercv.evaluations.calc_semantic_segmentation_confusion(preds, labels), and I can also compute the mIoU via chainercv.evaluations.eval_semantic_segmentation(preds, labels).

The mIoU based on the confusion matrix can be computed as np.nanmean(np.diag(confusion) / (confusion.sum(axis=1) + confusion.sum(axis=0) - np.diag(confusion))), but this result does not match np.nanmean(chainercv.evaluations.eval_semantic_segmentation(preds, labels)['iou']).
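A minimal sketch of the comparison, with made-up toy preds/labels just to show the shapes (substitute the real data to reproduce the mismatch):

```python
import numpy as np
from chainercv.evaluations import (
    calc_semantic_segmentation_confusion,
    eval_semantic_segmentation,
)

# Toy data: two (H, W) label maps with 3 classes (hypothetical example).
preds = [np.array([[0, 1, 2], [1, 1, 2]]),
         np.array([[2, 0, 0], [1, 2, 0]])]
labels = [np.array([[0, 1, 1], [1, 2, 2]]),
          np.array([[2, 0, 1], [1, 2, 0]])]

# mIoU computed manually from the confusion matrix.
confusion = calc_semantic_segmentation_confusion(preds, labels)
iou_manual = np.diag(confusion) / (
    confusion.sum(axis=1) + confusion.sum(axis=0) - np.diag(confusion))
miou_manual = np.nanmean(iou_manual)

# mIoU from eval_semantic_segmentation (mean of per-class IoU).
result = eval_semantic_segmentation(preds, labels)
miou_chainercv = np.nanmean(result['iou'])

print(miou_manual, miou_chainercv)  # I would expect these to be equal
```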
