IoU results #28
Comments
Sorry, I have not done that, mostly because it would be pretty useless without the context module.
Are you saying that purely because training of the context module (`context_module`) has not been implemented? In the original paper they still report results for the front end alone, which produces a mean IoU of 67.6% on the VOC-2012 dataset.
Ah thanks, I forgot that they also show results without the context module.
This is what I came up with to determine the confusion matrix, per-class IoU and mean IoU:

```python
import numpy as np

def calculate_iou(y_pred_batch, y_true_batch):
    # Pascal VOC labels: 0 = background, 1-20 = object classes
    n_classes = 21
    conf_m = np.zeros((n_classes, n_classes), dtype=float)
    for y_pred, y_true in zip(y_pred_batch, y_true_batch):
        flat_pred = np.ravel(y_pred).astype(int)
        flat_label = np.ravel(y_true).astype(int)
        for p, l in zip(flat_pred, flat_label):
            if l == 0:
                continue  # ignore background pixels
            if l < n_classes and p < n_classes:
                conf_m[l, p] += 1
    I = np.diag(conf_m)
    U = np.sum(conf_m, axis=0) + np.sum(conf_m, axis=1) - I
    IOU = I[1:] / U[1:]  # per-class IoU over the 20 object classes
    meanIOU = np.mean(IOU)
    return conf_m, IOU, meanIOU
```
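As a side note, the per-pixel Python loop above is slow on full-resolution label maps. A common vectorized alternative (a sketch, not from this thread — `fast_confusion` and its parameters are hypothetical names) builds the same confusion matrix in one `np.bincount` call by encoding each (label, prediction) pair as a single index:

```python
import numpy as np

def fast_confusion(y_pred, y_true, n_classes=21):
    # Hypothetical vectorized sketch of the confusion-matrix step above
    pred = np.ravel(y_pred).astype(int)
    label = np.ravel(y_true).astype(int)
    # keep only pixels whose label and prediction are valid class indices
    valid = (label >= 0) & (label < n_classes) & (pred >= 0) & (pred < n_classes)
    # encode each (label, pred) pair as one integer, then count occurrences
    idx = label[valid] * n_classes + pred[valid]
    counts = np.bincount(idx, minlength=n_classes ** 2)
    return counts.reshape(n_classes, n_classes).astype(float)
```

The diagonal and row/column sums of the returned matrix then give the intersections and unions exactly as in `calculate_iou`.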
Have you by any chance compared this to the original implementation with regards to the mean IoU?
If so, what implementation of IoU did you use and what were your results?