Bad Performance using PANet pre-trained models #465
Thanks for the feedback! We have located the buggy PR (#448), which had replaced …
Thanks for the prompt reply; I have confirmed that switching to this commit works. However, I tested PANet on IC15, and FCENet on IC15 and CTW, and all of the results look normal to me except PANet on CTW. If the root cause is the library difference, shouldn't this affect other models/datasets as well? Also, does MMOCR support outputting results in ICDAR format (as text files)? I can't seem to find such an implementation.
We've tested the new implementation thoroughly and found that some models, including PANet, output some "invalid" polygons that are self-touching or self-crossing, though they look sensible in appearance. These polygons are ignored by the new implementation but were not by the old one. We will release a patch to fix this discrepancy soon.
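For illustration, here is a shapely-based check of the kind of invalidity described above (an assumed aid for readers, not necessarily what MMOCR uses internally):

```python
# Illustrative only: a self-crossing (bow-tie) ring is rejected by shapely's
# validity check, which is the kind of polygon a strict evaluator would
# silently skip even though it "makes sense from appearances".
from shapely.geometry import Polygon
from shapely.validation import explain_validity

bow_tie = Polygon([(0, 0), (4, 0), (0, 4), (4, 4)])  # edges cross at (2, 2)
print(bow_tie.is_valid)            # False
print(explain_validity(bow_tie))   # e.g. "Self-intersection[2 2]"
```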
Not currently, but any contribution would be appreciated :)
Okay, I will see what I can do to output them in ICDAR format. Thank you once again for your replies, I appreciate it very much!
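In case it helps, below is a minimal sketch of such a converter. It assumes MMOCR-style result dicts holding a 'filename' and a 'boundary_result' list of [x1, y1, ..., xn, yn, score] entries, and writes one res_<image>.txt per image with comma-separated coordinates per line; the names and result layout are assumptions, not a confirmed MMOCR API.

```python
# Sketch: dump detection results to ICDAR-style text files, one
# res_<image>.txt per image, one polygon per line as "x1,y1,...,xn,yn".
# The result layout (keys 'filename' and 'boundary_result') is assumed.
import os


def dump_icdar_results(results, out_dir, score_thr=0.3):
    os.makedirs(out_dir, exist_ok=True)
    for res in results:
        stem = os.path.splitext(os.path.basename(res['filename']))[0]
        with open(os.path.join(out_dir, f'res_{stem}.txt'), 'w') as f:
            for boundary in res['boundary_result']:
                *coords, score = boundary
                if score < score_thr:
                    continue  # drop low-confidence detections
                f.write(','.join(str(int(round(p))) for p in coords) + '\n')
```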
panet_r18_fpem_ffm_600e_icdar2015
Hi there,
I downloaded the PANet checkpoint for CTW and used it to run inference on the CTW test set. The results I got are significantly lower than what's reported on this page. I have been using all default configurations without any changes.
The results are:
CTW1500
thr 0.3, recall:0.638, precision: 0.636, hmean:0.637
thr 0.4, recall:0.638, precision: 0.636, hmean:0.637
thr 0.5, recall:0.638, precision: 0.636, hmean:0.637
thr 0.6, recall:0.638, precision: 0.636, hmean:0.637
thr 0.7, recall:0.638, precision: 0.636, hmean:0.637
thr 0.8, recall:0.638, precision: 0.636, hmean:0.637
thr 0.9, recall:0.618, precision: 0.671, hmean:0.643
{'hmean-iou:recall': 0.6182113821138211, 'hmean-iou:precision': 0.6705467372134039, 'hmean-iou:hmean': 0.64331641285956}
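(For reference, the standard way to produce numbers like these with MMOCR's test script is along the lines of `python tools/test.py <config> <checkpoint> --eval hmean-iou`; the exact config and checkpoint paths are assumptions here.)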
I suspect that there is something wrong with the released checkpoint; it seems like it was not trained properly.
Based on this CTW training log file, the validation set's H-mean had already reached 0.66035 at the 10th epoch, which is higher than what I get with the released checkpoint.
Could you please help me to verify this?