
Bad Performance using PANet pre-trained models #465

Closed
chunchet-ng opened this issue Aug 26, 2021 · 5 comments · Fixed by #471
Labels
bug Something isn't working

Comments


chunchet-ng commented Aug 26, 2021

Hi there,

I downloaded the PANet checkpoint for CTW and used it to run inference on the CTW test set. The results I got are significantly lower than what is reported on this page. I used all the default configurations without any changes.

The results that I got are:
CTW1500
thr 0.3, recall:0.638, precision: 0.636, hmean:0.637
thr 0.4, recall:0.638, precision: 0.636, hmean:0.637
thr 0.5, recall:0.638, precision: 0.636, hmean:0.637
thr 0.6, recall:0.638, precision: 0.636, hmean:0.637
thr 0.7, recall:0.638, precision: 0.636, hmean:0.637
thr 0.8, recall:0.638, precision: 0.636, hmean:0.637
thr 0.9, recall:0.618, precision: 0.671, hmean:0.643
{'hmean-iou:recall': 0.6182113821138211, 'hmean-iou:precision': 0.6705467372134039, 'hmean-iou:hmean': 0.64331641285956}

I suspect that something is wrong with the released checkpoint; it seems that it was not trained properly.
Based on this CTW training log file, the validation set's H-mean had already reached 0.66035 by the 10th epoch, which is higher than what I get with the released checkpoint.
Could you please help me verify this?

@gaotongxiao
Collaborator

Thanks for the feedback! We have located the buggy PR (#448), which replaced Polygon3 with shapely and affected the hmean-iou computation. We are now fixing this issue. You may reset the codebase to commit 7c1bf45c63e3962bae3ed88ce0fdab967172c07b to avoid this bug. Sorry for the inconvenience.

@gaotongxiao gaotongxiao added the bug Something isn't working label Aug 26, 2021
@chunchet-ng
Author

Thanks for the prompt reply; I have confirmed that switching to this commit works. However, I also tested PANet-IC15, FCENet-IC15, and FCENet-CTW, and all of those results look normal to me except PANet-CTW. If the root cause is the library difference, shouldn't it affect the other models/datasets as well?

Also, does mmocr support exporting results in ICDAR format (as text files)? I can't seem to find such an implementation.

@gaotongxiao
Collaborator

We've tested the new implementation thoroughly and found that some models, including PANet, output "invalid" polygons that are self-touching or self-crossing, even though they look reasonable visually. These polygons are ignored by the new implementation but were not by the old one. We will release a patch to fix this discrepancy soon.
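For reference, here is a minimal shapely snippet (with made-up coordinates) showing how such a self-crossing quadrilateral is reported as invalid, which is why the new hmean-iou implementation skips it:

```python
from shapely.geometry import Polygon
from shapely.validation import explain_validity

# A bow-tie quadrilateral: the first and third edges cross at (1, 1),
# so the polygon looks plausible but is geometrically self-intersecting.
poly = Polygon([(0, 0), (2, 2), (2, 0), (0, 2)])
print(poly.is_valid)           # False
print(explain_validity(poly))  # e.g. "Self-intersection[1 1]"
```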

Also, does mmocr support exporting results in ICDAR format (as text files)? I can't seem to find such an implementation.

Currently not, but any contribution would be appreciated :)

@chunchet-ng
Author

Okay, I will see what I can do to output them in ICDAR format. Thank you once again for your replies; I appreciate it very much!
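As a rough starting point, a minimal sketch for dumping detected polygons into ICDAR-style per-image text files could look like the following (the function name, result structure, and file-naming convention are my assumptions, not an existing MMOCR API):

```python
import os

def write_icdar_results(results, out_dir):
    """Write one 'res_<image>.txt' file per image, one polygon per line.

    `results` is assumed to map an image filename to a list of flat
    polygons [x1, y1, x2, y2, ...]; this is a hypothetical helper,
    not part of MMOCR.
    """
    os.makedirs(out_dir, exist_ok=True)
    for img_name, polygons in results.items():
        stem = os.path.splitext(os.path.basename(img_name))[0]
        out_path = os.path.join(out_dir, f'res_{stem}.txt')
        with open(out_path, 'w') as f:
            for poly in polygons:
                # Comma-separated integer coordinates, ICDAR submission style.
                f.write(','.join(str(int(round(p))) for p in poly) + '\n')
```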

@ming-eng

panet_r18_fpem_ffm_600e_icdar2015
thr 0.3, recall:0.638, precision: 0.636, hmean:0.637
thr 0.4, recall:0.638, precision: 0.636, hmean:0.637
thr 0.5, recall:0.638, precision: 0.636, hmean:0.637
thr 0.6, recall:0.638, precision: 0.636, hmean:0.637
thr 0.7, recall:0.638, precision: 0.636, hmean:0.637
thr 0.8, recall:0.638, precision: 0.636, hmean:0.637
thr 0.9, recall:0.618, precision: 0.671, hmean:0.643
The training results are really poor.
Did you run into this as well?
