Thanks for the excellent work! I am trying to reproduce the results on the CORD dataset. However, I find that the F1 scores in your paper differ somewhat from those reported in the LayoutLMv2 paper. Specifically, your paper reports 96.05 for LayoutLMv2*-base and 97.24 for LayoutLMv2*-large, while the LayoutLMv2 paper reports 94.95 for LayoutLMv2-base and 96.01 for LayoutLMv2-large. Could you give an example of fine-tuning BROS on the CORD dataset? Thanks!
Thank you for your interest in our work!
The scores can differ because the LayoutLMv2* fine-tuning script is our own implementation, and the fine-tuning settings (e.g., batch size, number of training steps) used in the LayoutLMv2 paper are unknown.
You can conduct fine-tuning experiments by preprocessing the CORD dataset similarly to the datasets in this repo.
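For reference, a minimal sketch of the first preprocessing step — flattening CORD's line-level annotations into word, box, and label lists — might look like the following. The field names (`valid_line`, `words`, `quad`, `category`) follow the publicly released CORD JSON; the exact output format expected by this repo's preprocessing scripts may differ, and the sample page below is a made-up miniature for illustration:

```python
import json


def parse_cord_page(page):
    """Flatten one CORD-style annotation page into words, boxes, and labels.

    Assumes the public CORD JSON layout: each page has "valid_line" entries,
    each line carries a "category" and a list of "words", and each word has
    "text" plus a "quad" of four corner coordinates (x1..x4, y1..y4).
    """
    words, boxes, labels = [], [], []
    for line in page["valid_line"]:
        for w in line["words"]:
            q = w["quad"]
            # Reduce the quadrilateral to an axis-aligned
            # [x_min, y_min, x_max, y_max] bounding box.
            xs = [q["x1"], q["x2"], q["x3"], q["x4"]]
            ys = [q["y1"], q["y2"], q["y3"], q["y4"]]
            words.append(w["text"])
            boxes.append([min(xs), min(ys), max(xs), max(ys)])
            labels.append(line["category"])
    return words, boxes, labels


# Hypothetical miniature page in the CORD annotation shape.
page = {
    "valid_line": [
        {
            "category": "menu.nm",
            "words": [
                {"text": "Latte",
                 "quad": {"x1": 10, "y1": 20, "x2": 60, "y2": 20,
                          "x3": 60, "y3": 35, "x4": 10, "y4": 35}},
            ],
        },
        {
            "category": "menu.price",
            "words": [
                {"text": "3.50",
                 "quad": {"x1": 70, "y1": 20, "x2": 100, "y2": 20,
                          "x3": 100, "y3": 35, "x4": 70, "y4": 35}},
            ],
        },
    ]
}

words, boxes, labels = parse_cord_page(page)
print(words)   # ['Latte', '3.50']
print(labels)  # ['menu.nm', 'menu.price']
```

From there, the word/box/label triples can be serialized into whichever format the datasets in this repo use (e.g., converting line categories to BIO tags per word), mirroring how the provided datasets are laid out.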