
FUNSD test result is very low! #1

Closed

weishu20 opened this issue Dec 5, 2022 · 7 comments

weishu20 commented Dec 5, 2022

Hi, I tried your public model to test on FUNSD, but I get low results. Is something wrong?
I tried both the edge model and the e2e model.
Screenshot 2022-12-05 at 6:56:30 PM
Screenshot 2022-12-05 at 6:55:08 PM


weishu20 commented Dec 5, 2022

@andreagemelli

@andreagemelli
Owner

Hi,
There are probably some problems with the weights. I will try to fix it as soon as possible and let you know.
A.


weishu20 commented Dec 6, 2022

I also tried to train the e2e model on FUNSD, but I can't get results as good as the paper's.
Screenshot 2022-12-06 at 9:59:57 AM


weishu20 commented Dec 6, 2022

For Semantic Entity Labeling, the paper's best F1 is 0.82. Is that the same metric as the node micro F1 I obtained (0.8105)?
For Entity Linking, the best F1 in the paper is 0.53. Which of the metrics above matches it?
The values I got (None = 0.9964, Key-Value = 0.5895, AUC-PR = 0.7903) are higher than the paper's. Why is that?
Thanks for answering^^

@andreagemelli
Owner

Thanks for the feedback!
I cannot work on the bug right now, but I can take a look at it next week. I will let you know.

@andreagemelli
Owner

Hey @weishu20,
I did not forget about you!
While I have fixed some other minor issues with the last commit, we are still struggling with the results: in the image below you can see that on the left, on another machine where I reinstalled the repo, the results are as low as yours, while on my server the results are still the ones I obtained for the publication.
We believe the problem lies in the construction of the graph or in the features used by the nodes. We have already ruled out the data and the weights as possible causes.
I hope to get back to you asap.

Screenshot 2022-12-14 at 14:41:36
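(Aside: a quick way to localize this kind of cross-machine divergence is to compare a short digest of the node features produced on each machine, instead of eyeballing full tensors. A minimal stdlib sketch; the helper name and the rounding tolerance are my own, not part of Doc2Graph:)

```python
import hashlib
import struct

def feature_checksum(features):
    """Hash a list of per-node feature vectors (lists of floats) so two
    machines can compare one short digest instead of full tensors."""
    h = hashlib.sha256()
    for vec in features:
        # Round to tolerate tiny float noise; pack doubles for a stable byte layout.
        h.update(struct.pack(f"{len(vec)}d", *(round(x, 6) for x in vec)))
    return h.hexdigest()[:16]

# Identical features give identical digests across machines...
a = feature_checksum([[0.1, 0.2], [0.3, 0.4]])
b = feature_checksum([[0.1, 0.2], [0.3, 0.4]])
# ...while a changed embedding (e.g. a different language model) does not.
c = feature_checksum([[0.1, 0.25], [0.3, 0.4]])
```

If the digests diverge right after preprocessing, the problem is in feature extraction rather than in the model or the weights.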

@andreagemelli
Owner

Hi @weishu20 ,
sorry for my late answer.
Going through the code, I noticed that in the time since we published it, spaCy released a new version of its language models, including the one used by Doc2Graph. As a result, the textual features loaded during preprocessing were completely different from the ones in my original setup.
I reproduced the behaviour of my repo in a new, different environment, and the problem should now be fixed: please reinstall the library following the new README.md file and let me know!
A.
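(For anyone hitting the same class of problem: a generic guard is to fail fast when an installed package or language model drifts from the pinned version. A minimal sketch; the model name and version numbers below are illustrative, not the actual Doc2Graph pins:)

```python
def parse_version(v: str) -> tuple:
    """Turn a version string like '3.4.1' into (3, 4, 1) for comparison."""
    return tuple(int(part) for part in v.split("."))

def pin_mismatch(name: str, installed: str, pinned: str):
    """Return a human-readable warning if the versions differ, else None."""
    if parse_version(installed) != parse_version(pinned):
        return (f"{name}: installed {installed} != pinned {pinned}; "
                f"textual features may differ from the published results")
    return None

# Matching versions pass silently; a drifted model is flagged.
ok = pin_mismatch("en_core_web_lg", "3.4.1", "3.4.1")
warn = pin_mismatch("en_core_web_lg", "3.4.1", "3.1.0")
```

Since spaCy models install as regular Python packages, the installed version can be read with `importlib.metadata.version` and checked with a helper like this at startup.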
