I ran the experiment evaluating entity disambiguation performance without a candidate set. As shown in the paper, the performance should be:

However, when I run entity disambiguation without a candidate set using the provided checkpoint:

python evaluate_kilt_dataset.py path_to/fairseq_entity_disambiguation_aidayago path_to/datasets path_to/predictions --trie path_to/kilt_titles_trie_dict.pkl --batch_size 64 --device "cuda:0"

it gives the performance:

Is there anything wrong with my run?
DISCLAIMER: I no longer have access to the corporate machine I used for these experiments, so I cannot check precisely what setting I used.

However, it may be that I didn't use the KILT trie but a YAGO trie. The KB that AIDA uses is not the whole of Wikipedia (~5M items) but a roughly 10x smaller set (~400k, if I remember correctly). Additionally, the KILT KB is from 2019, whereas the AIDA KB is older and titles differ. Thus, when constraining generation to the smaller trie, the results should be higher.

Again, unfortunately I don't have the code and data I used for that, so if you want to use a different trie you need to build it yourself.
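For reference, building a trie over a different title set conceptually just means inserting each title's token-id sequence into a prefix trie, so that at each decoding step you can look up which tokens are allowed to follow the prefix generated so far. Below is a minimal, generic sketch of such a trie; it is illustrative only, not the exact class shipped with the repo, and the toy integer "token ids" stand in for whatever your tokenizer produces:

```python
class Trie:
    """Minimal prefix trie over token-id sequences, for constrained decoding."""

    def __init__(self, sequences=()):
        self.children = {}
        for seq in sequences:
            self.add(seq)

    def add(self, seq):
        # Walk/create one nested dict level per token id.
        node = self.children
        for tok in seq:
            node = node.setdefault(tok, {})

    def get(self, prefix):
        """Return the token ids that may follow `prefix` (empty list if none)."""
        node = self.children
        for tok in prefix:
            if tok not in node:
                return []
            node = node[tok]
        return list(node.keys())


# Toy example: three "titles" encoded as integer token-id sequences,
# each ending in an (assumed) end-of-title token 2.
titles = [[5, 7, 2], [5, 8, 2], [9, 2]]
trie = Trie(titles)
trie.get([5])  # tokens allowed after prefix [5]
```

At generation time, a function like `trie.get(generated_prefix)` is what you would plug into the decoder's allowed-token hook so the model can only produce strings that are valid titles in your KB; you would then pickle the trie to pass it via `--trie`.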