Hi @ttthy,

Sorry to disturb you, but I have a question about the NYT-FB dataset used in your experiments. While the TACRED test set provides a relation type for each sentence, I cannot find a relation type for each sentence in NYT-FB. I obtained the NYT-FB dataset from Diego Marcheggiani, but most sentences come without a relation type, as in https://github.com/diegma/relation-autoencoder/blob/master/data-sample.txt. How can I evaluate your system on NYT-FB without labels?

Thanks for your help!
Please find the statistics of positive (i.e. labelled) sentences in Table 3, Appendix A of our paper.
There are 262 relation types in NYT-FB; "...2.1% of the sentences in NYT-FB were aligned against Freebase's triplets" (page 3, Section 3, Experiments and Results, Datasets).
All data are used during training, but only the labelled sentences are used for evaluation (7,793 and 33,808 sentences in the dev and test sets, respectively).
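In other words, you can train on the full corpus and restrict evaluation to the subset of sentences that were aligned to a Freebase relation. A minimal sketch of that split, assuming each example is represented as a dict with an optional `relation` field (the actual on-disk format in `data-sample.txt` differs, so adapt the parsing accordingly):

```python
def split_for_eval(examples):
    """Use all sentences for training; keep only labelled (positive)
    sentences, i.e. those aligned to a Freebase relation, for evaluation."""
    train = list(examples)  # training uses every sentence, labelled or not
    evaluation = [ex for ex in examples if ex.get("relation") is not None]
    return train, evaluation

# Hypothetical examples; field names are illustrative, not the dataset's schema.
examples = [
    {"sentence": "...", "relation": "/location/location/contains"},
    {"sentence": "...", "relation": None},  # unaligned sentence, no label
]
train, evaluation = split_for_eval(examples)
# All 2 sentences go to training; only the 1 labelled sentence is evaluated.
```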
Let me know if you have other questions.