Table 4 experiments #12
Comments
@posenhuang I think the biggest difference between the results in Tables 3 & 4 and the previously published results is that most previous evaluations take both subject and object rankings into account, while MINERVA only evaluates object rankings. This seems fine to me, but it also explains why they had to re-evaluate most of the existing models themselves.
Thanks @todpole3 for the answer. Do you know how I can change the setup to run both subject and object rankings? Should both directions be included in `train.txt` and `test.txt`?
Hi Po-Sen,
Hi @shehzaadzd, in previous baselines, they train on both head predictions and tail predictions. When they run evaluations, they average the results from both directions over the test set.
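The head-and-tail averaging convention described above can be sketched as follows. This is only an illustration of the protocol; the function names and ranking callbacks are hypothetical and not from the MINERVA codebase or any baseline implementation:

```python
def mrr(ranks):
    """Mean reciprocal rank over a list of integer ranks (1 = best)."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def evaluate(test_triples, rank_tail, rank_head):
    # rank_tail(h, r, t): rank of the true tail t among all candidate entities
    # rank_head(h, r, t): rank of the true head h among all candidate entities
    tail_ranks = [rank_tail(h, r, t) for h, r, t in test_triples]
    head_ranks = [rank_head(h, r, t) for h, r, t in test_triples]
    # Previous work reports the average over both directions;
    # MINERVA reports only the tail-prediction numbers.
    return {
        "tail_mrr": mrr(tail_ranks),
        "head_mrr": mrr(head_ranks),
        "avg_mrr": (mrr(tail_ranks) + mrr(head_ranks)) / 2.0,
    }
```

Comparing only the `tail_mrr` column across systems is what makes the paper's numbers comparable, per the reply below.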
Hi @posenhuang, yes, as @todpole3 pointed out, MINERVA starts from the entity in the question and searches for the correct answer, which is akin to doing tail prediction. Previous work does report the average of head and tail prediction, but the results we report in our paper are tail-prediction results, and hence they are comparable. Would that work for you?

Also, I am not sure head prediction always makes complete sense. For example, a query of the form (person X, lives_in, city Y) wouldn't be meaningful if it is inverted. Essentially, I am not sure the notion of finding a "reasoning path" still holds for the inverted triple. Would like to know what you think?

Regarding your last point, whether training on graph.txt (which contains edges in both directions) would make it comparable - I am not sure it would, for the reason mentioned above. Previous models define a score function between an entity pair and a relation, but MINERVA is kind of different that way. I am unsure whether augmenting the training data with inverted edges would make training easier, or whether it would be comparable.
Thanks for answering the questions. I think there are also 1-to-many relations in tail prediction (such as company -> employs -> person), though.
You mean e2, r^-1, e1? |
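The inverse-edge convention being discussed, turning each (e1, r, e2) into an extra (e2, r^-1, e1) so that head prediction becomes tail prediction over the inverse relation, can be sketched like this. The function and the suffix naming are illustrative assumptions, not the repository's actual preprocessing:

```python
def add_inverse_edges(triples, inverse_suffix="_inv"):
    """Augment a list of (head, relation, tail) triples with inverse edges.

    For every (e1, r, e2) an extra (e2, r^-1, e1) is appended, where r^-1
    is encoded here (hypothetically) as the relation name plus a suffix.
    """
    augmented = list(triples)
    for e1, r, e2 in triples:
        augmented.append((e2, r + inverse_suffix, e1))
    return augmented
```

Whether training on such an augmented graph keeps the comparison fair is exactly the open question in the comment above.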
Hi @rajarshd, @shehzaadzd,
For all the baseline methods, do you train on `train.txt` or on both `train.txt` and `graph.txt`? Do you have settings to reproduce the results? (e.g., the Neural LP result differs from the one in their paper for FB15k-237.)
Thanks!