1vsAll objective and reciprocal triples #19
Comments
@loreloc Hi Lorenzo, you are right. We did augment the data set with reciprocal triples, as empirically we found it works better than the variant without them.
Hi @yihong-chen, so do you confirm that the loss stated in the paper (Eq. 2) is not exactly the one used in the experiments, but is actually the one shown in Lacroix et al. (2018) (Eq. 7) with the addition of the relation-prediction auxiliary?
Hi @loreloc We have two implementations in our codebase, with and without reciprocal triples. Our reported results are with reciprocal triples. So you are right, it is Lacroix et al. (2018) (Eq. 7) + the relation-prediction auxiliary. In general, using reciprocal triples is a very useful trick, as observed both in Dettmers et al. (2018) and Lacroix et al. (2018). Let me know if there is anything else I can help with.
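For concreteness, the reciprocal-triples trick mentioned above can be sketched as follows. This is an illustrative sketch, not the repository's code; the id convention (encoding $p^{-1}$ as `p + n_rel`) is a common one from Dettmers et al. (2018) and Lacroix et al. (2018), and the function name is hypothetical:

```python
# Hypothetical sketch of reciprocal-triple augmentation. Triples are
# (subject, relation, object) integer-id tuples, relation ids in [0, n_rel).
# The inverse relation p^{-1} is encoded as p + n_rel (a common convention).

def add_reciprocals(triples, n_rel):
    """Return the dataset augmented with one reciprocal triple per original."""
    augmented = list(triples)
    for s, p, o in triples:
        augmented.append((o, p + n_rel, s))  # (o, p^{-1}, s)
    return augmented

print(add_reciprocals([(0, 0, 1), (1, 1, 2)], n_rel=2))
# → [(0, 0, 1), (1, 1, 2), (1, 2, 0), (2, 3, 1)]
```

With this convention, training only object prediction on the augmented set implicitly covers subject prediction through the reciprocal copies, which is why the explicit subject-scoring term can be dropped.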
Thank you! I think this can be closed.
Hi,
I have noticed that in your experiments the flag `--score_lhs` is not enabled, and this flag includes the component $-\log P_\theta(s\mid p,o)$ in the loss. In contrast, the 1vsAll objective in the paper does include this conditional likelihood, so it seems there is a discrepancy between the objective function in the paper (where there is a conditioning on the subjects) and the one used here.
Is it because you augment the data set with reciprocal triples? If so, is this equivalent to assuming that $P_\theta(S=s\mid R=p,O=o) = P_\theta(O=s\mid R=p^{-1},S=o)$, where $p^{-1}$ denotes the inverse relation?
Thank you
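To make the discrepancy in question concrete, here is a toy sketch (not the repository's implementation; the function names, signatures, and scores are illustrative assumptions) of a 1vsAll loss with and without the subject-scoring term that `--score_lhs` would add:

```python
import math

# Toy sketch of the 1vsAll objective for one triple (s, p, o):
# always a cross-entropy over all entities for object prediction,
# plus, when subject scoring is enabled, the symmetric term -log P(s | p, o).
# Scores are raw logits over all entities, not from a real KGE model.

def log_softmax(scores, idx):
    """Log-probability of entity `idx` under a softmax over `scores`."""
    z = math.log(sum(math.exp(x) for x in scores))
    return scores[idx] - z

def one_vs_all_loss(obj_scores, o, sub_scores, s, score_lhs=True):
    loss = -log_softmax(obj_scores, o)       # -log P(o | s, p)
    if score_lhs:
        loss += -log_softmax(sub_scores, s)  # -log P(s | p, o)
    return loss
```

With uniform logits over two entities, each term contributes $\log 2$, so the loss doubles when subject scoring is enabled; with reciprocal triples, the second term is instead supplied by the augmented copy $(o, p^{-1}, s)$.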