I would like to know whether the KBAT result reported in your paper is the result after the fix. I only see the evaluation protocol modified in the code; I did not find the modification of `valid_triples_dict` that you mentioned. Thanks.
Yes, the results are reported after correcting both the evaluation script and the test-leakage problem. Thanks for pointing that out; we may have missed that fix in the version we made publicly available.
Thank you for your reply. Based on your comment in another issue that "they shouldn't use valid_triples_dict (all triples in train, dev, and test) in this line, but should only use training data", is it correct that it is enough to change

`self.valid_triples_dict = {j: i for i, j in enumerate(self.train_triples + self.validation_triples + self.test_triples)}`

to

`self.valid_triples_dict = {j: i for i, j in enumerate(self.train_triples)}`?
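To make the two constructions concrete, here is a minimal self-contained sketch using toy triples (not the actual dataset or the authors' confirmed fix) contrasting the original line with the proposed train-only change:

```python
# Toy triples standing in for the three splits.
train_triples = [("a", "r", "b"), ("b", "r", "c")]
validation_triples = [("a", "r", "c")]
test_triples = [("a", "r", "d")]

# Original construction: includes dev and test triples, so any
# component built from this dict can leak held-out information.
valid_triples_dict_all = {
    t: i for i, t in enumerate(train_triples + validation_triples + test_triples)
}

# Proposed change: training data only.
valid_triples_dict_train = {t: i for i, t in enumerate(train_triples)}

print(len(valid_triples_dict_all))    # 4
print(len(valid_triples_dict_train))  # 2
```

The only difference is which splits feed the dict comprehension; the dict values are just enumeration indices in both cases.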
There is another problem. I noticed that two lines of code in main.py are commented out. In our experiments, this causes the loss to fail to converge. Is this an accidental omission in the released code, or should these lines really be commented out? If they should be, could you explain why? Thank you.
```python
# main.py
# line 198:  # loss.backward()
# line 199:  # optimizer.step()
```
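For reference, a minimal sketch of a standard PyTorch training step (a generic linear model, not the KBAT code itself) shows why both calls matter: without `loss.backward()` and `optimizer.step()`, gradients are never computed and parameters never update, so the loss cannot decrease.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)
y = x.sum(dim=1, keepdim=True)  # a target the model can fit

losses = []
for _ in range(50):
    optimizer.zero_grad()  # clear stale gradients
    loss = torch.nn.functional.mse_loss(model(x), y)
    losses.append(loss.item())
    loss.backward()        # backpropagate to populate .grad
    optimizer.step()       # apply the gradient update
```

Commenting out the last two lines leaves `losses` essentially constant, which matches the non-convergence you observed.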