
Why do I get 0.94 in hop2, 0.972 in hop1, 0.48 in hop3? (MetaQA full KG) #28

Closed
ToneLi opened this issue Sep 8, 2020 · 12 comments

Comments

@ToneLi

ToneLi commented Sep 8, 2020

Dear authors,
Why do I get 0.94 in hop2, 0.972 in hop1, and 0.48 in hop3 (MetaQA full KG)? I used your code with the same parameters, but there is a big gap from the results described in your paper. What should I do?

@apoorvumang
Collaborator

Can you please share the exact commands you used?

@ToneLi
Author

ToneLi commented Sep 8, 2020

import argparse

def str2bool(v):
    # argparse's type=bool treats any non-empty string (even "False") as True,
    # so boolean flags are parsed with this helper instead.
    return str(v).lower() in ('yes', 'true', 't', '1')

parser = argparse.ArgumentParser()
parser.add_argument('--dataname', type=str, default='metaQA')
parser.add_argument('--hops', type=str, default='3')
parser.add_argument('--ls', type=float, default=0.0)
parser.add_argument('--validate_every', type=int, default=5)
parser.add_argument('--model', type=str, default='ComplEx')
parser.add_argument('--kg_type', type=str, default='full')

parser.add_argument('--mode', type=str, default='train')
parser.add_argument('--batch_size', type=int, default=128)
parser.add_argument('--dropout', type=float, default=0.1)
parser.add_argument('--entdrop', type=float, default=0.1)
parser.add_argument('--reldrop', type=float, default=0.2)
parser.add_argument('--scoredrop', type=float, default=0.2)
parser.add_argument('--l3_reg', type=float, default=0.0)
parser.add_argument('--decay', type=float, default=1.0)
parser.add_argument('--shuffle_data', type=str2bool, default=True)
parser.add_argument('--num_workers', type=int, default=15)
parser.add_argument('--lr', type=float, default=0.0005)
parser.add_argument('--nb_epochs', type=int, default=90)
parser.add_argument('--gpu', type=int, default=4)
parser.add_argument('--neg_batch_size', type=int, default=128)
parser.add_argument('--hidden_dim', type=int, default=256)
parser.add_argument('--embedding_dim', type=int, default=256)
parser.add_argument('--relation_dim', type=int, default=200)
parser.add_argument('--use_cuda', type=str2bool, default=True)
parser.add_argument('--patience', type=int, default=5)
parser.add_argument('--freeze', type=str2bool, default=True)

@apoorvumang
Collaborator

Are you using the pretrained KG embeddings or did you retrain them?

@ToneLi
Author

ToneLi commented Sep 9, 2020

Yes, I retrained them using your command from #11.

@apoorvumang
Collaborator

Those aren't the best hyperparameters; in #11 I only wrote that command to clarify that it works even when batch_norm is set to 0. Please set batch_norm=1, and if possible, let me know the MRR you get when training the full MetaQA KG embedding. It should be very close to 1.0, if not exactly 1.0.
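(For anyone reading along: MRR, mean reciprocal rank, is the average of 1/rank of the gold entity over all test triples. A minimal sketch, with a function name of my own choosing rather than anything from the repo:)

```python
# Sketch of the MRR metric used to sanity-check KG-embedding quality.
# `ranks` holds the 1-based rank of each gold entity among all candidates;
# an MRR near 1.0 means the gold entity is almost always ranked first.
def mean_reciprocal_rank(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

print(mean_reciprocal_rank([1, 1, 2, 1]))  # 0.875
```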

@apoorvumang
Collaborator

Also, as mentioned in #15, we need the relation scoring module for 3-hop SOTA performance. Without it we get ~0.75 test accuracy.

@ToneLi
Author

ToneLi commented Sep 9, 2020

If I set batch_norm=1, are the hyperparameters in #11 the best ones?

@apoorvumang
Collaborator

Let me confirm; please wait 15-20 minutes.

@apoorvumang
Collaborator

@ToneLi Yes, please try the same command with batch_norm=1. As explained in #11, this is needed because of the KG-embedding implementation, which this code takes from https://github.com/ibalazevic/TuckER
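(Aside for context: the flag toggles batch normalization over the score/embedding features. This is not the repo's code, just a minimal pure-Python sketch of the operation, to show why turning it off changes the score distribution the model was tuned for:)

```python
import math

# Batch normalization sketch: each feature column is shifted and scaled
# to zero mean and unit variance across the batch (no learned affine
# parameters here, for simplicity).
def batch_norm(batch, eps=1e-5):
    dim = len(batch[0])
    means = [sum(row[d] for row in batch) / len(batch) for d in range(dim)]
    variances = [sum((row[d] - means[d]) ** 2 for row in batch) / len(batch)
                 for d in range(dim)]
    return [[(row[d] - means[d]) / math.sqrt(variances[d] + eps)
             for d in range(dim)] for row in batch]

scores = [[10.0, 0.1], [12.0, 0.3], [8.0, 0.2]]
normed = batch_norm(scores)
```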

@apoorvumang
Collaborator

Closing this for now, please reopen if issue remains @ToneLi

@ToneLi
Author

ToneLi commented Sep 11, 2020

Hi, sorry to disturb you. I used those hyperparameters, and now the hit value I get is 1. Is that the right result? (MetaQA full KG)

@ToneLi
Author

ToneLi commented Sep 12, 2020

I trained the KG embeddings again, but on MetaQA full-KG 3-hop I still cannot get past 0.49. Do you have any advice for reaching the ~0.75 you mentioned?
