I still get 0.49 in metaQA full KG-hop3. #31

Open

ToneLi opened this issue Sep 12, 2020 · 7 comments

Comments
ToneLi commented Sep 12, 2020

I still get 0.49 on MetaQA full KG 3-hop. Can you provide the whole project (KG embeddings, commands) or other advice? 0.49 is too low, and I do not know what is wrong. I used your command and your data. I hope you can provide the relevant project again, or give other advice to help me raise the accuracy.
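(For background on what the KG embeddings are used for here: EmbedKGQA ranks candidate answer entities with the ComplEx scoring function, with the encoded question playing the role of the relation. A minimal sketch of that scoring step; the tensor layout and names are assumptions, not the repo's exact code:

```python
import torch

def complex_score(head, relation, tail):
    """ComplEx score Re(<h, r, conj(t)>), with embeddings stored as
    [real; imaginary] halves along the last dimension (an assumption
    matching common ComplEx implementations)."""
    re_h, im_h = torch.chunk(head, 2, dim=-1)
    re_r, im_r = torch.chunk(relation, 2, dim=-1)
    re_t, im_t = torch.chunk(tail, 2, dim=-1)
    # Re((re_h + i*im_h) * (re_r + i*im_r) * (re_t - i*im_t))
    return (re_h * re_r * re_t + im_h * re_r * im_t
            + re_h * im_r * im_t - im_h * im_r * re_t).sum(dim=-1)

# Ranking all entities as answers for one question:
# head: (dim,) topic-entity embedding, question: (dim,) question encoding
# acting as the relation, all_entities: (num_entities, dim)
# scores = complex_score(head, question, all_entities)  # (num_entities,)
```
)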

apoorvumang (Collaborator) commented
Can you please give the exact command that you used? A screenshot of the output would also be helpful.

panhaiming commented Oct 15, 2020

I encountered the same problem. I used the MetaQA_full 2-hop training command that you released on GitHub, but the accuracy on the test set was only 0.70. Can you provide the training command for MetaQA_full 3-hop?

ShuangNYU commented
I got the same problem. With the ComplEx embeddings you provided, the best validation score achieved on MetaQA_full is only 0.717879. I have already unfrozen the embeddings.
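(For anyone comparing runs, "unfreezing" here means making the pretrained entity embeddings trainable. A minimal sketch of the PyTorch mechanics, assuming the embeddings are loaded from a NumPy file; the path and variable names are hypothetical:

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical path to the pretrained ComplEx entity embedding matrix.
weights = torch.from_numpy(np.load("pretrained/entity_embeddings.npy"))

# freeze=False makes the embeddings trainable ("unfrozen"), so the
# optimizer updates them along with the QA model; freeze=True would
# keep them fixed at their pretrained values.
entity_emb = nn.Embedding.from_pretrained(weights, freeze=False)
```
)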

mili6qm commented Oct 27, 2020

I got the same problem. I used the training command on the 3-hop MetaQA data, but the model early-stopped after 14 epochs with an accuracy of about 0.141376, and training took about two days. When I set roberta_model.parameters.requires_grad = False, it reached 0.599131 accuracy at epoch 37 in a shorter time. The command is:

python RoBERTa/main.py --mode train --relation_dim 200 --hidden_dim 256 --gpu 3 --freeze 0 --batch_size 128 --validate_every 5 --hops 3 --lr 0.0005 --entdrop 0.1 --reldrop 0.2 --scoredrop 0.2 --decay 1.0 --model ComplEx --patience 10 --ls 0.0 --outfile 3hop

and I used qa_train_3hop.txt to train the model. Can you provide the training log? I want to know how much training time the best model took.
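(A side note on the snippet above: in PyTorch, requires_grad must be set on each parameter tensor individually, since assigning to the parameters method itself has no effect, so that line is presumably shorthand for a loop like this sketch using the Hugging Face encoder; the QA head is a hypothetical stand-in:

```python
import torch
import torch.nn as nn
from transformers import RobertaModel

roberta_model = RobertaModel.from_pretrained("roberta-base")
head = nn.Linear(roberta_model.config.hidden_size, 400)  # hypothetical QA head

# Freeze every encoder weight so only the layers on top of RoBERTa
# (and the KG embeddings, if unfrozen) receive gradient updates.
for param in roberta_model.parameters():
    param.requires_grad = False

# Hand the optimizer only the parameters that remain trainable.
trainable = [p for p in list(roberta_model.parameters()) + list(head.parameters())
             if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=5e-4)
```
)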

apoorvumang (Collaborator) commented

> I got the same problem. I used the training command on the 3-hop MetaQA data, but the model early-stopped after 14 epochs with an accuracy of about 0.141376. [...]

Please use LSTM for the MetaQA datasets.
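(For readers landing here with the same issue, the suggestion is to swap the RoBERTa question encoder for an LSTM one. A minimal sketch of such an encoder; the dimensions echo the --relation_dim 200 / --hidden_dim 256 flags above, but everything else is an illustrative assumption, not the repo's exact architecture:

```python
import torch
import torch.nn as nn

class LSTMQuestionEncoder(nn.Module):
    """Encode a tokenized question into a single relation-like vector."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256,
                 relation_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # ComplEx relations are complex-valued, hence 2 * relation_dim.
        self.proj = nn.Linear(2 * hidden_dim, 2 * relation_dim)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        states, _ = self.lstm(self.embed(token_ids))
        # Mean-pool over time; the repo may pool differently.
        return self.proj(states.mean(dim=1))  # (batch, 2 * relation_dim)

# q = LSTMQuestionEncoder(vocab_size=10000)
# rel = q(torch.randint(0, 10000, (4, 12)))  # shape (4, 400)
```
)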

mili6qm commented Oct 29, 2020 via email

Ironeie commented Nov 25, 2020

> I got the same problem. I used the training command on the 3-hop MetaQA data, but the model early-stopped after 14 epochs with an accuracy of about 0.141376. [...]

> Please use LSTM for the MetaQA datasets.

Hello. I used LSTM on the MetaQA 3-hop full dataset and trained with many sets of hyperparameters, but the result only reached 0.728 with the best of them. Here is the command for my best result:

python main_LSTM.py --mode train --relation_dim 200 --hidden_dim 256 --gpu 0 --freeze 0 --batch_size 1024 --validate_every 5 --hops 3 --lr 0.0005 --entdrop 0.1 --reldrop 0.2 --scoredrop 0.2 --decay 1.0 --model ComplEx --patience 12 --ls 0.0 --kg_type full
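(Since --patience and --validate_every keep appearing in these commands, here is a minimal, runnable sketch of the patience-based early stopping they control, with stand-in training and validation functions; the exact bookkeeping in the repo may differ:

```python
import random

def train_one_epoch():            # stand-in for the real training step
    pass

def validate():                   # stand-in for dev-set accuracy (hits@1)
    return random.random()

max_epochs, validate_every, patience = 200, 5, 12
best_score, checks_since_best = 0.0, 0

for epoch in range(max_epochs):
    train_one_epoch()
    if epoch % validate_every == 0:
        score = validate()
        if score > best_score:
            best_score, checks_since_best = score, 0   # new best: keep it
        else:
            checks_since_best += 1
            if checks_since_best >= patience:
                break   # no improvement for `patience` checks: stop early
```
)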
