fine-tune bert #16
How can I fine-tune BERT with the text of FB15K-237?
Hi, the input for fine-tuning BERT was the phrases representing the nodes (not the edges/triples themselves). Since commonsense KGs have natural language phrases as nodes, it made sense to do that.
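(As an aside, here is a minimal sketch of what fine-tuning on node phrases could look like. This is only an illustration of the general idea, not this repo's code; it assumes the HuggingFace `transformers` library and a plain list of node phrases, and uses masked language modeling over those phrases.)

```python
# Illustrative sketch only -- not the repo's implementation.
# Assumes a list of natural-language node phrases and fine-tunes
# BERT on them with a masked language modeling objective.
import torch
from torch.utils.data import DataLoader
from transformers import (BertTokenizerFast, BertForMaskedLM,
                          DataCollatorForLanguageModeling)

phrases = ["go to a concert", "listen to music", "have fun"]  # placeholder node phrases

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Tokenize each phrase independently (nodes only, no triples).
encodings = [tokenizer(p, truncation=True, max_length=32) for p in phrases]

# Randomly mask 15% of tokens for the MLM objective and pad batches.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
loader = DataLoader(encodings, batch_size=8, shuffle=True, collate_fn=collator)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for batch in loader:
        outputs = model(**batch)   # loss is computed over masked positions
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```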
Thanks for your reply. Which of the files you provide should I use for fine-tuning BERT? Or do I need to construct a new file following your tips?
Hey, yeah, since you mentioned you are interested in training models on FB15k-237, you could get that dataset from this repo: https://github.com/TimDettmers/ConvE.
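(A rough sketch of how the pieces might fit together, again not code from this repo: FB15k-237 entities are Freebase MIDs, so this assumes you also have an entity-to-text mapping file, here given the hypothetical name `entity2text.tsv`, with one `MID<TAB>phrase` pair per line, alongside the usual `train.txt` of tab-separated triples.)

```python
# Rough sketch, not the repo's code. Assumes FB15k-237 triple files
# ("head<TAB>relation<TAB>tail" per line) plus a hypothetical
# entity2text.tsv mapping Freebase MIDs to natural-language phrases.
def load_entity_text(path="entity2text.tsv"):
    mid_to_text = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            mid, text = line.rstrip("\n").split("\t", 1)
            mid_to_text[mid] = text
    return mid_to_text

def node_phrases(triple_file, mid_to_text):
    """Collect the phrases of all entities appearing as head or tail.
    Relations are ignored, since only node text is used for fine-tuning."""
    phrases = set()
    with open(triple_file, encoding="utf-8") as f:
        for line in f:
            head, _rel, tail = line.strip().split("\t")
            for mid in (head, tail):
                if mid in mid_to_text:
                    phrases.add(mid_to_text[mid])
    return sorted(phrases)

mid_to_text = load_entity_text("entity2text.tsv")
phrases = node_phrases("train.txt", mid_to_text)
print(len(phrases), "unique node phrases")
```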
Thanks for your great work. I am confused about how BERT is fine-tuned: as mentioned above, the model removes the relation, so is the input text {head entity tokens} concatenated with {tail entity tokens}?