
Questions about the inverse relations #34

Open
ShaoZhangHao opened this issue Sep 21, 2023 · 1 comment

Comments

@ShaoZhangHao

Could you please explain how the code handles relations for the BERT tokenizer? There doesn't seem to be a dataset called 'relations2description', which suggests that relations are just the strings in the middle column of 'train.txt/valid.txt/test.txt' and are used directly, without being converted into descriptions the way entities are. Is that correct? Will the BERT model still be able to produce embeddings with rich semantic information, comparable to when they are converted into descriptions?

Additionally, how do you handle inverse relations? Is it as simple as prepending 'inverse' to every relation, and does that approach yield correct inverse relations? I couldn't find a file like 'inverse_relation2description', so I'd appreciate your assistance. Thank you for taking the time to help me with this.

@intfloat (Owner)

  1. We map each relation id to its string format; please check out preprocess.py for details.
  2. Yes, inverse relations are simply handled by adding an "inverse" prefix; BERT will learn this pattern.
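For readers landing here later, the two steps above can be sketched roughly like this. This is a minimal illustration of the prefix-based scheme, not the actual preprocess.py code; the function names and the exact string cleanup are assumptions:

```python
# Hypothetical sketch of the relation handling described above
# (illustrative only; see preprocess.py in the repo for the real logic).

def relation_to_text(relation_id: str) -> str:
    """Map a raw relation id to a plain string, e.g.
    '/people/person/place_of_birth' -> 'people person place of birth'."""
    return relation_id.replace('/', ' ').replace('_', ' ').strip()

def inverse_relation_text(relation_id: str) -> str:
    """The inverse relation is just the forward text with an 'inverse' prefix;
    BERT is expected to learn the directionality from this pattern."""
    return 'inverse ' + relation_to_text(relation_id)
```

Under this scheme, both directions of a triple share the same relation vocabulary, and the tokenizer sees the inverse direction only through the added 'inverse' token.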
