
About the second encoder for the relation model #44

Closed
momoyao1 opened this issue Apr 12, 2022 · 3 comments

Comments

@momoyao1

Hello, I'm really interested in the pipelined approach in this paper. I noticed that you mention a second pre-trained encoder in the relation model, and I'm a little confused about it. I'm not sure how you obtain this second pre-trained encoder. Is its training procedure completely identical to BERT's, or is there anything special about how it is obtained?

Thanks,

@a3616001
Member

Hi! Thanks for your interest! A "second pre-trained encoder" means that we use a separate encoder for the relation model (instead of sharing the encoder with the entity model). Both the relation encoder and the entity encoder are BERT.
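To make "separate encoder" concrete, here is a toy, dependency-free sketch (all names are illustrative stand-ins, not the paper's actual code): the same pre-trained weights are copied into two independent encoders, so fine-tuning the relation model leaves the entity model's encoder untouched.

```python
import copy
import random

random.seed(0)

class ToyEncoder:
    """Stand-in for a pre-trained BERT encoder: just a weight vector."""
    def __init__(self, dim=4):
        self.weights = [random.gauss(0.0, 0.02) for _ in range(dim)]

# Simulate loading the same pre-trained checkpoint twice,
# once for each model in the pipeline.
pretrained = ToyEncoder()
entity_encoder = copy.deepcopy(pretrained)
relation_encoder = copy.deepcopy(pretrained)

# Fine-tuning the relation model updates only its own copy of the weights.
relation_encoder.weights[0] += 1.0

# The entity encoder is unaffected, since the two are not shared.
print(relation_encoder.weights[0] == entity_encoder.weights[0])  # False
```

The point of the sketch is only the parameter separation: both encoders start from the same pre-trained weights, but they are distinct parameter sets fine-tuned for their own tasks.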

Sorry about the confusion! Let me know if you have any further questions!

@momoyao1
Author

Hello! Thanks for your help! But I still have a question. As you mentioned, the additional inserted markers play an important role in obtaining the relation representation. Here is the thing: since BERT has never seen these markers before, I find it hard to understand how BERT can produce reasonable representations for them. I guess there are pre-training tasks like MLM that let BERT see these special markers first? Please correct me if I'm missing the point.

Thanks,

@a3616001
Member

You are right that BERT hasn't seen these markers during pre-training. We don't have any special pre-training tasks for marker tokens. The representations of the marker tokens are learned purely during fine-tuning on the downstream task (relation classification).
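The mechanics above can be sketched with a toy, dependency-free example (vocabulary, dimensions, and marker names are all hypothetical stand-ins for BERT's): unseen marker tokens are simply appended to the vocabulary with randomly initialized embedding rows, and those rows become trainable parameters that are updated only during fine-tuning, with no extra pre-training step.

```python
import random

random.seed(0)
EMB_DIM = 4  # toy dimension; BERT-base actually uses 768

# Toy "pre-trained" vocabulary and embedding table (stand-ins for BERT's).
vocab = {"[CLS]": 0, "[SEP]": 1, "hello": 2, "world": 3}
embeddings = [[random.gauss(0.0, 0.02) for _ in range(EMB_DIM)]
              for _ in vocab]

def add_marker_tokens(vocab, embeddings, markers):
    """Append new marker tokens with randomly initialized embedding rows.

    These rows start out meaningless; they only acquire useful values
    through gradient updates during fine-tuning on relation classification.
    """
    for tok in markers:
        if tok not in vocab:
            vocab[tok] = len(vocab)
            embeddings.append([random.gauss(0.0, 0.02)
                               for _ in range(EMB_DIM)])
    return vocab, embeddings

# Entity markers inserted around subject/object spans
# (names here are illustrative, not the paper's exact strings).
markers = ["<SUBJ_START>", "<SUBJ_END>", "<OBJ_START>", "<OBJ_END>"]
vocab, embeddings = add_marker_tokens(vocab, embeddings, markers)

print(len(vocab))             # 8: the vocabulary grew by four markers
print(vocab["<SUBJ_START>"])  # 4: new tokens take the next free ids
```

In a real implementation the same effect is achieved by extending the tokenizer's vocabulary and resizing the model's input embedding matrix before fine-tuning; the pre-trained rows are kept, and only the new marker rows start from random initialization.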

@danqi danqi closed this as completed May 7, 2022