
How to extract context embeddings from XLM-R? #2

Closed
SherryShen9 opened this issue Oct 1, 2022 · 2 comments

Comments

@SherryShen9

Hi @zjpbinary,

Thanks for the awesome work. I tried to reproduce the results in the paper but could not find the code for XLM-R embedding extraction. Could you please also publish the code showing how to extract them, if possible?

@zjpbinary
Owner

Hi @syl007,
Thanks for your question. Most of our code for extracting contextual embeddings is borrowed from HuggingFace Transformers. You can also refer to https://github.com/joker-xii/simalign/blob/master/simalign/simalign.py, lines 117-140 and 195-215, for how to extract the contextual embedding of each word. I am busy with other work at the moment, but I will release my code later.
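
For anyone looking for a concrete starting point, here is a minimal sketch (not this repository's released code) of the approach described above: tokenize a pre-split sentence with the XLM-R fast tokenizer, run the model with hidden states enabled, and average the sub-word vectors belonging to each word. The choice of `xlm-roberta-base` and of hidden layer 8 are assumptions for illustration, not something specified in this thread.

```python
# Sketch: per-word contextual embeddings from XLM-R via HuggingFace Transformers.
# Assumptions (not from this thread): model name "xlm-roberta-base", layer 8,
# and mean-pooling over sub-word pieces.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base", output_hidden_states=True)
model.eval()

def word_embeddings(words, layer=8):
    # Tokenize a pre-split sentence so the word <-> sub-word mapping is kept.
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    hidden = out.hidden_states[layer][0]      # (num_subwords, hidden_dim)
    word_ids = enc.word_ids(batch_index=0)    # sub-word -> word index, None for specials
    vectors = []
    for w_idx in range(len(words)):
        sub_idx = [i for i, wid in enumerate(word_ids) if wid == w_idx]
        vectors.append(hidden[sub_idx].mean(dim=0))  # average the sub-word vectors
    return torch.stack(vectors)               # (num_words, hidden_dim)

emb = word_embeddings("we did not expect this".split())
print(emb.shape)  # torch.Size([5, 768])
```

Averaging sub-word vectors is one common way to get a single vector per word; taking only the first sub-word piece is another option.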

@SherryShen9
Author

Hi @zjpbinary,
Thanks for your reply. Following your hints, I have successfully extracted the contextual embeddings.
