Thanks for the awesome work. I'm trying to reproduce the results in the paper, but I can't find the code for XLM-R embedding extraction. Could you please also publish the code showing how to extract the embeddings?
Hi @syl007,
Thanks for your question. Most of our code for extracting contextual embeddings is borrowed from HuggingFace Transformers. You can also refer to https://github.com/joker-xii/simalign/blob/master/simalign/simalign.py, lines 117-140 and lines 195-215, to see how to extract the contextual embedding of each word. I have been busy with work, but I will release my code later.
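Until the official code is released, here is a minimal sketch of the word-level extraction step the answer describes. It assumes the subword vectors have already been produced by XLM-R through HuggingFace Transformers (e.g. `AutoModel.from_pretrained("xlm-roberta-base")` with `last_hidden_state`), and only shows one common way to merge subword vectors back into one vector per word by averaging; the function name, toy vectors, and word-index format are illustrative, not taken from the paper's code.

```python
# Sketch: merge subword embeddings into word embeddings by averaging.
# Assumption: the subword vectors come from XLM-R via HuggingFace
# Transformers, roughly:
#   tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
#   model = AutoModel.from_pretrained("xlm-roberta-base")
#   enc = tokenizer(sentence, return_tensors="pt")
#   hidden = model(**enc).last_hidden_state[0]   # (num_subwords, dim)
#   word_ids = enc.word_ids()                    # word index per subword
# Below, plain Python lists stand in for the tensors.

def merge_subword_vectors(subword_vectors, word_ids):
    """Average the subword vectors belonging to each word.

    subword_vectors: one equal-length float list per subword token.
    word_ids: for each subword, the index of the word it belongs to,
              or None for special tokens (<s>, </s>) -- the format
              returned by fast tokenizers' `word_ids()`.
    """
    buckets = {}
    for vec, wid in zip(subword_vectors, word_ids):
        if wid is None:               # skip special tokens
            continue
        buckets.setdefault(wid, []).append(vec)
    word_vectors = []
    for wid in sorted(buckets):
        group = buckets[wid]
        dim = len(group[0])
        word_vectors.append(
            [sum(v[d] for v in group) / len(group) for d in range(dim)]
        )
    return word_vectors

# Toy example: 5 subwords (incl. two special tokens) for a 2-word sentence.
subs = [[0.0, 0.0],   # <s>
        [1.0, 3.0],   # first subword of word 0
        [3.0, 1.0],   # second subword of word 0
        [2.0, 4.0],   # word 1 (a single subword)
        [0.0, 0.0]]   # </s>
wids = [None, 0, 0, 1, None]
print(merge_subword_vectors(subs, wids))  # [[2.0, 2.0], [2.0, 4.0]]
```

Averaging is only one choice; taking the first subword's vector of each word is another common convention, and the two can give noticeably different alignments.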