
Is BERT capable of producing semantically close word embeddings for synonyms? #1369

Open · niquet opened this issue Sep 13, 2022 · 1 comment


niquet commented Sep 13, 2022

Hello everyone, I am currently working on my undergraduate thesis on matching job descriptions to resumes based on the contents of both. Recently, I came across the following statement by Schmitt et al., 2016: "[...] [Recruiters] and job seekers [...] do not seem to speak the same language [...]. More precisely, CVs and job announcements tend to use different vocabularies, and same words might be used with different meanings".

Therefore, I wonder: is BERT able to create contextualized word embeddings that are semantically close for synonyms, yet semantically distant for the same word used with different meanings in the context of resumes and job postings?

Thank you very much in advance!

@Pixelatory

It's a bit late, but take a look at the following: https://krishansubudhi.github.io/deeplearning/2020/08/27/bert-embeddings-visualization.html

It allows you to visualize BERT's contextualized embeddings, so you can see for yourself whether they behave this way for your data.
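
Beyond visualization, you can also put numbers on it by comparing cosine similarities of contextual embeddings directly. Below is a minimal sketch using the Hugging Face `transformers` library; the checkpoint name, the example sentences, and the `word_embedding` helper are all illustrative assumptions, not anything from this thread:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative setup: bert-base-uncased is just one plausible checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Mean of the last-hidden-state vectors for the subtokens of `word`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_size)
    # Locate the word's subtoken ids inside the sentence's input ids.
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(word_ids) + 1):
        if ids[i:i + len(word_ids)] == word_ids:
            return hidden[i:i + len(word_ids)].mean(dim=0)
    raise ValueError(f"{word!r} not found in the tokenized sentence")

# Same surface form, two different senses (made-up sentences):
e1 = word_embedding("The role requires Java development experience.", "java")
e2 = word_embedding("We visited Java on our trip through Indonesia.", "java")
# Near-synonyms in comparable contexts:
e3 = word_embedding("Managed a team of five software developers.", "developers")
e4 = word_embedding("Managed a team of five software engineers.", "engineers")

cos = torch.nn.functional.cosine_similarity
print("same word, different senses:", cos(e1, e2, dim=0).item())
print("synonyms, similar contexts: ", cos(e3, e4, dim=0).item())
```

If the first similarity comes out noticeably lower than the second, the contextualized embeddings are separating senses and grouping synonyms in the way the question hopes. In practice you may also want to try averaging several of the later layers rather than using only the last hidden state.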
