
Feature Request: Return indices for the extracted keywords #62

Open
MalteHB opened this issue Oct 6, 2021 · 5 comments

Comments

@MalteHB

MalteHB commented Oct 6, 2021

Hi @MaartenGr!

It would be awesome if you had the option to return the location of the extracted keywords in the given input document/text!

Anyhow, I love your work! It is truly awesome to follow, so please keep it up! :-D

Best Regards
Malte

@MaartenGr
Owner

Thank you for your kind words, greatly appreciate it!

It definitely sounds like a useful feature to implement. Most likely, this will be a separate function for extracting the indices, as I want to avoid returning too much information in the default implementation. I am not sure how much time this will take, but I'll take a look.

For now, you can extract the token indices using something like this:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Fit the vectorizer on the same document that you passed to KeyBERT
cv = CountVectorizer().fit([doc])
tokenizer = cv.build_tokenizer()
tokens = tokenizer(doc)

# Precompute the lowercased keywords so the set is not rebuilt for every token
keyword_set = {word.lower() for word, _ in keywords}
indices = [index for index, token in enumerate(tokens) if token.lower() in keyword_set]
```

The most important thing here is to make sure that the input to the CountVectorizer is the same as the one you used in KeyBERT.
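If you need character offsets rather than token indices, a regex search over the raw document is a simple alternative. This is only a sketch: `doc` and `keywords` here are hypothetical stand-ins for your input document and KeyBERT's `(keyword, score)` output, not part of the library itself.

```python
import re

# Hypothetical document and KeyBERT-style (keyword, score) output
doc = "Supervised learning is the machine learning task of learning a function."
keywords = [("supervised learning", 0.7), ("learning", 0.5)]

spans = {}
for word, _ in keywords:
    # \b anchors on word boundaries; re.escape guards against regex metacharacters
    pattern = r"\b" + re.escape(word) + r"\b"
    spans[word] = [(m.start(), m.end()) for m in re.finditer(pattern, doc, re.IGNORECASE)]
```

Each entry in `spans` then maps a keyword to its `(start, end)` character offsets, with `end` exclusive, so `doc[start:end]` recovers the matched text.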

@pahalie

pahalie commented Oct 27, 2021

Hey! I'm wondering how to do this correctly for bigrams?

@MaartenGr
Owner

That is actually quite difficult to do. For example, when you remove stopwords, the resulting bigrams do not take into account that there was a stopword in the original text. Take the text "The learning of machines." A bigram might be learning machines, since we remove the stopword "of". However, that also means the bigram learning machines does not appear anywhere in the original text, as it originally contained a stopword. Thus, doing this for n-grams larger than 1 requires a lot of checks to get right.
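One way to sketch those checks is to allow stopwords between the bigram's tokens when scanning the document. Everything below is illustrative, not part of KeyBERT: the stopword list, `doc`, and `bigram` are made-up inputs, and a real solution would need to handle more cases (punctuation, inflection, longer n-grams).

```python
import re

stop_words = {"of", "the", "a"}  # small illustrative stopword list
doc = "The learning of machines is fascinating."
bigram = "learning machines"  # hypothetical extracted bigram

tokens = re.findall(r"\b\w+\b", doc.lower())
first, second = bigram.split()

# Record (i, j) token positions where the bigram's words appear
# with only stopwords between them
matches = []
for i, tok in enumerate(tokens):
    if tok != first:
        continue
    j = i + 1
    while j < len(tokens) and tokens[j] in stop_words:
        j += 1
    if j < len(tokens) and tokens[j] == second:
        matches.append((i, j))
```

For the example text this finds "learning ... machines" even though "of" sits between them, which is exactly the case an exact-phrase search would miss.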

@fortyfourforty

I'm also interested in getting the index range of the extracted keywords.
Let's say we don't remove stop words, so the extracted keywords appear verbatim in the document.
How can I get the start and end index of each extracted keyword, for any n-gram size (1, 2, 3 or more)?

@MaartenGr
Owner

MaartenGr commented Nov 21, 2023

@fortyfourforty You would have to adapt the code I wrote above to check every token within an n-gram. That way, you can check whether all tokens within a keyword match a sequence of tokens within the document. In other words, tokenize both the keywords and the document, then match them.
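A minimal sketch of that token-matching idea, assuming stopwords were not removed so each keyword appears verbatim: slide a window the size of the keyword over the document tokens and compare. `doc` and `keywords` are hypothetical stand-ins for your own document and KeyBERT output.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical document and KeyBERT-style (keyword, score) output
doc = "machine learning methods improve machine learning research"
keywords = [("machine learning", 0.8)]

# Use the same tokenizer as the vectorizer fitted on the document
cv = CountVectorizer().fit([doc])
tokenizer = cv.build_tokenizer()
doc_tokens = [t.lower() for t in tokenizer(doc)]

matches = {}
for keyword, _ in keywords:
    kw_tokens = [t.lower() for t in tokenizer(keyword)]
    n = len(kw_tokens)
    # Slide an n-token window over the document and compare token-by-token
    matches[keyword] = [
        (i, i + n - 1)
        for i in range(len(doc_tokens) - n + 1)
        if doc_tokens[i : i + n] == kw_tokens
    ]
```

Each match is a `(start, end)` pair of token indices (inclusive), so every occurrence of an n-gram keyword is found, not just the first.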
