Feature Request: Return indices for the extracted keywords #62
Thank you for your kind words, greatly appreciated! It definitely sounds like a useful feature to implement. Most likely, this will be a separate function for extracting the indices, as I want to avoid returning too much information in the default implementation. I am not sure how much time this will take, but I'll take a look. For now, you can extract the indices using something like this:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Tokenize the document with the same tokenizer that CountVectorizer uses
cv = CountVectorizer().fit([doc])
tokenizer = cv.build_tokenizer()
tokens = tokenizer(doc)

# Collect the token positions that match any extracted keyword
keyword_set = {word.lower() for word, _ in keywords}
indices = [index for index, token in enumerate(tokens) if token.lower() in keyword_set]
```

The most important thing here is that you make sure that the input for the CountVectorizer is the same as the one you used in KeyBERT.
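As a quick sanity check, here is a self-contained usage sketch of the snippet above. The document and the `(word, score)` pairs are made up for illustration; in practice `keywords` would come from KeyBERT's `extract_keywords`:

```python
from sklearn.feature_extraction.text import CountVectorizer

doc = "Supervised learning maps labeled data to outputs"
# Hypothetical (word, score) pairs, shaped like KeyBERT's output
keywords = [("supervised", 0.71), ("labeled", 0.53)]

cv = CountVectorizer().fit([doc])
tokens = cv.build_tokenizer()(doc)

keyword_set = {word.lower() for word, _ in keywords}
indices = [i for i, token in enumerate(tokens) if token.lower() in keyword_set]
print(indices)  # -> [0, 3]: positions of "Supervised" and "labeled"
```

Note that these are token positions, not character offsets, and that CountVectorizer's default `token_pattern` drops single-character tokens.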
Hey! I'm wondering how to do this correctly for bigrams?
That is actually quite difficult to do. For example, when you remove stopwords, the resulting bigrams do not take into account that there was a stopword in the original text. Take, for example, a text where a stopword sits between two keywords: the bigram you extract never appears contiguously in the original.
I'm also interested in getting the index range of the extracted keywords.
@fortyfourforty You would have to adapt the code I wrote above to check every instance within an n-gram. That way, you can check whether all tokens within a keyword match a set of tokens within the document. In other words, tokenize both the keywords and the document and then match them.
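The matching described above can be sketched as a sliding-window comparison over tokens. This is not part of KeyBERT's API; the function name is hypothetical, and it returns token spans (not character offsets), assuming no stopword removal was applied:

```python
from sklearn.feature_extraction.text import CountVectorizer

def keyword_token_spans(doc, keywords):
    """Return (start, end) token spans for each keyword n-gram in doc."""
    # Reuse CountVectorizer's default tokenizer so tokenization matches
    tokenizer = CountVectorizer().build_tokenizer()
    tokens = [t.lower() for t in tokenizer(doc)]
    spans = {}
    for keyword, _score in keywords:
        kw_tokens = [t.lower() for t in tokenizer(keyword)]
        n = len(kw_tokens)
        # Slide a window of size n and keep positions where all tokens match
        spans[keyword] = [
            (i, i + n)
            for i in range(len(tokens) - n + 1)
            if tokens[i:i + n] == kw_tokens
        ]
    return spans

doc = "Keyword extraction finds keywords in text"
keywords = [("keyword extraction", 0.8), ("text", 0.4)]
print(keyword_token_spans(doc, keywords))
# -> {'keyword extraction': [(0, 2)], 'text': [(5, 6)]}
```

Each span is a half-open `(start, end)` range into the token list, so a keyword that occurs more than once yields multiple spans.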
Hi @MaartenGr!
It would be awesome if you had the option to return the location of the extracted keywords in the given input document/text!
Anyhow, I love your work! It is truly awesome to follow, so please keep it up! :-D
Best Regards
Malte