
Extracting Keywords That Are Subsets Of Each Other #137

Open

shner-elmo opened this issue Jan 8, 2023 · 3 comments

Comments

@shner-elmo

Hey everyone, I was wondering why, when we have multiple keywords that overlap, for example ["super computer", "computer game"], we only extract the longest one. Why not extract both of them?
I would assume that if you want to extract a bunch of keywords from a document, it makes sense to get all the matches (even those that overlap), and then the user can decide what to do with them.

In this test we want to make sure that, given the sentence sentence = "distributed super computer game" and the following keywords:

```python
{
    "Distributed Super Computer": ["distributed super computer"],
    "Computer Game": ["computer game"]
}
```

we only extract the first keyword, which is the longest: "Distributed Super Computer". But why not get both, in their order of appearance? i.e.: ["Distributed Super Computer", "Computer Game"]
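
For reference, here's a minimal repro with flashtext; the output is what the current longest-match behaviour produces:

```python
# Minimal reproduction of the current longest-match behaviour.
from flashtext import KeywordProcessor

kp = KeywordProcessor()
kp.add_keywords_from_dict({
    "Distributed Super Computer": ["distributed super computer"],
    "Computer Game": ["computer game"],
})

print(kp.extract_keywords("distributed super computer game"))
# ['Distributed Super Computer'] -- the overlapping 'Computer Game' is skipped
```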

@Raidus

Raidus commented Mar 15, 2023

I would also be interested in an efficient way to do that. It would not be compute-efficient and would only work on very short texts like a keyword list, but you could create n-gram (1, 2, 3) pairs like distributed, computer, distributed super, super computer and distributed super computer, and let flashtext do the work on the n-gram list. Then you could match against it.
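
Something like this rough sketch (the ngram_matches helper and the max_n parameter are just illustrative, not part of flashtext):

```python
from flashtext import KeywordProcessor

kp = KeywordProcessor()
kp.add_keywords_from_dict({
    "Distributed Super Computer": ["distributed super computer"],
    "Computer Game": ["computer game"],
})

def ngram_matches(text, max_n=3):
    """Run flashtext over every 1..max_n word n-gram so overlapping
    keywords are all reported, at the cost of many extra scans."""
    words = text.split()
    found = set()
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            found.update(kp.extract_keywords(" ".join(words[i:i + n])))
    return found

print(ngram_matches("distributed super computer game"))
# {'Distributed Super Computer', 'Computer Game'}
```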

You could also try out Hyperscan. It's extremely fast, although compiling a huge list of regex rules takes a bit of time. Here is a great tutorial: https://geekmonkey.org/regular-expression-matching-at-scale-with-hyperscan/
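
A small untested sketch with the python-hyperscan bindings; Hyperscan reports overlapping matches through the callback, which is exactly what's wanted here (the pattern IDs and the flag combination are just an assumed setup):

```python
# Untested sketch with the python-hyperscan bindings. Hyperscan reports
# every match, overlaps included, through the callback. HS_FLAG_SOM_LEFTMOST
# is needed to get start-of-match offsets; without it `start` is 0.
import hyperscan

db = hyperscan.Database()
flags = hyperscan.HS_FLAG_CASELESS | hyperscan.HS_FLAG_SOM_LEFTMOST
db.compile(
    expressions=[b"distributed super computer", b"computer game"],
    ids=[0, 1],
    flags=[flags, flags],
)

def on_match(pattern_id, start, end, match_flags, context=None):
    # Called once per match; overlapping matches each fire separately.
    print(f"pattern {pattern_id} matched at [{start}:{end}]")

db.scan(b"distributed super computer game", match_event_handler=on_match)
# Both patterns fire even though they overlap in the input.
```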

@Sandy4321

Hyperscan doesn't work on Windows, unfortunately.

@NLPShenanigans

Hey @shner-elmo and @Raidus, while it doesn't look like this repo is well-maintained at this point, I'd like to take a stab at explaining why the algorithm doesn't accommodate this. In short, it's greedy: it locally optimises for the length of the extracted tokens. This is done for efficiency, but you both rightly point out that it can lead to situations, such as in named entity recognition, where you miss mentions of overlapping named entities.

Intuitively, I'd expect this isn't wholly undesirable, though, since in the toy example you mention the true named entity is the whole sentence, "distributed super computer game". Given that, I would argue that building your vocabulary for the keyword processor instance, either via word tokenisation with n-gram enrichment or via subword tokenisation with something like BPE, is a suitable way to ensure your vocabulary is better aligned with the named entities you actually wish to extract.

If you still need the smaller spans too, you may have to build a set of unigrams, bigrams and trigrams as your vocabulary and then take a more brute-force approach (rough sketch below). I hope that helps a little!
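
For completeness, a minimal sketch of that brute-force pass over the toy dict from the original post (the all_matches helper is hypothetical, not flashtext API):

```python
import re

# Toy keyword dict from the original post.
keywords = {
    "Distributed Super Computer": ["distributed super computer"],
    "Computer Game": ["computer game"],
}

def all_matches(text):
    """Brute-force pass: report every keyword occurrence, overlaps
    included, in order of appearance."""
    hits = []
    for clean_name, variants in keywords.items():
        for variant in variants:
            for m in re.finditer(r"\b" + re.escape(variant) + r"\b", text.lower()):
                hits.append((m.start(), clean_name))
    return [name for _, name in sorted(hits)]

print(all_matches("distributed super computer game"))
# ['Distributed Super Computer', 'Computer Game']
```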
