
How does JamSpell correction work? #15

Closed · novitoll opened this issue Apr 8, 2018 · 7 comments
novitoll commented Apr 8, 2018

it considers word surroundings (context) for better correction

If Hunspell suggests a list of words with minimal replacement edits, but only at the unigram level, does JamSpell consider some N-gram model with a Markov chain, etc.? :)

>>> jamspell_corrector.FixFragment('how sre you')
u'how are you'
>>> hunspell_corrector.suggest('sre')
[u'tire', u'are', u'see', u're', u'sere', u'sire', u'sore', u'sure', u'res', u'ere', u'ire', u'ore', u'sue', u'she', u'Ore']
>>> jamspell_corrector.FixFragment('how you sre')
u'how you are'
>>> jamspell_corrector.FixFragment('you sre how')
u'you see how'
bakwc (Owner) commented Apr 8, 2018

It uses an n-gram language model (word-based, 3-gram) and selects the candidate with the highest score. It is also optimized for speed (a modified SymSpell algorithm) and memory consumption (bloom filter & perfect hash). I will add some description to the README later. Here is an article (Russian) with a detailed explanation: habrahabr.ru/post/346618.
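To illustrate the idea (not JamSpell's actual code): a word-based 3-gram model scores every way of replacing each word with one of its correction candidates and keeps the highest-scoring sentence. A minimal sketch, with a toy `TRIGRAM_COUNTS` table and hypothetical helper names:

```python
import math
from itertools import product

# Toy trigram counts standing in for a model trained on a large corpus;
# everything here is illustrative only.
TRIGRAM_COUNTS = {("how", "are", "you"): 50, ("how", "see", "you"): 1}

def trigram_logprob(w1, w2, w3):
    # Crude add-one smoothing over the toy counts.
    return math.log(TRIGRAM_COUNTS.get((w1, w2, w3), 0) + 1)

def best_correction(candidates_per_word):
    """Score every combination of per-word candidates with the 3-gram
    model and return the highest-scoring sentence (exhaustive search;
    a real implementation would prune candidates instead)."""
    best, best_score = None, float("-inf")
    for sentence in product(*candidates_per_word):
        score = sum(trigram_logprob(*sentence[i:i + 3])
                    for i in range(len(sentence) - 2))
        if score > best_score:
            best, best_score = list(sentence), score
    return best

# 'sre' alone could become 'are' or 'see'; the surrounding words decide.
print(best_correction([["how"], ["are", "see"], ["you"]]))
# -> ['how', 'are', 'you']
```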

novitoll (Author) commented Apr 8, 2018

Thanks, I'll have to read it.

Closing this issue.
It would also be interesting to see whether JamSpell is going to get a "hyped", yet more relevant, language model based on some RNN architecture.

P.S.: Your library is popular and the most recommended one for the spell-checking problem in the ODS #nlp channel community.

novitoll closed this as completed Apr 8, 2018
bakwc (Owner) commented Apr 8, 2018

It would also be interesting to see whether JamSpell is going to get a "hyped", yet more relevant, language model based on some RNN architecture.

I want to try LSTM in the future.

wolfgarbe commented Apr 9, 2018

Hi, this is Wolf, the author of the original SymSpell algorithm. Unfortunately, my Russian is limited, so I used Google Translate to read your interesting habrahabr post. I hope I got it right:

"... the index from the SymSpell algorithm took up a lot of space. ... But if a bloom filter says that such a deletion is in the index - we can restore the original word by performing insertions to the delete and checking them in the index. The performance of the resulting solution has practically not slowed down, and the memory used has decreased very significantly. ..."

While a bloom filter of deletes indeed takes much less space than storing the deletes and the pointers to the original words, I believe that performance will be significantly reduced by performing the insertions.
A single delete of length=n with maximum edit distance=2 requires 26·(n+1) · 26·(n+2) insertions and checks in the dictionary (for the 26 letters of the Latin alphabet). For length=5 this results in 28,392 dictionary lookups per delete. And you have n·(n-1)/2 deletes for each word for edit distance=2.
Also, the algorithm becomes language dependent, as the set of characters to be inserted differs between Latin, Cyrillic, Georgian, Tibetan, and Chinese.
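Just to make those numbers concrete, here is the back-of-the-envelope arithmetic as a throwaway Python snippet (the function names are illustrative, not code from either library):

```python
ALPHABET_SIZE = 26  # Latin alphabet; language dependent, as noted above

def insertions_per_delete(n):
    # Expanding one delete of length n back out to edit distance 2:
    # 26*(n+1) distance-1 strings, each expanded again with 26*(n+2) insertions.
    return ALPHABET_SIZE * (n + 1) * ALPHABET_SIZE * (n + 2)

def deletes_per_word(n):
    # Distinct two-character deletions of a length-n word: n*(n-1)/2.
    return n * (n - 1) // 2

print(insertions_per_delete(5))  # 28392 dictionary checks per delete
print(deletes_per_word(5))       # 10 distance-2 deletes of a 5-letter word
```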

In your tests, JamSpell seems to be 3-4 times faster than Norvig. In my benchmark the original SymSpell is 1000x faster than Norvig for maximum edit distance=2.

Btw, the memory usage of the recent SymSpell versions has been significantly reduced by prefix indexing.

bakwc (Owner) commented Apr 9, 2018

Hi, thanks a lot for your feedback! In my implementation, the bottleneck now is generating candidate sentences and getting language model predictions. Originally I was using Norvig's approach, but it was very slow, especially on long words. Your algorithm helped me to improve performance.

A single delete of length=n with maximum edit distance=2 requires 26·(n+1) · 26·(n+2) insertions and checks in the dictionary (for the 26 letters of the Latin alphabet). For length=5 this results in 28,392 dictionary lookups per delete. And you have n·(n-1)/2 deletes for each word for edit distance=2.

For distance=2 and length=n, my approach gives an average O(n) solution, not O(n^2). I generate insertions at distance=1 (linear), and I go to level 2 only if my bloom filter says that such a delete is in the index. So I don't see why performance should be significantly reduced.
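A rough sketch of that two-level scheme, assuming a hypothetical `deletes_bloom` membership structure and a plain `dictionary` word set (illustrative names, not JamSpell's actual internals):

```python
import string

ALPHABET = string.ascii_lowercase  # language dependent, as discussed above

def restore_candidates(delete, deletes_bloom, dictionary):
    """Expand a delete back into dictionary words by inserting characters.
    Level 2 is explored only for level-1 strings that the bloom filter
    claims are themselves known deletes."""
    candidates, level1 = set(), []
    # Level 1: insert one character at every position.
    for i in range(len(delete) + 1):
        for ch in ALPHABET:
            word = delete[:i] + ch + delete[i:]
            if word in dictionary:
                candidates.add(word)
            level1.append(word)
    # Level 2: the bloom filter prunes almost all level-1 strings.
    for d1 in level1:
        if d1 not in deletes_bloom:
            continue
        for i in range(len(d1) + 1):
            for ch in ALPHABET:
                word = d1[:i] + ch + d1[i:]
                if word in dictionary:
                    candidates.add(word)
    return candidates

# Example with plain sets standing in for the bloom filter and perfect hash:
dictionary = {"are", "sure", "sore"}
deletes_bloom = {"sre", "ure", "ore"}  # a few deletes of the dictionary words
print(restore_candidates("se", deletes_bloom, dictionary))
# -> {'sore', 'sure'} (set order may vary)
```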

Also, the algorithm becomes language dependent, as the set of characters to be inserted differs between Latin, Cyrillic, Georgian, Tibetan, and Chinese.

Agreed, it could be an issue for languages with lots of characters.

In your tests, JamSpell seems to be 3-4 times faster than Norvig. In my benchmark the original SymSpell is 1000x faster than Norvig for maximum edit distance=2.

I think it's not correct to compare your library with mine; your library doesn't consider word surroundings. Also, my benchmarks are performed in Python, which is rather slow. I would be glad to add SymSpell to my benchmarks too, but as far as I know, it doesn't have any Python bindings.

Btw, the memory usage of the recent SymSpell versions has been significantly reduced by prefix indexing.

I will look at it. I tried to use a suffix tree; it reduced memory usage, but it still required a lot. The bloom filter is much more compact.

wolfgarbe commented Apr 9, 2018

For distance=2 and length=n, my approach gives an average O(n) solution, not O(n^2). I generate insertions at distance=1 (linear), and I go to level 2 only if my bloom filter says that such a delete is in the index. So I don't see why performance should be significantly reduced.

In the best case (assuming there is at least one suggestion within MaxEditDistance), you would still have n*26 + (n+1)*26 insertions/dictionary lookups per delete. That minimum of 286 dictionary lookups per delete (n=5, distance=2) is one reason for a performance reduction: even if not visible in Big O notation, constant factors may heavily influence performance. The second reason is that multiple level-1 deletes will probably exist, and you will have to iterate level 2 multiple times as well.
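Spelling out that minimum (a throwaway calculation mirroring the formula above, nothing library-specific):

```python
ALPHABET_SIZE = 26

def best_case_lookups(n):
    # Best-case insertions/lookups per delete, as in the formula above:
    # n*26 at one level plus (n+1)*26 at the next.
    return n * ALPHABET_SIZE + (n + 1) * ALPHABET_SIZE

print(best_case_lookups(5))  # 286 lookups per delete before any scoring
```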

Also, my benchmarks are performed in Python, which is rather slow.

That's why I benchmarked a C# port of Norvig's algorithm against the C# implementation of SymSpell. That should exclude the performance impact of the implementation language and compare just the algorithmic difference.

@sumyatthitsarr

Can we use some neural language model instead of the n-gram language model for candidate suggestions?
