Add sentence splitter #51
spaCy has two sentence segmentation implementations. The default is based on the dependency parser, which requires a statistical model. The second is a simpler splitter based on punctuation (by default ".", "!", "?"); see https://spacy.io/usage/linguistic-features#sbd. On another note, it looks like the Unicode sentence boundaries from unicode-rs/unicode-segmentation#24 have been implemented. I could look at how to incorporate this into this library?
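For context, here is a minimal sketch of calling that sentence API directly (assuming a unicode-segmentation release that includes the sentence-boundary support from that issue; this is not code from this library):

```rust
use unicode_segmentation::UnicodeSegmentation;

fn main() {
    let text = "One sentence. Another one! A third?";
    // split_sentence_bounds yields contiguous sub-slices on UAX #29
    // sentence boundaries; concatenating them reproduces the input.
    let sentences: Vec<&str> = text.split_sentence_bounds().collect();
    println!("{:?}", sentences);
    // ["One sentence. ", "Another one! ", "A third?"]
}
```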
I think you can do this already with

```rust
let tokenizer = RegexpTokenizerParams::default()
    .pattern(r"[^\.!\?]+".to_string())
    .build()
    .unwrap();
let sentences: Vec<&str> = tokenizer.tokenize("some string. another one").collect();
```

(I haven't checked that the regexp is correct), so I'm not sure we need a separate object for it. Maybe just documenting the appropriate regexp for sentence tokenization would be enough? As for the sentence boundaries from the unicode_segmentation crate: yes, that would be great if you are interested in looking into it! I would also be interested to know how it compares to the spaCy tokenizer that uses a language model.
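For illustration, a minimal standalone sketch of the same idea using the regex crate directly (the exact pattern and the trimming are my assumptions, not this library's API); note that, like the snippet above, it drops the sentence-final punctuation:

```rust
use regex::Regex;

fn main() {
    // Match maximal runs of characters that are not sentence-ending
    // punctuation; each run is treated as one sentence.
    let re = Regex::new(r"[^.!?]+").unwrap();
    let sentences: Vec<&str> = re
        .find_iter("some string. another one")
        .map(|m| m.as_str().trim())
        .collect();
    assert_eq!(sentences, vec!["some string", "another one"]);
}
```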
I have just done a comparison between the different methods: splitting on punctuation, Unicode segmentation, NLTK's Punkt model, and spaCy. I used the Brown corpus as the benchmarking dataset. Here are my results: Jupyter notebook with full analysis.

Interestingly, every method scores very similarly, presumably because most sentence boundaries are easy (a full stop followed by a space) and very few are difficult (for example quotes, colons, etc.). Unicode segmentation surprisingly has the best score (F1), and has the added benefit of being language-independent (as previously suggested by @jbowles in #52). I will do a PR on incorporating UnicodeSegmentation in the coming days.
Thanks! Sounds great. It's interesting indeed that Unicode segmentation is competitive even compared to spaCy, and I imagine it's much faster.
PR #66 implements the thin wrapper around the Unicode sentence segmentation.

Regarding the "simple punctuation splitter": using a regex like the one suggested above, I don't think it's possible with the regex crate. Do you have any ideas? Another tactic would be to write an iterator, similar to what spaCy does and what I did in the Jupyter notebook. For example:

```python
import re

def split_on_punct(doc: str):
    """Split a document into sentences on the punctuation ".", "!", "?"."""
    punct_set = {'.', '!', '?'}
    start = 0
    seen_period = False
    for i, token in enumerate(doc):
        is_punct = token in punct_set
        if seen_period and not is_punct:
            if re.match(r'\s', token):
                # Attach the whitespace character to the finished sentence.
                yield doc[start : i + 1]
                start = i + 1
            else:
                yield doc[start:i]
                start = i
            seen_period = False
        elif is_punct:
            seen_period = True
    if start < len(doc):
        yield doc[start:]
```
FYI, I'm going to look into implementing the "simple punctuation splitter" using a Rust iterator.
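As a starting point, here is a minimal sketch of that splitter as a direct Rust port of the Python generator above (hypothetical code, not the final API; a real implementation would presumably return a lazy iterator rather than a Vec):

```rust
/// Split `doc` into sentences on ".", "!" and "?", a direct port of the
/// Python `split_on_punct` above. Uses byte offsets from `char_indices`
/// so the slicing stays valid on multi-byte UTF-8 input.
fn split_on_punct(doc: &str) -> Vec<&str> {
    let mut sentences = Vec::new();
    let mut start = 0;
    let mut seen_period = false;
    for (i, ch) in doc.char_indices() {
        let is_punct = matches!(ch, '.' | '!' | '?');
        if seen_period && !is_punct {
            if ch.is_whitespace() {
                // Attach the whitespace character to the finished sentence.
                let end = i + ch.len_utf8();
                sentences.push(&doc[start..end]);
                start = end;
            } else {
                sentences.push(&doc[start..i]);
                start = i;
            }
            seen_period = false;
        } else if is_punct {
            seen_period = true;
        }
    }
    if start < doc.len() {
        sentences.push(&doc[start..]);
    }
    sentences
}

fn main() {
    assert_eq!(
        split_on_punct("Here is one. Here is another! And a third?"),
        vec!["Here is one. ", "Here is another! ", "And a third?"]
    );
}
```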
Thanks, that would be great!
I think using the regexp crate would still work, but using […]
It would be useful to add a sentence splitter. For instance, possibilities could be […]