
Add sentence splitter #51

Closed
rth opened this issue May 3, 2019 · 8 comments
Labels
new feature

Comments

@rth
Owner

rth commented May 3, 2019

It would be useful to add a sentence splitter; for instance, possibilities could be:

@rth rth added the new feature label May 3, 2019
@joshlk
Collaborator

joshlk commented May 9, 2020

spaCy has two sentence segmentation implementations. The default is based on the dependency parser, which requires a statistical model. The second implementation is a simpler splitter based on punctuation (default is ".", "!", "?"). (https://spacy.io/usage/linguistic-features#sbd)

On another note, it looks like the Unicode sentence boundaries from unicode-rs/unicode-segmentation#24 have been implemented. I could look at how to incorporate this into this library?
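
As a rough sketch, usage of the crate could look something like the following (assuming the unicode_sentences API from unicode-segmentation; how it would be wired into vtext is still open):

// Cargo.toml: unicode-segmentation = "1"
use unicode_segmentation::UnicodeSegmentation;

fn main() {
    let text = "Here is one sentence. Here is another! And a third?";
    // unicode_sentences() iterates over sentences at UAX #29 sentence boundaries
    let sentences: Vec<&str> = text.unicode_sentences().collect();
    println!("{:?}", sentences);
}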

@rth
Owner Author

rth commented May 9, 2020

The second implementation is a simpler splitter based on punctuation (default is ".", "!", "?").

I think you can do this already with the RegexpTokenizer using something like,

use vtext::tokenize::*;

let tokenizer = RegexpTokenizerParams::default()
    // match runs of characters that are not sentence-ending punctuation
    .pattern(r"[^\.!\?]+".to_string())
    .build()
    .unwrap();
let sentences: Vec<&str> = tokenizer.tokenize("some string. another one").collect();

(I haven't checked that the regexp is correct), so I'm not sure we need a separate object for it. Maybe just documenting the appropriate regexp for sentence tokenization would be enough?

For the sentence boundaries from the unicode_segmentation crate, yes, it would be great if you are interested in looking into it! I would also be interested to know how it compares to the spaCy tokenizer that uses a language model.

@joshlk
Collaborator

joshlk commented May 27, 2020

I have just done a comparison between the different methods: splitting based on punctuation, Unicode segmentation, NLTK's Punkt model and spaCy. I used the Brown corpus as the benchmarking dataset.

Here are my results:

| Method | Precision | Recall | F1 |
|---|---|---|---|
| Punctuation splitter | 0.896 | 0.915 | 0.906 |
| Unicode segmentation | 0.938 | 0.912 | 0.925 |
| NLTK Punkt | 0.907 | 0.875 | 0.891 |
| spaCy | 0.924 | 0.908 | 0.916 |

Jupyter notebook with full analysis

Interestingly, each method scores very similarly, presumably because most sentences are quite easy (a full stop followed by a space) and very few are more difficult (for example quotes, colons, etc.). Surprisingly, Unicode segmentation has the best F1 score, and it has the added benefit of being language-independent (as previously suggested by @jbowles in #52).

I will do a PR on incorporating UnicodeSegmentation in the coming days.

@rth
Owner Author

rth commented May 27, 2020

Thanks! Sounds great. It's interesting indeed that Unicode segmentation is competitive even compared to spaCy, and I imagine it's much faster.

@joshlk
Collaborator

joshlk commented May 29, 2020

PR #66 implements the thin wrapper around the Unicode sentence segmentation.

Regarding the "simple punctuation splitter": using a regex like [^\.!\?] doesn't work as you would loose the punctuation at the end of each sentence. I also tried (.*?[\.\?!]\s?) but here you would loose the trailing sentence if it didn't include a punctuation. For example:

Input = ["Here is one. Here is another! This trailing text is one more"]
Desired Output = ["Here is one.", "Here is another!", "This trailing text is one more"]

I don't think it's possible with a regex alone. Do you have any ideas?

Another tactic would be to create an iterator, similar to what spaCy does and what I did in the Jupyter notebook. For example:

import re

def split_on_punct(doc: str):
    """Split a document into sentences on the punctuation ".", "!", "?"."""
    punct_set = {'.', '!', '?'}

    start = 0
    seen_period = False

    # Iterate over characters; a sentence ends after a run of punctuation,
    # optionally followed by a single whitespace character.
    for i, token in enumerate(doc):
        is_punct = token in punct_set
        if seen_period and not is_punct:
            if re.match(r'\s', token):
                # include the single whitespace character in the preceding sentence
                yield doc[start : i + 1]
                start = i + 1
            else:
                yield doc[start : i]
                start = i
            seen_period = False
        elif is_punct:
            seen_period = True
    # yield any trailing text that does not end with punctuation
    if start < len(doc):
        yield doc[start : len(doc)]

@joshlk
Collaborator

joshlk commented Jun 8, 2020

FYI I'm going to look into implementing the "simple punctuation splitter" using a Rust iterator.
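
As a very rough sketch (not the eventual implementation), such an iterator could mirror the Python logic above:

// Hypothetical sketch of a punctuation-based sentence splitter as a Rust iterator,
// mirroring the Python split_on_punct above; not the code proposed for the PR.
fn split_on_punct<'a>(doc: &'a str) -> impl Iterator<Item = &'a str> + 'a {
    let mut sentences = Vec::new();
    let mut start = 0;
    let mut seen_punct = false;
    for (i, ch) in doc.char_indices() {
        let is_punct = ch == '.' || ch == '!' || ch == '?';
        if seen_punct && !is_punct {
            if ch.is_whitespace() {
                // include the single trailing whitespace character in the sentence
                sentences.push(&doc[start..i + ch.len_utf8()]);
                start = i + ch.len_utf8();
            } else {
                sentences.push(&doc[start..i]);
                start = i;
            }
            seen_punct = false;
        } else if is_punct {
            seen_punct = true;
        }
    }
    if start < doc.len() {
        // trailing sentence without terminating punctuation
        sentences.push(&doc[start..]);
    }
    sentences.into_iter()
}

Collecting into a Vec keeps the sketch short; a real implementation would hold the state in a struct implementing Iterator so the splitting stays lazy.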

@rth
Owner Author

rth commented Jun 8, 2020

FYI I'm going to look into implementing the "simple punctuation splitter" using a Rust iterator.

Thanks, that would be great!

Regarding the "simple punctuation splitter": using a regex like [^\.!\?] doesn't work as you would loose the punctuation at the end of each sentence.

I think using the regex crate would still work, but using split instead of find_iter to avoid the issue of the last sentence. Though I agree you would need a separate tokenizer for it.
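
For illustration, a minimal sketch of the split-based approach with the regex crate (note that split keeps the trailing sentence but drops the matched punctuation, which is the trade-off discussed above):

use regex::Regex;

fn main() {
    let doc = "Here is one. Here is another! This trailing text is one more";
    let re = Regex::new(r"[\.!\?]\s*").unwrap();
    // split keeps the trailing sentence but discards the punctuation delimiters
    let sentences: Vec<&str> = re.split(doc).filter(|s| !s.is_empty()).collect();
    println!("{:?}", sentences);
}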

@rth
Owner Author

rth commented Jun 12, 2020

Closing as resolved in #66 and #70, thanks again @joshlk !

@rth rth closed this as completed Jun 12, 2020