
Spacy Integration - "detect an empty sentence" #53

Open
mrw-dev opened this issue Feb 19, 2024 · 5 comments

Comments

@mrw-dev

mrw-dev commented Feb 19, 2024

Using your spaCy integration, the sentencizer for spaCy will sometimes produce an empty sentence (using "en_core_web_sm"). This leads to the SpanMarkerTokenizer throwing an exception. Not sure how active this project is any more, but this seems like an easy fix. Is there a workaround for this already? Would you like the code updated to include one? (I might be able to do this fix.)
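For illustration, here is roughly what the degenerate inputs look like by the time they reach the tokenizer (hypothetical token lists, no spaCy required): the integration maps whitespace tokens to `""`, so a whitespace-only sentence becomes a list of empty strings, and a zero-token sentence becomes an empty list.

```python
# Simulated output of the sentence/token extraction step in the spaCy
# integration. Token texts here are made-up examples, not real model output.
sentences = [
    ["The", "quick", "brown", "fox", "."],
    ["", "", ""],  # whitespace-only "sentence": every token mapped to ""
    [],            # sentence with zero tokens
]

# any(sent) is False both for an empty list and for a list of empty
# strings, so one filter removes both degenerate cases before tokenization.
filtered = [sent for sent in sentences if any(sent)]
assert filtered == [["The", "quick", "brown", "fox", "."]]
```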

@tomaarsen
Owner

Hello!

I don't think there's a workaround yet. Feel free to make a pull request and I'll try to have a look at it :)

  • Tom Aarsen

@mwade-noetic

Actually, using the en_core_web_lg model instead of the sm model mitigates the problem to a great extent (it simply parses the sentences better). It may not be worth a pull request just yet, but I had to fix the same issue during training by pre-processing the sentences.

@mrw-dev
Author

mrw-dev commented Mar 2, 2024

Hi,

I have a one-line fix for the spaCy integration code that resolves this issue fairly nicely, since it keeps happening. I am not sure how to do the pull request, but in the "spacy_integration.py" file I would propose the following change:

    def __call__(self, doc: Doc) -> Doc:
        """Fill `doc.ents` and `span.label_` using the chosen SpanMarker model."""
        sents = list(doc.sents)
        inputs = [[token.text if not token.is_space else "" for token in sent] for sent in sents]
        # Remove any sentences where the tokens are all empty strings or the sentence has 0 tokens.
        inputs = [sentence for sentence in inputs if any(sentence) and len(sentence) > 0]

        # use document-level context in the inference if the model was also trained that way
        if self.model.config.trained_with_document_context:
            inputs = self.convert_inputs_to_dataset(inputs)

I have updated the code and run a substantial amount of text through it. I did not, however, create any unit tests, but I will be happy to do so if you would consider this addition. The filter could be merged into the list comprehension above it, but I wanted to show the logic clearly here.
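As a side note on the proposed filter: `any()` already returns `False` for an empty list, so the `len(sentence) > 0` check is not strictly necessary. A quick sketch of the filter's behavior on its own:

```python
# any() over a token list covers both degenerate cases at once:
# an empty list and a list containing only empty strings are both falsy.
assert any([]) is False             # zero-token sentence
assert any(["", "", ""]) is False   # whitespace-only sentence
assert any(["Hello", "world"]) is True

inputs = [["Hello", "world"], [], ["", ""]]
assert [sentence for sentence in inputs if any(sentence)] == [["Hello", "world"]]
```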

@xxyzz

xxyzz commented May 15, 2024

The pipe() function seems more difficult to fix: if empty sentences are removed, it becomes tricky to calculate the offset values. Couldn't the tokenizer code somehow ignore empty sentences instead?

@mwade-noetic

Yes, I have run into this issue. I ended up dropping the spaCy support altogether and now parse the sentences with my own parser, which is targeted at my data. This resolved my issue with the empty sentences and let me deal with sentence capture more effectively. Contracts and other legal documents do not really parse well with spaCy anyway.
