Spacy Tokenization encoding problem #68

Closed
ZohaibRamzan opened this issue Mar 23, 2021 · 6 comments
I am using the spaCy tokenizer while creating a Stanza pipeline, but during tokenization it does not handle expressions like '1-2' properly.
For example, when I tokenize the sentence:
'Soak the PVDF membrane in 100% methanol for 1‐2 minutes then rinse 2‐3 times with deionized water.  '
The result is:
["Soak","the","PVDF","membrane","in","100","%","methanol","for","1\u20102","minutes","then","rinse","2\u20103","times","with","deionized","water",".","\u00a0 \n"]

What can I do to solve this issue?
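
A minimal sketch of the setup described above, using Stanza's documented option to delegate tokenization to spaCy (an assumed reconstruction, not the exact code from the report):

import stanza

# build a Stanza pipeline that uses spaCy for tokenization (English only)
nlp = stanza.Pipeline(lang="en", processors={"tokenize": "spacy"})

text = "Soak the PVDF membrane in 100% methanol for 1‐2 minutes then rinse 2‐3 times with deionized water.  "
doc = nlp(text)

# the Unicode hyphens survive tokenization, e.g. '1‐2' comes out as a single token
print([token.text for sentence in doc.sentences for token in sentence.tokens])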


polm commented Mar 24, 2021

That's an interesting problem. For reference, it looks like your text is using the Unicode hyphen and no-break space characters.

  • Unicode Character 'HYPHEN' (U+2010)
  • Unicode Character 'NO-BREAK SPACE' (U+00A0)

spaCy (and I guess Stanza) doesn't have any special treatment of these characters, which means they can end up being treated differently than their ASCII equivalents. If you're working with English text and don't have to worry about losing diacritics then maybe you can preprocess your text with unidecode.

If you need to keep Unicode characters in general but just don't want these, then I would recommend doing a simple string replace on your input text, like this:

text = text.replace("\u2010", "-")
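
For both of the characters above, that would look something like:

# map the Unicode hyphen and no-break space to their ASCII equivalents
text = text.replace("\u2010", "-").replace("\u00a0", " ")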

polm added the usage label Mar 24, 2021
@ZohaibRamzan (Author)

You are right; I used 'utf-8' encoding while reading the .txt file. The problem is not limited to one or two Unicode characters: I am working on a bigger dataset, and as you can imagine there will be many Unicode characters, so I need to replace all of them. In that case, could you help me further?


polm commented Mar 24, 2021

OK, in that case maybe unidecode can help you. Is all your text in English? Is it OK if you strip all diacritics, so that "Erdős Pál" becomes "Erdos Pal"? If so then you can just do this:

# set up spaCy first
import spacy
from unidecode import unidecode

nlp = spacy.load("en_core_web_sm")  # or whichever model you use

text = ... # your text goes here
doc = nlp(unidecode(text))

If that's not OK, you'll need to describe your data in more detail.

@ZohaibRamzan (Author)

My complete dataset is in English. You can have a look at the dataset for more clarity:
https://github.com/chaitanya2334/WLP-Dataset


polm commented Mar 25, 2021

Thanks for the link! It's much easier to give advice when the data is open like this.

Here are some example sentences:

Add 250 µl PB2 Lysis Buffer.
Centrifuge for 5 min at 11,000 x g at room temperature.
HB101 or strains of the JM series), perform a wash step with 500 µl PB4 Wash Buffer pre-warmed to 50°C.

Unfortunately it looks like the data has Unicode characters without clear ASCII equivalents. For example, unidecode would convert µl to ul, or 50°C to 50degC. That might actually be OK, since ul isn't otherwise a word, but you'd have to be careful, and it might make your output hard to understand in some cases.
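
You can check this yourself on the strings above:

from unidecode import unidecode

print(unidecode("250 µl PB2 Lysis Buffer"))  # -> '250 ul PB2 Lysis Buffer'
print(unidecode("pre-warmed to 50°C"))       # -> 'pre-warmed to 50degC'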

Based on the sample data I've seen, while there are a number of Unicode characters, only a few like the hyphen or space would actually cause strange behavior in spaCy's tokenizer. Given that, I would first try making a list of characters and replacing them in preprocessing, and if that doesn't work, then try unidecode. If neither of those works, what I'd do next would depend on what the problem is.
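
As a sketch of the first approach (the table below only contains the characters that have come up in this thread; you would extend it as you find more in the data):

# replacement table for the problem characters found so far; extend as needed
REPLACEMENTS = {
    "\u2010": "-",   # HYPHEN
    "\u00a0": " ",   # NO-BREAK SPACE
}

def preprocess(text):
    for char, ascii_equiv in REPLACEMENTS.items():
        text = text.replace(char, ascii_equiv)
    return text

doc = nlp(preprocess(text))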

@ZohaibRamzan (Author)

Thank you for showing some examples of the impact. This is helpful.
