Spacy Tokenization encoding problem #68
That's an interesting problem. For reference, it looks like your text is using the Unicode hyphen and non-breaking space characters.
spaCy (and, I assume, Stanza) doesn't have any special treatment of these characters, which means they can end up being treated differently than their ASCII equivalents. If you're working with English text and don't have to worry about losing diacritics, then maybe you can preprocess your text with unidecode. If you need Unicode characters in general but don't want to keep these particular ones, then I would recommend doing a simple string replace on your input text, like this:
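A minimal sketch of that string replace, assuming the only offending characters are the Unicode hyphen (U+2010) and non-breaking space (U+00A0) mentioned above:

```python
# Replace the Unicode hyphen and non-breaking space with their
# ASCII equivalents before the text reaches the tokenizer.
text = "Soak the PVDF membrane in 100% methanol for 1\u20102 minutes.\u00a0"
clean = text.replace("\u2010", "-").replace("\u00a0", " ")
print(clean)  # "1-2" is now a plain ASCII hyphen
```

After this, spaCy sees an ordinary ASCII hyphen and splits `1-2` into `1`, `-`, `2` as expected.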
You are right, I used 'utf-8' encoding while reading the .txt file. The problem is not limited to one or two Unicode characters. I am working on a larger dataset, and as you can imagine there are many Unicode characters, so I need to replace all of them. In that case, could you help me further?
OK, in that case maybe unidecode can help you. Is all your text in English? Is it OK if you strip all diacritics, so that "Erdős Pál" becomes "Erdos Pal"? If so then you can just do this:
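unidecode is a third-party package (`pip install unidecode`), so as a rough stdlib illustration of the diacritic-stripping part, something like the following would do; note this is only an approximation, and unidecode itself handles far more characters:

```python
import unicodedata

def strip_diacritics(text):
    # NFKD decomposition separates base letters from their combining
    # accent marks; dropping the combining marks leaves the base letters.
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_diacritics("Erdős Pál"))  # -> Erdos Pal
```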
If that's not OK, you'll need to describe your data in more detail.
My complete dataset is in English. You can have a look at the dataset for more clarity.
Thanks for the link! It's much easier to give advice when the data is open like this. Here are some example sentences:
Unfortunately, it looks like the data has some Unicode characters without clear ASCII equivalents, which unidecode would convert imperfectly. Based on the sample data I've seen, while there are a number of Unicode characters, only a few, like the hyphen or the non-breaking space, would actually cause strange behavior in spaCy's tokenizer. Given that, I would first try making a list of characters and replacing them in preprocessing, and if that doesn't work, then try unidecode. If neither of those works, what I'd do next would depend on what the problem was.
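The character-list approach could be sketched like this; the mapping below is a guess at the sort of characters involved, not the actual list from the dataset:

```python
# Map each problematic Unicode character to an ASCII stand-in and
# apply all replacements in one pass with str.translate.
REPLACEMENTS = str.maketrans({
    "\u2010": "-",   # hyphen
    "\u2011": "-",   # non-breaking hyphen
    "\u2013": "-",   # en dash
    "\u00a0": " ",   # non-breaking space
    "\u201c": '"',   # left double quotation mark
    "\u201d": '"',   # right double quotation mark
})

def preprocess(text):
    return text.translate(REPLACEMENTS)

print(preprocess("rinse 2\u20113 times"))  # -> rinse 2-3 times
```

A single `str.translate` pass is both faster and easier to maintain than a chain of `str.replace` calls once the list grows.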
Thank you for showing some examples of the impact. This is helpful.
I am using the spaCy tokenizer while creating a Stanza pipeline, but during tokenization it does not handle expressions like '1-2' properly.
For example, when I tokenize the sentence:
'Soak the PVDF membrane in 100% methanol for 1‐2 minutes then rinse 2‐3 times with deionized water. '
The result is:
["Soak","the","PVDF","membrane","in","100","%","methanol","for","1\u20102","minutes","then","rinse","2\u20103","times","with","deionized","water",".","\u00a0 \n"]
What should I do to solve this issue?