
Custom words dictionary #73

Open
venzen opened this issue Dec 23, 2021 · 5 comments

Comments

@venzen

venzen commented Dec 23, 2021

Thank you for this good work. I have two questions about using this tool. First let me briefly explain my use case:

I am translating Buddhist texts from Thai to English for the Mahachulalangkornraachawitayaalay (MCU). The source material is images, so I must first do OCR (with tesseract) and then edit to markdown format. After that I can translate to English using Google Translate. During OCR some characters and annotations are missed or misinterpreted. I hope that deepcut can allow me to correct those words that are misrepresented by OCR. For example, the correct word is 'ประจําบท' but OCR misses the sara am and returns 'ประจาบท'.

  1. Can deepcut help in this case?
  2. If there are new or unseen words in the text, how can I add these words to deepcut for identification in the future?
@titipata
Collaborator

Hi @venzen, I can confirm that deepcut cannot do text correction or spelling correction. You may have to create a function (after OCR) to deal with these corrections before performing the tokenization.

For the second question, I think deepcut should generalize well enough to parse new, unseen words. However, you can put a list of words in the custom dictionary to cover cases that you think it may tokenize incorrectly.

Hope this helps a bit! Maybe someone else following this issue can chime in too!
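The suggestion above (a correction function applied after OCR, before tokenization) could be sketched like this. `KNOWN_OCR_FIXES` and `correct_ocr_text` are hypothetical names, and the mapping would need to be built up from the misreadings actually observed in the scans:

```python
# Sketch of a post-OCR correction pass, run before tokenization.
# KNOWN_OCR_FIXES is a hypothetical mapping: populate it from the
# misreadings you actually observe in your scanned material.
KNOWN_OCR_FIXES = {
    "ประจาบท": "ประจําบท",  # OCR dropped the sara am
}

def correct_ocr_text(text, fixes=KNOWN_OCR_FIXES):
    """Replace known OCR misreadings with their corrected forms."""
    for wrong, right in fixes.items():
        text = text.replace(wrong, right)
    return text
```

A simple string-replacement table like this only covers errors you have already catalogued; unseen errors would still need a similarity-based fallback.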

@titipata
Collaborator

There is a paper on spelling correction by Ekapol and team which may help you quite a bit: https://ieeexplore.ieee.org/document/9145483. I'm not sure if they provide an open-source implementation anywhere.

@venzen
Author

venzen commented Dec 23, 2021

@titipata Thank you for your response. I started implementing word-similarity checking before I saw your reply. I can use difflib.SequenceMatcher() with a text file of Thai words to find correct spellings.
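That similarity check could be sketched with the stdlib's `difflib.get_close_matches` (which ranks candidates by `SequenceMatcher` ratio). The small word list here is a stand-in for the real word file, and `suggest_spelling` is a hypothetical helper name:

```python
import difflib

# Stand-in for the real vocabulary loaded from a text file of Thai words.
thai_words = ["ประจําบท", "ความนํา", "หรือ", "อิริยาบถ"]

def suggest_spelling(word, vocabulary, cutoff=0.8):
    """Return the closest vocabulary word above `cutoff`, or None."""
    matches = difflib.get_close_matches(word, vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else None

# The OCR output missing the sara am is matched back to the correct word:
print(suggest_spelling("ประจาบท", thai_words))  # → ประจําบท
```

The `cutoff` value would need tuning: too low and unrelated words match, too high and words with a single dropped character are missed.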

As you recommended, I then add new words to this file and pass it to deepcut as a custom dictionary. For example, deepcut was tokenizing 'ความนํา' as ['ความ', 'นํา'], and after adding the word to the custom dictionary it is correctly tokenized as ['ความนํา'].

Thank you, also, for sending the link to the paper about Thai spelling correction. I will read and give feedback if I find a solution.

@venzen
Author

venzen commented Dec 24, 2021

Unexpected behavior from deepcut: I am passing a custom dictionary that contains both the words 'หรือ' and 'อิริยาบถ'. Each word is a separate entry on its own line, without whitespace, and followed by a newline ('\n'). So we would expect deepcut to tokenize each word separately, correct?

The string is: หรืออิริยาบถน้อย มีคู้เข้า เหยียดออก....

deepcut fails to segment the two words and returns 'หรืออิริยาบถ' as a single token:

['หรืออิริยาบถ', 'น้อย', ' ', 'มี', ... ]

Any idea what is happening in this case?

EDIT: I should add that my custom dictionary contains 19,000 Thai words, some of them compound words. Perhaps this is causing the strange behavior.

@venzen
Author

venzen commented Jan 16, 2022

The issue was that the custom dictionary contained duplicate words (words also present in deepcut's built-in dictionary). When I started over with a blank custom dictionary and added only the new words, deepcut works as expected.
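The cleanup described above could be sketched as a small filter that drops any custom word already covered by a base vocabulary, so only genuinely new words reach deepcut. `base_vocabulary` is a placeholder here (the source does not show how to obtain deepcut's built-in word list), and `dedupe_custom_dict` is a hypothetical helper name:

```python
# Sketch: drop from the custom word list anything already present in a
# base vocabulary, so only genuinely new words are passed to deepcut.
# `base_vocabulary` is a placeholder for deepcut's built-in word list.

def dedupe_custom_dict(custom_words, base_vocabulary):
    """Return custom words not in the base vocabulary, order preserved."""
    base = set(base_vocabulary)
    seen = set()
    cleaned = []
    for word in custom_words:
        word = word.strip()
        if word and word not in base and word not in seen:
            seen.add(word)
            cleaned.append(word)
    return cleaned
```

For example, if 'หรือ' is already in the base vocabulary, only 'อิริยาบถ' would survive the filter and be written to the custom dictionary file.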
