
Character encoding problems in the NCI #4

Closed
fosterjen opened this issue Oct 1, 2020 · 4 comments
Labels: bug (Something isn't working), NCI (Processing the New Corpus of Ireland)

Comments

@fosterjen (Collaborator)

Some characters in the NCI are not properly encoded. This affects individual characters in otherwise intact sentences, as well as whole blocks of text.
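Such errors can be located with a short script along these lines (a sketch; the file name is a placeholder for the actual corpus file):

```python
# Report lines of a file that are not valid UTF-8, together with the
# first offending byte and its offset within the line.
def find_utf8_errors(path):
    with open(path, 'rb') as f:
        for lineno, raw in enumerate(f, start=1):
            try:
                raw.decode('utf-8')
            except UnicodeDecodeError as e:
                print(f'line {lineno}: invalid byte {raw[e.start]:#04x}'
                      f' at offset {e.start}')

find_utf8_errors('NCI_cleaned.txt')
```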

@jowagner (Collaborator) commented Oct 15, 2020

Findings so far:

  • The .vert file does not have the UTF-8 errors of the nci_cleaned file. The .vert file covers a wider range of characters from the "Latin" block, as well as "μ" (U+03BC). The cleaned version has only 19 million words, whereas the Irish side of the NCI is supposed to have 30.2 million words.

  • I found a missing <s> tag. Our new extractor script should treat any of <s>, <p>, <doc> and <file> (and the respective closing tags) as a trigger for a sentence boundary; see the first sketch after this list.

  • Looking at 3 examples of English sentences in random locations, they seem to occur unexpectedly in bursts after Irish sentences in the same document. Maybe the corpus is a snapshot of ongoing translation work, with incomplete parts defaulting to the source language or English. If we can confirm this, this information can be used in language classification: when the classifier is not highly confident, it should go with the class of the neighbouring sentences (see the second sketch after this list).

  • Some of the <doc> tags have a title attribute containing Irish text that is not part of the document itself. We could add this text as a separate sentence before the first sentence to get even more data. The same could be done with the author attribute whenever the pubdate field is not empty and the medium is one of "book" or "newspaper".

  • Glue tags <g/>, which would indicate that there was no space between the neighbouring tokens, are not used.

  • No occurrences of < or > outside tags.

  • Backslash is not used as an escape symbol, except maybe in the 7 occurrences of \x\x13.

  • The number of <p> tags equals the number of <s> tags, i.e. the <p> tags carry no extra information here.

  • Some </p> and </s> are missing.

  • There are empty sentences.

  • Looking at the first 100 lines, it seems that all-caps headings and the first sentence of a section are not separated. However, re-doing the sentence splitting without the extra signals from markup in the original documents would probably produce an overall worse segmentation. For BERT this shouldn't matter, as we learned from re-reading the BERT and RoBERTa papers over the last weeks, but we need to keep this in mind for other work, e.g. using this data for semi-supervised training of dependency parsers with tri-training.

  • The file has no UTF-8 errors. It is strange, then, that NCI_cleaned has such errors. Since other encodings such as ISO 8859-* would be likely to produce byte sequences that are not valid UTF-8, the .vert file is UTF-8 encoded with very high likelihood, meaning that there was no reason to attempt conversion from one encoding to another and no opportunity to damage the encoding. The fact that all 34 UTF-8 errors in NCI_cleaned each occur at the start of a line and use the byte '\xa4' also rules out random bit rot, e.g. on a low-quality memory stick. Anyway, good news that the errors are not here.

  • There is a section where "&" is encoded as three tokens &, #38 and ;, i.e. spread over three lines. (The HTML entity &#38; decodes to &.) I also found &, gt and ; split over three lines.

  • There are plenty of &quot; tokens.

  • The Unicode combining diaeresis character occurs 18 times. When slicing and recombining character sequences, care must be taken not to separate it from its preceding character, or at least not to let it end up at the start of a token, so as not to fail strict Unicode checks; see the third sketch after this list.

  • Otherwise, the character table looks fine. There is some indication of a small amount of foreign material, but this may just be names. The fraction slash U+2044 is used only 17 times. There are no fractions like 1/2 encoded as a single character.
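A minimal sketch of the boundary-tag extraction suggested above, assuming the first tab-separated column of the .vert file holds the token (function name and structure are illustrative; the real implementation lives in scripts/extract_text_from_nci_vert.py):

```python
import re

# Treat any opening or closing <s>, <p>, <doc> or <file> tag as a
# sentence boundary while streaming through the .vert file.
BOUNDARY_TAGS = re.compile(r'^</?(s|p|doc|file)(\s|/?>)')

def sentences(vert_path):
    tokens = []
    with open(vert_path, encoding='utf-8') as f:
        for line in f:
            line = line.rstrip('\n')
            if BOUNDARY_TAGS.match(line):
                if tokens:           # flush current sentence; skips empty ones
                    yield tokens
                    tokens = []
            elif line:
                # the first tab-separated column is the token
                tokens.append(line.split('\t')[0])
    if tokens:
        yield tokens
```

Flushing on any of the four tags also copes with the missing </s> and empty-sentence cases noted above.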
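A second sketch, of the neighbour-based smoothing heuristic for language classification (threshold and window size are assumptions for illustration):

```python
# When the classifier's confidence is below a threshold, relabel a
# sentence with the majority class of its neighbours in the document.
def smooth_labels(labels, confidences, threshold=0.9, window=2):
    smoothed = list(labels)
    for i, conf in enumerate(confidences):
        if conf < threshold:
            neighbours = labels[max(0, i - window):i] + labels[i + 1:i + 1 + window]
            if neighbours:
                smoothed[i] = max(set(neighbours), key=neighbours.count)
    return smoothed

print(smooth_labels(['ga', 'ga', 'en', 'ga'], [0.99, 0.95, 0.55, 0.97]))
# -> ['ga', 'ga', 'ga', 'ga']
```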
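And a third sketch for the combining-mark point: normalising to NFC before any slicing composes base character and combining diaeresis (U+0308) into a single code point where possible, so naive string operations cannot strand the mark at a token boundary (a generic illustration, not code from the repository):

```python
import unicodedata

def nfc_tokens(text):
    # NFC composes 'o' + U+0308 into the single code point 'ö',
    # so later slicing cannot separate the diaeresis from its base.
    return unicodedata.normalize('NFC', text).split()

print(nfc_tokens('Gaidhlig o\u0308'))  # -> ['Gaidhlig', 'ö']
```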

@jowagner (Collaborator)

doc id="itgm0022", doc id="icgm1042" and doc id="iwx00055" have unescaped & in the value of attributes, which makes the XML parser unhappy. Implemented a workaround in commit a5a27e2 line 52.
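The actual fix is in that commit; a minimal sketch of the same idea, escaping bare ampersands before a line reaches the XML parser (regex and function name are assumptions):

```python
import re

# Replace any '&' that does not already start a character entity with
# '&amp;' so that attribute values like title="A & B" parse as XML.
BARE_AMP = re.compile(r'&(?!(?:#\d+|#x[0-9a-fA-F]+|[a-zA-Z]+);)')

def escape_bare_ampersands(line):
    return BARE_AMP.sub('&amp;', line)

print(escape_bare_ampersands('<doc id="itgm0022" title="A & B">'))
# -> <doc id="itgm0022" title="A &amp; B">
```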

@jowagner (Collaborator) commented Oct 16, 2020

Update (with help from Teresa and Lauren):

  • Tokens in tab-separated columns may contain space characters.
  • Found more ampersand substitutions, including the recursive & #38 ; #38 ; #38 ;, see print_sentence() in https://github.com/jbrry/Irish-BERT/blob/master/scripts/extract_text_from_nci_vert.py and the sketch after this list.
  • Some tokens contain unexpected hyphens, e.g. Massa-chusetts. Probably line-break hyphenation left over from conversion from PDF.
  • The content inside some <s> elements is huge and spans many sentences. The longest element has 65094 tokens. The 100th longest has 5153 tokens.
  • There are cases of special characters replaced by spaces, e.g. in 'G idhlig'.
  • There are cases of words split into small pieces, e.g. T UA R A SC Á I L B H L I A N TÚ I L A N O M B UD S MA N 1 9 9 7.
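A minimal sketch of how the recursive ampersand substitutions can be undone once the split tokens are re-joined into a string (the function name is hypothetical; the real handling is in print_sentence() in the script linked above):

```python
import html
import re

ENTITY = re.compile(r'&\s*(#\d+|#x[0-9a-fA-F]+|[a-zA-Z]+)\s*;')

def undo_entity_substitutions(text):
    # Repeatedly re-join split entity pieces ('& #38 ;' -> '&#38;')
    # and unescape, until the text stops changing; the loop handles
    # recursive cases such as '& #38 ; #38 ; #38 ;'.
    while True:
        decoded = html.unescape(ENTITY.sub(r'&\1;', text))
        if decoded == text:
            return decoded
        text = decoded

print(undo_entity_substitutions('& #38 ; #38 ; #38 ;'))  # -> '&'
```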

@jowagner (Collaborator)

Created separate issues for each of the problems mentioned above.
