For documentation's sake, here is something we discussed in person:
Perhaps tokenization, pos-tagging, ner-tagging and segmentation could be non-optional parts of the preprocessing pipeline, since iepy's core would break without them anyway.
If those parts are non-optional, they can be passed as kwargs to the preprocessing pipeline, which makes it easy to ensure that tokenization is the same for both documents and literal tagging.
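As a rough sketch of that idea (all class and function names here are hypothetical, not iepy's actual API): if the pipeline receives the tokenizer as a kwarg, every stage can share the same instance, so documents and literal entries are guaranteed to be tokenized identically.

```python
# Hypothetical sketch, not iepy's real API: the pipeline takes the
# tokenizer as a kwarg and every stage uses that same callable.

def whitespace_tokenizer(text):
    """Toy tokenizer, for illustration only."""
    return text.split()

class PreprocessPipeline:
    def __init__(self, tokenizer):
        # Tokenization is non-optional: one shared callable for all stages.
        self.tokenizer = tokenizer

    def tokenize_document(self, text):
        return self.tokenizer(text)

    def tokenize_literal_entry(self, entry):
        # Entries go through the same tokenizer as documents,
        # so their token sequences are directly comparable.
        return self.tokenizer(entry)

pipeline = PreprocessPipeline(tokenizer=whitespace_tokenizer)
assert (pipeline.tokenize_document("John Smith")
        == pipeline.tokenize_literal_entry("John Smith"))
```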
Right now our LiteralNER is very literal, so in some cases it does not work.
Example: an entry like this
is never found, because the documents will be tokenized, transforming this
into this,
making a match impossible (notice that 's becomes a separate token).
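To illustrate the failure mode (the original entry from this discussion is not reproduced here; "Parkinson's disease" and the tokenizer below are hypothetical stand-ins): a Treebank-style tokenizer splits the possessive 's into its own token, so the raw entry string never appears as-is in the token stream.

```python
# Hypothetical illustration of the mismatch. A real pipeline would use
# its actual tokenizer; this minimal one only splits a trailing "'s".

def treebank_like_tokenize(text):
    tokens = []
    for word in text.split():
        if word.endswith("'s"):
            tokens.extend([word[:-2], "'s"])  # possessive becomes its own token
        else:
            tokens.append(word)
    return tokens

entry = "Parkinson's disease"  # hypothetical LiteralNER entry
doc_tokens = treebank_like_tokenize("He studies Parkinson's disease")
# doc_tokens == ['He', 'studies', 'Parkinson', "'s", 'disease']

# Matching the whitespace-split entry against document token windows fails,
# because "Parkinson's" is no longer a single token in the document.
windows = [doc_tokens[i:i + 2] for i in range(len(doc_tokens) - 1)]
assert entry.split() not in windows
```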
Also, what makes things harder is that the tokenizer used when parsing the LiteralNER entries must be the same tokenizer used when tokenizing the text.
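A possible fix, sketched with the same hypothetical names as above (this is not iepy's implementation): run each entry through the document tokenizer and then match token subsequences rather than raw strings.

```python
# Hypothetical fix sketch: entries are tokenized with the same tokenizer
# as the documents, then matched as token subsequences.

def tokenize(text):
    # Stand-in for the pipeline's shared tokenizer; splits trailing "'s".
    tokens = []
    for word in text.split():
        if word.endswith("'s"):
            tokens.extend([word[:-2], "'s"])
        else:
            tokens.append(word)
    return tokens

def find_entry(entry, doc_tokens):
    """Return the start index of the entry's token sequence, or -1."""
    needle = tokenize(entry)  # entry tokenized exactly like the document
    n = len(needle)
    for i in range(len(doc_tokens) - n + 1):
        if doc_tokens[i:i + n] == needle:
            return i
    return -1

doc = tokenize("He studies Parkinson's disease")
assert find_entry("Parkinson's disease", doc) == 2
```

Because both sides go through the same tokenizer, the "'s" split happens consistently and the match succeeds.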