
POS-tagging a list of tokens that have already been tokenized #38

Closed
BLKSerene opened this issue Feb 24, 2019 · 6 comments

@BLKSerene

Hi, I'm wondering whether it is possible for pybo to POS-tag a list of tokens that have already been tokenized, instead of an input string of running text?

drupchen (Collaborator) commented Feb 25, 2019

First of all, pybo does nothing fancy while attributing POS tags.
The content of this file is read into a dict mapping word: POS. Then, once the tokenizer has produced a token, it checks whether the dict has a POS tag in store for it. That's all. You could do the same yourself.
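
For illustration, here's a minimal sketch of that mapping-table approach (the file name and the word<TAB>POS layout are assumptions made for the example, not pybo's actual format):

pos_table = {}
with open('word_pos.tsv', encoding='utf-8') as f:  # hypothetical word<TAB>POS file
    for line in f:
        word, pos = line.rstrip('\n').split('\t')
        pos_table[word] = pos

def tag(tokens, table):
    # look each token up; None when the table has no entry for it
    return [(t, table.get(t)) for t in tokens]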

edit: I had forgotten to mention that every entry entering the trie is inflected with the affixed particles. So it's a bit more than filling a dict: pybo does the inflection where required and then attributes the same POS to all inflected versions.

Then, if your tokens are pybo tokens, they should already have the POS tags in them by default. If not, you might want to change your tokenizer profile.

It would be wonderful to have pybo do smarter things to attribute POS, but for the moment, that's all it does.

BLKSerene (Author) commented Feb 26, 2019

In my use case, if the user of my program provides a Tibetan text of running words, my program will use pybo to tokenize it first and then do lemmatization and POS-tagging if necessary. If the user provides a Tibetan text that has already been tokenized (space-delimited) by other libraries or tools, my program will simply split the text into tokens on spaces.

So in the latter case, after splitting the text into tokens, I have to join the list of tokens back into a string and then feed it into pybo. But there's a catch: when pybo tokenizes the text a second time, the results might not be the same as the original list (there are already spaces between the Tibetan words, and I'm not sure what pybo's behavior is in that case).

If pybo does POS-tagging simply by using a mapping table, it would be easy for me to write one myself.
I'm also trying to figure out how to lemmatize a list of tokens that have already been tokenized (not necessarily by pybo).

Another question: the POS tags that pybo may assign to each token include "oov", "non-word", "non-bo", "syl", etc., which can't be found in the file you mentioned. Is there any other reference for these tags?

drupchen (Collaborator) commented Feb 26, 2019

Even though the solution is not entirely satisfactory, here is what is possible without rewriting or subclassing big parts of pybo:

from pybo import BoSyl, PyBoTrie, Config


def is_tibetan_letter(char):
    """
    :param char: character to check
    :return: True if char is a Tibetan letter, False otherwise
    """
    return ('ༀ' <= char <= '༃') or ('ཀ' <= char <= 'ྼ')


def add_tseks(word):
    TSEK = '་'
    if word and is_tibetan_letter(word[-1]) and word[-1] != TSEK:
        return word + TSEK
    else:
        return word


# prepare the tokens
in_str = 'ཤི་ མཐའི་ བཀྲ་ཤིས་ tr བདེ་ལེགས ། བཀྲ་ཤིས་ བདེ་ལེགས་ ཀཀ'
tokens = in_str.split(' ')
tokens = [add_tseks(t) for t in tokens]  # trie entries end with a tsek, so restore missing ones


# initialize the trie
bt = PyBoTrie(BoSyl(), 'GMD', config=Config("pybo.yaml"))
# find the data stored in the trie about each token
with_pos = [(t, bt.has_word(t)) for t in tokens]

for num, w in enumerate(with_pos):
    print(num, w)
# 0 ('ཤི་', {'exists': True, 'data': 'VERBᛃᛃᛃ'})
# 1 ('མཐའི་', {'exists': True, 'data': 'NOUNᛃgiᛃ2ᛃaa'})
# 2 ('བཀྲ་ཤིས་', {'exists': True, 'data': 'NOUNᛃᛃᛃ'})
# 3 ('tr', {'exists': False})
# 4 ('བདེ་ལེགས་', {'exists': True, 'data': 'NOUNᛃᛃᛃ'})
# 5 ('།', {'exists': False})
# 6 ('བཀྲ་ཤིས་', {'exists': True, 'data': 'NOUNᛃᛃᛃ'})
# 7 ('བདེ་ལེགས་', {'exists': True, 'data': 'NOUNᛃᛃᛃ'})
# 8 ('ཀཀ་', {'exists': False})

As you see, the data retrieved from the trie is strangely formatted and you will need to do a little cleanup to keep only the POS tags. This is an ugly part of pybo that is in the process of being cleaned up and improved by @10zinten.

In token 1, 'NOUNᛃgiᛃ2ᛃaa' contains the POS tag (NOUN), the type of affixed particle (gi), the number of characters in the token pertaining to that affixed particle (2) and, finally, whether the token without the particle should end with a འ or not (aa). None of this information is relevant for your use case, so you might just want to strip it off using the delimiter.
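
For example, something along these lines should do (a sketch assuming the with_pos list from the snippet above; it keeps only the first field before the 'ᛃ' delimiter):

pos_only = [(t, d['data'].split('ᛃ')[0] if d['exists'] else None)
            for t, d in with_pos]
print(pos_only[:3])
# [('ཤི་', 'VERB'), ('མཐའི་', 'NOUN'), ('བཀྲ་ཤིས་', 'NOUN')]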

As for the other values of Token.pos (oov, non-word, etc.), they are not POS tags per se. They give information about the type of token, which comes from pybo's preprocessing and tokenizing steps, so in what I proposed here none of it is available: it is dynamically generated, not stored in the trie...

Hope that helps. Doing as you proposed, removing all the spaces and then having pybo tokenize the text again, would definitely break up the original tokenization, which you don't want.

edit:
Pros: this second solution is able to deal with inflected words, since inflection is handled while creating the trie. The simple mapping-table approach doesn't support that.
Cons: words are expected to be well formed, and all the preprocessing that happens before tokenizing is not available.

BLKSerene (Author)

Thanks, that's quite useful. I'll take a look and try to understand the snippet.
Also, is there any convenient way to directly get the lemmas of a list of tokens?

drupchen (Collaborator) commented Feb 26, 2019

Two distinct ways of obtaining the lemmas are implemented in pybo.

The primary strategy is to unaffix an inflected form. Just as cat is the lemma of cats, removing the morpheme is sufficient. The cleaned and unaffixed version of the word is what ends up in Token.lemma.

In order to do that, one needs the information that is dynamically generated while creating the trie: is there an affixed particle? If so, how many characters does it take? And does the word require the addition of an འ to reconstruct the unaffixed word? Starting from external tokens won't give this information, so this type of lemma can't be derived.
This is implemented in this property of the Token class.

The second strategy is to retrieve a lemma from a mapping table. This is what allows retrieving lemmas such as mouse from mice. The second strategy is actually applied after the first one has been executed, so the mapping table is kept as small as possible. This is implemented in LemmatizeTokens().

So I guess trying to get lemmas for outside tokens will be difficult, because the unaffixation process happens dynamically inside pybo. On the other hand, the mapping-table part can be externalized easily.
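
To give an idea, here is a minimal sketch of such an externalized mapping-table step (the table entries are placeholders reusing the mouse/mice example above, not pybo's actual data):

lemma_table = {'mice': 'mouse'}  # placeholder word: lemma pairs

def lemmatize(tokens, table):
    # return the mapped lemma when one is listed, otherwise the token unchanged
    return [table.get(t, t) for t in tokens]

print(lemmatize(['mice', 'cat'], lemma_table))  # ['mouse', 'cat']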

edit:
At the moment, the lemma mapping table only contains the case particles. But everything is ready for other lists of word: lemma pairs. I am not aware of any such lists, and creating them from scratch takes a lot of manual work...

BLKSerene (Author)

Thanks for the information! I'll try it out.
