Converting Jiayan's POS into UPOS of Universal Dependencies #3

Closed
KoichiYasuoka opened this issue Sep 19, 2019 · 3 comments

Comments

@KoichiYasuoka
Contributor

I'm now trying to convert Jiayan's POS tags into the UPOS tagset of Universal Dependencies, but I'm unsure what "auxiliary" means in Jiayan's POS scheme and cannot determine the proper UPOS for Jiayan's u. So I've tried to compare the UPOS tags (as used in the train file of https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto) with the output of Jiayan's CRFPOSTagger:

% git clone https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto.git
% python
>>> from jiayan import CRFPOSTagger
>>> postagger=CRFPOSTagger()
>>> postagger.load("jiayan_models/pos_model")
>>> from opencc import OpenCC
>>> t2s=OpenCC("t2s").convert
>>> train=open("UD_Classical_Chinese-Kyoto/lzh_kyoto-ud-train.conllu","r")
>>> ud=[t.split("\t") for t in train.read().split("\n") if t!="" and not t.startswith("#")]
>>> train.close()
>>> form=[t[1] for t in ud]
>>> upos=[t[3] for t in ud]
>>> simp=[t2s(f) for f in form]
>>> jpos=postagger.postag(simp)
>>> import collections
>>> collections.Counter((f,u,s,j) for f,u,s,j in zip(form,upos,simp,jpos) if j=="u")
Counter({('之', 'SCONJ', '之', 'u'): 841, ('之', 'PRON', '之', 'u'): 561, ('矣', 'PART', '矣', 'u'): 289, ('所', 'PART', '所', 'u'): 173, ('乎', 'PART', '乎', 'u'): 166, ('也', 'PART', '也', 'u'): 101, ('哉', 'PART', '哉', 'u'): 99, ('焉', 'PART', '焉', 'u'): 83, ('乎', 'ADP', '乎', 'u'): 34, ('地', 'NOUN', '地', 'u'): 27, ('得', 'VERB', '得', 'u'): 24, ('之', 'VERB', '之', 'u'): 23, ('焉', 'ADV', '焉', 'u'): 16, ('過', 'VERB', '过', 'u'): 15, ('過', 'NOUN', '过', 'u'): 10, ('焉', 'PRON', '焉', 'u'): 10, ('所', 'NOUN', '所', 'u'): 8, ('得', 'AUX', '得', 'u'): 7, ('兮', 'PART', '兮', 'u'): 6, ('者', 'PART', '者', 'u'): 4, ('等', 'NOUN', '等', 'u'): 3, ('等', 'VERB', '等', 'u'): 2, ('般', 'VERB', '般', 'u'): 2, ('夫', 'PART', '夫', 'u'): 1, ('鄭', 'PROPN', '郑', 'u'): 1, ('其', 'PRON', '其', 'u'): 1, ('否', 'VERB', '否', 'u'): 1, ('之', 'PROPN', '之', 'u'): 1, ('連', 'VERB', '连', 'u'): 1, ('般', 'PROPN', '般', 'u'): 1, ('斯', 'ADV', '斯', 'u'): 1})

From this comparison, PART seems suitable for u, except for 之. But I wonder why PRON 之 does not go to Jiayan's r (pronoun). I also wonder why NOUN 地 does not go to Jiayan's n (noun). Is there any documentation about Jiayan's POS tags?

@jiaeyan
Owner

jiaeyan commented Sep 20, 2019

Hi, yes, you are right: Jiayan's u corresponds to PART in this case. Let me answer your questions one by one:

  1. Jiayan uses the same POS tag set as LTP, a popular NLP toolkit for modern Chinese; its POS tag categories are listed here. This tag set was defined in a major modern Chinese information processing project back in 2003 (let's call it the 863 tagset). I tried to find the official page for you, but unfortunately it is obsolete now, so I haven't found any detailed documentation for the tag set so far.

  2. From my understanding, the auxiliary/u in the 863 POS tagset is not an auxiliary verb in the English sense, and that might be the confusing point for you. It is a functional particle rather than a VERB. So you are right: PART in UPOS is suitable for 'u' in Jiayan. In modern Chinese, the words for particle (助词) and auxiliary verb (助动词) differ by only one character; I guess the tagset makers conflated the two terms at first.

  3. As for why PRON 之 goes to 'u' instead of 'r' in Jiayan, that is an error-propagation issue in the model. Since I couldn't find any annotated Classical Chinese data when implementing this feature, it became the most difficult task in this project. The way I resolved it was to tokenize the training data with Jiayan's CharHMMTokenizer and then use the LTP tagger mentioned above to POS-tag the tokenized data (a rough sketch of this pipeline appears below), so the model is in fact trained in a modern Chinese fashion. For most words this is acceptable, but for some words, like 之, it is not. 之 is polysemous in Classical Chinese and can be either PART or PRON; in modern Chinese, however, it can only be PART, so you can see that the model tags every 之 as u even when it should be PRON.

  4. The same issue applies to 地. In contrast to 之, 地 is polysemous in modern Chinese, where it can be either NOUN or PART, but it can only be a NOUN in Classical Chinese. Therefore, with the modern-Chinese-style tagging, it does not only go to NOUN but also to PART, which is u in Jiayan. In this case, you could convert all u-tagged 地 to NOUN with post-processing (see the sketch below).
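For example, a minimal post-processing sketch (the function name and sample words are only illustrative; it assumes the word/tag lists produced by CRFPOSTagger as in the script above):

def fix_di(words, tags):
    # retag every 地 that the model labeled 'u' back to 'n' (noun)
    return ['n' if w == '地' and t == 'u' else t for w, t in zip(words, tags)]

words = ['天', '地', '之', '间']   # hypothetical tokenized input (simplified characters)
tags = postagger.postag(words)     # the model may well return 'u' for 地 here
tags = fix_di(words, tags)         # any u-tagged 地 becomes 'n'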

In conclusion, the modern Chinese tagging approach of the Jiayan POS model can mistag words that are polysemous in either Classical Chinese or modern Chinese. I will look deeper into the POS tagging feature to see what can be improved in the future.
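Roughly, the training-data pipeline from point 3 looks like the sketch below. This is only an illustration, not the exact training code: it assumes Jiayan's load_lm/CharHMMTokenizer interface and pyltp's Postagger API, and the file and model paths are placeholders.

from jiayan import load_lm, CharHMMTokenizer
from pyltp import Postagger

# segment raw Classical Chinese text with Jiayan's character HMM tokenizer
tokenizer = CharHMMTokenizer(load_lm('jiayan_models/jiayan.klm'))

# tag the segmented words with LTP's modern-Chinese POS model
postagger = Postagger()
postagger.load('ltp_data/pos.model')

with open('classical_corpus.txt') as src, open('silver_pos_data.txt', 'w') as out:
    for line in src:
        words = list(tokenizer.tokenize(line.strip()))
        tags = postagger.postag(words)
        out.write(' '.join(w + '/' + t for w, t in zip(words, tags)) + '\n')

postagger.release()

The resulting silver-standard word/tag pairs would then be used to train the CRF POS tagger.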

Sorry for the long answer; I hope it helps. Thank you very much!

@KoichiYasuoka
Contributor Author

Thank you @jiaeyan for the information about LTP. I've just drafted a tentative table to convert LTP's POS tags into UPOS tags of Universal Dependencies:

a    ADJ
b    NOUN
c    CCONJ
d    ADV
e    INTJ
g    NOUN
h    PART
i    NOUN
j    PROPN
k    PART
m    NUM
n    NOUN
nd   NOUN
nh   PROPN
ni   PROPN
nl   NOUN
ns   PROPN
nt   NOUN
nz   PROPN
o    INTJ
p    ADP
q    NOUN
r    PRON
u    PART
v    VERB
wp   PUNCT
ws   X
x    SYM

In addition, z (descriptive words) should be converted into... ah well, into ADV?
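As a sketch, this mapping could be applied to CRFPOSTagger output along the following lines (the dict and function names are only illustrative, and the z entry reflects the tentative guess above):

LTP2UPOS = {
    'a': 'ADJ',   'b': 'NOUN',   'c': 'CCONJ',  'd': 'ADV',    'e': 'INTJ',
    'g': 'NOUN',  'h': 'PART',   'i': 'NOUN',   'j': 'PROPN',  'k': 'PART',
    'm': 'NUM',   'n': 'NOUN',   'nd': 'NOUN',  'nh': 'PROPN', 'ni': 'PROPN',
    'nl': 'NOUN', 'ns': 'PROPN', 'nt': 'NOUN',  'nz': 'PROPN', 'o': 'INTJ',
    'p': 'ADP',   'q': 'NOUN',   'r': 'PRON',   'u': 'PART',   'v': 'VERB',
    'wp': 'PUNCT', 'ws': 'X',    'x': 'SYM',
    'z': 'ADV',   # tentative: descriptive words
}

def to_upos(jiayan_tags):
    # map Jiayan/LTP tags to UPOS, falling back to X for anything unlisted
    return [LTP2UPOS.get(t, 'X') for t in jiayan_tags]

upos_guess = to_upos(jpos)   # jpos from the comparison script in the first comment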

@jiaeyan
Owner

jiaeyan commented Sep 20, 2019

Yes, z could be interpreted as ADV here. By the way, thank you for introducing the Universal Dependencies project, which is very interesting and an important missing piece of Jiayan. I will look into it and hopefully can make Jiayan support UD parsing in the future.
