Support for PTB and Universal POS tags #162
@LifeIsStrange can you please explain what you want with this issue? I see that the linked NLTK pull request has something to do with PTB-style POS tags, but I don't know what you want here.
Hmm, one of the use cases I see would be:
The wn APIs that accept a pos argument could accept Universal POS tags and Penn Treebank POS tags as equivalent values, since the POS may be obtained from an external program that uses one of those popular tagging schemes. The original NLTK issue explains it better than I do.
Ok, I think I understand. I've updated the original issue text to clarify (please update it if it's inaccurate). However, my initial reaction is that this is not a good fit for Wn. Unlike the NLTK, Wn is not trying to accommodate a wide range of NLP tasks; it is specifically about modeling and working with wordnet data as defined by WN-LMF. I would therefore suggest using another tag mapper with Wn, such as the NLTK's nltk.tag.mapping (but I'm not sure if it supports the wordnet tagset). If it does, you could write a wrapper function:

```python
from typing import List, Optional
import wn
from nltk.tag.mapping import map_tag

def ptb_synsets(lemma: Optional[str] = None, pos: Optional[str] = None,
                *args, **kwargs) -> List[wn.Synset]:
    if pos:
        pos = map_tag('en-ptb', 'wordnet', pos)
    return wn.synsets(lemma, pos, *args, **kwargs)
```
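If nltk.tag.mapping turns out not to support a wordnet target tagset, a small hand-written table could serve the same purpose for the open-class tags. This is only an illustrative sketch; the PTB_TO_WN table and ptb_to_wordnet_pos helper are hypothetical names, not part of Wn or the NLTK:

```python
# Hand-written fallback mapping from common PTB tags to wordnet POS letters.
# It deliberately covers only the open-class tags; closed-class tags
# (pronouns, determiners, etc.) have no wordnet counterpart.
PTB_TO_WN = {
    'NN': 'n', 'NNS': 'n', 'NNP': 'n', 'NNPS': 'n',                          # nouns
    'VB': 'v', 'VBD': 'v', 'VBG': 'v', 'VBN': 'v', 'VBP': 'v', 'VBZ': 'v',   # verbs
    'JJ': 'a', 'JJR': 'a', 'JJS': 'a',                                       # adjectives
    'RB': 'r', 'RBR': 'r', 'RBS': 'r',                                       # adverbs
}

def ptb_to_wordnet_pos(tag: str):
    """Return the wordnet POS letter for a PTB tag, or None if unmapped."""
    return PTB_TO_WN.get(tag)

print(ptb_to_wordnet_pos('VBZ'))  # -> 'v'
print(ptb_to_wordnet_pos('PRP'))  # -> None (pronouns have no wordnet POS)
```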
I found a case with 'p' that keeps throwing an error; it seems to happen with any POS tag:
```python
import nltk
from nltk.corpus import sentiwordnet
from tinydb import TinyDB

def process_user_input(user_input: str, db: TinyDB):
    if len(user_input) > 0:
        # train(user_input, db)
        # response
        tokens = nltk.word_tokenize(user_input)
        tags = nltk.pos_tag(tokens)
        # find entities
        # entities = nltk.chunk.ne_chunk(tags)
        for word, tag in tags:
            print(list(sentiwordnet.senti_synsets(word, tag.lower())))
```
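The error above likely comes from passing tag.lower() straight to senti_synsets: lowercasing a PTB tag such as PRP yields 'prp', which is not a wordnet part of speech. One common workaround (not from this thread; the helper name is hypothetical) is to map PTB tags to wordnet POS letters by their first character and skip tags that do not map:

```python
def ptb_to_senti_pos(ptb_tag: str):
    """Map a PTB tag to a wordnet POS letter ('n', 'v', 'a', 'r'),
    or return None for tags with no wordnet counterpart."""
    # Note: this first-letter heuristic is coarse; e.g. the particle
    # tag 'RP' would be wrongly treated as an adverb.
    return {'N': 'n', 'V': 'v', 'J': 'a', 'R': 'r'}.get(ptb_tag[:1])

print(ptb_to_senti_pos('VBZ'))  # -> 'v'
print(ptb_to_senti_pos('PRP'))  # -> None, so the caller can skip it
```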
@white5moke it appears you are using the NLTK in your example. This repository is a standalone project called Wn and is not part of the NLTK. While the NLTK appears to raise an error for an unknown part of speech, in Wn:

```python
>>> import wn
>>> import wn.constants
>>> wn.constants.ADPOSITION
'p'
>>> wn.synsets(pos='p')
[]
```

Wn will also return an empty list (instead of an error) for an invalid part of speech:

```python
>>> wn.constants.PARTS_OF_SPEECH
frozenset({'t', 'r', 's', 'p', 'v', 'a', 'n', 'u', 'x', 'c'})
>>> wn.synsets(pos='b')  # b is not a defined part of speech
[]
```
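Callers who would rather fail loudly than get a silent empty list could validate a tag against the POS inventory before querying. A minimal sketch, with the frozenset copied inline from the session above so the snippet stands alone (real code would use wn.constants.PARTS_OF_SPEECH instead; check_pos is a hypothetical helper):

```python
# POS inventory copied from wn.constants.PARTS_OF_SPEECH as shown above.
PARTS_OF_SPEECH = frozenset({'t', 'r', 's', 'p', 'v', 'a', 'n', 'u', 'x', 'c'})

def check_pos(pos: str) -> str:
    """Return pos unchanged, or raise ValueError for an unknown tag."""
    if pos not in PARTS_OF_SPEECH:
        raise ValueError(f'not a wordnet part of speech: {pos!r}')
    return pos

print(check_pos('p'))  # -> 'p' is valid (adposition)
# check_pos('b') would raise ValueError
```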
Since support for non-wordnet POS schemes is not currently part of the roadmap, I will close this as wontfix.
Issue text updated by @goodmami
This issue appears to be a request to automatically map other part-of-speech tag schemes (such as PTB and Universal POS) to the ones used by wordnets, so that a lookup like wn.words('dog', pos='VERB') is equivalent to wn.words('dog', pos='v'). I'm not sure if the request is to also support reverse mappings (e.g., synset.ptb_pos).
Original issue text: