
Documentation: need to know how verb/noun indicators are calculated #122

Open
valdisvi opened this issue May 29, 2016 · 1 comment

@valdisvi (Member) commented May 29, 2016

The zz_rules files allow the following marks to be used in rules:

| Symbol | Description |
| --- | --- |
| $verb | Use this pronunciation if it's a verb. |
| $noun | Use this pronunciation if it's a noun. |
| $verbf | The following word is probably a verb. |
| $verbsf | The following word is probably a verb if it has an "s" suffix. |
| $nounf | The following word is probably not a verb. |
| $verbextend | Extend the influence of $verbf and $verbsf. |
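For illustration, entries in a language's `*_list` dictionary file might use these flags as sketched below. This is an assumed example based on the table above, not copied from eSpeak's shipped dictionaries; the phoneme strings and the exact entries are illustrative only:

```
// "to" hints that the following word is probably a verb
to       t@       $verbf

// two pronunciations of "use", selected by part of speech
use      j'u:s    $noun
use      j'u:z    $verb
```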

These could probably be used to distinguish between different pronunciations in languages other than English as well.
E.g. in Latvian, "vēl" can be a particle spelled with a narrow "ē" or a verb spelled with a wide "ē"; "top" can be a noun with "o" or a verb pronounced with "uo", etc.
But to use these indicators, one needs to know how they are calculated.

@rhdunn (Member) commented Sep 10, 2016

The way these work is that you mark some words with $verbf and $nounf when you know the word that follows them is likely (or unlikely) to be a verb. For example, "to" in English could be marked with $verbf (e.g. "to go"). I'm not sure how adverbs are handled (e.g. "to boldly go"). This is used to disambiguate words with the same spelling but different pronunciations (e.g. read, lead and close). You mark the corresponding pronunciations with $verb or $noun according to their usage.

These indicators are specified manually. They hand-implement a basic part-of-speech detection algorithm.
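The mechanism described above can be sketched roughly as follows. This is a hedged Python sketch of the idea, not eSpeak's actual implementation: the hint and variant tables, the one-word influence window, and the phoneme strings are all assumptions for illustration (the real engine works on its compiled dictionary data, and $verbextend would widen the window).

```python
# Sketch of $verbf/$nounf-style disambiguation (illustrative, not eSpeak code).

# Context hints attached to specific words (cf. $verbf / $nounf in the table).
HINTS = {
    "to": "$verbf",    # the following word is probably a verb
    "the": "$nounf",   # the following word is probably not a verb
}

# Pronunciation variants for an ambiguous word, keyed by usage flag.
# Phoneme strings are made up for this example.
VARIANTS = {
    "use": {"$verb": "j'u:z", "$noun": "j'u:s", "default": "j'u:s"},
}

def pronounce(words):
    """Pick a pronunciation for each word, using the hint carried by the
    immediately preceding word. Unambiguous words yield None (normal lookup)."""
    result = []
    hint = None
    for w in words:
        variants = VARIANTS.get(w)
        if variants is None:
            result.append(None)
        elif hint == "$verbf" and "$verb" in variants:
            result.append(variants["$verb"])
        elif hint == "$nounf" and "$noun" in variants:
            result.append(variants["$noun"])
        else:
            result.append(variants["default"])
        hint = HINTS.get(w)  # hint applies only to the next word here
    return result
```

For example, `pronounce(["to", "use"])` selects the $verb variant for "use", while `pronounce(["the", "use"])` selects the $noun variant.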

NOTE: Using a different part-of-speech algorithm is complex, as you need accurate data (e.g. the current ones have issues with archaic English (Shakespeare, Edgar Allan Poe, etc.) and even get confused on simple English phrases). These are also trained to be English-specific, so they would need training for other languages, which is complex to maintain for the 70-80 languages that eSpeak supports.
