Convert single-byte charset probers to use nested dicts for language models #121
Conversation
So, the only performance benefit to using `iteritems` anywhere is if you have a dictionary with millions of item pairs. If that is where we are (GitHub won't show these diffs), then that's fine; otherwise, I'd rather we just use `.items()` everywhere. Either way, this looks great. 🎉 🍰 ✨
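For context, here is a minimal sketch of the trade-off being weighed in that comment. This uses Python 2 semantics (`iteritems` was removed in Python 3, where `items()` already returns a lazy view), and the toy dict is illustrative, not one of the real language models:

```python
# Python 2 sketch: items() vs. iteritems().
# The dict below is a made-up stand-in for a language model slice.

bigram_counts = {(0, 1): 5, (1, 0): 1}

# Eager (Python 2): items() builds a full list of (key, value) pairs
# up front; the cost of that intermediate list only matters for very
# large dicts.
for bigram, count in bigram_counts.items():
    pass

# Lazy (Python 2): iteritems() yields pairs one at a time, with no
# intermediate list.
for bigram, count in bigram_counts.iteritems():
    pass
```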
Convert single-byte charset probers to use nested dicts for language models (#121)
* Convert single byte charset modules to use dicts of dicts for language models - Also provide conversion script
* Fix debug logging check
* Keep Hungarian commented out until we retrain
Marcopolo
This isolates one of the major changes in #99: converting our single-byte charset prober language model format from giant lists with offset math to nested dicts. This makes the code much easier to understand, and language model access now takes about 60% of the time it used to.
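A minimal sketch of the access-pattern change being described; the identifiers, sizes, and values below are illustrative, not chardet's actual tables:

```python
SAMPLE_SIZE = 64  # assumed alphabet size for the flat layout

# Old layout: one giant flat list, indexed with offset math.
flat_model = [0] * (SAMPLE_SIZE * SAMPLE_SIZE)
flat_model[3 * SAMPLE_SIZE + 7] = 2  # count for char-order 3 followed by 7

def old_lookup(order1, order2):
    return flat_model[order1 * SAMPLE_SIZE + order2]

# New layout: nested dicts keyed directly by character order.
nested_model = {3: {7: 2}}

def new_lookup(order1, order2):
    # Missing keys default to 0, so zero entries never need storing.
    return nested_model.get(order1, {}).get(order2, 0)

assert old_lookup(3, 7) == new_lookup(3, 7) == 2
```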
The language model conversion script I've included in this PR does not need to stick around in `master` long term; I just wanted it here for review, since reading the code that converts the language models and checking that it looks right is much easier than visually comparing giant language model files. I'm still seeing some test failures on this branch where Hungarian is being over-predicted, so this isn't quite ready to merge yet, but I figured that by putting it up here, someone might notice something I missed.
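A hedged sketch of what such a conversion might look like; the function name, input format, and sparseness choice are assumptions based on the description above, not the actual script in this PR:

```python
def convert_flat_to_nested(flat_model, sample_size):
    """Convert a flat offset-indexed list into a nested dict of dicts,
    dropping zero entries so the result stays sparse."""
    nested = {}
    for index, count in enumerate(flat_model):
        if count == 0:
            continue  # omit zeros; .get() lookups can default to 0
        order1, order2 = divmod(index, sample_size)
        nested.setdefault(order1, {})[order2] = count
    return nested

# Example: a 2x2 toy model.
print(convert_flat_to_nested([0, 5, 1, 0], 2))  # {0: {1: 5}, 1: {0: 1}}
```

Dropping the zero entries is one plausible reason a nested-dict layout could also shrink the model files, since the flat lists have to store every cell explicitly.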