[MRG] Multithreaded HMM training #30
This is almost ready. I just need to add benchmarks and more thorough unit tests, and we should be good to go.
Preliminary single-core results: [benchmark plots for master and this branch]
Only forward, backward, and training have been optimized, so it makes sense that forward-backward and Viterbi don't see huge improvements (forward-backward calls both forward and backward, so it is sped up a little). The master branch already includes the speedups from the GIL-released distributions, so the improvements shown here come only from changes to the HMM code.
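A hypothetical pure-Python sketch of the parallelization strategy being described: each observation sequence is dispatched to a worker thread, and the per-sequence results are combined. The function names and the trivial `forward_logprob` kernel are placeholders, not the PR's actual code; in practice the kernel would be Cython code that releases the GIL so the threads can actually run concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

def forward_logprob(sequence):
    # Placeholder for the real forward-algorithm kernel. In the actual
    # implementation this would be nogil Cython; here it just sums.
    return float(sum(sequence))

def parallel_training_pass(sequences, n_threads=4):
    # Dispatch each sequence to a worker thread and combine the
    # per-sequence results, as one E-step pass of training would.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = pool.map(forward_logprob, sequences)
    return sum(results)

total = parallel_training_pass([[1, 2], [3, 4], [5]])
print(total)  # 15.0
```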
Multithreaded training currently helps a lot with big models, but can be harmful for small models, because I acquire the GIL frequently to keep all the data structures thread-safe. This model has 151 states.
In contrast, this model has 16 states.
[benchmark plots for this branch and master]
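The big-model/small-model trade-off above can be illustrated with a hypothetical micro-benchmark (not from the PR): when every shared-state update takes a lock, the synchronization overhead is a large fraction of the total time for tiny per-item kernels (a small model) and a negligible fraction for expensive ones (a big model).

```python
import threading
import time

def timed_updates(work_per_item, n_items=10_000):
    # Simulate a training loop in which every update to shared state
    # must acquire a lock; return elapsed wall time in seconds.
    lock = threading.Lock()
    total = 0.0
    start = time.perf_counter()
    for _ in range(n_items):
        x = work_per_item()  # the "useful" per-item computation
        with lock:           # fixed synchronization cost per item
            total += x
    return time.perf_counter() - start

cheap = timed_updates(lambda: 1.0)               # small model: tiny kernel
costly = timed_updates(lambda: sum(range(200)))  # big model: larger kernel
# Locking dominates `cheap` but is a small fraction of `costly`.
```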
force-pushed from 60463f3 to 79ee86e
force-pushed from b8aee5c to 685542a
The only remaining issue is that one data structure is not thread-safe; I'm currently tracking it down.
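For context, a minimal sketch of the kind of fix such a structure needs: without a lock, concurrent read-modify-write of the same slot can lose updates. The `SafeAccumulator` class here is hypothetical, just illustrating the pattern, not the PR's actual data structure.

```python
import threading

class SafeAccumulator:
    # Hypothetical thread-safe accumulator for sufficient statistics.
    def __init__(self, n):
        self._counts = [0.0] * n
        self._lock = threading.Lock()

    def add(self, index, value):
        # Guard the read-modify-write so concurrent adds to the same
        # slot cannot clobber each other.
        with self._lock:
            self._counts[index] += value

    def totals(self):
        with self._lock:
            return list(self._counts)

acc = SafeAccumulator(2)
threads = [threading.Thread(target=acc.add, args=(i % 2, 1.0))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(acc.totals())  # [5.0, 5.0]
```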
force-pushed from 0d60a88 to 7bbb980
Everything works, finally. Merging!