Model.predict() gives only minus ones #1003
Comments
The issue is that my implementation of Dirichlet distributions is wrong. I am currently rewriting pomegranate using torch and fixing this issue, among several others. For now, please avoid the `DirichletDistribution` implementation.
What would you recommend for a multinomial distribution instead?
Is your data categories, or is it probability vectors over classes? If the data are categories, i.e., integers specifying the class, you should use
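For readers unsure which format they have, the distinction can be sketched in plain NumPy (hypothetical data for illustration only; this is not pomegranate API code):

```python
import numpy as np

# Categorical format: each observation is a single integer class label in {0..k-1}.
categories = np.array([2, 0, 3, 1, 2])

# Probability-vector format: each observation is a distribution over k classes,
# so each row is non-negative and sums to 1.
prob_vectors = np.array([
    [0.1, 0.2, 0.6, 0.1],
    [0.7, 0.1, 0.1, 0.1],
])
assert np.allclose(prob_vectors.sum(axis=1), 1.0)

# A lossy way to map probability vectors back to hard categories is argmax:
hard_labels = prob_vectors.argmax(axis=1)  # -> array([2, 0])
```

The argmax conversion discards the uncertainty in each vector, which is why the two formats generally call for different distribution types.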
It's probability vectors over classes.
I didn't get very far with this. I used
According to #458, it is not possible to mix
Thank you for opening an issue. pomegranate has recently been rewritten from the ground up to use PyTorch instead of Cython (v1.0.0), and so all issues are being closed as they are likely out of date. Please re-open or start a new issue if a related issue is still present in the new codebase.
I am new to pomegranate, and I want to build an HMM with two hidden states. I have labels and a series of observation sequences, most of which come from a multinomial distribution (k = 4, n = 60). (There is also one that comes from a wrapped normal distribution, see issue #1002, but it can be set aside for now.) Any call to `predict()` results in just a list of minus ones, and sometimes the warning "Sequence is impossible". What am I missing here? (If I create the model with `HiddenMarkovModel.from_samples()` instead, `predict()`, `fit()`, etc. result in a segfault, but I haven't filed a bug report yet, since I think I am just doing something wrong.)