Hidden Markov Trees #15
Hi, thanks for your message! I have never used Hidden Markov Trees, but they seem interesting. I may have misunderstood something, but I came up with three possible solutions:

**1) Use a fully factorizing mean-field approximation**

As you described, you could construct a binary tree structure in the way you sketched.
An equivalent construction is possible with a `Gate` node; it makes no difference in this case. You could also have, for instance, different transition matrices for the left and right branches if you wanted. The problem with this construction is that the posterior approximation factorizes with respect to the variables, so the approximation might be poor and convergence slow, as you pointed out. You should also think about the order in which you update your variables. But at least this is easy to implement and test on small problems.

**2) Use separate chains for the paths to each leaf node**

To reduce the amount of factorization, you could factorize at the branch level. That is, each branch is a new categorical Markov chain. I'm not good at explaining this, but the idea is that each left "turn" triggers a new branch, while each right "turn" belongs to the same chain as its parent. Thus, the sequences 1, 11, 111, 1111, ... would be in the same chain, as would 0, 01, 011, 0111, ..., while 10, 101, 1011, ... would be a new chain branching at the second level. So you would have as many chains as leaf nodes, that is, 2^(K-1) where K is the height/depth of your tree. The path to each leaf is one chain, and the path starts from the last left "turn" before the leaf. However, this construction is a bit complicated, and because the factorization is not symmetric it is difficult, at least for me, to tell what kind of biases it causes. This should be possible to implement, but there are a few minor details you need to know: how to access individual states of a chain (the question you marked with (**) ). At the moment, that is slightly complicated, because the indexing is used for (independent) plates only, so you need a small workaround to access individual state variables of a chain.
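The left/right "turn" bookkeeping of the branch decomposition can be made concrete with a short sketch (plain Python; the `chain_id` helper and the bit-string node labels are purely illustrative, not bayespy API): a node belongs to the chain identified by its path up to and including its last left turn.

```python
from itertools import product

def chain_id(node):
    """Chain a node belongs to, where a node is a bit string of turns
    from the root ('0' = left, '1' = right).  A left turn starts a new
    chain; a right turn stays in the parent's chain, so the chain is the
    prefix ending at the last '0' (the empty string is the root chain)."""
    return node[:node.rfind("0") + 1]

K = 4  # tree height: the leaves are the length-(K-1) turn sequences
nodes = [""] + ["".join(p) for l in range(1, K) for p in product("01", repeat=l)]

chains = {}
for node in nodes:
    chains.setdefault(chain_id(node), []).append(node)

# '1', '11', '111' share the root chain; '10', '101' form a new chain
assert chains[""] == ["", "1", "11", "111"]
assert chains["10"] == ["10", "101"]
assert len(chains) == 2 ** (K - 1)  # one chain per leaf, as stated above
```

Running this for K = 4 gives 8 chains, one per leaf, matching the 2^(K-1) count.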
Yep, I know it's complicated, sorry. I should provide a simpler interface for that.

**3) Write a custom node for tree- (or arbitrary-graph-)structured categorical variables**

Quite often, efficient inference with little factorization requires custom nodes that exploit some specific structure. For instance, `GaussianMarkovChain` implements a Kalman filter/smoother type algorithm and `CategoricalMarkovChain` implements the alpha-beta recursion. Thus, I guess the best solution is to write a custom node which implements some kind of belief propagation algorithm to perform exact inference for a categorical Markov tree given some parameters. The challenge is then in making the node as general and minimalistic as possible so it can easily be used as a building block in large models. However, I'm not too familiar with inference algorithms for Markov trees, so I'm not sure how to implement it without studying a bit.

I hope I understood you correctly and I hope this helps. Please comment or ask further questions if you want.
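Such a custom node would essentially implement an upward-downward (sum-product) recursion on the tree. A plain NumPy sketch of that algorithm, not bayespy code (the heap-style node indexing and all parameter names are illustrative), with the exact marginals checked against brute-force enumeration on a tiny tree:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
S, D = 2, 3                      # hidden states, tree depth
n = 2 ** D - 1                   # nodes 0..n-1; children of i are 2i+1, 2i+2

pi = rng.dirichlet(np.ones(S))             # root state distribution
A = rng.dirichlet(np.ones(S), size=S)      # A[parent_state, child_state]
E = rng.dirichlet(np.ones(3), size=S)      # emission over 3 symbols
x = rng.integers(0, 3, size=n)             # one observation per node

lik = E[:, x].T                            # lik[i, z] = p(x_i | z_i = z)

def kids(i):
    return [c for c in (2 * i + 1, 2 * i + 2) if c < n]

# upward pass: beta[i, z] = p(observations in subtree of i | z_i = z)
beta = np.empty((n, S))
for i in reversed(range(n)):
    beta[i] = lik[i]
    for c in kids(i):
        beta[i] = beta[i] * (A @ beta[c])

# downward pass: combine each node's message from above with beta
marg = np.empty((n, S))
top = np.empty((n, S))
top[0] = pi
marg[0] = top[0] * beta[0] / (top[0] * beta[0]).sum()
for i in range(n):
    for c in kids(i):
        excl = top[i] * beta[i] / (A @ beta[c])   # all evidence except subtree c
        top[c] = excl @ A
        marg[c] = top[c] * beta[c] / (top[c] * beta[c]).sum()

# brute-force check: enumerate all S^n state assignments of this tiny tree
brute = np.zeros((n, S))
for zs in product(range(S), repeat=n):
    p = pi[zs[0]] * np.prod([lik[i, zs[i]] for i in range(n)])
    p *= np.prod([A[zs[i], zs[c]] for i in range(n) for c in kids(i)])
    for i in range(n):
        brute[i, zs[i]] += p
brute /= brute.sum(axis=1, keepdims=True)
assert np.allclose(marg, brute)
```

The two passes cost O(n S^2), so exact inference on the tree scales like the alpha-beta recursion does on a chain, which is what makes the custom-node route attractive compared with a factorized approximation.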
Hi, thanks for your detailed response. I'm currently kind of busy with other stuff, but hopefully I'll have some time in the next days to look at the ideas you proposed; looks promising! I discovered a slight mistake in my example above. It should of course read

```python
lastA = A[-1]  # somehow get the last variable of chain A (**)
linkerNode = Gate(lastA, P)
B = CategoricalMarkovChain(linkerNode, P, states=N_B)
C = CategoricalMarkovChain(linkerNode, P, states=N_C)
```

i.e. the last hidden state of A determines the initial distribution of the two following chains.
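The role of the `Gate` here can be illustrated in plain NumPy (a sketch, not the bayespy API): when the last state of A is known it simply selects a row of P as the initial distribution; under mean-field VB the state is uncertain, so the selection is averaged over q(lastA) (bayespy's messages actually average expected sufficient statistics, but the mixing idea is the same).

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],    # row z: initial distribution of B/C
              [0.1, 0.8, 0.1],    # when the last state of A is z
              [0.2, 0.3, 0.5]])

# known last state: Gate(lastA, P) just picks the corresponding row
last_a = 1
assert np.allclose(P[last_a], [0.1, 0.8, 0.1])

# uncertain last state: mix the rows under the posterior q(lastA)
q_last_a = np.array([0.2, 0.5, 0.3])
init = q_last_a @ P               # effective initial distribution
assert np.allclose(init, [0.25, 0.53, 0.22])
```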
Hi,
Great work; it sparked my interest in probabilistic programming!
I'm mostly interested in Hidden Markov Models, and in particular in a variant called Hidden Markov Trees (not to be confused with hidden Markov decision trees). Basically, the hidden states no longer form a chain but a (binary) tree.
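The generative process just described can be sketched in a few lines of NumPy (the parameter names `pi` and `A` are illustrative): the root draws its state from an initial distribution, and every other node draws its state from the transition row of its parent.

```python
import numpy as np

rng = np.random.default_rng(1)
S, depth = 3, 4
pi = rng.dirichlet(np.ones(S))            # root state distribution
A = rng.dirichlet(np.ones(S), size=S)     # A[parent_state, child_state]

n = 2 ** depth - 1                        # complete binary tree, heap-indexed
z = np.empty(n, dtype=int)
z[0] = rng.choice(S, p=pi)
for i in range(1, n):
    parent = (i - 1) // 2
    z[i] = rng.choice(S, p=A[z[parent]])  # child state depends on its parent

assert z.shape == (n,) and ((0 <= z) & (z < S)).all()
```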
I wondered how I could implement this in bayespy.
My first idea was to create several `CategoricalMarkovChain` objects and link their hidden states together so that, e.g., chains B and C "inherit" their initial state from chain A.
However, I have no clue how to access the array of hidden states within a single chain (the line marked with (**) ). Is this possible somehow?
Alternatively, one could of course build up the Markov chains directly from `Categorical`, `Gate`, and `Dirichlet` variables, such that the hidden states exist explicitly as a list and can be linked between chains as required. However, I'm not sure whether inference is still efficient this way (probably many more messages/variables).