Hidden Markov Trees #15

Open · redst4r opened this issue May 11, 2015 · 2 comments
redst4r commented May 11, 2015

Hi,

Great work, sparked my interest in probabilistic programming!

I'm mostly interested in Hidden Markov Models, and in particular in a variant called Hidden Markov Trees (not to be confused with hidden Markov decision trees). Basically, the hidden states no longer form a chain but a (binary) tree.
I wondered how I could implement this in bayespy.

My first idea was to create several 'CategoricalMarkovChain' objects and link their hidden states together:

# a0 is the initial distribution of chain a
# P is the prior over transition probabilities
A = CategoricalMarkovChain(a0, P, states=N_A) 

# put the distribution over the last state of A as the initial distribution of B
lastA = A[-1] # somehow get the last variable of chain A (**)
B = CategoricalMarkovChain(lastA, P , states=N_B) 

# same for C
C = CategoricalMarkovChain(lastA, P, states=N_C) 

e.g. chains B,C "inherit" their initial state from chain A.
However, I have no clue how to access the array of hidden states within a single chain (the line marked with (**)). Is this possible somehow?

Alternatively, one could of course build up the Markov chains directly from Categorical, Gate, and Dirichlet variables, so that the hidden states exist explicitly as a list and can be linked between chains as required. However, I'm not sure whether inference is still efficient this way (probably many more messages/variables).

@jluttine (Member)

Hi, thanks for your message! I have never used Hidden Markov Trees, but they seem interesting. I may have misunderstood something, but I came up with three possible solutions:

1) Use fully factorizing mean-field approximation.

As you described, you could write something like this to construct a binary tree structure:

import numpy as np
from bayespy.nodes import Dirichlet, Categorical, Mixture

D = 3  # number of hidden states

A = Dirichlet(np.ones(D), plates=(D,))  # DxD transition matrix
z = Categorical(np.ones(D)/D)           # root state
z_0 = Mixture(z, Categorical, A)        # left child
z_1 = Mixture(z, Categorical, A)        # right child
z_00 = Mixture(z_0, Categorical, A)
z_01 = Mixture(z_0, Categorical, A)
z_10 = Mixture(z_1, Categorical, A)
z_11 = Mixture(z_1, Categorical, A)
# and so on...

An equivalent construction is possible with the Gate node; it doesn't make a difference in this case. You could also have, for instance, different transition matrices for the left and right branches if you wanted. The problem with this construction is that the posterior approximation factorizes with respect to the variables. Thus, the approximation might be poor and convergence slow, as you pointed out. Also, you should think about the order in which you want to update your variables. But at least this is easy to implement and test on small problems.

2) Use separate chains for the paths to each leaf node

In order to reduce the amount of factorization, you could factorize at the branch level. That is, each branch is a new categorical Markov chain. I'm not good at explaining, but the idea would be that each left "turn" triggers a new branch, while each right "turn" belongs to the same chain as its parent. Thus, sequences 1, 11, 111, 1111, ... would be in the same chain; likewise, sequences 0, 01, 011, 0111, ... would be in the same chain; and 10, 101, 1011, ... would be a new chain branching at the second level. So you would have as many chains as you have leaf nodes, that is, 2^(K-1) where K is the height/depth of your tree. The path to each leaf is one chain, and the path starts from the last left "turn" before the leaf.

However, this construction is a bit complicated, and the factorization is not symmetric, so it is difficult, at least for me, to tell what kind of biases this kind of factorization causes. This should be possible to implement, but there are a few minor details that you need to know: how to access individual states of a chain (the question you marked with (**)). At the moment, that is slightly complicated, because indexing is used for (independent) plates only. Thus, you need to do something like this to access individual state variables of a chain:

import numpy as np
from bayespy.nodes import CategoricalMarkovChain
from bayespy.inference.vmp.nodes.categorical import CategoricalMoments

D = 3  # number of hidden states
Z = CategoricalMarkovChain(np.ones(D)/D, np.ones((D,D))/D, states=100)  # dummy chain
Z_tmp = Z._convert(CategoricalMoments)
Z_tmp[-1]  # access the last state
Z_tmp[2]   # access the third state

Yep, I know it's complicated, sorry. I should provide a simpler interface for that.

3) Write custom node for tree (or arbitrary graph) structure categorical variables

Quite often efficient inference and little factorization requires custom nodes that utilize some specific structure. For instance, GaussianMarkovChain implements a Kalman filter/smoother type algorithm and CategoricalMarkovChain implements the alpha-beta recursion. Thus, I guess the best solution is to write a custom node which implements some kind of belief propagation algorithm to perform exact inference for categorical Markov tree given some parameters. Then the challenge is in making the node as general and minimalistic as possible so it can be easily used as a building block in large models. However, I'm not too familiar with inference algorithms for Markov trees so I'm not sure how to implement it without studying a bit.
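As a reference point for what such a node would compute, here is a minimal NumPy sketch of the exact upward-downward (sum-product) recursion on the smallest possible tree: a root with two leaves, categorical emissions, and random illustrative parameters (none of these values come from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 3, 2  # hidden states, observation symbols (illustrative)

def normalize(x, axis=-1):
    return x / x.sum(axis=axis, keepdims=True)

pi = normalize(rng.random(D))        # root prior p(z_root)
T = normalize(rng.random((D, D)))    # T[i, j] = p(child = j | parent = i)
E = normalize(rng.random((D, K)))    # E[i, k] = p(obs = k | state = i)

y_root, y_left, y_right = 0, 1, 0    # observed symbols (illustrative)

# --- upward pass: leaf-to-root messages ---
# m_c[i] = sum_j T[i, j] * p(y_c | z_c = j)
m_left = T @ E[:, y_left]
m_right = T @ E[:, y_right]

# exact root marginal: prior * local evidence * children's messages
p_root = normalize(pi * E[:, y_root] * m_left * m_right)

# --- downward pass: root belief excluding the left child's message ---
# p(z_left = j | y) ∝ sum_i [pi[i] E[i, y_root] m_right[i]] T[i, j] E[j, y_left]
root_wo_left = pi * E[:, y_root] * m_right
p_left = normalize(root_wo_left @ T * E[:, y_left])
```

The upward messages play a role analogous to the backward variables in the alpha-beta recursion; a custom node would essentially run these two passes over an arbitrary tree.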

I hope I understood you correctly and I hope this helps. Please comment or ask further questions if you want.


redst4r commented May 21, 2015

Hi,

Thanks for your detailed response. I'm currently kind of busy with other things, but hopefully I'll have some time in the next few days to look at the ideas you proposed; they look promising!

I discovered a slight mistake in my example above: it should of course read

lastA = A[-1]  # somehow get the last variable of chain A (**)
linkerNode = Gate(lastA, P)  # select the transition row for the last state of A
B = CategoricalMarkovChain(linkerNode, P, states=N_B)
C = CategoricalMarkovChain(linkerNode, P, states=N_C)

i.e. the last hidden state of A determines the initial distribution of the two following chains.
