
Add initial state tests #68

Open

Veganveins wants to merge 2 commits into main from unit_tests
Conversation


@Veganveins Veganveins commented Mar 3, 2021

This PR attempts to test how well we can estimate initial state probabilities given a transition probability matrix.

The changes at this point are my first-pass attempts to:

  1. Test the compute_steady_state method

I've added a few simple test cases confirming that the steady-state estimation can recover the steady state for a simple 4x4 identity matrix and for a simple 2x2 matrix with all probabilities equal to 0.5.

  2. Test the gamma_0_estimation method

With a lot of help from @xjing76, I was able to add a test comparing the true initial state probabilities to the estimated state probabilities. Currently there's only one test case (two states, 50-50 probability).
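For context, the steady-state property being tested can be sketched in plain NumPy; steady_state below is a hypothetical stand-in for the repo's compute_steady_state, computing the stationary distribution as the left eigenvector of the transition matrix for eigenvalue 1:

```python
import numpy as np

def steady_state(P):
    """Stationary distribution of a row-stochastic matrix P: the left
    eigenvector of P for eigenvalue 1, normalized to sum to one.
    (Hypothetical stand-in for compute_steady_state.)"""
    evals, evecs = np.linalg.eig(P.T)
    # pick the eigenvector whose eigenvalue is closest to 1
    idx = np.argmin(np.abs(evals - 1.0))
    pi = np.real(evecs[:, idx])
    return pi / pi.sum()

# 2x2 matrix with all probabilities 0.5: the steady state is uniform
P = np.full((2, 2), 0.5)
print(steady_state(P))  # [0.5 0.5]
```

Note that for the 4x4 identity matrix every distribution is stationary (the eigenvalue 1 is degenerate), so a test for that case has to fix a convention for which stationary vector is returned.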

@brandonwillard brandonwillard changed the title Unit tests Add initial state tests Mar 4, 2021
@rlouf rlouf linked an issue Mar 9, 2021 that may be closed by this pull request
@Veganveins Veganveins force-pushed the unit_tests branch 2 times, most recently from e8a8419 to ad7ba2a Compare March 16, 2021 21:01
assert 0.49 < prior_predictive["Y_t"].mean() < 0.51

# use model to get a posterior distribution
trace = pm.sample()
Veganveins (Contributor, Author):
trace = pm.sample() here is failing with error RuntimeError: Chain 0 failed.

Is this the wrong way to be trying to get a posterior trace with test_model?

Contributor:
If the model above is only used for producing a set of sample observations—as it appears to be—then another copy of that model will be needed for the estimation. That copy will need to set the observed keyword argument in the Y_t term.

In other words, this code is currently asking for posterior samples from a model that has no observations, and, thus, no likelihood(s).

Take a look at the simulation + estimation setup used in the example notebooks.
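To make the point concrete (a NumPy illustration under assumed emission distributions, not the actual PyMC mechanics): without observed data there is no likelihood, so "posterior" sampling just returns the prior, whereas a single observed y_0 already updates the initial-state posterior. The helper s0_posterior is hypothetical:

```python
import math
import numpy as np

def s0_posterior(y0, pi_0, mu):
    """p(s_0 | y_0) for a zero-Poisson HMM: state 0 emits a Dirac at 0,
    state 1 emits Poisson(mu). Hypothetical helper for illustration."""
    lik = np.array([
        1.0 if y0 == 0 else 0.0,                        # Dirac-at-zero state
        math.exp(-mu) * mu ** y0 / math.factorial(y0),  # Poisson state
    ])
    post = pi_0 * lik
    return post / post.sum()

# A nonzero count rules out the Dirac state entirely:
print(s0_posterior(7, np.array([0.5, 0.5]), 10.0))  # [0. 1.]
```

This is why the estimation copy of the model needs observed= set on Y_t: conditioning on the data is what produces a likelihood term at all.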


@Veganveins Veganveins force-pushed the unit_tests branch 2 times, most recently from d2547f4 to 6f7e71e Compare March 17, 2021 19:49
@Veganveins Veganveins marked this pull request as ready for review March 17, 2021 19:49
if pi_0:
    pi_0_tt = pm.Dirichlet("pi_0", pi_0)
else:
    pi_0_tt = pm.Dirichlet("pi_0", pi_0_a)

Veganveins (Contributor, Author):
@brandonwillard I think you helped me edit this simulate_poiszero_hmm function when I started working on this, but I think I lost that change while re-organizing the commits. Does this if-else condition look right to you? I can't remember if this is exactly how we had it before.

brandonwillard (Contributor) commented Mar 26, 2021:
I don't recall; you'll need to check the Git history or walk through whatever problem(s) this is causing.

def test_gamma_0_estimation():
    np.random.seed(2032)
    true_initial_states = np.array([0.5, 0.5])
Contributor:
Is this a good set of initial values to test? The prior for the initial state vector is a Dirichlet with concentration parameters equal to one.

We need a test confirming that the initial states can be adequately estimated via posterior sampling, so we need to remove the chances that we get a good estimate for other reasons.

Veganveins (Contributor, Author):
If I changed it to something like [0.32, 0.68], would that be better? Or are we thinking something even more extreme, like [0.02, 0.98]?

Contributor:
Let's try the latter.

Comment on lines +9 to +28
def simulate_poiszero_hmm(
    N,
    observed,
    mu=10.0,
    pi_0_a=np.r_[1, 1],
    p_0_a=np.r_[5, 1],
    p_1_a=np.r_[1, 1],
    pi_0=None,
):
    p_0_rv = pm.Dirichlet("p_0", p_0_a)
    p_1_rv = pm.Dirichlet("p_1", p_1_a)
    P_tt = tt.stack([p_0_rv, p_1_rv])
    P_rv = pm.Deterministic("P_tt", tt.shape_padleft(P_tt))
    if pi_0 is not None:
        pi_0_tt = tt.as_tensor(pi_0)
    else:
        pi_0_tt = pm.Dirichlet("pi_0", pi_0_a)
    S_rv = DiscreteMarkovChain("S_t", P_rv, pi_0_tt, shape=N)
    S_rv.tag.test_value = (observed > 0) * 1
    return PoissonZeroProcess("Y_t", mu, S_rv, observed=observed)
Contributor:
As a simple test of initial probability estimation, I don't know if the zero-Poisson model is really the best choice.

If we're going to use a model with a Dirac emission distribution, we should use one that consists entirely of Dirac emissions; it's simpler and accomplishes the same thing.

Otherwise, we need to consider whether a non-Dirac model is a better choice for testing (e.g., a mixture of normals). Likewise for the choice of transition matrix.
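A sketch of the all-Dirac suggestion (my stand-in, not the repo's API): with purely Dirac emissions, y_t = s_t, so the initial state is observed directly and pi_0 estimation reduces to counting first states across simulated chains:

```python
import numpy as np

# With purely Dirac emissions the initial state is observed directly, so
# pi_0 can be recovered by counting first states over many simulated
# chains. Assumed ground truth pi_0 = [0.02, 0.98] for illustration.
rng = np.random.default_rng(2032)
pi_0 = np.array([0.02, 0.98])
first_states = rng.choice(2, size=20_000, p=pi_0)
pi_0_hat = np.bincount(first_states, minlength=2) / first_states.size
print(pi_0_hat)  # close to [0.02, 0.98]
```

Any estimation procedure worth testing should at least match this counting baseline on such a model.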


np.random.seed(2032)
true_initial_states = np.array([0.02, 0.98])

Veganveins (Contributor, Author):
Seems like with the true initial states set to [0.02, 0.98], the initial state estimation fails. The error I'm seeing is:

E       AssertionError: 
E       Arrays are not almost equal to 1 decimals
E       
E       Mismatched elements: 2 / 2 (100%)
E       Max absolute difference: 0.25904421
E       Max relative difference: 12.95221072
E        x: array([0.3, 0.7])
E        y: array([0., 1.])
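One plausible diagnosis (my note, not from the thread): with a Dirichlet(1, 1) prior, pi_0 is informed by only a single realization of s_0 per simulated chain, so its posterior mean cannot concentrate near [0.02, 0.98]. The conjugate update for one inferred count of state 1 gives almost exactly the x = [0.3, 0.7] estimate in the failure above:

```python
import numpy as np

# Dirichlet(1, 1) prior on pi_0, updated with a single observed/inferred
# initial state s_0 = 1 (nearly certain when the true pi_0 = [0.02, 0.98]):
prior_alpha = np.array([1.0, 1.0])
posterior_alpha = prior_alpha + np.array([0.0, 1.0])  # one count of state 1
posterior_mean = posterior_alpha / posterior_alpha.sum()
print(posterior_mean)  # [0.333... 0.666...]
```

If that reading is right, the test's tolerance (or the number of independent replications informing pi_0) would need to change, rather than the sampler.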


Successfully merging this pull request may close these issues.

Test initial state probability estimation