
Structure to combine posterior samples from different prior draws and compute loglikelihoods #62

Closed
hyunjimoon opened this issue Oct 17, 2022 · 10 comments

Comments

@hyunjimoon
Contributor

hyunjimoon commented Oct 17, 2022

We use S = 30 (prior draws / synthetic datasets), M = 100 (posterior samples per dataset), and N = 20 (time points per dataset).

Procedure

1. Generate 30 datasets

Use $\tilde{\alpha} = .8, \tilde{\beta} = .05, \tilde{\gamma} = .8, \tilde{\delta} = .05$ and inject process noise. Four of the thirty Y datasets (each of size 2 × 20) look like:
[image: four of the thirty synthetic Y datasets]

2. Run MCMC on each Y dataset, which returns one hundred sets of $\alpha_{1..100}, \beta_{1..100}, \gamma_{1..100}, \delta_{1..100}$ for each $\tilde{Y}_s$. The hundred posterior vectors for $s = 1$ look like:

[image: the hundred posterior parameter vectors for s = 1]

3. Calculate the log likelihood of each $Y_s$

Use each of its posterior samples $m = 1..M$. For instance, with $sm$ subscript notation, $\alpha_{11} = .7, \beta_{11} = .06, \gamma_{11} = .8, \delta_{11} = .06$ is the parameter vector for $s = 1, m = 1$. The log likelihood is a function of the four parameter values and $Y_s$, so it is computed $S \times M = 3{,}000$ times.

4. Compute the rank of the log likelihood within each dataset $s$

The rank for dataset $s$ is $r_s = \sum_{m=1}^{M} \mathbb{1}\left[ f(\alpha_m, \beta_m, \gamma_m, \delta_m, Y_s) < f(\tilde{\alpha}, \tilde{\beta}, \tilde{\gamma}, \tilde{\delta}, Y_s) \right]$, where $f$ is the log likelihood. Plot the histogram of these $S$ ranks (the x-axis range would be 0 to 100); a minimal sketch of steps 3 and 4 follows below the figure.
[image: histogram of log-likelihood ranks across the 30 datasets]
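Here is a minimal Python sketch of steps 3 and 4, assuming the prior draws, posterior draws, and datasets are already loaded as arrays, and with `log_lik` standing in for whatever log-likelihood function the model defines (all names below are placeholders, not part of the actual codebase):

```python
import numpy as np

S, M = 30, 100  # prior draws (datasets) and posterior samples per dataset

def sbc_ranks(prior_draws, posterior_draws, datasets, log_lik):
    """Rank of the prior draw's log likelihood within each posterior sample.

    prior_draws:     (S, 4) array of (alpha, beta, gamma, delta) per dataset
    posterior_draws: (S, M, 4) array of posterior samples
    datasets:        list of S observed series Y_s
    log_lik:         callable(theta, Y) -> scalar log likelihood
    """
    ranks = np.empty(S, dtype=int)
    for s in range(S):
        ll_tilde = log_lik(prior_draws[s], datasets[s])
        ll_post = np.array([log_lik(posterior_draws[s, m], datasets[s])
                            for m in range(M)])
        # rank = number of posterior log likelihoods below the prior draw's value
        ranks[s] = int(np.sum(ll_post < ll_tilde))
    return ranks
```

If the inference is well calibrated, the histogram of the 30 ranks should be roughly uniform on 0..100; strong skew or a U shape indicates miscalibration.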

@tseyanglim, @tomfid I hope the above is more descriptive, as TY requested :)

@tseyanglim

Wait, so is the idea to first generate 30 synthetic data sets using 30 randomly drawn sets of parameter values, plus process noise, and then run MCMC on each of those 30 synthetic datasets? Is the goal to see whether the original 30 sets of parameter values can be recovered?

And I'm not sure I understand the aim of step 4 - why rank them?

@hyunjimoon
Contributor Author

hyunjimoon commented Oct 17, 2022

goal to see whether the original 30 sets of parameter values can be recovered?

Yes, the idea is to verify the generator we are using. However, I think there are ways to extend this to validation (real data, not synthetic), e.g. using real $Y$ rather than $\tilde{Y}$, but this is still under development.

the aim of step 4 - why rank them?

Thanks for this question. This method, called simulation-based calibration (SBC), is what Bayesian computation people use for verification (to check how well the statistical and computational models match). There are several packages that support this, e.g. the R package Andrew introduced here:
https://statmodeling.stat.columbia.edu/2022/02/17/sbc-a-new-r-package-for-simulation-based-calibration-of-bayesian-models/

@tseyanglim

Do you have an existing mdl file that incorporates process noise in the same way that you're injecting it into the Stan version of the model?

As we discussed there are a few ways to create the 30 sets of random draws, the key distinction here being whether you want to 1) rely on Vensim's random number generation, or 2) use exactly the same 30 sets of values (and process noise stream as well) to get an exact replication of the random component you have in Stan.

My inclination is that you should do the former but it's up to you. The former would mean the datasets produced are not expected to be identical, just statistically comparable. The latter would be more complicated to implement.

Also, doing steps 1 and 2 is easy enough, but steps 3 and 4 would be outside of Vensim / VST's current capabilities - instead you'd get 30 exported CSVs or similar files with 100 sets of parameter estimates each, which you could then use to calculate and rank log likelihoods elsewhere (Python, R, etc.). (@tomfid, is there a way to get the corresponding payoff for each accepted point in the MCMC sample in Vensim, if the likelihood is constructed manually as a variable in the model?)

(Sorry, I've been working quite slowly yesterday and today; I've been dealing with a sick baby.)

@tomfid
Collaborator

tomfid commented Oct 18, 2022

One of the files generated in an MCMC run ends with _MCMC_points.tab, and it contains the payoff as well as the parameters for each point. It includes rejected points, so you'd have to filter a bit to match the sample.

@tseyanglim

One of the files generated in an MCMC run ends with _MCMC_points.tab, and it contains the payoff as well as the parameters for each point. It includes rejected points, so you'd have to filter a bit to match the sample.

Is there an easy way to do that (e.g. unique point identifier) or is it just a matter of matching on the parameter value sets that are in the sample file?

@tomfid
Collaborator

tomfid commented Oct 18, 2022

Now that I think about it, using the Vensim RNG for process noise is fine, but it would take a little care to get it behaving the same way as the Stanified version. The reason is that TIME STEP=.0625, but the point vector for process noise has 20 points at unit time intervals (we think), interpolated in between. In Vensim, you'd normally either SMOOTH a white noise input to filter the high frequencies, or use SAMPLE IF TRUE to update the noise in a stepwise fashion. Neither of these exactly matches the linear interpolation in the Stanified version. You could use VECTOR LOOKUP to get the same linear interpolation behavior I guess.
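For intuition only, here is a small Python sketch (not Vensim code) of the difference between stepwise updating (the SAMPLE IF TRUE style) and the linear interpolation assumed in the Stanified version, using an illustrative unit-interval noise vector and TIME STEP = 0.0625:

```python
import numpy as np

dt = 0.0625
t = np.arange(0, 20 + dt, dt)   # simulation times at TIME STEP resolution
knots = np.arange(0, 21)        # noise points at unit time intervals (count is illustrative)
noise_points = np.random.normal(0, 0.1, size=knots.size)

# Linear interpolation between the unit-interval points (the Stanified behavior;
# roughly what VECTOR LOOKUP would reproduce in Vensim)
noise_interp = np.interp(t, knots, noise_points)

# Stepwise / zero-order hold (what SAMPLE IF TRUE at unit intervals would give)
noise_step = noise_points[np.floor(t).astype(int)]
```

The two series agree at the unit-interval knots but differ in between, which is the discrepancy described above.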

@tomfid
Collaborator

tomfid commented Oct 18, 2022

Points are coded - status codes are as follows:
-2 Payoff error (e.g., an FP error or negative payoff that should represent a likelihood)
-1 Rejected
0 Initial
1 Accepted
2 Repeated
3 Accepted during burnin
4 Repeated during burnin
5 Accepted, but above payoff sensitivity threshold
6 Repeated, but above payoff sensitivity threshold
7 Improvement on best payoff (this will duplicate a point reported as 0-6)

So, you'd want the 1s and 2s I guess.
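A hedged pandas sketch of that filtering step, assuming the _MCMC_points.tab file is tab-delimited with one row per point and a status-code column (the file name and column position below are placeholders and should be checked against an actual file):

```python
import pandas as pd

# Assumed layout: tab-delimited, one row per point, with a status code,
# the payoff, and one column per estimated parameter.
points = pd.read_csv("model_MCMC_points.tab", sep="\t")

status_col = points.columns[0]  # assumption: the status code is the first column
sample = points[points[status_col].isin([1, 2])]  # accepted (1) and repeated (2)

# 'sample' should then correspond to the post-burnin MCMC sample, with the
# payoff column giving the (log-likelihood-based) payoff for each point.
```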

@hyunjimoon
Contributor Author

For stanify, this commit resolves this.

@tseyanglim, for hierarchical Bayesian models, would using VST have an edge over Venpy? Since you have worked with hierarchical Bayesian models, I suspect VST has better treatment of analysis and plots with subscripts.

@tomfid, could you elaborate on the process behind the following plots for the MCMC prey-predator model here?

[image: MCMC prey-predator plots]

I am asking because @ali-akhavan89 shared during the 879 seminar that he used Excel for the following analysis, and I wonder if there is a way to do this directly in Python.
[image: the Excel-based analysis shared by @ali-akhavan89]

@tomfid
Collaborator

tomfid commented Nov 11, 2022

Q answered in other thread.

I did the same sort of ground-truth analysis for the spatial/hierarchical pred prey model:
[image: ground-truth recovery results for the spatial/hierarchical pred-prey model]

Basically everything works - all parameters are within the expected +/- 2 SD ranges of the estimates (I didn't get around to plotting actual distributions yet, so non-Gaussian behavior might be an issue).

Since the pred/prey model has a bigger parameter space, hierarchy, and is (probably) more nonlinear, I would expect it to be the harder problem. Therefore I'd interpret the result above as a convergence failure, or maybe some kind of identifiability problem.

@tomfid
Collaborator

tomfid commented Nov 12, 2022

I guess another possibility in @ali-akhavan89 's slide is that "Least Squares" implies that the Normal calibration payoff distribution was used instead of Gaussian. That's a pure sum of squares, omitting the /2 and LN(sd) factors in the likelihood. The sd doesn't matter as long as you're not estimating it, but the missing /2 means the log likelihood is bigger. That would make bounds too tight. You can compensate by setting the MCMC temp=2.
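For reference, here is a paraphrase of that point in symbols (residuals $e_t$, fixed error SD $\sigma$, $T$ data points; this is my reading of the two payoff options, not taken verbatim from the Vensim documentation):

$$\log L = -\sum_{t=1}^{T}\left(\frac{e_t^2}{2\sigma^2} + \ln\sigma\right), \qquad P_{\mathrm{LS}} = -\sum_{t=1}^{T}\frac{e_t^2}{\sigma^2} = 2\log L + 2T\ln\sigma$$

With $\sigma$ fixed, the $\ln\sigma$ term is a constant, so the "Least Squares" payoff behaves like twice the log likelihood; the posterior is over-concentrated and the bounds come out too tight. Setting the MCMC temperature to 2 divides the payoff by 2 and restores the correct scale.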
