
Design of Experiments for model selection #5

Open
JamesPHoughton opened this issue Feb 9, 2016 · 1 comment

@JamesPHoughton
Collaborator

Analyze a pair of models together in order to find the places where their predictions diverge. Those are the places to conduct an experiment, as they provide the strongest basis for differentiating between the models.

Because the advance of science is much more economical when we can explicitly eliminate the most likely alternative theories, and because formulating the alternative theories and deriving their consequences is preeminently a theoretical task, the central gift of the great methodologist is his facility at formulating and deriving the consequences of alternative theories in such a way that the observations can actually be made to decide the question. (Stinchcombe 1968)

Formal models should excel at this task, as they make it possible to compare the implications of each formalized theory across a range of parameters.
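
As a rough sketch of the divergence-finding idea in Python (the two model functions, their forms, and the range of candidate experimental conditions are entirely made up for illustration):

```python
import numpy as np

# Two hypothetical formal models of the same phenomenon: each maps an
# experimental condition x to a predicted outcome. The functional forms
# are invented purely for illustration.
def model_a(x):
    return 2.0 * x        # e.g., a linear theory

def model_b(x):
    return x ** 1.5       # e.g., a nonlinear theory

# Candidate experimental conditions we could afford to run.
candidate_conditions = np.linspace(0.1, 10, 200)

# How far apart the two models' predictions are at each candidate condition.
divergence = np.abs(model_a(candidate_conditions) - model_b(candidate_conditions))

# The most informative place to run the experiment is where the
# predictions differ the most.
best_condition = candidate_conditions[np.argmax(divergence)]
print("Run the experiment at x =", best_condition)
```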

Going further, if you have an understanding of the uncertainties in the parameters, then you can derive a distribution of predicted values at that point in the parameter space for each model. That gives you the likelihood of obtaining a measured value given that a particular model is correct (and given that one of your models is correct). This is a good setup for model selection using MCMC.
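
For instance, something along these lines (a toy sketch with NumPy/SciPy; the parameter distributions, the measurement noise, and the observed value are all invented for the example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

x_experiment = 8.0   # experimental condition at which we take the measurement
y_observed = 18.5    # hypothetical measured value
obs_sigma = 1.0      # assumed measurement noise

# Parameter uncertainty for each model, expressed as distributions.
# Model A: y = a * x, with a ~ Normal(2.0, 0.2)
# Model B: y = x ** b, with b ~ Normal(1.5, 0.1)
n_draws = 10_000
a_draws = rng.normal(2.0, 0.2, n_draws)
b_draws = rng.normal(1.5, 0.1, n_draws)

# Distribution of predicted values at the experimental condition, per model.
pred_a = a_draws * x_experiment
pred_b = x_experiment ** b_draws

# Likelihood of the observation under each model, averaging the measurement
# likelihood over the parameter uncertainty (a Monte Carlo marginal likelihood).
lik_a = np.mean(stats.norm.pdf(y_observed, loc=pred_a, scale=obs_sigma))
lik_b = np.mean(stats.norm.pdf(y_observed, loc=pred_b, scale=obs_sigma))

# Posterior model probabilities, assuming equal prior weight on each model
# and that one of the two models is correct.
p_a = lik_a / (lik_a + lik_b)
print("P(model A | data) =", p_a, "  P(model B | data) =", 1 - p_a)
```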

Could show that by preferentially conducting experiments at the places where the models diverge, you improve the tightness of your parameter estimates in the MCMC...

JamesPHoughton changed the title Design of Experiments... Design of Experiments for model selection Feb 9, 2016
@JamesPHoughton
Collaborator Author

JamesPHoughton commented Mar 16, 2017

Alternatively, we could use some implementation of Reversible Jump Markov Chain Monte Carlo (RJMCMC). Not sure if there is a good implementation in Python yet, but we could most likely tap the R implementation.
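
For reference, here is a hand-rolled toy sketch of the reversible-jump idea, comparing an invented constant-mean model against an invented linear-trend model on synthetic data. It is only meant to illustrate the mechanics (fresh parameters proposed on every cross-model jump, so the Jacobian is 1), not to stand in for a proper library implementation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic data (invented for the example): a noisy linear trend.
x = np.linspace(0, 10, 30)
y = 1.0 + 0.5 * x + rng.normal(0, 1.0, x.size)
SIGMA = 1.0                               # known observation noise, for simplicity

def log_lik(model, theta):
    """Log-likelihood of the data under each candidate model."""
    mu = theta[0] if model == 0 else theta[0] + theta[1] * x   # constant vs. linear
    return stats.norm.logpdf(y, mu, SIGMA).sum()

def log_prior(theta):
    # Broad Normal(0, 10) priors on every parameter; equal prior odds on the models.
    return stats.norm.logpdf(theta, 0, 10).sum()

# Between-model jumps propose a complete fresh parameter vector from a fixed
# proposal density for the target model, so the Jacobian of the move is 1.
# Centering those proposals on quick least-squares fits helps acceptance.
PROP_MEANS = {0: np.array([y.mean()]), 1: np.polyfit(x, y, 1)[::-1]}
PROP_SCALE = 0.5

def draw_proposal(model):
    return rng.normal(PROP_MEANS[model], PROP_SCALE)

def log_q(model, theta):
    return stats.norm.logpdf(theta, PROP_MEANS[model], PROP_SCALE).sum()

model, theta = 0, np.array([y.mean()])
model_trace = []
for it in range(20_000):
    if rng.random() < 0.5:
        # Within-model random-walk Metropolis update.
        prop = theta + rng.normal(0, 0.2, theta.size)
        log_accept = (log_lik(model, prop) + log_prior(prop)
                      - log_lik(model, theta) - log_prior(theta))
        if np.log(rng.random()) < log_accept:
            theta = prop
    else:
        # Between-model (reversible-jump) move to the other model.
        new_model = 1 - model
        new_theta = draw_proposal(new_model)
        log_accept = (log_lik(new_model, new_theta) + log_prior(new_theta)
                      + log_q(model, theta)
                      - log_lik(model, theta) - log_prior(theta)
                      - log_q(new_model, new_theta))
        if np.log(rng.random()) < log_accept:
            model, theta = new_model, new_theta
    model_trace.append(model)

model_trace = np.array(model_trace[5_000:])        # discard burn-in
print("Posterior probability of the linear model:", model_trace.mean())
```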

If we get a good example here, it would probably also make a good ISDC/SDR paper.
