p-Values Without Penalties With Perfect Predictions
Previous research suggests using penalized maximum likelihood to deal with separation in logistic regression models (Zorn 2005), but also notes that the choice of penalty is a meaningful, substantive decision (Rainey 2016). In this project, I show that researchers can use the likelihood ratio test to compute reasonable, well-behaved p-values under separation without penalties or prior information.
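The core idea can be sketched numerically: under complete separation the maximum likelihood estimate diverges, but the supremum of the log-likelihood is still finite (zero, since the fitted probabilities approach the observed outcomes), so a likelihood ratio statistic and its p-value remain well defined. The toy data set and slope-only model below are my own illustration, not the paper's application.

```python
import numpy as np
from scipy import stats

# Toy data with complete separation: the sign of x perfectly predicts y.
x = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])

def loglik(beta):
    # Numerically stable logistic log-likelihood for a slope-only model:
    # log p = -log(1 + e^{-beta x}),  log(1 - p) = -log(1 + e^{beta x}).
    eta = beta * x
    return -np.sum(y * np.log1p(np.exp(-eta)) + (1 - y) * np.log1p(np.exp(eta)))

# Null model (beta = 0): every fitted probability is 0.5.
ll_null = loglik(0.0)          # 6 * log(0.5)

# As beta grows, the log-likelihood climbs toward its supremum of 0,
# even though no finite MLE exists.
print(loglik(50.0))            # numerically indistinguishable from 0

ll_sup = 0.0
lr_stat = 2 * (ll_sup - ll_null)          # = 12 * log(2), about 8.32
p_value = stats.chi2.sf(lr_stat, df=1)    # one constrained parameter
print(lr_stat, p_value)
```

Even though the usual Wald test breaks down (the standard error diverges with the estimate), the likelihood ratio statistic here is finite and yields an ordinary chi-squared p-value.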
The latest (in-progress) draft is here.
I named the files and directories so that their purpose can (hopefully) be understood from the name. The Makefile formally documents the relationships among the files and the steps to reproduce my work.
Key Figures and Tables
The project uses two data sets from previous research.
- politics_and_need_rescaled.csv comes from Barrilleaux and Rainey (2014) and their replication files on Dataverse.
- bm.csv comes from Bell and Miller (2015) and the replication files on Dataverse for Rainey (2016).
- Obtain raw data.
- Wrangle the raw data into a usable format consistent with Broman and Woo (2018).
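The wrangling step might look like the sketch below: read a raw replication file and emit a tidy CSV following Broman and Woo (2018) conventions (lower-case, snake_case column names with no spaces). The column names here are purely illustrative, not the actual variables in either data set.

```python
import pandas as pd

# Stand-in for a raw replication file; real columns differ.
raw = pd.DataFrame({
    "State Name": ["Alabama", "Alaska"],
    "Oppose Expansion": [1, 0],
})

# Broman & Woo (2018)-style cleanup: consistent snake_case headers.
tidy = raw.rename(columns=lambda c: c.strip().lower().replace(" ", "_"))
print(list(tidy.columns))
```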
Immediate Next Steps
- Complete the manuscript section on the theory of hypothesis tests.
- Perform Monte Carlo simulations for a general logistic regression.
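One small Monte Carlo exercise of the kind planned above is estimating how often complete separation occurs at different sample sizes. The data-generating process below (a single standard-normal covariate with a fixed slope) is a hypothetical choice for illustration, not the simulation design in the paper.

```python
import numpy as np

rng = np.random.default_rng(2024)

def separated(x, y):
    # Complete separation with one continuous covariate (plus intercept):
    # the x-values of the two outcome groups do not overlap.
    if y.min() == y.max():
        return True  # degenerate sample: all 0s or all 1s
    x0, x1 = x[y == 0], x[y == 1]
    return x0.max() < x1.min() or x1.max() < x0.min()

def sep_rate(n, beta, n_sims=2000):
    # Fraction of simulated logistic-regression samples that are separated.
    count = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        p = 1 / (1 + np.exp(-beta * x))
        y = rng.binomial(1, p)
        count += separated(x, y)
    return count / n_sims

rates = {n: sep_rate(n, beta=2.0) for n in (10, 20, 40)}
for n, r in rates.items():
    print(n, r)
```

Separation becomes rarer as n grows, which is why small-sample applications are where the choice between penalties and the likelihood ratio approach matters most.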