
[WIP] compare_lr_test to LikelihoodModelResults #2440

Closed

Conversation

saketkc
Contributor

@saketkc saketkc commented Jun 7, 2015

See #2436
The tests are missing, but a notebook comparing the results of compare_lr_test and R's anova is here: http://nbviewer.ipython.org/github/saketkc/statsmodels/blob/kerby_mixedlm_notebooks/examples/notebooks/MixedLM_compare_lr_test.ipynb

I am still looking for examples that would help me put together a test for compare_lr_test. The results do not match in the examples I used in the notebook, either because of non-convergence or because the tested effects are insignificant anyway.

@coveralls

Coverage Status

Coverage decreased (-0.0%) to 83.91% when pulling e2b8685 on saketkc:master into 4b55fa4 on statsmodels:master.

@josef-pkt
Member

notebook looks good

what you could do to get larger p-values is to test an irrelevant or almost irrelevant variable, for example the square of an included continuous variable, or just add a random variable (one that is maybe slightly correlated with something relevant)
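The mechanics behind this suggestion can be sketched with scipy alone. The log-likelihood values below are made up for illustration (they are not from the notebook): for a nearly irrelevant added variable, the LR statistic stays small and the p-value stays comfortably large.

```python
from scipy import stats

# Hypothetical log-likelihoods: the extra (nearly irrelevant)
# variable barely improves the fit of the full model.
llf_restricted = -1201.1  # model without the extra variable
llf_full = -1200.5        # model with the extra variable

lr = 2 * (llf_full - llf_restricted)  # LR statistic = 1.2
p_value = stats.chi2.sf(lr, 1)        # 1 df: one extra parameter
# p_value ≈ 0.27, i.e. nowhere near rejection
```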

Also, are there already unit tests for llf? There are some differences, but they could be small enough to be just numerical differences in the optimization.

the p-value looks like it's calculated the same way in R as in statsmodels, e.g. in one example:

>>> from scipy import stats
>>> stats.chi2.sf(11.123, 1)
0.0008526376534050917
>>> stats.chi2.sf(12.155308379806229, 1)
0.00048948349354164796

(I guess this means that there is no correction for the possibility that the random effects variance is on the boundary. I think I never read anything specific for the MixedLM case.)
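For reference (not something the PR implements), the standard adjustment when a single variance component sits on the boundary is a 50:50 chi-bar-squared mixture of chi2(0) and chi2(1), which here amounts to halving the naive chi2 p-value. A minimal sketch using the LR statistic from the example above:

```python
from scipy import stats

lr = 11.123  # LR statistic from the example above

# Naive p-value: chi2 with 1 df, ignoring the boundary.
p_naive = stats.chi2.sf(lr, 1)

# Boundary-adjusted p-value for testing one variance component:
# 50:50 mixture of chi2(0) and chi2(1), i.e. half the naive value.
p_mixture = 0.5 * p_naive
```

Both R's anova and the statsmodels computation shown above correspond to `p_naive`; neither applies the mixture adjustment.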

In my quick browsing of the notebook, I didn't see an LR test for inclusion of a fixed effect.

@saketkc
Contributor Author

saketkc commented Jun 8, 2015

There is an example with fixed effects at the end of the notebook.

There is a slight catch with the df calculation here:
https://github.com/statsmodels/statsmodels/pull/2440/files#diff-b165b4bd4e10edd0dfb485e47c562b2cR1397

and here:
https://github.com/statsmodels/statsmodels/pull/2440/files#diff-b165b4bd4e10edd0dfb485e47c562b2cR1406

R's and scipy's chi2 calculations seem to match (I was confused initially since my p-values were either nan or 1.0 before I made the hackish change above).
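The nan/1.0 p-values are consistent with a non-positive df difference reaching the chi2 survival function. A hypothetical sketch of the guard (the helper name and values are made up, not the PR's actual code):

```python
import numpy as np
from scipy import stats

# Hypothetical helper illustrating where a non-positive df
# difference would otherwise produce nan or degenerate p-values.
def lr_pvalue(llf_full, llf_restricted, df_full, df_restricted):
    lr = 2 * (llf_full - llf_restricted)
    df_diff = df_full - df_restricted
    if df_diff <= 0:
        # chi2.sf with df <= 0 is not meaningful; signal it explicitly
        return np.nan
    return stats.chi2.sf(lr, df_diff)
```

For example, `lr_pvalue(-100.0, -105.0, 5, 4)` gives `chi2.sf(10, 1)` ≈ 0.0016, while equal df counts return nan instead of a misleading number.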

I will update the examples (and a mini blog post) as soon as I get to my workstation.

@saketkc
Contributor Author

saketkc commented Jun 9, 2015

Notebook updated (the bottom-most example).
There is still an issue with convergence.

@saketkc saketkc changed the title ENH: compare_lr_test to LikelihoodModelResults [WIP] compare_lr_test to LikelihoodModelResults Jun 9, 2015
@josef-pkt josef-pkt added this to the 0.8 milestone Jun 23, 2015
saketkc added a commit to saketkc/statsmodels that referenced this pull request Oct 6, 2015
saketkc added a commit to saketkc/statsmodels that referenced this pull request Oct 6, 2015
saketkc added a commit to saketkc/statsmodels that referenced this pull request Oct 6, 2015
@saketkc saketkc closed this Oct 8, 2015