iolib not found #66
The short answer: the import path needs to be adjusted, and I think Skipper did it in the pandas-integration branch. The longer answer: Summary() for other models is where Vincent was working at the end in his branch. I didn't have time to look at it (still on Launchpad and needs a manual merge), and I think there are no tests for any summary methods in the test suite. I still need to check what the status of summary for GLM and RLM is.
The import should be from scikits.statsmodels.iolib import SimpleTable, but the version in 0.3 looks unfinished, and it prints two extra '=='.
Thanks for the feedback. I was looking for a method to get p-values on regression coefficients out of a logistic regression. If you have a simple solution for urgent needs, that would be great.
results.pvalues? For example: binom_results.tvalues, binom_results.pvalues. Using examples/example_glm.py, the parameter summary table works for GLM after fixing the path; it should also be available using Logit in discrete.
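For the urgent need above, a minimal sketch of how Wald p-values like those in results.pvalues are derived from fitted coefficients and their standard errors. This is not statsmodels code; the coefficient and standard-error numbers are made up for illustration, and which reference distribution is used (normal vs. t) depends on the model, as discussed below.

```python
import math

def wald_pvalues(params, bse):
    """Two-sided normal-based (z) Wald p-values for coefficient estimates.

    params: list of fitted coefficients
    bse:    list of their standard errors
    """
    pvals = []
    for b, se in zip(params, bse):
        z = b / se                             # Wald statistic
        p = math.erfc(abs(z) / math.sqrt(2))   # = 2 * (1 - Phi(|z|))
        pvals.append(p)
    return pvals

# Made-up example numbers, not from the issue:
params = [1.2, -0.3]
bse = [0.5, 0.4]
print(wald_pvalues(params, bse))
```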
I'm working on this here, but it will still take some time.
While comparing summary methods with R, I saw that the p-values in R are based on the normal distribution, while the p-values in statsmodels GLM are based on the t distribution; the t-values are identical. (Example from an R help file: values in R vs. values in statsmodels.) It's a very small sample, 9 observations and 5 regressors including the constant, so the difference between t and norm is pretty large.
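The gap described above is easy to reproduce: with 9 observations and 5 regressors there are 4 residual degrees of freedom, and the t distribution with df=4 has much fatter tails than the normal, so t-based p-values come out noticeably larger. A self-contained sketch using only the standard library (the t-value here is illustrative, not one of the numbers from the R help file):

```python
import math

def norm_pvalue(t):
    """Two-sided p-value under the standard normal reference."""
    return math.erfc(abs(t) / math.sqrt(2))

def t_pvalue(t, df, steps=20000, upper=60.0):
    """Two-sided p-value under the t distribution with df degrees of
    freedom, via Simpson's rule on the t density from |t| to a cutoff."""
    c = math.gamma((df + 1) / 2.0) / (math.sqrt(df * math.pi) * math.gamma(df / 2.0))
    def pdf(x):
        return c * (1.0 + x * x / df) ** (-(df + 1) / 2.0)
    a, b = abs(t), upper
    h = (b - a) / steps
    s = pdf(a) + pdf(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * pdf(a + i * h)
    return 2.0 * s * h / 3.0

# 9 observations, 5 regressors (incl. constant) -> 4 residual df
tval, df = 2.0, 4
print(norm_pvalue(tval))   # normal-based, as in R's glm summary
print(t_pvalue(tval, df))  # t-based, as in this statsmodels version
```

With df this small, the t-based p-value is more than twice the normal-based one, which matches the "pretty large" difference noted above.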
Maybe a naive question, but it seems I get crazy p-values with: In [42]: run test_logreg_pvalues.py What am I missing? Thanks.
I don't seem to have scikits.learn available right now on my computer. Just one guess: statsmodels binomial has a problem if there is perfect prediction (I will have to look up the details in this case). Are there any observations that are misclassified? Or what is the fraction of misclassified observations? I don't have any other idea until I look at the data.
You can access the data here:
Looks like a perfect fit. I also tried discrete.Logit, but the numbers there don't seem to make sense. I just had a very fast look, so it's still possible that something else is going on. In the complete-separation case the likelihood function has some problems: it is not finite, or it has the wrong curvature.
Thanks for taking a look. A warning would definitely be helpful. Out of curiosity, how does R behave in this degenerate case? They might have a trick.
I never looked at this case in R.
p-values look more reasonable after misclassifying some observations.
Better indeed. It would be great to reproduce SAS behavior.
/Users/alex/local/lib/python2.7/site-packages/scikits/statsmodels/genmod/generalized_linear_model.py in summary(self, yname, xname, title, returns)
659 """
660 import time as Time
--> 661 from iolib import SimpleTable
662 from stattools import jarque_bera, omni_normtest, durbin_watson
663
ImportError: No module named iolib
It looks like a relative import problem.
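A sketch of the fix, based on the working import mentioned above: replace the implicit relative imports in generalized_linear_model.py with package-absolute ones. (The scikits.statsmodels.iolib path is confirmed above; the exact location of stattools in this version is an assumption and may differ.)

```diff
-        from iolib import SimpleTable
-        from stattools import jarque_bera, omni_normtest, durbin_watson
+        from scikits.statsmodels.iolib import SimpleTable
+        from scikits.statsmodels.stattools import jarque_bera, omni_normtest, durbin_watson
```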