rescale loglike for optimization #480

josef-pkt opened this Issue Sep 29, 2012 · 1 comment


@josef-pkt
statsmodels member

What's the proper scaling of the loglikelihood?

As a sum over observations, the loglikelihood grows in magnitude with the
number of observations.

Example, Poisson test case:

```
nobs = 20190
llf = -62420

>>> np.exp(-62420)
0.0
```

If I take the mean loglikelihood, llf / nobs, the loglikelihood stays in
a smaller range for large samples:

```
>>> poisson_res.llf
-3.0916091413793421
>>> np.exp(poisson_res.llf)
0.045428794205286546
```

If I remember correctly, the rate of convergence of the loglikelihood is
sqrt(nobs).

Current example: we get a non-convergence return code for the Poisson
regression with L1 penalization. Using the mean loglikelihood, fmin_slsqp
converges nicely.

Previously I was playing with normalizations of the loglike and likelihood
function for finite mixture Poisson regression with panel data, but that's
a different setup.

One possibility:

- rescale loglike, score and hessian (in discrete)
- leave llf in the results instance as the unscaled loglike (sum); this is
  what the result statistics use: aic, bic, LR test, ...

(It could also be added just to the L1 optimization.)
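As a rough sketch of the idea (using scipy.optimize.minimize directly, not the statsmodels code): dividing the loglikelihood and score by a positive constant like nobs does not move the maximizer, so the estimate is unchanged, but the numbers the optimizer sees stay O(1) regardless of sample size.

```python
import numpy as np
from scipy.optimize import minimize

# Intercept-only Poisson model, constant terms dropped from the loglikelihood.
rng = np.random.default_rng(0)
y = rng.poisson(3.0, size=5000)
nobs = y.size

def neg_loglike(b):
    # negative summed loglikelihood: -(sum(y)*b - nobs*exp(b))
    return -(np.sum(y) * b[0] - nobs * np.exp(b[0]))

def neg_score(b):
    # negative score: -(sum(y) - nobs*exp(b))
    return np.array([-(np.sum(y) - nobs * np.exp(b[0]))])

# unscaled sum vs. mean loglikelihood: same maximizer, different scale
unscaled = minimize(neg_loglike, [0.0], jac=neg_score, method="BFGS")
scaled = minimize(lambda b: neg_loglike(b) / nobs, [0.0],
                  jac=lambda b: neg_score(b) / nobs, method="BFGS")
# both recover exp(b) ~= y.mean(); only the optimizer-internal scale differs
```

With the rescaled objective, an absolute termination tolerance means roughly the same thing whether nobs is 20 or 20190, which is the point of the proposal.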

My guess is that it should improve our numerical optimization in general,
not just for fmin_slsqp.


For PR #465 I changed the likelihood model fit method to use the rescaled
loglike, score and hessian. This only affects the lambda functions handed to
the optimization routines and leaves the underlying methods unchanged.
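The pattern described above can be sketched like this (a hypothetical toy class, not the actual statsmodels implementation): only the objective handed to the optimizer is rescaled, while loglike itself, and the llf reported afterwards, remain the unscaled sum.

```python
import numpy as np
from scipy.optimize import fmin_bfgs

class PoissonSketch:
    """Toy intercept-only Poisson model illustrating the rescaling pattern."""

    def __init__(self, y):
        self.y = np.asarray(y, dtype=float)
        self.nobs = self.y.size

    def loglike(self, params):
        # summed loglikelihood (constant terms dropped), deliberately unscaled
        mu = np.exp(params[0])
        return np.sum(self.y * np.log(mu) - mu)

    def fit(self):
        nobs = self.nobs
        # rescaling lives only in the lambda passed to the optimizer
        f = lambda params: -self.loglike(params) / nobs
        params = fmin_bfgs(f, np.zeros(1), disp=0)
        # llf is reported as the plain sum, as on the results instance
        return params, self.loglike(params)

y = np.random.default_rng(1).poisson(5.0, size=200)
params, llf = PoissonSketch(y).fit()
```

With disp=1, fmin_bfgs would print the optimum of f, i.e. the (negative) average loglikelihood, not llf, which is the source of the confusion noted in the closing comment below.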

@josef-pkt
statsmodels member

I'm closing this; it has been in master for some time as part of the L1
penalization merge.

Two observations:

- The report printed when an optimization finishes (with disp=1) now shows
  the rescaled optimized value, i.e. the average loglikelihood rather than
  llf, which can be a bit confusing when compared to the llf in summary().
- I needed to adjust the test precision in a few tests, since in some cases
  the absolute tolerance used as the termination criterion caused the
  optimization to stop at lower relative precision.

@josef-pkt josef-pkt closed this Feb 4, 2013