One refactoring:
allow the methods used as objective function, score, and hessian to be optional,
e.g. GMM has `gmmobjective` and `gmmobjective_cu`;
M-estimators like RLM don't have a `loglike`.
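A minimal sketch of how this could look, not the current implementation: `_make_fit_funcs` is a hypothetical helper, and the fallbacks use `statsmodels.tools.numdiff` when a model does not define `score` or `hessian`.

```python
from statsmodels.tools.numdiff import approx_fprime, approx_hess


def _make_fit_funcs(model):
    """Hypothetical helper: build (f, score, hess) callables for the
    optimizer, falling back to numerical derivatives when the model
    does not provide score/hessian methods."""
    f = lambda params: -model.loglike(params)  # objective to minimize

    if hasattr(model, "score"):
        score = lambda params: -model.score(params)
    else:
        score = lambda params: approx_fprime(params, f)  # numerical gradient

    if hasattr(model, "hessian"):
        hess = lambda params: -model.hessian(params)
    else:
        hess = lambda params: approx_hess(params, f)  # numerical Hessian

    return f, score, hess
```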
Review: `base.model.Model.fit` has `xopt, retvals, optim_settings = optimizer._fit(f, score, start_params, ...)`
How much does this differ now from calling `scipy.optimize.minimize` directly? (Our wrapper predates scipy's `minimize`.)
It looks like the main part of our work comes after this call, when checking the Hessian.
One difference from scipy is in our own optimizer methods, e.g. `newton` and L1 for the discrete models.
(Related: we still have Wald inference in `LikelihoodModel` instead of a more generic class.)
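For a sense of how close this already is to calling `minimize` directly, here is a minimal sketch that reproduces a Logit fit by hand with the kind of callables the wrapper builds (the division by nobs mirrors, I believe, the scaling in `LikelihoodModel.fit`; the wrapper additionally handles method dispatch, our own solvers, and the Hessian checks afterwards):

```python
import numpy as np
import statsmodels.api as sm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(200, 2)))
p = 1 / (1 + np.exp(-(X @ np.array([0.2, 1.0, -0.7]))))
y = (rng.random(200) < p).astype(float)

model = sm.Logit(y, X)
nobs = model.endog.shape[0]

# callables of the kind the fit wrapper passes to optimizer._fit
f = lambda params: -model.loglike(params) / nobs    # objective
score = lambda params: -model.score(params) / nobs  # gradient

res = minimize(f, np.zeros(X.shape[1]), jac=score, method="BFGS")

print(res.x)                     # direct scipy result
print(model.fit(disp=0).params)  # statsmodels fit -- should agree closely
```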
GLM case: it would be useful to allow deviance as the objective function.
The score and hessian should be the same in this case as with `loglike`.
It would make things easier for Tweedie, e.g. the bug in elastic net because we don't have a proper `loglike` (#7476).
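A minimal sketch of what that enables, using only the public GLM API (the Poisson example is illustrative): minimizing the deviance directly recovers essentially the same estimate as the usual fit.

```python
import numpy as np
import statsmodels.api as sm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(200, 2)))
y = rng.poisson(np.exp(X @ np.array([0.5, 1.0, -0.5])))

model = sm.GLM(y, X, family=sm.families.Poisson())


def deviance(params):
    mu = model.predict(params)  # inverse link of X @ params
    return model.family.deviance(model.endog, mu)


res_dev = minimize(deviance, np.zeros(X.shape[1]), method="BFGS")
res_mle = model.fit()

print(res_dev.x)       # deviance-minimizing params
print(res_mle.params)  # IRLS params -- should agree closely
```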
Adding an option for selecting the objective function in the optimizer code, when the lambda functions are created, would, I guess, not be difficult to implement.
This could be an instance attribute, if we want to pick the objective function only during fit (GLM), or a class attribute if everything is based on the same objective function (GMM).
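A sketch of that option with hypothetical names (`_fit_objective` as the attribute, `objective` as a fit argument); the only point is where the selection would hook in when the lambdas are created.

```python
class ObjectiveMixin:
    """Hypothetical sketch: let a model choose which of its methods the
    optimizer wraps as the objective function."""

    # class attribute: one objective for the whole model class (the GMM case)
    _fit_objective = "loglike"

    def _build_objective(self, objective=None):
        # per-fit override: pick the objective only during fit (the GLM case)
        name = objective if objective is not None else self._fit_objective
        func = getattr(self, name)
        if name == "loglike":
            # the log-likelihood is maximized, so negate it for the minimizer
            return lambda params: -func(params)
        # deviance, gmmobjective, ... are already minimization objectives
        return lambda params: func(params)
```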