REF DRYing fit methods #1615

Closed
josef-pkt opened this Issue Apr 21, 2014 · 2 comments

@josef-pkt
Member

It looks like we are adding or copying many fit functions just because they need to choose the results class. In some cases they also have special code to calculate start_params.

Two possibilities (see the sketch after this list):

  • register the results class in a module-level global, or alternatively as another class attribute
  • add a method for computing start_params()
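
A rough sketch of what both hooks could look like; every name here (BaseModelSketch, _results_class, _get_start_params, _optimize) is a placeholder for illustration, not existing statsmodels API:

    import numpy as np

    class BaseModelSketch(object):
        # possibility 1: register the results class on the model class
        # (a module-level registry would work the same way)
        _results_class = None

        # possibility 2: a hook that subclasses override when they need
        # special start values; assumes the design matrix is in self.exog,
        # as in statsmodels models
        def _get_start_params(self):
            return np.zeros(self.exog.shape[1])

        def fit(self, start_params=None, **kwargs):
            if start_params is None:
                start_params = self._get_start_params()
            # _optimize stands in for the shared optimizer call, not shown here
            mlefit = self._optimize(start_params, **kwargs)
            return self._results_class(self, mlefit)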

Similar use case:
currently it's not possible to subclass the results class without also subclassing the model and overriding fit.
Examples: least squares (WLS) with a predefined scale, the scipy curve_fit discussion, and Stata's vwls, which would need to redefine results.scale.
(I just realized that I could pick this up as an option for cov_type.)

(while looking at the missing NegativeBinomial.fit_regularized #1454)

@josef-pkt josef-pkt added a commit to josef-pkt/statsmodels that referenced this issue Apr 22, 2014
@josef-pkt josef-pkt CLN L1NegativeBinomialResults rename closes #1615,
    remove redundant `__init__`
8235c42
@josef-pkt
Member

I'm running into this again while adding another fit method (fit_constrained).

Example: Logit.fit

    def fit(self, start_params=None, method='newton', maxiter=35,
            full_output=1, disp=1, callback=None, **kwargs):
        bnryfit = super(Logit, self).fit(start_params=start_params,
                method=method, maxiter=maxiter, full_output=full_output,
                disp=disp, callback=callback, **kwargs)
        discretefit = LogitResults(self, bnryfit)
        return BinaryResultsWrapper(discretefit)
    fit.__doc__ = DiscreteModel.fit.__doc__

and Poisson.fit

    def fit(self, start_params=None, method='newton', maxiter=35,
            full_output=1, disp=1, callback=None, **kwargs):
        cntfit = super(CountModel, self).fit(start_params=start_params,
                method=method, maxiter=maxiter, full_output=full_output,
                disp=disp, callback=callback, **kwargs)
        discretefit = PoissonResults(self, cntfit)
        return PoissonResultsWrapper(discretefit)
    fit.__doc__ = DiscreteModel.fit.__doc__

The only difference between them is which Results and ResultsWrapper classes are used.
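
With a class-level registration hook, both methods could collapse into one shared implementation, e.g. on DiscreteModel. The sketch below is only illustrative; _res_classes is a hypothetical attribute, not current statsmodels code:

    # hypothetical shared fit on DiscreteModel
    def fit(self, start_params=None, method='newton', maxiter=35,
            full_output=1, disp=1, callback=None, **kwargs):
        mlefit = super(DiscreteModel, self).fit(start_params=start_params,
                method=method, maxiter=maxiter, full_output=full_output,
                disp=disp, callback=callback, **kwargs)
        # e.g. Logit would set _res_classes = (LogitResults, BinaryResultsWrapper)
        # and Poisson would set _res_classes = (PoissonResults, PoissonResultsWrapper)
        res_cls, wrap_cls = self._res_classes
        return wrap_cls(res_cls(self, mlefit))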

@jseabold
Member

Yes, this was part of the refactor that split these up by type of discrete choice model, since the returned classes have a lot of model-specific code.

I'm +1 on a _start_params method. I'd be +.5 on some magic to specify the return class, but IMO that sounds like more trouble than it's worth and would make things significantly more complicated.
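
For the _start_params idea, a minimal sketch of what such a hook could compute, using NegativeBinomial as an example (reusing a Poisson fit for the mean parameters, similar in spirit to the fit_regularized start_params mentioned in the commits below); the method name, its placement, and the 0.1 starting value for the dispersion parameter are assumptions:

    import numpy as np
    from statsmodels.discrete.discrete_model import Poisson

    def _start_params(self):
        # sketch: fit a plain Poisson for the mean parameters, then append an
        # arbitrary small starting value for the extra dispersion parameter
        poisson_params = Poisson(self.endog, self.exog).fit(disp=0).params
        return np.append(poisson_params, 0.1)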

@josef-pkt josef-pkt referenced this issue May 31, 2014
Merged

WIP: fit_constrained #1714

11 of 17 tasks complete
@josef-pkt josef-pkt added a commit that closed this issue Jul 22, 2014
@josef-pkt josef-pkt CLN L1NegativeBinomialResults rename closes #1615,
    remove redundant `__init__`
cb1795e
@josef-pkt josef-pkt closed this in cb1795e Jul 22, 2014
@PierreBdR PierreBdR pushed a commit to PierreBdR/statsmodels that referenced this issue Sep 2, 2014
@josef-pkt josef-pkt CLN L1NegativeBinomialResults rename closes #1615,
    remove redundant `__init__`
8171198
@yarikoptic yarikoptic added a commit to yarikoptic/statsmodels that referenced this issue Oct 23, 2014
@yarikoptic yarikoptic Merge commit 'v0.5.0-1189-g48f7a21' into releases
* commit 'v0.5.0-1189-g48f7a21': (970 commits)
  REF _fit_newton use np.asarray(hess)
  REF: _fit_newton, add ridge_factor option, default 1e-10
  REF: _fit_newton add Ridge correction to Hessian, see also #953
  Bug comment: commented out code, weigths don't sum to 1 see #1845
  CLN: TST cleanup comment, unused code.
  REF: "is" None (not ==)
  BUG: fix if alpha is scalar, TST: try standardized
  REF: NegativeBinomial fit_regularized, try regularized Poisson for start_params
  DOC: add comment in notes about not penalizing NegBin shape parameter [skip ci]
  TST: TestNegativeBinomialL1Compatability use desired as start_params
  TST: adjust test, precision and start_params (failure on TravisCI)
  BUG: NegativeBinomial fix aic, bic for 'geometric', adjust tests
  REF add k_extra to NegativeBinomial, don't count it in df_model, df_resid
  TST explicitely define k_extra in test class
  CLN: a few cosmetic changes
  TST fix negbin geometric test for fit_regularized
  CLN L1NegativeBinomialResults rename closes #1615,     remove redundant `__init__`
  BUG NegativeBinomial add fit_regularized closes #1453 closes  #1454     adjust test to handle extra parameter
  REF: has_constant remove special code in linear_model, is moved to data
  Fix const_idx with multiple const, more tests
  ...
18a1faa
@yarikoptic yarikoptic added a commit to yarikoptic/statsmodels that referenced this issue Oct 23, 2014
@yarikoptic yarikoptic Merge branch 'releases' into debian-experimental
8f2ab7a