
AR fit with bfgs: large score #990

Closed
josef-pkt opened this issue Jul 26, 2013 · 1 comment

@josef-pkt (Member) commented Jul 26, 2013

I tried to switch TestARMLEConstant to use bfgs

cls.res1 = AR(data.endog).fit(maxlag=9, method="mle", disp=-1,
                              solver='bfgs', gtol=1e-12)

I also tried gtol=1e-9

The results are further away from the reference numbers in the unit test than with the default l_bfgs_b solver.

The score is very large relative to gtol, independent of the gtol value.
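One thing to keep in mind when comparing the two solvers: scipy's fmin_bfgs declares convergence when the inf-norm of the gradient drops below gtol, while fmin_l_bfgs_b checks the largest projected gradient component against pgtol (plus a function-value test controlled by factr), so the same tolerance value means different things. A quick sketch on a toy problem (plain scipy, unrelated to the AR model here):

```python
import numpy as np
from scipy.optimize import rosen, rosen_der, fmin_bfgs, fmin_l_bfgs_b

x0 = np.array([1.2, 1.2])

# fmin_bfgs stops when the inf-norm of the gradient falls below gtol
xb = fmin_bfgs(rosen, x0, fprime=rosen_der, gtol=1e-12, disp=False)

# fmin_l_bfgs_b stops when the max projected gradient component falls
# below pgtol, or when the relative decrease in f falls below factr*eps
xl, fval, info = fmin_l_bfgs_b(rosen, x0, fprime=rosen_der,
                               pgtol=1e-12, factr=10.0)

# both should land at the Rosenbrock minimum [1, 1]
```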

(Pdb) scipy.__version__
'0.9.0'
(Pdb) res1.model.mlefit.mle_settings
{'disp': -1, 'fargs': (), 'full_output': 1, 'extra_fit_funcs': {}, 'epsilon': 1.4901161193847656e-08,
 'gtol': 1e-12, 'cov_params_func': None, 'retall': False, 'callback': None,
 'start_params': array([ 6.74305359,  2.31151612, -1.77757298, -0.28866535,  0.07943   ,
        -0.02589424,  0.31901259,  0.46473082,  0.45424795,  0.51828013]),
 'maxiter': 35, 'optimizer': 'bfgs', 'norm': inf}
(Pdb) res1.params
array([ 5.6680991 ,  1.16071075, -0.39538174, -0.16634088,  0.15044504,
       -0.09439083,  0.00906318,  0.05205166, -0.08584313,  0.25239265])
(Pdb) res1.model.score(res1.params)
array([ 0.00004547,  0.00143245,  0.00177351,  0.00197815,  0.0018872 ,
        0.00129603,  0.00111413,  0.00106866,  0.00068212,  0.00050022])
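Since bfgs terminates on the inf-norm of the gradient, plugging in the score vector printed above shows how far this fit is from the requested tolerance (a quick check reusing the numbers from this session):

```python
import numpy as np

# score at the bfgs solution, as printed above
score = np.array([0.00004547, 0.00143245, 0.00177351, 0.00197815, 0.0018872,
                  0.00129603, 0.00111413, 0.00106866, 0.00068212, 0.00050022])

# the quantity fmin_bfgs compares to gtol (norm=inf is the default)
gnorm = np.linalg.norm(score, np.inf)
print(gnorm)  # ~2e-3, about nine orders of magnitude above gtol=1e-12
```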
@jseabold (Member) commented Feb 6, 2014

I'm not sure there is an issue here. It's working fine for me, so I'm closing. Tests pass with gtol=1e-14 and an adjustment for a too-strict rtol added to the predict tests. One possible improvement would be to use approx_fprime_cs, but I don't think it's really necessary given that ARMA should be more robust.
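For context, approx_fprime_cs computes the gradient by complex-step differentiation, which has no subtractive cancellation and is therefore accurate to near machine precision even with a tiny step. A minimal standalone sketch of the underlying idea (not statsmodels' actual implementation):

```python
import numpy as np

def complex_step_grad(f, x, h=1e-20):
    # Complex-step differentiation: f'(x_i) ~= Im(f(x + i*h*e_i)) / h.
    # No difference of nearby function values is taken, so h can be
    # tiny without losing precision to cancellation.
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(x.size):
        xc = x.astype(complex)
        xc[i] += 1j * h
        grad[i] = f(xc).imag / h
    return grad

# gradient of f(x) = sum(x**3) is 3*x**2, so this should give ~[3, 12]
g = complex_step_grad(lambda x: np.sum(x**3), np.array([1.0, 2.0]))
```

This only works when f is analytic in its arguments (no abs, comparisons on the perturbed values, etc.), which is why it is an option rather than the default.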

[~/statsmodels/statsmodels-skipper/statsmodels/tsa/tests/]
[21]: scipy.__version__
[21]: '0.14.0.dev-fa9258c'

[~/statsmodels/statsmodels-skipper/statsmodels/tsa/tests/]
[22]: res1 = sm.tsa.AR(data.endog).fit(maxlag=9, method='mle', solver='bfgs', gtol=1e-12)
Warning: Maximum number of iterations has been exceeded.
        Current function value: 4.123985
        Iterations: 35
        Function evaluations: 36
        Gradient evaluations: 36

[~/statsmodels/statsmodels-skipper/statsmodels/tsa/tests/]
[23]: res1.params
[23]: 
array([ 5.66828645,  1.16070997, -0.39538124, -0.16634175,  0.1504455 ,
       -0.0943909 ,  0.00906113,  0.05205325, -0.08584372,  0.25239148])

[~/statsmodels/statsmodels-skipper/statsmodels/tsa/tests/]
[24]: res1.model.score(res1.params)
[24]: 
array([-0.00002274, -0.0001819 , -0.00031832, -0.00034106, -0.00040927,
       -0.00050022, -0.00015916, -0.00002274,  0.00009095,  0.00004547])

@jseabold closed this Feb 6, 2014
