ENH: add lbfgs for fitting #1147

Merged
merged 18 commits into statsmodels:master from argriffing:add-lbfgs-fit on Oct 28, 2013

@argriffing

I tried to add lbfgs as a fitting method, in anticipation of eventually using the L-BFGS-B simultaneous f(x) and f'(x) evaluation interface, but the changes in this commit do not pass the unit tests.
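For context, a minimal sketch of that interface with a toy objective (illustrative only, not statsmodels code): when fprime is omitted and approx_grad is left at its default of False, fmin_l_bfgs_b expects the objective to return a (value, gradient) pair, so work shared between f(x) and f'(x) is computed once.

import numpy as np
from scipy import optimize

def loglike_and_score(x):
    resid = x - 3.0          # intermediate work shared by f and f'
    f = np.sum(resid ** 2)   # function value
    g = 2.0 * resid          # gradient, reusing resid
    return f, g

# fprime omitted, approx_grad False (the default): func returns (f, g)
xopt, fval, info = optimize.fmin_l_bfgs_b(loglike_and_score, np.zeros(2))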

@josef-pkt josef-pkt commented on an outdated diff Oct 24, 2013
statsmodels/base/model.py
@@ -471,6 +490,37 @@ def _fit_mle_bfgs(f, score, start_params, fargs, kwargs, disp=True,
return xopt, retvals
+def _fit_mle_lbfgs(f, score, start_params, fargs, kwargs, disp=True,
+ maxiter=100, callback=None, retall=False,
+ full_output=True, hess=None):
+ m = kwargs.setdefault('m', 12)
+ pgtol = kwargs.setdefault('pgtol', 1e-8)
+ factr = kwargs.setdefault('factr', 1e2)
+ epsilon = kwargs.setdefault('epsilon', 1e-8)
+ maxfun = kwargs.setdefault('maxfun', 15000)
+ bounds = [(None, None)] * len(start_params)
+ retvals = optimize.fmin_l_bfgs_b(f, start_params, fprime=score, args=fargs,
+ bounds=bounds, m=m, factr=factr, pgtol=pgtol, epsilon=epsilon,
+ maxfun=maxfun, maxiter=maxiter, disp=disp, callback=callback)
@jseabold
Member

What's our minimum scipy requirement these days? 0.9.0 is almost 3 years old.

At the very least, I'd wrap the call in a try/except and fall back to the 0.9.0 behavior without maxiter and callback, perhaps raising a warning if they're not None.

@josef-pkt
Member

I was thinking about increasing the scipy version requirement above 0.9, but for the main statsmodels functionality I'd rather stick with a scipy version that's available by default on Ubuntu and TravisCI.
(python 2.6 is 5 years old)

I switched away from using scipy 0.9.0 as my default development version, but haven't yet looked at what we would like to use from newer scipy besides optimization. There is also rank-revealing QR in linalg (scipy >= 0.10, IIRC).

BTW: wheels are coming for Linux scipy/scipy#3020

@argriffing

I plan to remove these two args for compatibility, but I almost want to suggest that if people want to use an old version of scipy then they can use an old version of statsmodels too! According to http://packages.ubuntu.com/search?keywords=python-scipy the current Ubuntu ('saucy') uses scipy version 0.12.0.

@jseabold
Member

I say keep them, have the code something like this

try:
    fmin_l_bfgs_b(..., maxiter=maxiter, callback=callback)
except TypeError:
    if maxiter is not None or callback is not None:
        from warnings import warn
        warn("fmin_l_bfgs_b does not support maxiter or callback "
             "arguments. Update your scipy, otherwise they have no effect.",
             UserWarning)
    fmin_l_bfgs_b(...)
@josef-pkt
Member

I also think Skipper's proposal is the best way.

In general: statsmodels is much easier to build than scipy. I don't think a scipy that's 3 years old is so old that we can drop support for it without strong reasons.

@alexbrc alexbrc MAINT: improve compatibility with old scipy versions, but some complex numbers are sneaking in and causing tests to fail
73991e5
@argriffing

I just commented them out and added a note, for less clutter. This change also reduced the number of locally failing ARIMA tests from 5 or 6 down to only 1. The failing test seems to be caused by a complex number sneaking in, and some passing tests also complain about complex numbers.

@jseabold jseabold and 1 other commented on an outdated diff Oct 25, 2013
statsmodels/tsa/arima_model.py
@@ -846,6 +846,8 @@ def fit(self, order=None, start_params=None, trend='c', method = "css-mle",
if transparams: # transform initial parameters to ensure invertibility
start_params = self._invtransparams(start_params)
+ # NOTE: after having added 'lbfgs' to the list of fitting methods,
+ # the solver-is-None branch should no longer be necessary
if solver is None: # use default limited memory bfgs
bounds = [(None,)*2]*(k_ar+k_ma+k)
pgtol = kwargs.get('pgtol', 1e-8)
@jseabold
jseabold Oct 25, 2013 Member

I would set these defaults here, and not in the lbfgs wrapper function (_fit_mle_lbfgs). The optimizer's own defaults are probably good enough; I just found these to work in most cases for ARIMA.

@argriffing
argriffing Oct 25, 2013

This is a dumb python question, but how would I let fmin_l_bfgs_b use its internal defaults while also allowing these args to be optionally specified by the caller through kwargs?

@jseabold
jseabold Oct 25, 2013 Member

if solver == 'lbfgs' in the above, and then just remove the setdefault stuff that is in _fit_mle_lbfgs

@jseabold
jseabold Oct 25, 2013 Member

You're essentially doing this now. I just want it so that the defaults for ARIMA are different from the default defaults. Right now it's written so that the ARIMA defaults always apply. Does that make sense?

@argriffing
argriffing Oct 25, 2013

I did something that might address this. I am a little bit confused about the function call chains that modify and pass through the **kwargs. I guess this is common to any code that works with options which can be overridden at multiple levels.
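For reference, the pass-through pattern being discussed can be sketched like this (fit_lbfgs and the exact option names are illustrative): only options the caller actually supplied are forwarded, so fmin_l_bfgs_b falls back to its own internal defaults for everything else.

from scipy import optimize

def fit_lbfgs(f, score, start_params, **kwargs):
    # forward an option only if the caller supplied it; otherwise
    # fmin_l_bfgs_b uses its own default for that option
    names = ('m', 'pgtol', 'factr', 'maxfun')
    extra = dict((k, kwargs[k]) for k in names if k in kwargs)
    return optimize.fmin_l_bfgs_b(f, start_params, fprime=score, **extra)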

@argriffing

The tests are failing because somewhere a variance is negative, and this gives imaginary numbers when you try to take its square root to compute the standard deviation. I know nothing about econometrics, so I'm having a hard time tracking it down.

@jseabold
Member

It's unclear to me why anything should be different given where the lbfgs code is called from. Has anything else changed?

@argriffing

If mlefit = super(ARMA, self).fit(...) does something more magical than calling the new lbfgs mle function, then this could cause a difference in the behavior.

@jseabold
Member

Yeah, let me have a look at this.

@jseabold
Member

I found the problem. We have this code in LikelihoodModel.fit after the fitting is done.

Hinv = np.linalg.inv(self.hessian(xopt))

Since the Hessian of the model is computed with the complex-step derivative, it passes complex params to the loglikelihood of the KalmanFilter. This method sets sigma2; that's where the complex sigma2 comes from. The path of least resistance is just to add a keyword to loglike in kalmanfilter.py controlling whether or not to set sigma2. By default it should be True, but when either score or hessian calls this likelihood, it should be False. Make sense?
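A minimal sketch of the mechanism (toy code, not the statsmodels implementation): a complex-step derivative evaluates the objective at params + i*h, so any attribute the objective sets as a side effect, like sigma2 here, silently becomes complex.

import numpy as np

state = {}

def loglike(params):
    sigma2 = np.sum(params ** 2)  # toy stand-in for the real computation
    state['sigma2'] = sigma2      # side effect, like KalmanFilter setting sigma2
    return sigma2

def complex_step_grad(f, params, h=1e-20):
    grad = np.empty(len(params))
    for i in range(len(params)):
        step = np.zeros(len(params), dtype=complex)
        step[i] = 1j * h
        grad[i] = f(params + step).imag / h  # Im f(x + i*h) / h
    return grad

print(complex_step_grad(loglike, np.array([1.0, 2.0])))  # real gradient
print(state['sigma2'])  # complex: the side effect leaked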

@josef-pkt
Member

you could just replace sigma2 by sigma2.real at the end of fit. Or is this too late?

@jseabold
Member

I dunno. Makes me uncomfortable. I'm already uncomfortable that this only showed up in 1 failing bug fix test.

@jseabold
Member

Also, make sure that you change solver=None to solver='lbfgs' in the other fit method as well.

@argriffing

Also, make sure that you change solver=None to solver='lbfgs' in the other fit method as well.

@jseabold do you mean the ARIMA model (in addition to the ARMA model for which the solver has already been changed)? [EDIT: assuming you meant this one, I've made this change in https://github.com/argriffing/statsmodels/commit/5fe272685cba0d5634683054b1ecffac561f9c92]

@argriffing

Path of least resistance is just to add a keyword to loglike in kalmanftiler.py on whether or not to set sigma2.
By default it should be true, but when either score or hessian calls this likelihood, then it should be false.
Make sense?

Could this go into a subsequent PR after this one is merged? [EDIT: I went ahead and made these changes.]

alexbrc added some commits Oct 26, 2013
@alexbrc alexbrc MAINT: add keyword to the kalman filter log likelihood calculation function, reducing the amount of side effects of the function when False is passed
4610c28
@alexbrc alexbrc MAINT: avoid side effects when computing log likelihood for the estimation of the score or hessian
3c763ae
@argriffing

@jseabold I implemented your suggestions and now the tests pass.

@coveralls

Coverage Status

Coverage remained the same when pulling 3c763ae on argriffing:add-lbfgs-fit into 1bc6576 on statsmodels:master.

@jseabold jseabold and 1 other commented on an outdated diff Oct 26, 2013
statsmodels/tsa/arima_model.py
@@ -485,11 +485,9 @@ def score(self, params):
-----
This is a numerical approximation.
"""
- loglike = self.loglike
- #if self.transparams:
- # params = self._invtransparams(params)
- #return approx_fprime(params, loglike, epsilon=1e-5)
- return approx_fprime_cs(params, loglike)
+ def pure_loglike(params):
+ return self.loglike(params, set_sigma2=False)
+ return approx_fprime_cs(params, pure_loglike)
@jseabold
jseabold Oct 26, 2013 Member

approx_fprime_cs takes args and kwargs arguments. You could just do

return approx_fprime_cs(params, loglike, args=(False,))

which forwards False as loglike's second positional argument (set_sigma2).
@jseabold jseabold and 1 other commented on an outdated diff Oct 26, 2013
statsmodels/tsa/arima_model.py
@@ -499,10 +497,9 @@ def hessian(self, params):
-----
This is a numerical approximation.
"""
- loglike = self.loglike
- #if self.transparams:
- # params = self._invtransparams(params)
- return approx_hess_cs(params, loglike)
+ def pure_loglike(params):
+ return self.loglike(params, set_sigma2=False)
+ return approx_hess_cs(params, pure_loglike)
@jseabold
Member

Overall this looks good. If you add the try/except block into _fit_mle_lbfgs, it should be about ready to merge.

@argriffing

If you add the try/except block into _fit_mle_lbfgs, it should be about ready

I added this in argriffing@9ed378d according to #1147 (comment). It might be buggy, for example I think that the check if maxiter is not None or callback is not None: is not checking the right thing, although I guess it will take the right branch anyway.

The new test failures are presumably caused by the default maxiter of 100 for lbfgs. [EDIT: Travis does not fail because the old-scipy branch is taken; that branch uses lbfgs's own maxiter instead of the default argument value or the maxiter passed by the caller. Also, the binding maxiter is the 35 in the ARIMA fit default, not the 100 in the fmin lbfgs mle default.]

@coveralls

Coverage Status

Coverage remained the same when pulling 9ed378d on argriffing:add-lbfgs-fit into 1bc6576 on statsmodels:master.

@jseabold
Member

Looks good to me. Ready to merge?

@josef-pkt josef-pkt and 2 others commented on an outdated diff Oct 27, 2013
statsmodels/base/model.py
+ # default values.
+ names = ('m', 'pgtol', 'factr', 'maxfun', 'approx_grad')
+ extra_kwargs = dict((x, kwargs[x]) for x in names if x in kwargs)
+
+ if extra_kwargs.get('approx_grad', False):
+ score = None
+
+ epsilon = kwargs.setdefault('epsilon', 1e-8)
+ bounds = [(None, None)] * len(start_params)
+ try:
+ retvals = optimize.fmin_l_bfgs_b(f, start_params,
+ fprime=score, args=fargs,
+ maxiter=maxiter, callback=callback,
+ bounds=bounds, epsilon=epsilon, disp=disp, **extra_kwargs)
+ except TypeError:
+ if maxiter is not None or callback is not None:
@josef-pkt
josef-pkt Oct 27, 2013 Member

maxiter will always be non-None because it is set to maxiter=100 in the function signature.

@argriffing
argriffing Oct 27, 2013

Right, this is what I was referring to in #1147 (comment)

@jseabold
jseabold Oct 27, 2013 Member

There's nothing to change though right? Users should be warned that it has no effect when using old scipy.

@josef-pkt
josef-pkt Oct 27, 2013 Member

But it always raises the warning with old scipy if the user uses the defaults.

I would set maxiter=None in the signature, keep a copy (maxiter_ = maxiter), and then assign the default: if maxiter is None: maxiter = 100.

More code, but then the warning fires only if the user explicitly sets maxiter.
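A sketch of that proposal (signature simplified, names illustrative): maxiter=None acts as a sentinel, so the fallback for old scipy warns only when the caller explicitly set one of the unsupported options.

from warnings import warn

from scipy import optimize

def _fit_mle_lbfgs(f, score, start_params, maxiter=None, callback=None):
    user_set_maxiter = maxiter is not None
    if maxiter is None:
        maxiter = 100
    try:
        return optimize.fmin_l_bfgs_b(f, start_params, fprime=score,
                                      maxiter=maxiter, callback=callback)
    except TypeError:  # old scipy without maxiter/callback support
        if user_set_maxiter or callback is not None:
            warn("fmin_l_bfgs_b does not support maxiter or callback; "
                 "update your scipy, otherwise they have no effect.")
        return optimize.fmin_l_bfgs_b(f, start_params, fprime=score)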

@jseabold
jseabold Oct 27, 2013 Member

On Sun, Oct 27, 2013 at 11:55 AM, Josef Perktold notifications@github.com wrote:

But it always raises the warning with old scipy, if the user uses the defaults.

Yes. The warning is then "your version of scipy is so old you're missing features that we take for granted in the defaults. You need to do work on your code to set maxiter=None, if you don't want to be reminded."

@josef-pkt josef-pkt and 2 others commented on an outdated diff Oct 27, 2013
statsmodels/tsa/arima_model.py
@@ -846,6 +839,8 @@ def fit(self, order=None, start_params=None, trend='c', method = "css-mle",
if transparams: # transform initial parameters to ensure invertibility
start_params = self._invtransparams(start_params)
+ # NOTE: after having added 'lbfgs' to the list of fitting methods,
+ # the solver-is-None branch should no longer be necessary
if solver is None: # use default limited memory bfgs
@josef-pkt
josef-pkt Oct 27, 2013 Member

solver is None will also no longer be true, since the default is now solver='bfgs'

@josef-pkt
josef-pkt Oct 27, 2013 Member

OK, that's fine, just redundant for now. Except: why is bounds set here but not below if solver='bfgs'?

@argriffing
argriffing Oct 27, 2013

Yes, I was keeping this section only for reference, and I'll delete it before merging.

@jseabold
jseabold Oct 27, 2013 Member

bounds is set in the lbfgs fit function now for all models.

@argriffing
argriffing Oct 27, 2013

It's here:
https://github.com/argriffing/statsmodels/blob/9ed378d6836e347c89b2f2c2487f445b09db29ee/statsmodels/base/model.py#L507
The bounds are always set to Nones because we only want L-BFGS, dropping the -B (boundedness) from L-BFGS-B.

@argriffing

There is still at least one major thing to change in this PR before merging. The problem is that fit() sends a default maxiter of 35 to the lbfgs mle optimizer. Previously this was not used, because fmin_l_bfgs_b was being called without a maxiter. Now, when new scipy's fmin_l_bfgs_b is sent a maxiter of 35, the tests fail because they do not get enough iterations. I'm not sure what the best solution is, because it involves multiple layers within statsmodels, and I'm not so familiar with the layers.

@jseabold
Member

Hmm, I'd just adjust the default maxiter in fit to 50.

I also see a warning in the test suite.

/home/skipper/statsmodels/statsmodels-argriffing/statsmodels/tsa/arima_model.py:1458: ComplexWarning: Casting complex values to real discards the imaginary part
  ('S.D. of innovations', ["%#5.3f" % self.sigma2**.5]),

It looks like we're still carrying around a complex sigma2 somewhere.

@josef-pkt josef-pkt and 2 others commented on an outdated diff Oct 27, 2013
statsmodels/base/model.py
@@ -471,6 +490,54 @@ def _fit_mle_bfgs(f, score, start_params, fargs, kwargs, disp=True,
return xopt, retvals
+def _fit_mle_lbfgs(f, score, start_params, fargs, kwargs, disp=True,
+ maxiter=100, callback=None, retall=False,
+ full_output=True, hess=None):
+
+ # Pass the following keyword argument names through to fmin_l_bfgs_b
+ # if they are present in kwargs, otherwise use the fmin_l_bfgs_b
+ # default values.
+ names = ('m', 'pgtol', 'factr', 'maxfun', 'approx_grad')
+ extra_kwargs = dict((x, kwargs[x]) for x in names if x in kwargs)
+
+ if extra_kwargs.get('approx_grad', False):
+ score = None
+
+ epsilon = kwargs.setdefault('epsilon', 1e-8)
+ bounds = [(None, None)] * len(start_params)
@josef-pkt
josef-pkt Oct 27, 2013 Member

Given that we now include a constrained solver, we should let users take advantage of it:
if 'bounds' not in kwargs

@argriffing
argriffing Oct 27, 2013

fmin_l_bfgs_b includes another feature which I would like to take advantage of: the combined evaluation of a function and its derivative. But I am putting this off until a later PR. Maybe that later PR could also include optional bounds? Also, maybe the solver name should be 'lbfgsb' instead of 'lbfgs' if bounds are allowed :)

@josef-pkt
josef-pkt Oct 27, 2013 Member

That's a different issue, about computational efficiency.
I don't see a need to postpone the check here for whether the user has already defined bounds in kwargs.

@jseabold
jseabold Oct 27, 2013 Member

I would prefer to also leave the bounds to a later PR. It's a different feature than the sparse hessian we get here. I plan to handle this separately and generally in #1121. The current constrained optimization is not general yet either.

@coveralls

Coverage Status

Coverage remained the same when pulling 79fa713 on argriffing:add-lbfgs-fit into 1bc6576 on statsmodels:master.

@coveralls

Coverage Status

Coverage remained the same when pulling 6f47919 on argriffing:add-lbfgs-fit into 1bc6576 on statsmodels:master.

@argriffing

It looks like we're still carrying around a complex sigma2 somewhere.

If this is not OK, maybe suggest a test to show the failure? I'm not familiar with econometrics so I do not understand the models where it fails.

@jseabold
Member

I just mean that it's still getting set somewhere in a hessian or score call, so the refactoring isn't complete. I'll look at it.

@argriffing

I added the bounds and renamed lbfgs -> lbfgsb, although somewhat reluctantly because constraints are not used or tested in statsmodels, and I suspect it is not a good policy to always wrap all features just because they can be wrapped (http://en.wikipedia.org/wiki/You_aren't_gonna_need_it).

@josef-pkt
Member

bounds are not a supported feature but an available one.
Essentially, we get requests for constraints again and again, and having bounds available allows us to write experimental code for them before we ever get full bounds/constraints support.

@josef-pkt
Member

When we add support for constraint solvers, we still have to decide whether we want the scipy minimize interface or something closer to the original constraints in the solvers.

@jseabold
Member

I disagree with the lbfgs -> lbfgsb change and think it should be changed back. There are no bounds yet, so let's not do this. It's just more typing, and the eventual constrained optimization will involve unnecessary checks.

@jseabold
Member

This is not a PR about constrained optimization. It's a PR that helps with memory-heavy optimization. I have a separate PR already dealing with the optimization infrastructure and a local implementation of constrained optimization (that will likely end up being a separate PR from the optimization infrastructure).

@argriffing

I agree with @jseabold, maybe when this is merged you can somehow pick all commits except the lbfgs->lbfgsb commit? I'm bad with git so I'm afraid of screwing up my repo.

Also I have no clue about that complex number issue. I assume it's a problem with ARMA/ARIMA and not with l_bfgs_b.

@jseabold
Member

I can walk you through removing the commit, when we settle this, if you'd like. I can also do it myself when merging.

@josef-pkt
Member

What happened to github? I don't see any comments in "Files Changed" anymore.


maxiter was only added in scipy 0.12. That's too new to add warnings by default.

@jseabold
Member

Yeah, I noticed that too. I assumed there was a force push or they were buried in an "outdated diff" or something. Ok, fair enough.

@josef-pkt
Member

I also think keeping the name as lbfgs is better (bounds are not supported).
But I'm in favor of hidden options, if they make it easier to write the next enhancements (especially when they are just one line, as in this case).

@argriffing

maxiter was only added in scipy 0.12. That's too new to add warnings by default.

Ok, fair enough.

Should I just remove the warning? The fitting will still depend on the scipy version, because newer scipy will cap the iterations at 50. Personally I think it is OK for the development version of statsmodels to depend on the most recently released scipy version, but I understand this is not the majority opinion.

@argriffing

I also think keeping the name to lbfgs is better (bounds are not supported)
But I'm in favor of hidden options,
if they make it easier to write the next enhancements (especially if they are just one line as in this case.)

I'm not sure which one line you have in mind, but I'll change the name back.

@jseabold
Member

Yes, changing the name is fine. I'm going to have to fix merge conflicts when rebasing my branch anyway.

@josef-pkt
Member

The latest version looks good to me.

@coveralls

Coverage Status

Coverage remained the same when pulling 060248b on argriffing:add-lbfgs-fit into 1bc6576 on statsmodels:master.

@coveralls

Coverage Status

Coverage remained the same when pulling 7697b6d on argriffing:add-lbfgs-fit into 1bc6576 on statsmodels:master.

@argriffing

I removed the warning. Personally, I would rather check scipy versions explicitly (using LooseVersion or something), because that's what we are really doing; catching TypeError as a scipy-version proxy could mask genuine type errors inside the fmin call.
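For illustration, the explicit check could look something like this (a sketch assuming 0.12 is the cutoff, per the discussion above; the toy objective is not statsmodels code):

from distutils.version import LooseVersion

import numpy as np
import scipy
from scipy import optimize

# fmin_l_bfgs_b gained maxiter/callback support in scipy 0.12
have_new_lbfgs = LooseVersion(scipy.__version__) >= LooseVersion('0.12')

def f(x):
    return np.sum((x - 3.0) ** 2)

def fprime(x):
    return 2.0 * (x - 3.0)

x0 = np.zeros(2)
if have_new_lbfgs:
    retvals = optimize.fmin_l_bfgs_b(f, x0, fprime=fprime,
                                     maxiter=100, callback=None)
else:
    # older scipy: omit the unsupported keywords entirely
    retvals = optimize.fmin_l_bfgs_b(f, x0, fprime=fprime)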

@jseabold
Member

I'm fine with that. I worried a bit about just catching TypeErrors too.

@josef-pkt
Member

Checking the scipy version is also fine.

BTW: running the test suite on current master I also get a runtime warning related to L-BFGS-B that might be resolved by this PR as a side effect?
C:\Programs\Python27\lib\site-packages\scipy\optimize\_minimize.py:297: RuntimeWarning: Method L-BFGS-B does not use Hessian information (hess)

@jseabold
Member

It's fixed in #1121.

@argriffing

I've added the explicit scipy version check. As far as I know, the only remaining weirdness about this PR is the complex sigma2 which I guess @jseabold is investigating.

@jseabold
Member

Ah, right. I saw this earlier but forgot to comment. The loglike_css method also needs the set_sigma2 keyword.

@coveralls

Coverage Status

Coverage remained the same when pulling 713acdf on argriffing:add-lbfgs-fit into 1bc6576 on statsmodels:master.

@jseabold
Member

Let me check a bit more first.

@jseabold
Member

Yeah this should do it

diff --git a/statsmodels/tsa/arima_model.py b/statsmodels/tsa/arima_model.py
index f365295..fed0540 100644
--- a/statsmodels/tsa/arima_model.py
+++ b/statsmodels/tsa/arima_model.py
@@ -670,7 +670,7 @@ class ARMA(tsbase.TimeSeriesModel):
         if method in ['mle', 'css-mle']:
             return self.loglike_kalman(params, set_sigma2)
         elif method == 'css':
-            return self.loglike_css(params)
+            return self.loglike_css(params, set_sigma2)
         else:
             raise ValueError("Method %s not understood" % method)

@@ -680,7 +680,7 @@ class ARMA(tsbase.TimeSeriesModel):
         """
         return KalmanFilter.loglike(params, self, set_sigma2)

-    def loglike_css(self, params):
+    def loglike_css(self, params, set_sigma2=True):
         """
         Conditional Sum of Squares likelihood function.
         """
@@ -705,7 +705,8 @@ class ARMA(tsbase.TimeSeriesModel):

         ssr = np.dot(errors,errors)
         sigma2 = ssr/nobs
-        self.sigma2 = sigma2
+        if set_sigma2:
+            self.sigma2 = sigma2
         llf = -nobs/2.*(log(2*pi) + log(sigma2)) - ssr/(2*sigma2)
         return llf

@argriffing

Yeah this should do it

done

@jseabold
Member

Great. Thanks for seeing this through. I think it's a nice addition. Long on my TODO list.

@coveralls

Coverage Status

Coverage remained the same when pulling c8dc2a2 on argriffing:add-lbfgs-fit into 1bc6576 on statsmodels:master.

@argriffing

So this can be merged then? After it's merged, I'd like to add simultaneous loglike and score. Although my eventual target is MNLogit with 'big data', MNLogit is not so great for testing this feature because lbfgs is not the best way for fitting models/data that are small enough to be good test cases. Could you guys suggest a better model/data for testing this feature (loglike+score)? @jseabold had mentioned that ARIMA has very similar loglike and score calculations. Do you have links/references to the formulas?

@jseabold
Member

This is the best reference I used

http://www.amazon.com/Series-Analysis-Methods-Statistical-Science/dp/019964117X

I don't have it at the moment, but there should be a section on computing the derivatives while you compute the likelihood for the Kalman Filter ARMA model. If not, I'd try googling.

@jseabold
Member

Ok by me to merge.

@josef-pkt
Member

Could you guys suggest a better model/data for testing this feature (loglike+score)?

For setting up the pattern/support, a simple model like Logit might be easiest. However, AR(I)MA is most likely the one where it would currently have the most performance impact.

There are some differences:
In Logit and other discrete models we have explicit gradient/score calculations, so we only need to calculate them jointly and connect those to the optimizer.
In AR(I)MA the score uses numerical differentiation. For numerical derivatives, we would have to add the additional returns back in the numdiff functions to calculate results jointly (we dropped them at some point).
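For the discrete-model case, the hookup could be as small as this (a sketch; model stands for a statsmodels model instance such as Logit, and the sign flips reflect that the optimizer minimizes the negative log-likelihood):

from scipy import optimize

def make_objective(model):
    # joint evaluation: one callable returning
    # (negative loglike, negative score) as the (f, g) pair
    def loglike_and_score(params):
        return -model.loglike(params), -model.score(params)
    return loglike_and_score

# hypothetical usage with a discrete model instance `model`:
# xopt, fval, info = optimize.fmin_l_bfgs_b(make_objective(model),
#                                           start_params)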

@jseabold
Member

Re: ARIMA. You would drop the numerical calculations completely. You can compute the score in the same Kalman Filter loop as the likelihood by adding a few lines, so the exact derivative and log-likelihood could be returned with only one loop. I don't recall the details but it should be in the Durbin and Koopman book.

@argriffing

If not, I'd try googling.

I asked instead of googling, because I assume that there are dozens or hundreds of variants or alternative parameterizations of the model, and I wanted to be sure to get the one that matches the log likelihood implemented in statsmodels. I just hiked to the library and checked out the Durbin and Koopman book, so maybe it will work!

Ok by me to merge.

I have no power here, but maybe @josef-pkt would be willing to click the button...

@jseabold
Member

Yeah, let me know. It should be pretty straightforward even to just do the math but that will help you figure out what's what. Let me know if you don't have any luck and I'll see if I can find some other references, but I don't have much time to spend on this right now. It might be easier to do the Logit first since the analytic derivatives are already done. The MNLogit derivatives are a little hairy. Took me a while to get all the axes right.

@josef-pkt josef-pkt merged commit 915abdc into statsmodels:master Oct 28, 2013

1 check passed

default The Travis CI build passed
@josef-pkt
Member

merged

Thanks @argriffing

@argriffing

Yeah, let me know. It should be pretty straightforward even to just do the math

Sorry, I got lost within all of the kalman and AR* code, so I tested with MNLogit instead.
