
ENH: use memoization in MINPACK routines #236

Closed
wants to merge 3 commits

4 participants

@dlax
SciPy member
dlax commented Jun 4, 2012

This implements memoization of the objective function and Jacobian for MINPACK routines.
See #130 for a rationale and use-case.

@pv pv and 1 other commented on an outdated diff Jun 4, 2012
scipy/optimize/optimize.py
+
+class MemoizeFun(object):
+ """ Decorator that caches the value of the objective function each time
+ it is called or only the first time if `first_only==True`."""
+ def __init__(self, fun, first_only=False):
+ self.fun = fun
+ self.calls = 0
+ self.first_only = first_only
+
+ def __call__(self, x, *args):
+ if self.calls == 0:
+ self.x = numpy.asarray(x).copy()
+ self.f = self.fun(x, *args)
+ self.calls += 1
+ return self.f
+ elif self.first_only:
@pv
SciPy member
pv added a note Jun 4, 2012

I think this doesn't work --- if first_only is True, the cached function value is never used.

@dlax
SciPy member
dlax added a note Jun 4, 2012
def __call__(self, x, *args):
    if not hasattr(self, 'x'):
        self.x = numpy.asarray(x).copy()
        self.f = self.fun(x, *args)
        self.calls = 1
        return self.f
    elif self.first_only and self.calls > 1:
        return self.fun(x, *args)
    elif numpy.any(x != self.x):
        self.x = numpy.asarray(x).copy()
        self.f = self.fun(x, *args)
        self.calls += 1
        return self.f
    else:
        return self.f

Does this look better?
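
For reference, the revised `__call__` above can be exercised as a self-contained sketch (the class shell and the counting driver below are added for illustration and are not part of the diff):

```python
import numpy

class MemoizeFun:
    # Sketch of the revised decorator from the comment above
    def __init__(self, fun, first_only=False):
        self.fun = fun
        self.first_only = first_only

    def __call__(self, x, *args):
        if not hasattr(self, 'x'):
            # First call: cache the point and the function value
            self.x = numpy.asarray(x).copy()
            self.f = self.fun(x, *args)
            self.calls = 1
            return self.f
        elif self.first_only and self.calls > 1:
            # Only the first value is cached in first_only mode
            return self.fun(x, *args)
        elif numpy.any(x != self.x):
            # New point: recompute and refresh the cache
            self.x = numpy.asarray(x).copy()
            self.f = self.fun(x, *args)
            self.calls += 1
            return self.f
        else:
            return self.f

count = 0

def sq_norm(x):
    global count
    count += 1
    return float(numpy.sum(x ** 2))

m = MemoizeFun(sq_norm)
x0 = numpy.array([3.0, 4.0])
m(x0)  # first call evaluates sq_norm
m(x0)  # same point: served from the cache
```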

@dlax
SciPy member
dlax added a note Jun 4, 2012

fixed in a16c58a

@rgommers
SciPy member
rgommers commented Jun 4, 2012

OK, I'll wait; let me know when it's ready.

dlax added some commits Jun 4, 2012
@dlax dlax ENH: use memoization in MINPACK routines cce626f
@dlax dlax FIX: copy __name__ attribute upon memoization in optimize
This is needed since, for some kinds of failures, MINPACK routines output
an error message which refers to the __name__ attribute of the objective
function and jacobian.
cd8ba57
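
The fix in cd8ba57 is small but necessary: when MINPACK reports certain failures, its error message mentions the `__name__` of the objective function or Jacobian, and a plain wrapper class has no such attribute. A minimal sketch of the idea (the simplified class and `my_residuals` are hypothetical):

```python
class MemoizeFun:
    # Sketch: carry the wrapped function's __name__ across memoization,
    # since MINPACK error messages refer to it (per commit cd8ba57).
    def __init__(self, fun):
        self.fun = fun
        # Copy the attribute so error messages still name the user's function
        self.__name__ = getattr(fun, '__name__', repr(fun))

    def __call__(self, x, *args):
        return self.fun(x, *args)

def my_residuals(p):
    return p

wrapped = MemoizeFun(my_residuals)
wrapped.__name__  # 'my_residuals'
```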
@dlax
SciPy member
dlax commented Jun 5, 2012

I've rebased the branch to ease merging.
It is ready now, I think.

@yosefm
yosefm commented Jun 7, 2012

Tested, it is slower than vanilla SciPy for me.
I have to apologize - when I tested in #130 I made a PYTHONPATH mistake, so I actually didn't test your version there. But now it's ok.

@dlax
SciPy member
dlax commented Jun 7, 2012
@pv
SciPy member
pv commented Jun 7, 2012

Yosef's problem is probably mainly Python overhead. Adding caching will, however, be useful in the opposite case, when the objective function is slow. One can make the memoization still faster by just assuming that the first two calls are at the same point, rather than explicitly checking for this condition, which is always true...

@dlax
SciPy member
dlax commented Jun 7, 2012

One can make the memoization still faster by just assuming that the first two calls are at the same point, rather than explicitly checking for this condition, which is always true...

I don't understand this. What do you suggest?

If this were to be dropped, note (to self as well) that cd8ba57 contains a fix that has to be applied anyways.

@pv
SciPy member
pv commented Jun 7, 2012

@dlaxalde: assume that np.all(x == self.x) is true for the first call. The optimization algorithms probably always first evaluate the function value at the input point, so there is no need to check what the input argument actually is.

@dlax
SciPy member
dlax commented Jun 7, 2012

assume that np.all(x == self.x) is true for the first call. The optimization algorithms probably always first evaluate the function value at the input point, so there is no need to check what the input argument actually is.

That check is already skipped on the first call, since self.calls == 0 (and self.x does not exist yet at that point, by the way).

@pv
SciPy member
pv commented Jun 7, 2012
@dlax
SciPy member
dlax commented Jun 7, 2012

Please see 3d24d3b.
Not sure it will solve your problem @yosefm but at least it skips one more call...

@pv pv added the PR label Feb 19, 2014
@pv pv removed the PR label Aug 13, 2014
@dlax dlax closed this Jan 11, 2015
@dlax dlax deleted the dlax:enh/optimize/memoize-minpack branch Jan 11, 2015