This adds root finding to pchip; this can be done optimally because pchip is a monotonic interpolation. #3260

Open
wants to merge 3 commits into scipy:master from bmcage:monohermspline

5 participants

Benny Pauli Virtanen Coveralls Evgeni Burovski argriffing
Benny

An alternative would be fsolve and friends, but those would not use the known structure of pchip.

A typical use for pchip is an interpolation that can be inverted. This adds root finding to pchip that uses the pchip form to invert quickly.

I am not sure this is the best way. It might be better to use pchip to obtain yi = f(xi) and yi' = f'(xi), then create the monotone cubic interpolation through these points with the corresponding inverse derivatives as self.pchipinv. This should be the correct f^-1.
Then, when asked for roots, one can use this self.pchipinv.

The algorithm proposed here works differently: find the cubic piece containing the given y value and compute its 3 cubic roots. As the function is monotone, only one of the roots lies in the correct interval.

Some problems:
1. Computing cubic roots gives imaginary parts which need to be discarded.
See the np.allclose(ires, 0., atol=1e-10) check that discards them; the 1e-10 tolerance is hard-coded.
2. The return value can be numerically close to the bounding edges.
See the np.allclose(x1-rres, 0., atol=1e-10) check that recognizes this; the 1e-10 tolerance is hard-coded.
3. newaxis is used to dump the results: x[c] = ress[:, np.newaxis]
I believe this is correct for the possible use cases.
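[Editor's note: a minimal standalone sketch of this inversion step. The helper name and tolerances are mine (the atol mirrors the hard-coded 1e-10 above); it is a plain-numpy illustration, not the PR's code: shift one cubic piece by y, take its roots, discard the numerically complex ones (problem 1), and accept-then-clip a root that sits just outside the interval edges (problem 2).]

```python
import numpy as np

def invert_monotone_cubic(c, x0, x1, y, atol=1e-10):
    """Solve c[0]*x**3 + c[1]*x**2 + c[2]*x + c[3] == y for the single
    root in [x0, x1]; the cubic is assumed monotone on that interval."""
    coeffs = np.asarray(c, dtype=float).copy()
    coeffs[-1] -= y
    roots = np.roots(coeffs)
    # problem 1: cubic roots come back complex; discard the ones whose
    # imaginary part is not numerically zero
    real = roots[np.isclose(roots.imag, 0.0, atol=atol)].real
    # problem 2: the surviving root can land numerically just outside the
    # interval, so accept within atol and clip back onto [x0, x1]
    inside = real[(real >= x0 - atol) & (real <= x1 + atol)]
    return float(np.clip(inside[0], x0, x1))
```

For a monotone piece exactly one root survives both filters, which is what makes the per-interval inversion well defined.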

Benny bmcage This adds root finding to pchip as this is possible because
pchip is a monotonic interpolation.
Alternative would be fsolve and friends, but those would not use the known structure of pchip
5239a59
Pauli Virtanen
Owner
pv commented

It could be more efficient to convert pchip to use the new PPoly piecewise polynomial representation, which has efficient root finding already implemented.

We will probably eventually deprecate polyint.PiecewisePolynomial in favor of that.

Benny

PPoly seems suboptimal to use. It's not actually root finding but inversion, more like fsolve: you then know the interval. After all, pchip is monotone, so you know there is one, and only one, value that satisfies y = f(x).
My problems 1 and 2, however, can be fixed by using the same inversion function as PPoly:
real_roots in https://github.com/scipy/scipy/blob/master/scipy/interpolate/_ppoly.pyx

Would this real_roots be exposed?

Pauli Virtanen
Owner
pv commented

The root finding internal function is exposed as scipy.interpolate._ppoly.real_roots.

I see, the point is that you can find the correct interval fast via searchsorted in y. One could think about adding an additional argument "assume_monotonic" to real_roots, so that the binary search can be done inside the routine itself.
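[Editor's note: the interval-location idea can also be sketched without touching the Cython routine. This is my own illustration using today's PchipInterpolator (the 2014-era pchip was PiecewisePolynomial-based) and a generic bracketed solver instead of real_roots: on strictly increasing data, searchsorted in the original yi yields the single bracketing piece.]

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.optimize import brentq

def monotone_root(p, yi, y):
    """Invert a pchip built on strictly increasing yi: binary search
    (searchsorted) picks the one bracketing interval, then a bracketed
    scalar solve runs on that interval only."""
    i = np.clip(np.searchsorted(yi, y) - 1, 0, len(yi) - 2)
    return brentq(lambda t: float(p(t)) - y, p.x[i], p.x[i + 1])

x = np.linspace(0.0, 10.0, 11)
yi = x**3
p = PchipInterpolator(x, yi)
r = monotone_root(p, yi, 8.5)   # p(r) == 8.5 up to solver tolerance
```

Doing the search inside real_roots, as suggested above, would additionally avoid the Python-level callback per evaluation.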

Benny

I tried the other possible approach: add an inverse function that returns an interpolated object which is the inverse. This does not work, however; it is too unstable. The numerical precision errors in the solution values and inverse derivatives are such that the inverse interpolating polynomial is very different over large intervals. So that approach is a no-go; root finding will be the only acceptable approach.
So an assume_monotonic added to real_roots is an option. The approach here can be done via inheritance if needed in current scipy.
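[Editor's note: for reference, the rejected construction can be written down with modern class names (CubicHermiteSpline did not exist in 2014; data and variable names are mine): interpolate xi as a function of yi with the inverse derivatives 1/f'(xi). The round trip pinv(p(t)) == t holds only at the nodes; on this smooth toy data the in-between error is small, while the comment above reports it becoming unacceptable on vGn-type data.]

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicHermiteSpline

x = np.linspace(1.0, 4.0, 9)
p = PchipInterpolator(x, x**3)          # strictly increasing forward map

# candidate inverse interpolant: x as a function of y, with node
# derivatives dx/dy = 1 / f'(x) taken from the forward interpolant
yi = p(x)
pinv = CubicHermiteSpline(yi, x, 1.0 / p.derivative()(x))

# exact at the nodes, only approximate in between
t = np.linspace(1.1, 3.9, 50)
roundtrip_err = np.max(np.abs(pinv(p(t)) - t))
```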

Coveralls

Coverage Status

Coverage remained the same when pulling c6401f1 on bmcage:monohermspline into 233ad82 on scipy:master.

Pauli Virtanen
Owner
pv commented

This should be reimplemented on top of current master branch, as gh-3267 converted Pchip to use BPoly.

Actually, I think this PR would best be done by adding a method def solve(self, y, assume_monotonic=False) to the PPoly piecewise polynomial class. After that, a similar method could be added to Pchip.

For assume_monotonic=True, we should use the faster approach from this PR, and for assume_monotonic=False a slower approach that finds all the roots. Ideally, the solve method would be written in Cython, directly in the real_roots routine.
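[Editor's note: later scipy versions did grow a solve(y) method on PPoly (without the assume_monotonic fast path discussed here), and Pchip eventually became a PPoly subclass, so on monotone data the inversion this PR targets can be spelled as follows; availability depends on the scipy version.]

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

x = np.linspace(0.0, 10.0, 11)
p = PchipInterpolator(x, x**3)      # strictly increasing data

# all x with p(x) == 8.5; exactly one root, since the data are monotone
roots = p.solve(8.5, extrapolate=False)
```

Without assume_monotonic, solve still examines every interval; the searchsorted shortcut from this PR would skip straight to the one bracketing piece.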

Evgeni Burovski
Collaborator

If/when assume_monotonic is added, the behavior should be at least very clearly documented:

In [21]: %history
import numpy as np
from scipy.interpolate import pchip
xi = [1., 2., 3., 4.]
yi = [-1., 1, -1, 1]
p = pchip(xi, yi)
In [22]: p.root(1)
Out[22]: array(3.9999999947957403)

In [23]: p.root(0.5)
Out[23]: array(3.6736481776669305)
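[Editor's note: the caveat could be enforced rather than only documented. A sketch of the guard such a fast path would need (hypothetical helper, not part of this PR): reject yi that is not strictly increasing before trusting searchsorted.]

```python
import numpy as np

def require_strictly_increasing(yi):
    """searchsorted-based inversion is only well defined when the data
    are strictly increasing; anything else (like yi = [-1, 1, -1, 1]
    above) silently returns a root from the wrong interval."""
    yi = np.asarray(yi)
    if not np.all(np.diff(yi) > 0):
        raise ValueError("yi must be strictly increasing for root()")
    return yi
```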
Evgeni Burovski
Collaborator

@bmcage can you provide an example where the inverse interpolation gives unacceptable numerical errors?

Benny

@ev-br I pushed my local branch with that test: https://github.com/bmcage/scipy/tree/monohermsplineinv
If you then run the tests: $ python runtests.py --python scipy/interpolate/tests/test_polyint.py
you will see the bad interpolation.
In test_inv_vGn of test_polyint.py, uncomment the part using pylab to create a plot to see it. That is the function I need to invert hundreds of thousands of times in an inverse algorithm.

Evgeni Burovski
Collaborator

Hmm... This example looks a bit artificial to me; is it actual data you're interpolating? What is the actual problem you're solving?
In any case, since h is logarithmically spaced and u follows a power law, have you tried working with them in log space?
[If this turns into a discussion about how to deal with these exact data, we might want to move it from a GitHub issue to the scipy-user mailing list]

Benny

I'm doing it in log space in reality. It's just a test case which shows how the approach can fail dramatically. Root finding does not cause such errors.

Pauli Virtanen pv added needs-work PR labels
Pauli Virtanen pv removed the PR label
Commits on Jan 31, 2014
  1. Benny

    This adds root finding to pchip as this is possible because

    bmcage authored
    pchip is a monotonic interpolation.
    Alternative would be fsolve and friends, but those would not use the known structure of pchip
Commits on Feb 7, 2014
  1. Benny
Commits on Feb 28, 2014
  1. Benny

    add example to slsqp itself

    bmcage authored
72 scipy/interpolate/polyint.py
@@ -957,6 +957,7 @@ def __init__(self, x, y, axis=0):
xp = x.reshape((x.shape[0],) + (1,)*(y.ndim-1))
yp = np.rollaxis(y, axis)
+ self.origyi = yp
data = np.empty((yp.shape[0], 2) + yp.shape[1:], y.dtype)
data[:,0] = yp
@@ -1017,6 +1018,77 @@ def _find_derivatives(x, y):
return dk.reshape(y_shape)
+ def root(self, y):
+ """
+ Evaluate for which `x` we have `f(x) = y`.
+ As PchipInterpolator is monotonic, the solution is unique if the
+ interpolator has been constructed with y[i+1] > y[i].
+
+ Parameters
+ ----------
+ y : array-like
+ Point or points at which to evaluate `f^{-1}(y)=x`
+
+ Returns
+ -------
+ d : ndarray
+ root interpolated at the y-points.
+ """
+ # first determine the correct cubic polynomial
+ y, y_shape = self._prepare_x(y)
+ if _isscalar(y):
+ pos = np.clip(np.searchsorted(self.origyi, y) - 1, 0, self.n-2)
+ poly = self.polynomials[pos]
+ x = self._poly_inv(poly, y)
+ else:
+ m = len(y)
+ pos = np.clip(np.searchsorted(self.origyi, y) - 1, 0, self.n-2)
+ x = np.zeros((m, self.r), dtype=self.dtype)
+ if x.size > 0:
+ for i in xrange(self.n-1):
+ c = pos == i
+ if not any(c):
+ continue
+ poly = self.polynomials[i]
+ ress = self._poly_inv(poly, y[c])
+ x[c] = ress[:, np.newaxis]
+ return self._finish_y(x, y_shape)
+
+ @staticmethod
+ def _poly_inv(cubic, y):
+ """Given a cubic KroghInterpolator polynomial,
+ we determine the root f(x) = y, where x must be
+ bounded by the edges of the Krogh polynomial
+
+ Parameters
+ ----------
+ y : array-like
+ Point or points at which to evaluate `f^{-1}(y)=x`
+ Returns
+ -------
+ d : ndarray
+ root of the cubic polynomial interpolated at the y-points.
+ """
+ from scipy.interpolate._ppoly import real_roots
+ x0 = cubic.xi[0]
+ x2 = cubic.xi[2]
+ if (cubic.yi[0][0] >= cubic.yi[2][0]):
+ raise ValueError("Not a strictly increasing monotone function")
+ #convert Krogh c structure to c structure of PPoly
+ ourc = np.empty((4,1), cubic.c.dtype)
+ ourc[0, 0] = cubic.c[3,0]
+ ourc[1, 0] = cubic.c[2,0]
+ ourc[2, 0] = cubic.c[1,0]
+ ourc[1,0] += cubic.c[3]*(x0-x2)
+ ourc = ourc.reshape(4,1,1)
+ y = np.asarray(y)
+ result = np.empty(y.shape, float)
+ for ind, yval in enumerate(y):
+ ourc[3, 0, 0] = cubic.c[0,0] - yval
+ roots = real_roots(ourc, np.array([x0,x2], float), 0, 0)
+ result[ind] = roots[0]
+ return result
+
def pchip_interpolate(xi, yi, x, der=0, axis=0):
"""
39 scipy/interpolate/tests/test_polyint.py
@@ -363,6 +363,45 @@ def test_wrapper(self):
assert_almost_equal(P.derivative(self.test_xs,2),piecewise_polynomial_interpolate(self.xi,self.yi,self.test_xs,der=2))
assert_almost_equal(P.derivatives(self.test_xs,2),piecewise_polynomial_interpolate(self.xi,self.yi,self.test_xs,der=[0,1]))
+class CheckInvertPchip(TestCase):
+
+ def setUp(self):
+ x = np.linspace(0, 10, 11)
+ self.pch_lin = pchip(x, x)
+ self.pch_kwa = pchip(x, np.power(x,2))
+ self.pch_cub = pchip(x, np.power(x,3))
+ self.pch_limit = pchip([0,2], [0,4])
+ h = -np.power(10, x[::-1])/5
+ u = 1/np.power(1+ np.power(-0.015 * h, 1.3), 1. - 1./1.3)
+ self.pch_vGn = pchip(h, u)
+
+ def test_inv_lin(self):
+ test = np.array([1., 3.5, 8.3])
+ assert_almost_equal(self.pch_lin.root(test), test)
+
+ def test_inv_kwa(self):
+ test = np.array([1., 2.1, 8.3])
+ assert_almost_equal(self.pch_kwa.root(np.power(test, 2)), test, 2)
+
+ def test_inv_cub(self):
+ test = np.array([1., 2.1, 8.3])
+ assert_almost_equal(self.pch_cub.root(np.power(test, 3)), test, 2)
+
+ def test_inv_vGn(self):
+ testx = np.array([-19., -54., -200., -500.])
+ testy = self.pch_vGn(testx)
+ invx = self.pch_vGn.root(testy)
+ assert_almost_equal(invx, testx, 7)
+
+ def test_scalar(self):
+ testx = -54
+ testy = self.pch_vGn(testx)
+ invx = self.pch_vGn.root(testy)
+ assert_almost_equal(invx, testx, 7)
+
+ def test_inv_limit(self):
+ test = 2
+ assert_almost_equal(self.pch_limit.root(test), 1)
if __name__ == '__main__':
run_module_suite()
83 scipy/optimize/slsqp.py
@@ -174,7 +174,88 @@ def fmin_slsqp(func, x0, eqcons=(), f_eqcons=None, ieqcons=(), f_ieqcons=None,
Examples
--------
- Examples are given :ref:`in the tutorial <tutorial-sqlsp>`.
+ Let us consider the problem of minimizing the following function under
+ ``x``
+
+ >>> from scipy.optimize import fmin_slsqp, minimize
+ >>> def fun(x, r=[4, 2, 4, 2, 1]):
+ ... return exp(x[0]) * (r[0] * x[0]**2 + r[1] * x[1]**2 +
+ ... r[2] * x[0] * x[1] + r[3] * x[1] +
+ ... r[4])
+
+ The first parameter is bounded > 0.1, the second > 0.2, so
+
+ >> bnds = array([[-inf]*2, [inf]*2]).T
+ >> bnds[:, 0] = [0.1, 0.2]
+
+ We optimize under the following equality constraints
+
+ >>> def feqcon(x, b=1):
+ ... return array([x[0]**2 + x[1] - b])
+
+ We can use the Jacobian of the equality constraint here
+
+ >>> def jeqcon(x, b=1):
+ ... return array([[2*x[0], 1]])
+
+ Next we consider the inequality constraint that ``x[0]*x[1] > -10``,
+
+ >>> def fieqcon(x, c=10):
+ ... return array([x[0] * x[1] + c])
+
+ We can use the Jacobian of the inequality constraint to speed up
+ optimization
+
+ >>> def jieqcon(x, c=10):
+ ... return array([[1, 1]])
+
+
+ For the ``minimize`` wrapper, a constraints dictionary can be constructed
+
+ >>> cons1 = ({'type': 'eq', 'fun': feqcon, 'args': (1, )},
+ ... {'type': 'ineq', 'fun': fieqcon, 'args': (10,)})
+ >>> cons2 = ({'type': 'eq', 'fun': feqcon, 'jac': jeqcon, 'args': (1, )},
+ ... {'type': 'ineq', 'fun': fieqcon, 'jac': jieqcon, 'args': (10,)})
+
+ Let us solve a bounded constraint problem and an equality/inequality
+ problem with and without using the Jacobians
+
+ >>> print(' Only bounds constraints '.center(72, '-'))
+ >>> print(' * fmin_slsqp')
+ >>> x, f = fmin_slsqp(fun, array([-1, 1]), bounds=bnds, disp=1,
+ ... full_output=True)[:2]
+ >>> print('Minimum x:', x)
+ >>> print(' * minimize wrapper')
+ >>> res = minimize(fun, array([-1, 1]), method='slsqp', bounds=bnds,
+ ... **{'disp': True})
+ >>> print('Minimum x:', res.x)
+
+ >>> print(' Equality and inequality constraints - No Jacobian'.center(72, '-'))
+ >>> print(' * fmin_slsqp')
+ >>> x, f = fmin_slsqp(fun, array([-1, 1]),
+ ... f_eqcons=feqcon,
+ ... f_ieqcons=fieqcon,
+ ... disp=1, full_output=True)[:2]
+ >>> print('Minimum x:', x)
+ >>> print(' * minimize wrapper')
+ >>> res = _minimize_slsqp(fun, array([-1, 1]), constraints=cons1,
+ ... **{'disp': True})
+ >>> print('Minimum x:', res.x)
+
+ >>> print(' Equality and inequality constraints - Jacobian'.center(72, '-'))
+ >>> print(' * fmin_slsqp')
+ >>> x, f = fmin_slsqp(fun, array([-1, 1]),
+ ... f_eqcons=feqcon, fprime_eqcons=jeqcon,
+ ... f_ieqcons=fieqcon, fprime_ieqcons=jieqcon,
+ ... disp=1, full_output=True)[:2]
+ >>> print('Minimum x:', x)
+ >>> print(' * minimize wrapper')
+ >>> res = _minimize_slsqp(fun, array([-1, 1]), constraints=cons2,
+ ... **{'disp': True})
+ >>> print('Minimum x:', res.x)
+
+
+ More examples are given :ref:`in the tutorial <tutorial-sqlsp>`.
"""
if disp is not None: