50 changes: 50 additions & 0 deletions doc/internals/barycentric_rational_interpolation.qbk
@@ -0,0 +1,50 @@
[section:barycentric Barycentric Rational Interpolation]

[h4 Synopsis]

``
#include <boost/math/tools/barycentric_rational_interpolation.hpp>
``


[h4 Description]

Barycentric rational interpolation is a high-precision interpolation class for non-uniformly spaced samples.
It requires [bigo](N) time for construction, and [bigo](N) time for each evaluation.
Linear-time evaluation is not optimal; the cubic B-spline, for instance, can be evaluated in constant time.
However, the cubic B-spline requires uniformly spaced samples, which are not always available.

Use of the class requires a vector of independent variables x[0], x[1], ..., x[n-1], where x[i+1] > x[i],
and a vector of dependent variables y[0], y[1], ..., y[n-1].
The call is trivial:

    boost::math::tools::barycentric_rational<double> interpolant(x.data(), y.data(), y.size());

This implicitly calls the constructor with approximation order 3, and hence the accuracy is [bigo](h^4).
In general, if you require an approximation order d, then the error is [bigo](h^(d+1)).
A call to the constructor with an explicit approximation order could be

    boost::math::tools::barycentric_rational<double>(x.data(), y.data(), y.size(), 5);

To evaluate the interpolant, simply use the call operator:

    double z = interpolant(1.02);

Although this algorithm is robust, it can surprise you. The main way this occurs is when there is some i such that x[i] - x[i-1] is many orders of magnitude smaller than x[i+1] - x[i].
This can cause a large "swing", where the interpolant dives rapidly to hit y[i] and then flies very high, since it is unconstrained until y[i+1], which is far away.

The precise quantification of this idea is encoded in the "local mesh ratio", discussed in Floater's paper on barycentric rational interpolation.

See

* [@http://www.mn.uio.no/math/english/people/aca/michaelf/papers/rational.pdf Barycentric rational interpolation with no poles and high rates of approximation]
* Floater, M.S. & Hormann, K., Numer. Math. (2007) 107: 315. doi:10.1007/s00211-007-0093-y


[endsect] [/section:barycentric Barycentric Rational Interpolation]

[/
Copyright 2017 Nick Thompson

Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
1 change: 1 addition & 0 deletions doc/math.qbk
@@ -658,6 +658,7 @@ and as a CD ISBN 0-9504833-2-X 978-0-9504833-2-0, Classification 519.2-dc22.
[include background/error.qbk] [/relative error NOT handling]
[include background/lanczos.qbk]
[include background/remez.qbk]
[include quadrature/double_exponential.qbk]
[include background/references.qbk]

[section:logs_and_tables Error logs and tables]
206 changes: 206 additions & 0 deletions doc/quadrature/double_exponential.qbk
@@ -0,0 +1,206 @@
[/
Copyright (c) 2017 Nick Thompson
Use, modification and distribution are subject to the
Boost Software License, Version 1.0. (See accompanying file
LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
]

[section:double_exponential Double-exponential quadrature]

[heading Synopsis]

``
#include <boost/math/quadrature/tanh_sinh.hpp>
#include <boost/math/quadrature/exp_sinh.hpp>
#include <boost/math/quadrature/sinh_sinh.hpp>

namespace boost{ namespace math{

template<class Real>
class tanh_sinh
{
public:
tanh_sinh(Real tol = sqrt(std::numeric_limits<Real>::epsilon()), size_t max_refinements = 15);

template<class F>
Real integrate(const F f, Real a, Real b, Real* error = nullptr, Real* L1 = nullptr) const;
};

template<class Real>
class exp_sinh
{
public:
exp_sinh(Real tol = sqrt(std::numeric_limits<Real>::epsilon()), size_t max_refinements = 9);

template<class F>
Real integrate(const F f, Real a = 0, Real b = std::numeric_limits<Real>::infinity(), Real* error = nullptr, Real* L1 = nullptr) const;
};

template<class Real>
class sinh_sinh
{
public:
sinh_sinh(Real tol = sqrt(std::numeric_limits<Real>::epsilon()), size_t max_refinements = 9);

template<class F>
Real integrate(const F f, Real* error = nullptr, Real* L1 = nullptr) const;
};

}}
``

[heading Overview]

The tanh-sinh quadrature routine provided by Boost is a rapidly convergent numerical integration scheme for holomorphic integrands.
By this we mean that the integrand is the restriction to the real line of a complex-differentiable function which is bounded on the interior of the unit disk /|z| < 1/.
More precisely, the integrands must lie within a Hardy space.
If your integrand obeys these conditions, it can be shown that tanh-sinh integration is optimal,
in the sense that, among all quadrature algorithms, it requires the fewest function evaluations to achieve a given accuracy for a random element of the Hardy space.
A basic example of how to use the tanh-sinh quadrature is shown below:

tanh_sinh<double> integrator;
auto f = [](double x) { return 5*x + 7; };
double Q = integrator.integrate(f, (double) 0, (double) 1);

The basic idea of tanh-sinh quadrature is that a variable transformation can cause the endpoint derivatives to decay rapidly.
When the derivatives at the endpoints decay much faster than the Bernoulli numbers grow,
the Euler-Maclaurin summation formula tells us that simple trapezoidal quadrature converges exponentially fast.


One very nice property of tanh-sinh quadrature is that it can handle singularities at the endpoints of the integration domain.
For instance, the following integrand, singular at both endpoints, can be efficiently evaluated to 100 binary digits:

auto f = [](Real x) { return log(x)*log(1-x); };
Real Q = integrator.integrate(f, (Real) 0, (Real) 1);

Although Boost's implementation of tanh-sinh quadrature can achieve very high precision, it is not truly arbitrary precision.
The reason is that vast speed improvements can be obtained by caching the weights and abscissas of the variable transformation

tanh(half_pi<Real>()*sinh(t));

These numbers have been precomputed up to 100 binary digits, so this is roughly the maximum precision that can be expected from this quadrature.
We think that this is a reasonable compromise between efficiency, accuracy, and compile time that will satisfy the vast majority of users.


Now onto the caveats: As stated before, the integrands must lie in a Hardy space to ensure rapid convergence.
Attempting to integrate a function which is not bounded on the unit disk by tanh-sinh can lead to very slow convergence.
For example, take the Runge function:

auto f1 = [](double t) { return 1/(1+25*t*t); };
Q = integrator.integrate(f1, (double) -1, (double) 1);

This function has poles at ±i/5, and as such it is not bounded on the unit disk.
However, the related function

auto f2 = [](double t) { return 1/(1+0.04*t*t); };
Q = integrator.integrate(f2, (double) -1, (double) 1);

has poles outside the unit disk (at ±5i), and is therefore in the Hardy space.
The integration is performed 22x faster for the second function!
If you do not understand the structure of your integrand in the complex plane, do performance testing before deployment.

Like the trapezoidal quadrature, the tanh-sinh quadrature produces an estimate of the L[sub 1] norm of the integrand along with the requested integral.
This establishes a scale against which to measure the tolerance, and provides an estimate of the condition number of the summation.
It can be queried as follows:


tanh_sinh<double> integrator;
auto f = [](double x) { return 5*x + 7; };
double error;
double L1;
double Q = integrator.integrate(f, (double) 0, (double) 1, &error, &L1);
double condition_number = L1/std::abs(Q);

If the condition number is large, the computed integral is worthless.
Occasionally, the condition number will be so large that a `boost::math::evaluation_error` is thrown.
However, the class is very conservative,
throwing only when the condition number is so large that the integral is essentially guaranteed to be worthless (when it exceeds the reciprocal of the unit roundoff).
The computation is often very poor well before the condition number exceeds this threshold,
so it is the responsibility of the user to check the L[sub 1] norm and evaluate whether the result is acceptable.

Although the tanh-sinh quadrature can compute integrals over infinite domains via variable transformations, these transformations can produce a very poorly behaved integrand.
For this reason, dedicated double-exponential variable transformations are provided that allow stable computation over infinite domains: these are the exp-sinh and sinh-sinh quadratures.

The sinh-sinh quadrature allows computation over the entire real line, and is called as follows:

sinh_sinh<double> integrator;
auto f = [](double x) { return exp(-x*x); };
double error;
double L1;
double Q = integrator.integrate(f, &error, &L1);

Note that the limits of integration are understood to be (-\u221E, \u221E).

For half-infinite intervals, the exp-sinh quadrature is provided:

exp_sinh<double> integrator;
auto f = [](double x) { return exp(-3*x); };
double error;
double L1;
double Q = integrator.integrate(f, (double) 0, std::numeric_limits<double>::infinity(), &error, &L1);

Either the left limit or the right limit must be infinite, but not both.
Endpoint singularities are supported by sinh-sinh and exp-sinh.
These quadrature algorithms are truly arbitrary precision, but this requires the abscissas and weights to be computed at runtime.
This means that the call to the constructor is expensive, so make sure to reuse the object for multiple integrations (if, say, you are assembling a stiffness matrix).
Subsequent calls to `integrator.integrate` are thread-safe, so assembling a stiffness matrix can be done in parallel.

[heading Tolerance and Max-interval Halvings]

The constructor for all three double-exponential quadratures supports two optional arguments: a tolerance, and a maximum number of interval halvings.
The tolerance is met when two subsequent estimates of the integral have absolute error less than `tol*L1`.
It is highly recommended that the tolerance be left at the default value of [radic][epsilon].
Since double-exponential quadrature converges exponentially fast for functions in Hardy spaces,
once the routine has *proved* that the error is ~[radic][epsilon], the error is in fact ~[epsilon].
If you request that the error be ~[epsilon], this tolerance might never be achieved (as the summation is not Kahan-stabilized), and the routine will simply flounder,
halving the interval to increase the precision of the integrand, only to be thwarted by floating-point roundoff.

The max interval halvings is the maximum number of times the interval can be cut in half before giving up.
If the requested accuracy has not been attained by then, the routine returns its best estimate, along with the `error` and `L1`,
which allow the user to decide whether another quadrature routine should be employed.

An example of this is

double tol = std::sqrt(std::numeric_limits<double>::epsilon());
size_t max_halvings = 12;
tanh_sinh<double> integrator(tol, max_halvings);
auto f = [](double x) { return 5*x + 7; };
double error, L1;
double Q = integrator.integrate(f, (double) 0, (double) 1, &error, &L1);
if (error*L1 > 0.01)
{
Q = some_other_quadrature_method(f, (double) 0, (double) 1);
}

[heading Caveats]

A few things to keep in mind while using the tanh-sinh, exp-sinh, and sinh-sinh quadratures:

These routines are *very* aggressive about approaching the endpoint singularities.
This allows many significant digits to be extracted, but also creates a hazard: roundoff error can cause the function to be evaluated at the endpoints themselves.
A few ways to avoid this: narrow the bounds of integration to, say, [a + [epsilon], b - [epsilon]]; make sure (a+b)/2 and (b-a)/2 are representable; and finally, if you think the compromise between accuracy and usability has gone too far in the direction of accuracy, file a ticket.

Both exp-sinh and sinh-sinh quadratures evaluate the functions they are passed at *very* large arguments.
You might understand that x[super 12]exp(-x) should be zero when x[super 12] overflows, but IEEE floating-point arithmetic does not.
Hence `std::pow(x, 12)*std::exp(-x)` is an indeterminate form whenever `std::pow(x, 12)` overflows.
So make sure your functions have the correct limiting behavior; for example

auto f = [](double x) {
    double t = exp(-x);
    if(t == 0)
    {
        return 0.0;
    }
    return t*pow(x, 12);
};

has the correct behavior for large /x/, but `auto f = [](double x) { return exp(-x)*pow(x, 12); };` does not.

Oscillatory integrals, such as the sinc integral, are poorly approximated by double-exponential quadrature.
Fortunately, the error estimate and L[sub 1] norm are massive for these integrals, which warns you of the problem; nonetheless, oscillatory integrals require different techniques.

References:

Tanaka, Ken’ichiro, et al. ['Function classes for double exponential integration formulas.] Numerische Mathematik 111.4 (2009): 631-655.

[endsect]
87 changes: 87 additions & 0 deletions example/barycentric_interpolation_example.cpp
@@ -0,0 +1,87 @@

// Copyright Nick Thompson, 2017

// Distributed under the Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt or
// copy at http://www.boost.org/LICENSE_1_0.txt).

#include <iostream>
#include <limits>
#include <vector>

//[barycentric_rational_example

/*`This example shows how to use barycentric rational interpolation, based on Walter Kohn's classic paper
"Solution of the Schrodinger Equation in Periodic Lattices with an Application to Metallic Lithium".
In this paper, Kohn repeatedly solves an ODE (the radial Schrodinger equation) given a potential
which is known only at non-equally spaced sample points.

If only he'd had the barycentric rational interpolant of boost::math!

References:
Kohn, W., and N. Rostoker. "Solution of the Schrödinger equation in periodic lattices with an application to metallic lithium." Physical Review 94.5 (1954): 1111.
*/

#include <boost/math/interpolators/barycentric_rational.hpp>
//] [/barycentric_rational_example]

int main()
{
// The lithium potential is given in Kohn's paper, Table I:
std::vector<double> r(45);
std::vector<double> mrV(45);

r[0] = 0.02; mrV[0] = 5.727;
r[1] = 0.04, mrV[1] = 5.544;
r[2] = 0.06, mrV[2] = 5.450;
r[3] = 0.08, mrV[3] = 5.351;
r[4] = 0.10, mrV[4] = 5.253;
r[5] = 0.12, mrV[5] = 5.157;
r[6] = 0.14, mrV[6] = 5.058;
r[7] = 0.16, mrV[7] = 4.960;
r[8] = 0.18, mrV[8] = 4.862;
r[9] = 0.20, mrV[9] = 4.762;
r[10] = 0.24, mrV[10] = 4.563;
r[11] = 0.28, mrV[11] = 4.360;
r[12] = 0.32, mrV[12] = 4.1584;
r[13] = 0.36, mrV[13] = 3.9463;
r[14] = 0.40, mrV[14] = 3.7360;
r[15] = 0.44, mrV[15] = 3.5429;
r[16] = 0.48, mrV[16] = 3.3797;
r[17] = 0.52, mrV[17] = 3.2417;
r[18] = 0.56, mrV[18] = 3.1209;
r[19] = 0.60, mrV[19] = 3.0138;
r[20] = 0.68, mrV[20] = 2.8342;
r[21] = 0.76, mrV[21] = 2.6881;
r[22] = 0.84, mrV[22] = 2.5662;
r[23] = 0.92, mrV[23] = 2.4242;
r[24] = 1.00, mrV[24] = 2.3766;
r[25] = 1.08, mrV[25] = 2.3058;
r[26] = 1.16, mrV[26] = 2.2458;
r[27] = 1.24, mrV[27] = 2.2035;
r[28] = 1.32, mrV[28] = 2.1661;
r[29] = 1.40, mrV[29] = 2.1350;
r[30] = 1.48, mrV[30] = 2.1090;
r[31] = 1.64, mrV[31] = 2.0697;
r[32] = 1.80, mrV[32] = 2.0466;
r[33] = 1.96, mrV[33] = 2.0325;
r[34] = 2.12, mrV[34] = 2.0288;
r[35] = 2.28, mrV[35] = 2.0292;
r[36] = 2.44, mrV[36] = 2.0228;
r[37] = 2.60, mrV[37] = 2.0124;
r[38] = 2.76, mrV[38] = 2.0065;
r[39] = 2.92, mrV[39] = 2.0031;
r[40] = 3.08, mrV[40] = 2.0015;
r[41] = 3.24, mrV[41] = 2.0008;
r[42] = 3.40, mrV[42] = 2.0004;
r[43] = 3.56, mrV[43] = 2.0002;
r[44] = 3.72, mrV[44] = 2.0001;

// Now we want to interpolate this potential at any r:
boost::math::barycentric_rational<double> b(r.data(), mrV.data(), r.size());

for (size_t i = 1; i < 8; ++i)
{
    double x = i*0.5; // renamed from r to avoid shadowing the vector r above
    std::cout << "(r, V) = (" << x << ", " << -b(x)/x << ")\n";
}
}