Root finding functions for Julia

This package contains simple routines for finding roots of continuous scalar functions of a single real variable. The find_zero function provides the primary interface. It supports various algorithms through the specification of a method. These include:

  • Bisection-like algorithms. For functions where a bracketing interval is known (one where f(a) and f(b) have opposite signs), the Bisection method can be specified, with guaranteed convergence. For most floating point number types, bisection occurs in a manner exploiting floating point storage conventions. For others, an algorithm of Alefeld, Potra, and Shi is used.

    For typically faster convergence -- though not guaranteed -- the FalsePosition method can be specified. This method comes in 12 variants of a modified secant (regula falsi) approach to accelerate convergence.

  • Several derivative-free methods are implemented. These are specified through the methods Order0, Order1 (the secant method), Order2 (the Steffensen method), Order5, Order8, and Order16. The number indicates roughly the order of convergence. The Order0 method is the default, and the most robust, but generally takes many more function calls. The higher order methods promise higher order convergence, though they don't always yield results with fewer function calls than Order1 or Order2.

  • There are two historic methods that require a derivative: Roots.Newton and Roots.Halley. (Neither is currently exported.) If a derivative is not given, an automatic derivative is found using the ForwardDiff package.

Each method's documentation has additional detail.
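The bracketing idea behind Bisection can be sketched in a few lines of plain Julia. This is only an illustration of the principle (the name simple_bisection is ours); the package's Bisection() is more careful, exploiting floating point storage conventions:

```julia
# Simplified bisection: repeatedly halve a bracketing interval [a, b]
# where f(a) and f(b) have opposite signs. Each step keeps the half
# that still brackets the zero, so convergence is guaranteed.
function simple_bisection(f, a, b; tol=1e-12)
    fa, fb = f(a), f(b)
    fa * fb <= 0 || error("f(a) and f(b) must have opposite signs")
    while b - a > tol
        m = a + (b - a) / 2        # midpoint, written to avoid overflow
        fm = f(m)
        if fa * fm <= 0
            b, fb = m, fm          # the zero lies in [a, m]
        else
            a, fa = m, fm          # the zero lies in [m, b]
        end
    end
    return a + (b - a) / 2
end

simple_bisection(x -> exp(x) - x^4, 8, 9)  # ≈ 8.613169456441398
```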

Some examples:

using Roots
f(x) = exp(x) - x^4

# a bisection method has the bracket specified with a tuple or vector
find_zero(f, (8, 9), Bisection())

find_zero(f, (-10, 0))  # Bisection() is used when x0 is a tuple and no method is given

find_zero(f, (-10, 0), FalsePosition())  # just 11 function evaluations

## find_zero(f, x0::Number) will use Order0()
find_zero(f, 3)            # default is Order0()

find_zero(f, 3, Order1())  # same answer, different method

find_zero(sin, BigFloat(3.0), Order16())
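The Order1 method above is the classical secant method. Its core iteration can be sketched in plain Julia (a simplified version under our own name, simple_secant, without the package's tolerance handling):

```julia
# Secant method: approximate the derivative by the slope of the line
# through the last two iterates, then take a Newton-like step.
function simple_secant(f, x0, x1; maxiter=50, tol=1e-12)
    f0, f1 = f(x0), f(x1)
    for _ in 1:maxiter
        abs(f1 - f0) > 0 || break              # avoid division by zero
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant step
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        abs(x1 - x0) < tol && break
    end
    return x1
end

simple_secant(x -> exp(x) - x^4, 3.0, 3.1)  # converges to the zero near 1.4296
```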

The find_zero function can be used with callable objects:

using SymEngine
@vars x
find_zero(x^5 - x - 1, 1.0)  # 1.1673039782614185


using Polynomials
x = variable(Int)
fzero(x^5 - x - 1, 1.0)  # 1.1673039782614185

The find_zero function should respect the units of the Unitful package:

using Unitful
s = u"s"; m = u"m"
g = 9.8*m/s^2
v0 = 10m/s
y0 = 16m
y(t) = -g*t^2 + v0*t + y0
find_zero(y, 1s)      # 1.886053370668014 s

Newton's method can be used without specifying the derivative, as one is computed automatically:

f(x) = x^3 - 2x - 5
x0 = 2
find_zero(f, x0, Roots.Newton())   # 2.0945514815423265
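The Newton iteration itself is short; with a hand-written derivative it can be sketched as below (the name newton_sketch is ours; Roots.Newton() instead obtains the derivative automatically via ForwardDiff when none is supplied):

```julia
# Newton's method with an explicit derivative: x ← x - f(x)/f'(x).
function newton_sketch(f, fp, x; iters=10)
    for _ in 1:iters
        x -= f(x) / fp(x)   # Newton step
    end
    return x
end

# Wallis's classic example x^3 - 2x - 5, starting from x0 = 2:
newton_sketch(x -> x^3 - 2x - 5, x -> 3x^2 - 2, 2.0)  # ≈ 2.0945514815423265
```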

Automatic derivatives allow for easy solutions to finding critical points of a function.

## mean
as = rand(5)
function M(x)
  sum([(x - a)^2 for a in as])
end
fzero(D(M), 0.5) - mean(as)     # 0.0

## median
function m(x)
  sum([abs(x - a) for a in as])
end
fzero(D(m), 0, 1) - median(as)  # 0.0
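When automatic differentiation is not available, a central-difference approximation gives the same idea. This is a hedged stand-in of our own (the deriv helper below is not part of Roots; the D above uses ForwardDiff):

```julia
using Statistics  # mean

# Central-difference derivative as a simple stand-in for an automatic
# derivative: f'(x) ≈ (f(x+h) - f(x-h)) / 2h.
deriv(f; h=1e-6) = x -> (f(x + h) - f(x - h)) / (2h)

as = [0.1, 0.3, 0.5, 0.7, 0.9]
M(x) = sum((x - a)^2 for a in as)

# The mean minimizes the sum of squares, so M'(mean(as)) ≈ 0:
deriv(M)(mean(as))
```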

Multiple zeros

The find_zeros function can be used to search for all zeros in a specified interval. The basic algorithm splits the interval into many subintervals. For each subinterval, if it brackets a zero, a bracketing algorithm is used to identify the zero; otherwise a derivative-free method is used to check for one. This algorithm can miss zeros for various reasons, so the results should be confirmed by other means.

f(x) = exp(x) - x^4
find_zeros(f, -10, 10)
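The subinterval scan underlying find_zeros can be sketched as follows. This simplified version (our own scan_zeros, not the package's implementation) only looks for sign changes, so it misses zeros of even multiplicity:

```julia
# Split [a, b] into npts-1 subintervals; wherever f changes sign,
# bisect that bracket down to a zero. Sketch of find_zeros' idea.
function scan_zeros(f, a, b; npts=200)
    xs = range(a, stop=b, length=npts)
    zs = Float64[]
    for i in 1:length(xs)-1
        l, r = xs[i], xs[i+1]
        if f(l) * f(r) < 0                   # sign change brackets a zero
            lo, hi = l, r
            for _ in 1:60                    # bisect the bracket
                m = lo + (hi - lo) / 2
                f(lo) * f(m) <= 0 ? (hi = m) : (lo = m)
            end
            push!(zs, lo + (hi - lo) / 2)
        end
    end
    return zs
end

scan_zeros(x -> exp(x) - x^4, -10, 10)  # ≈ [-0.8156, 1.4296, 8.6132]
```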


For most algorithms (besides the Bisection ones) convergence is decided when

  • the value f(x_n) ≈ 0 with tolerances atol and rtol, or

  • the values x_n ≈ x_{n-1} with tolerances xatol and xrtol and f(x_n) ≈ 0 with a relaxed tolerance based on atol and rtol, or

  • the algorithm encounters a NaN or Inf and yet f(x_n) ≈ 0 with a relaxed tolerance based on atol and rtol.

There is no convergence if the number of iterations exceeds maxevals or the number of function calls exceeds maxfnevals.

The tolerances may need to be adjusted. To determine if convergence occurs due to f(x_n) ≈ 0, note that even if xstar is the mathematically correct answer, floating point roundoff means f(xstar) ≈ f'(xstar) ⋅ xstar ⋅ ϵ is to be expected. The relative tolerance accounts for the size of x, but the default may need adjustment if the derivative is large near the zero, as the default is a bit aggressive. On the other hand, the absolute tolerance might seem too relaxed.

When convergence is declared because x_n ≈ x_{n-1}, the additional check that f(x_n) ≈ 0 is made, as algorithms can be fooled by asymptotes or other areas where tangent lines have large slopes.

The Bisection and Roots.A42 methods are guaranteed to converge, so these tolerances are ignored.
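The combined stopping rule described above can be sketched as a predicate. This is only an illustration of the checks as described; the names and the cbrt relaxation factor are our assumptions, not the package's actual internals:

```julia
# Simplified convergence test: |f(xn)| small in absolute/relative
# terms, or the iterates have stalled and f(xn) is small under a
# relaxed tolerance (cbrt used here as an assumed relaxation).
function converged(xn, xn1, fxn; atol=eps(), rtol=eps(),
                   xatol=eps(), xrtol=eps())
    abs(fxn) <= atol + abs(xn) * rtol && return true
    if abs(xn - xn1) <= xatol + abs(xn) * xrtol
        return abs(fxn) <= cbrt(atol) + abs(xn) * cbrt(rtol)
    end
    return false
end

converged(1.0, 1.0, 0.0)  # stalled iterates with a tiny residual: true
```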

An alternate interface

For MATLAB users, this functionality is provided by the fzero function. Roots also provides this alternative interface:

  • fzero(f, a::Real, b::Real) and fzero(f, bracket::Vector) call the find_zero algorithm with the Bisection method.

  • fzero(f, x0::Real; order::Int=0) calls a derivative-free method, with the specified order matching one of Order0, Order1, etc.

  • fzeros(f, a::Real, b::Real; no_pts::Int=200) will call find_zeros.

  • The functions secant_method, newton, and halley provide direct access to those methods.

Usage examples

f(x) = exp(x) - x^4
## bracketing
fzero(f, 8, 9)		          # 8.613169456441398
fzero(f, -10, 0)		      # -0.8155534188089606
fzeros(f, -10, 10)            # -0.815553, 1.42961  and 8.61317 

## use a derivative free method
fzero(f, 3)			          # 1.4296118247255558

## use a different order
fzero(sin, 3, order=16)		  # 3.141592653589793

Some additional documentation can be read here.