Welcome to the pyroot wiki!

The purpose of this Python library is to provide advanced bracketed root-finding methods for single-variable functions. These methods are meant to both guarantee convergence and minimize the number of function calls made, even if extremely poor initial estimates of the root are provided or the function is not well-behaved.

What is root-finding?

Root-finding is the process of solving for x which satisfies f(x) = 0, for some given function f. Root-finding appears in many scenarios. To list a few:

  • Solving y = f(x) for x is the same as solving y - f(x) = 0.
  • Finding the extreme values of f(x) (e.g. the minimum value) is (almost) the same as solving f'(x) = 0; almost, because a zero derivative can also mark a maximum or a saddle point.
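
As a small concrete illustration of both points (plain Python for exposition, not part of pyroot's API; the names below are made up):

```python
import math

# Solving exp(x) = 5 for x is the same as finding the root of
# g(x) = exp(x) - 5, which is ln(5) ≈ 1.609.
def g(x):
    return math.exp(x) - 5.0

# Finding the minimum of h(x) = (x - 2)**2 is (almost) the same as finding
# where its derivative h'(x) = 2*(x - 2) equals zero, namely x = 2.
def h_prime(x):
    return 2.0 * (x - 2.0)
```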

Why bracketing methods? Why not iterative methods?

Consider trying to find the roots of x^2 + 1e-5 and x^2 - 1e-5. Graphically, the two functions appear nearly identical, yet the first has no real roots and the second has two. How, then, can one determine whether a continuous function has a root? The only way, short of analyzing the particular function (which is not always feasible), is to identify two points, x1 and x2, where f(x) takes on different signs. For example, taking x1 = 0 and x2 = 1, the second function gives f(0) = -1e-5 < 0 < 0.99999 = f(1), which by the intermediate value theorem means there is a root between 0 and 1.
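
In plain Python (not using pyroot), that sign check looks like this; the function names are only for illustration:

```python
def f_plus(x):
    return x**2 + 1e-5    # never changes sign, so no real roots

def f_minus(x):
    return x**2 - 1e-5    # changes sign on [0, 1], so a root lies in between

x1, x2 = 0.0, 1.0
print(f_minus(x1) < 0 < f_minus(x2))  # True: a root is bracketed in (0, 1)
print(f_plus(x1) < 0 < f_plus(x2))    # False: no bracket is found for this function
```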

Bracketing methods use the fact that a root lies between x1 and x2 to find it. They work by iteratively shrinking this interval until it reaches some desired size. This is in contrast to iterative methods such as Newton's method, which instead attempt to refine a single estimate of the root.
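
The simplest bracketing method is bisection, i.e. binary search on the sign of f. The sketch below is not pyroot's algorithm, but it shows the interval-shrinking loop that all bracketing methods share:

```python
def bisect(f, x1, x2, tol=1e-12):
    """Shrink the bracket [x1, x2] until it is narrower than tol.

    Assumes f(x1) and f(x2) have opposite signs.
    """
    f1 = f(x1)
    while abs(x2 - x1) > tol:
        mid = (x1 + x2) / 2
        fm = f(mid)
        if fm == 0:
            return mid
        # Keep whichever half of the interval still has a sign change across it.
        if (fm < 0) == (f1 < 0):
            x1, f1 = mid, fm
        else:
            x2 = mid
    return (x1 + x2) / 2

print(bisect(lambda x: x**2 - 1e-5, 0.0, 1.0))  # ≈ 0.0031622776..., i.e. sqrt(1e-5)
```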

Below is a list of reasons why bracketing methods may be preferred over iterative methods.

Guaranteed Convergence

Iterative methods provide **no guarantee of convergence**. To assure oneself that Newton's method is in fact converging to the root, a fair amount of mathematical analysis may be required. In contrast, a bracketing method is guaranteed to improve its estimate on every iteration, and hence has guaranteed convergence.

Guaranteed Error

Since bracketing methods keep the root between x1 and x2, the error is bounded by |x1 - x2|. If one desires a solution accurate to 12 digits, the solution will actually be accurate to 12 digits. Iterative methods, by contrast, require considerably more mathematical analysis to determine how accurate their results actually are.
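
For plain bisection this bound also makes the cost predictable: every iteration halves |x1 - x2|, so reaching a tolerance tol from an initial bracket of width w takes about log2(w / tol) iterations (the interpolation-based methods in pyroot aim to need fewer function calls than that):

```python
import math

w, tol = 1.0, 1e-12                      # initial bracket width and desired accuracy
iterations = math.ceil(math.log2(w / tol))
print(iterations)                        # 40 halvings bring a unit bracket below 1e-12
```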

Robust Initial Estimates

Iterative methods are often not robust to poor initial estimates. This is one of the most common issues with Newton's method: you already need a fairly good estimate of the root just to get started. In contrast, a bracketing method may be used with arbitrarily bad initial estimates; in fact, many of the pyroot examples show that even starting with x1 = -inf and x2 = inf may only cost 2 or 3 extra iterations. This makes bracketing methods significantly more robust for the average programmer.
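
To see why infinite endpoints are workable at all, here is one sketch of the general idea (not necessarily how pyroot handles them): map the whole real line onto a finite interval, for example with x = tan(t), and bisect in t instead:

```python
import math

def solve_on_real_line(f, tol=1e-12):
    """Bracket a sign change of f over (-inf, inf) by bisecting t on (-pi/2, pi/2),
    where x = tan(t) maps that finite t-interval onto the whole real line.

    Assumes f changes sign somewhere on the real line."""
    g = lambda t: f(math.tan(t))
    t1, t2 = -math.pi / 2 + 1e-9, math.pi / 2 - 1e-9
    g1 = g(t1)
    while t2 - t1 > tol:
        mid = (t1 + t2) / 2
        if (g(mid) < 0) == (g1 < 0):
            t1, g1 = mid, g(mid)
        else:
            t2 = mid
    return math.tan((t1 + t2) / 2)

print(solve_on_real_line(lambda x: x**3 - 2.0))  # ≈ 1.2599, the cube root of 2
```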

Faster Initial Convergence

Even when methods like Newton's method do converge to the root, they may have a lot of initial trouble. For example, in the graph below, Newton's method eventually converges to the root on the left but remains stuck on the right for many iterations.

An example of Newton's method struggling to converge.

One could argue that it is the programmer's fault for starting on the right side instead of the left, but realizing this can be very difficult, especially without any sort of graph to build intuition about what the function looks like.

In contrast, bracketing methods are able to avoid problematic iterations like these, where the estimate of the root either becomes worse than before or only very small steps are taken far away from the root (notice that the first few iterations on the far right make very little progress). A good bracketing method stops trying to interpolate the root when it detects that it is failing and falls back on methods such as binary search, which forcefully pulls the iterations much closer to the root until the faster methods can be used again.
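
A minimal sketch of that safeguard idea (not pyroot's exact algorithm, and assuming f(x1) and f(x2) have opposite signs): try an interpolation step each iteration, and whenever it fails to shrink the bracket by a meaningful amount, take a plain bisection step next:

```python
def safeguarded_root(f, x1, x2, tol=1e-12):
    """Interpolate while it helps; fall back on bisection when it does not."""
    f1, f2 = f(x1), f(x2)
    use_bisection = False
    while abs(x2 - x1) > tol:
        old_width = abs(x2 - x1)
        if use_bisection:
            x_new = (x1 + x2) / 2
        else:
            # Secant (linear interpolation) estimate of the root.
            x_new = (x1 * f2 - x2 * f1) / (f2 - f1)
            # Never step outside the bracket.
            if not (min(x1, x2) < x_new < max(x1, x2)):
                x_new = (x1 + x2) / 2
        f_new = f(x_new)
        # Keep the part of the bracket that still contains a sign change.
        if (f_new < 0) == (f1 < 0):
            x1, f1 = x_new, f_new
        else:
            x2, f2 = x_new, f_new
        # If the bracket barely shrank, interpolation is struggling: force a
        # bisection step next time, which always halves the bracket.
        use_bisection = abs(x2 - x1) > 0.75 * old_width
    return (x1 + x2) / 2

print(safeguarded_root(lambda x: x**3 - 2 * x - 5, 2.0, 3.0))  # ≈ 2.0945514815
```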

Domain Error

Another common issue with iterative methods like Newton's method is that they have no respect for the domain of a function. Bracketing methods, however, are guaranteed to stay within their given interval. Even better, bracketing methods can often take advantage of a function's domain by using its endpoints as the initial bracket. For example, if your function is only defined between 0 and 1, it is easy to bound the solution between x1 = 0 and x2 = 1.
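
A hypothetical example in plain Python (not using pyroot): the function below is only defined for non-negative x, so a single Newton step can leave its domain entirely, while a bracketing method started with x1 = 0 and x2 = 1 never evaluates anything outside that interval.

```python
import math

# f is only defined for x >= 0; evaluating it at a negative x raises ValueError.
def f(x):
    return math.sqrt(x) - 0.1          # root at x = 0.01

# One Newton step from x = 1 already leaves the domain:
x = 1.0
x_newton = x - f(x) / (0.5 / math.sqrt(x))
print(x_newton)                        # -0.8, where f cannot even be evaluated

# A bracketing method started from x1 = 0 and x2 = 1 never leaves [0, 1],
# because every new point it tries lies inside the current bracket.
x1, x2 = 0.0, 1.0
for _ in range(50):
    mid = (x1 + x2) / 2
    x1, x2 = (mid, x2) if f(mid) < 0 else (x1, mid)
print((x1 + x2) / 2)                   # ≈ 0.01
```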