
Solving Non-Linear Equations

This repo contains code for solving non-linear equations.

Theorem

An equation f(x) = 0, where f(x) is a real continuous function, has at least one root between x = a and x = b if f(a) f(b) < 0.

If f(a) f(b) > 0, the equation may or may not have a root in the interval [a, b].

Bisection Method

Since the root is bracketed between two points, a and b, one can find the midpoint x = p1 between a and b.
This gives us two new intervals:

  1. [a, p1]
  2. [p1, b]

We then check the sign of f(a) f(p1): if f(a) f(p1) < 0, the new interval containing the solution is [a, p1];
otherwise, it is [p1, b].

As we repeat this process, the width of the interval becomes smaller and smaller until we close in on the zero of the equation f(x) = 0.

Algorithm For Bisection Method

  1. Choose a and b as two guesses for the root such that f(a) f(b) < 0, or in other words, f (x) changes sign between a and b .

  2. Estimate the root, m, of the equation f(x) = 0 as the midpoint between a and b:
    m = (a + b)/2

  3. Now check the following
    a) If f(a) f(m) < 0, then the root lies between a and m; set a = a and b = m.
    b) If f(a) f(m) > 0, then the root lies between m and b; set a = m and b = b.
    c) If f(a) f(m) = 0, then the root is m. Stop the algorithm if this is true.

  4. Find the absolute relative approximate error as
    |εa| = ( |mnew - mold| / |mnew| ) * 100
    mnew = estimated root from present iteration
    mold = estimated root from previous iteration

  5. Compare the absolute relative approximate error εa with the pre-specified relative error tolerance εs.
    If εa > εs, then go to Step 2, else stop the algorithm.

📝 Note:
We also have to check whether the number of iterations exceeds the maximum number of iterations allowed.
If so, the algorithm must be terminated and the user notified.
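
The following is a minimal Python sketch of these steps. The test function, the bracket [1, 2], and the default tolerance and iteration cap are illustrative assumptions, not code taken from this repo:

```python
def bisection(f, a, b, es=1e-4, max_iter=100):
    """Bisection: f must be continuous and change sign on [a, b]."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    m_old = None
    for _ in range(max_iter):
        m = (a + b) / 2                # step 2: midpoint estimate
        if f(a) * f(m) < 0:            # step 3a: root lies in [a, m]
            b = m
        elif f(a) * f(m) > 0:          # step 3b: root lies in [m, b]
            a = m
        else:                          # step 3c: f(m) is exactly zero
            return m
        # step 4: absolute relative approximate error (percent)
        if m_old is not None and m != 0:
            ea = abs((m - m_old) / m) * 100
            if ea < es:                # step 5: within tolerance
                return m
        m_old = m
    raise RuntimeError("Maximum number of iterations exceeded")

# Example: root of x^3 - x - 2 = 0 bracketed by [1, 2] (about 1.5214)
print(bisection(lambda x: x**3 - x - 2, 1, 2))
```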

Fixed Point Iteration Method

The fixed point iteration method is an open, simple method for finding a real root of a non-linear equation by successive approximation.
It requires only one initial guess to start. Since it is an open method, its convergence is not guaranteed.

To find the root of a nonlinear equation f(x) = 0 by the fixed point iteration method,
we write the given equation in the form
x = g(x)
where

  1. there exists [a, b] such that g(x) ∈ [a, b] for all x ∈ [a, b]
  2. there exists a real number L < 1 such that |g′(x)| ≤ L for all x ∈ (a, b)

If x0 is the initial guess, then the next approximation to the root is obtained by
x1 = g(x0)

Similarly, the next approximation is obtained using the value of x1, i.e.
x2 = g(x1)
and the process is repeated until we obtain the root to within the desired accuracy.

Algorithm For Fixed Point Iteration Method

  1. Define function f(x)

  2. Define function g(x), obtained by rewriting f(x) = 0 as x = g(x), such that |g'(x)| < 1

  3. Choose initial guess x0, tolerable error e and maximum iterations N

  4. Initialize iteration counter: step = 1

  5. Calculate x1 = g(x0)

  6. Increment iteration counter: step = step + 1

  7. If step > N then print "Not Convergent" and terminate

  8. Set x0 = x1 for next iteration

  9. If |f(x1)| > e then goto step (5)

  10. Display x1 as root.
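
A minimal Python sketch of this algorithm, using an illustrative f and g (the rewriting of f(x) = 0 as x = g(x) is problem-specific):

```python
import math

def fixed_point(f, g, x0, e=1e-6, N=100):
    """Fixed point iteration: solve f(x) = 0 rewritten as x = g(x)."""
    for step in range(1, N + 1):       # steps 4, 6, 7: iteration counter
        x1 = g(x0)                     # step 5: next approximation
        if abs(f(x1)) <= e:            # steps 9, 10: x1 is accurate enough
            return x1
        x0 = x1                        # step 8: feed x1 into next iteration
    raise RuntimeError("Not Convergent")

# Example: f(x) = x^2 - x - 1 = 0 rewritten as x = g(x) = sqrt(x + 1).
# Near the root, |g'(x)| < 1, so the iteration converges (to the golden ratio).
print(fixed_point(lambda x: x**2 - x - 1,
                  lambda x: math.sqrt(x + 1), 1.0))
```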

Newton's Method

Newton's Method is a root-finding algorithm which produces successively better approximations to the roots of a real-valued function.

The most basic version starts with a single-variable function f defined for a real variable x, the function's derivative f′, and an initial guess x0 for a root of f.

If the function satisfies sufficient assumptions and the initial guess is close, then

x1 = x0 - ( f(x0) / f'(x0) )

is a better approximation of the root than x0. Geometrically, (x1, 0) is the intersection of the x-axis and the tangent line to the graph of f at (x0, f(x0)):

that is, the improved guess is the unique root of the linear approximation at the initial point. The process is repeated as

xn+1 = xn - ( f(xn) / f'(xn) )

until a sufficiently precise value is reached.

📝 Note:
The function f(x) should be twice differentiable on the interval of interest.

Algorithm For Newton's Method

  1. Define function as f(x)

  2. Define first derivative of f(x) as g(x)

  3. Input initial guess (x0), tolerable error (e) and maximum iteration (N)

  4. Initialize iteration counter i = 1

  5. If g(x0) = 0 then print "Mathematical Error" and terminate

  6. Calculate x1 = x0 - f(x0) / g(x0)

  7. Increment iteration counter i = i + 1

  8. If i >= N then print "Not Convergent" and terminate

  9. If |f(x1)| > e then set x0 = x1 and goto (5)

  10. Print root as x1
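
A minimal Python sketch of these steps; the example function, its derivative, and the default settings are illustrative assumptions:

```python
def newton(f, g, x0, e=1e-8, N=50):
    """Newton's method; g is the first derivative of f."""
    for i in range(1, N + 1):              # steps 4, 7, 8
        if g(x0) == 0:                     # step 5: horizontal tangent
            raise ZeroDivisionError("Mathematical Error")
        x1 = x0 - f(x0) / g(x0)            # step 6: Newton update
        if abs(f(x1)) <= e:                # steps 9, 10
            return x1
        x0 = x1
    raise RuntimeError("Not Convergent")

# Example: sqrt(2) as the root of f(x) = x^2 - 2, with f'(x) = 2x
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))
```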

Secant Method

The secant method is also used to solve non-linear equations. This method is similar to Newton's method, but here we do not need to find the derivative of the function f(x). Using only values of f(x), we can approximate f'(x) numerically by Newton's divided difference formula. From the Newton formula,

xn+1 = xn - ( f(xn) / f'(xn) )

Now, using the divided difference formula, we get

f'(xn) ≈ ( f(xn) - f(xn-1) ) / ( xn - xn-1 )

By replacing the f'(xn) in the Newton-Raphson formula with this approximation, we obtain the secant formula for solving non-linear equations.

xn+1 = ( xn-1 f(xn) - xn f(xn-1) ) / ( f(xn) - f(xn-1) )

Algorithm for Secant Method

  1. Get values of x0, x1 and e
    Here x0 and x1 are the two initial guesses, and e is the stopping criterion: the absolute error, or the desired degree of accuracy

  2. Compute f(x0) and f(x1)

  3. Compute x2 = [x0 f(x1) – x1 f(x0)] / [f(x1) – f(x0)]

  4. Test for accuracy of x2
    If | (x2 – x1)/x2 | > e
    then assign x0 = x1 and x1 = x2
    and go to step 2

  5. Display the required root as x2
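
A minimal Python sketch of this algorithm; the example function and the default settings are illustrative assumptions:

```python
def secant(f, x0, x1, e=1e-8, max_iter=50):
    """Secant method: two initial guesses, no derivative required."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)                   # step 2
        if f1 == f0:
            raise ZeroDivisionError("f(x1) - f(x0) is zero")
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)    # step 3: secant formula
        if abs((x2 - x1) / x2) <= e:            # step 4: accuracy test
            return x2                           # step 5
        x0, x1 = x1, x2                         # step 4: shift the guesses
    raise RuntimeError("Maximum number of iterations exceeded")

# Example: root of x^3 - x - 2 = 0 from initial guesses 1 and 2
print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))
```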

Solving a System of Linear Equations

This repo is also being extended to contain different methods for solving systems of linear equations.

Gauss Elimination Method

Gaussian elimination is also known as row reduction. It is a linear algebra algorithm used to solve systems of linear equations by performing a sequence of operations on the matrix of coefficients. The operations involved are:

  1. Swapping two rows
  2. Multiplying a row by a nonzero number
  3. Adding a multiple of one row to another row

These operations are performed until the lower left-hand corner of the matrix is filled with zeros, as much as possible.

Algorithm for Gauss Elimination Method

  1. First, we import the necessary libraries used in the program.
  2. We then ask the user for the number of unknown variables, which we store in the variable ‘n’.
  3. After that, we create a numpy array ‘a’ of size n×(n+1) and initialize it to zero. We will store our augmented matrix in this array.
  4. Another array ‘x’ of size n is also created and initialized to zero. We will use this array to store the solution vector.
  5. We then use a loop to read in the entries of the augmented matrix.
  6. After that, we apply the Gaussian elimination method.
  7. If any pivot (diagonal) element is 0, an error is raised, as division by zero is not possible.
  8. Finally, we apply the back substitution method to obtain the desired output. A sketch of these steps is shown below.
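
A minimal sketch of the elimination and back-substitution steps, assuming the augmented matrix is already stored in a numpy array (the example system is illustrative):

```python
import numpy as np

def gauss_elimination(a):
    """Solve Ax = b given the n x (n+1) augmented matrix a = [A | b]."""
    a = a.astype(float)
    n = a.shape[0]
    # Forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        if a[k, k] == 0:
            raise ZeroDivisionError("Zero pivot encountered")
        for i in range(k + 1, n):
            factor = a[i, k] / a[k, k]
            a[i, k:] -= factor * a[k, k:]
    # Back substitution on the resulting upper-triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (a[i, n] - a[i, i + 1:n] @ x[i + 1:]) / a[i, i]
    return x

# Example: x + y = 3 and 2x - y = 0, with solution x = 1, y = 2
aug = np.array([[1.0, 1.0, 3.0],
                [2.0, -1.0, 0.0]])
print(gauss_elimination(aug))   # -> [1. 2.]
```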

Gauss Jacobi Method

The Jacobi method is an iterative method for approximating the solution of a system of n linear equations in n variables, and it is used in numerical linear algebra when the system is diagonally dominant. In each iteration, every variable is solved for from its own equation using the values of the other variables from the previous iteration, and the process is repeated until it converges. The algorithm takes its name from the Jacobi transformation process of matrix diagonalization, and the Jacobi method is also known as the simultaneous displacement method.

Algorithm for Gauss Jacobi Method
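
A minimal Python sketch of the Jacobi iteration described above; the example system, the tolerance, and the iteration cap are illustrative assumptions, not the repo's actual code:

```python
import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=200):
    """Jacobi iteration for Ax = b; A should be diagonally dominant."""
    x = np.zeros(len(b))                  # initial approximation
    D = np.diag(A)                        # diagonal of A
    R = A - np.diagflat(D)                # off-diagonal part of A
    for _ in range(max_iter):
        # every component is updated from the previous iterate only
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Not Convergent")

# Example: a strictly diagonally dominant 2x2 system with solution (1, 2)
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([6.0, 12.0])
print(jacobi(A, b))   # -> approximately [1. 2.]
```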

Gauss Seidel Method

This takes the Jacobi method one step further. If the solution is x = (x1, x2, …, xn), and x1(k+1) is a better approximation to the value of x1 than x1(k) is, then it is better to use the new value x1(k+1) (rather than the old value x1(k)) when finding x2(k+1), …, xn(k+1). So x1(k+1) is found as in the Jacobi method, but in finding x2(k+1) we use the new value x1(k+1) together with the old values x3(k), …, xn(k), and similarly for finding x3(k+1), …, xn(k+1). This process for finding the solution of the given linear system is called the Gauss-Seidel method.

Algorithm for Gauss Seidel Method
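
A minimal Python sketch of the Gauss-Seidel iteration described above, reusing the illustrative system from the Jacobi example:

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-8, max_iter=200):
    """Gauss-Seidel iteration: new values are used as soon as they are known."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # uses already-updated x[0..i-1] and old x[i+1..n-1]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x
    raise RuntimeError("Not Convergent")

# Same system as the Jacobi example; it typically converges in fewer steps
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([6.0, 12.0])
print(gauss_seidel(A, b))   # -> approximately [1. 2.]
```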

Finding Eigen Values Using Power Method

Like the Jacobi and Gauss-Seidel methods, the power method for approximating eigenvalues is iterative. First we assume that the matrix A has a dominant eigenvalue with a corresponding dominant eigenvector. Then we choose an initial approximation x0 of one of the dominant eigenvectors of A; this initial approximation must be a nonzero vector in Rn. Finally, we form the sequence given by

x1 = A x0
x2 = A x1 = A(A x0) = A^2 x0
. . .
xk = A x(k-1) = A(A^(k-1) x0) = A^k x0

For large k, and by properly scaling this sequence, we obtain a good approximation of the dominant eigenvector of A.
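
A minimal Python sketch of the power method with scaling; the example matrix and the fixed iteration count are illustrative assumptions:

```python
import numpy as np

def power_method(A, num_iter=100):
    """Approximate the dominant eigenvalue and eigenvector of A."""
    x = np.ones(A.shape[0])              # nonzero initial approximation
    for _ in range(num_iter):
        x = A @ x                        # x_k = A x_(k-1)
        x = x / np.linalg.norm(x)        # scale to keep the entries bounded
    # Rayleigh quotient of the (normalized) iterate estimates the eigenvalue
    eigenvalue = x @ A @ x
    return eigenvalue, x

# Example: [[6, 5], [4, 5]] has dominant eigenvalue 10, eigenvector ~ [5, 4]
A = np.array([[6.0, 5.0],
              [4.0, 5.0]])
val, vec = power_method(A)
print(val, vec)   # -> approximately 10.0 and [0.78 0.62]
```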
