From: James Threlfall

Notebook to implement the Euler, Midpoint and Euler-Trapezium methods to solve differential equations. Designed to support the Abertay University undergrad course MAT301 (JT, 2022).

Many physical laws governing natural behaviour take the form of differential equations. These are often complicated, coupled equations in which the behaviour of quantities in space and time depends on other quantities. While some have analytical solutions, it is often quicker and easier to apply numerical tools to obtain approximate solutions.

A differential equation is a relationship between a function, \(f(x)\), its independent variable, \(x\), and any number of its derivatives. An ordinary differential equation, or ODE, is a differential equation in which the independent variable, and therefore also the derivatives, is one-dimensional. For our purposes, we can assume that an ODE can be written
\begin{equation}\frac{dS(t)}{dt}=F\left(t,S(t)\right).\end{equation}
Approximating the derivative by a forward difference over a step \(h\) gives
\begin{equation}S(t_{j+1})=S(t_j)+h\,F\left(t_j,S(t_j)\right).\end{equation}
This formula is called the Explicit Euler Formula, and allows us to approximate the state \(S(t_{j+1})\) given the state \(S(t_j)\). Starting from a given initial value, \(S_0 = S(t_0)\), we can use this formula to integrate the states up to \(S(t_f)\); these \(S(t)\) values are then an approximation for the solution of the differential equation. The Euler formula is the simplest and most intuitive method for solving initial value problems. At any state \((t_j, S(t_j))\) it uses \(F\) at that state to “point” toward the next state, and then moves in that direction a distance of \(h\). Although there are more sophisticated and accurate methods for solving these problems, they all have the same fundamental structure. As such, we enumerate explicitly the steps for solving an initial value problem using the Explicit Euler formula.
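The steps above can be sketched as a short Python routine. This is a minimal sketch, not the notebook's own code; the test problem \(dS/dt=-S\), its interval and step size are illustrative assumptions.

```python
def euler_solve(f, t0, tf, s0, h):
    """Integrate dS/dt = f(t, S) from t0 to tf with the Explicit Euler
    formula, starting from the initial value s0 with step size h."""
    n_steps = round((tf - t0) / h)
    t_vals, s_vals = [t0], [s0]
    t, s = t0, s0
    for _ in range(n_steps):
        s = s + h * f(t, s)   # step along the gradient F a distance h
        t = t + h
        t_vals.append(t)
        s_vals.append(s)
    return t_vals, s_vals

# Illustrative test problem (an assumption, not the lecture example):
# dS/dt = -S with S(0) = 1, whose exact solution is S(t) = exp(-t)
t_vals, s_vals = euler_solve(lambda t, s: -s, 0.0, 1.0, 1.0, 0.01)
```

Each step only requires one evaluation of \(F\), which is why Euler is the cheapest (and least accurate) of the schemes discussed here.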
To see this in action, we will use the equation and question posed in Example 9.1 in the lectures. We see that our approximation displays similar behaviour to the true solution, but doesn’t follow the trend exactly; in fact the graphs appear to diverge as \(x\) increases. What if we make the step size ten times smaller? As expected, cranking up the number of steps improves the accuracy of the solution. However, in practice we often have a finite amount of numerical resources (e.g. CPU speed, number of nodes, memory size), so we can’t always rely on the ability to simply decrease \(h\). In those cases, we have to be smarter, and apply other techniques.
In the lectures, we were introduced to a more accurate scheme, the Midpoint method. This essentially estimates the gradient of the tangent at the midpoint between steps, \(x_n+\frac{1}{2}h\), which lies halfway between \(x_n\) and \(x_{n+1}\). Using Euler’s method, we then determine
\begin{equation}y_{n+1}=y_n+hk_2,\end{equation}
where
\begin{equation}k_1=f\left(x_n,y_n\right),\qquad k_2=f\left(x_n+\tfrac{1}{2}h,\;y_n+\tfrac{1}{2}hk_1\right).\end{equation}
This looks a bit weird, but all it means is that we start to evaluate the function not only at the grid-point values, but now also at different locations in \(x\) and \(y\). Let’s go ahead and implement this, in exactly the same manner as the previous method. We quickly see that the Midpoint method is more accurate than the Euler method for the same step size. The trade-off is the additional calculation required to evaluate \(k_1\) and \(k_2\).
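Implemented in the same style as the Euler routine, a Midpoint solver evaluates \(k_1\) at the grid point and \(k_2\) at the shifted location. This is a sketch; the test ODE \(y'=y\) is an illustrative assumption.

```python
def midpoint_solve(f, x0, xf, y0, h):
    """Integrate dy/dx = f(x, y) with the Midpoint method:
    k1 at the grid point, k2 at the midpoint, then y_{n+1} = y_n + h*k2."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(round((xf - x0) / h)):
        k1 = f(x, y)                            # gradient at the grid point
        k2 = f(x + 0.5 * h, y + 0.5 * h * k1)   # gradient estimated at the midpoint
        y = y + h * k2
        x = x + h
        xs.append(x)
        ys.append(y)
    return xs, ys

# Illustrative test: dy/dx = y with y(0) = 1, exact solution y = e^x
xs, ys = midpoint_solve(lambda x, y: y, 0.0, 1.0, 1.0, 0.1)
```

Note the two function evaluations per step: the extra cost buys one additional order of accuracy over Euler.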
Differential Equation Solving 1¶
Motivation¶
Euler’s method¶
Example 9.1¶
Midpoint method¶
Modified Euler (Euler-Trapezium Method)¶
where \(y_{n+1}^p\) is an initial prediction (a plain Euler step). We then use the derivative prediction, \(y_{n+1}'^p = f\left(x_{n+1}, y_{n+1}^p\right)\), to correct the value:
\begin{equation}y_{n+1}=y_n+\tfrac{1}{2}h\left(f\left(x_n,y_n\right)+y_{n+1}'^p\right).\end{equation}
Once again, we can apply this scheme to the ODE problem from earlier. The Modified Euler method does a much better job than the original Euler method, and (for this ODE) is comparable to the Midpoint method in accuracy.
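The Modified Euler predictor-corrector pair described above can be sketched as follows; the test ODE \(y'=y\) is again an illustrative assumption rather than the notebook's example.

```python
def modified_euler_solve(f, x0, xf, y0, h):
    """Modified Euler (Euler-Trapezium): predict with a plain Euler step,
    then correct using the trapezium average of the two gradient estimates."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(round((xf - x0) / h)):
        yp = y + h * f(x, y)                          # predictor: Euler step
        y = y + 0.5 * h * (f(x, y) + f(x + h, yp))    # corrector: trapezium average
        x = x + h
        xs.append(x)
        ys.append(y)
    return xs, ys

# Illustrative test: dy/dx = y with y(0) = 1, exact solution y = e^x
xs, ys = modified_euler_solve(lambda x, y: y, 0.0, 1.0, 1.0, 0.1)
```

Like the Midpoint method, this costs two function evaluations per step, which is why the two schemes tend to deliver comparable accuracy.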
Our first ODE will be linear, with a given initial condition. Here, the interval is \(2000\leq t \leq 2020\), which we discretise into 201 points with spacing \(h=0.1\): \(t_0=2000,\ t_1=2000.1,\ \ldots,\ t_{200}=2020\), which we can generalise to
\begin{equation}t_n = 2000 + nh,\qquad n=0,1,\ldots,200.\end{equation}
Let’s go ahead and set this range of \(t\) values as a list in Python. Subsequent coefficients are more complicated to evaluate mathematically, e.g. \(k_2\): this is a lot of mathematical effort to then plug into our calculators! The beauty of this method lies in letting the computer do the hard work for us. Now that we’ve defined a function to return values of \(f(x,y)\), we can feed shifted values of \(x\) and \(y\) into that function to evaluate \(k_1\rightarrow k_4\), and hence the next iteration of the solution, \(y_{n+1}\). Integrating the ODE directly, we can also find the exact solution,
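The \(t\) grid and a single RK4 step might be set up as follows. This is a sketch: the population growth function itself is defined separately in the notebook, so a generic `f(x, y)` and the check on \(y'=y\) are assumptions here.

```python
import numpy as np

# Discrete interval from the text: 201 points across 2000 <= t <= 2020 (h = 0.1)
h = 0.1
t = np.linspace(2000.0, 2020.0, 201)

def rk4_step(f, x, y, h):
    """A single 4th-order Runge-Kutta step for dy/dx = f(x, y):
    four gradient estimates k1..k4, combined with weights 1, 2, 2, 1."""
    k1 = f(x, y)
    k2 = f(x + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(x + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(x + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Illustrative check on dy/dx = y (an assumption, not the population model):
# one step from y(0) = 1 should closely match e^0.1
y1 = rk4_step(lambda x, y: y, 0.0, 1.0, 0.1)
```

`np.linspace` is used rather than `np.arange` so the endpoint \(t=2020\) is included exactly.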
with a constant of integration determined by the initial condition. The plot below again shows the RK4 numerical approximation, \(y_n\) (circles), and the analytical solution. Next we turn to the non-linear population equation, with its initial condition.
As with the previous cases, let’s express the right-hand side of the ODE as a function.
Once again, we could mathematically calculate expressions for \(k_1\rightarrow k_4\).
This time though, you can see that \(f_3(x,y)\) now contains terms that depend on both \(y\) and \(t\)! Now that we’re more familiar with how to implement RK4 numerically, we’ll simply move on to define our function representing \(f_3(x,y)\) and let the code evaluate the coefficients.

One additional complexity is that this model is not self-starting: each new displacement value requires two previous values, which is an issue for an initial value problem. One common approach is to use a single Euler step to calculate the first displacement, with subsequent values using the Verlet algorithm. A second complexity is that the velocity calculation requires knowledge of the next displacement in time, so its calculation lags behind the displacement calculation. If the velocity is required after 5 steps, we would need to calculate the displacement over 6 steps to achieve the required solution.

To see the algorithm in action, we will use Example 9.4 from the lecture notes. In this example, a mass of \(2\)kg is propelled along a rough horizontal table which exerts a resistance of \(0.4\)N. The initial velocity of the mass is \(8{\rm{ms}}^{-1}\). We are to use the Verlet algorithm with a step size of \(h=0.1\) to approximate the velocity and displacement of the mass after \(0.5\)s, assuming \(x\left(t=0\right)=0\). Our first step is to determine the acceleration at a given time.
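The Verlet scheme for Example 9.4 might be sketched as below. This is a minimal sketch: the text specifies a single Euler step for the first displacement, so small differences from the lecture values may appear.

```python
# Example 9.4: 2 kg mass, 0.4 N resistive force, u = 8 m/s, h = 0.1 s, x(0) = 0
a = -0.4 / 2.0                 # constant acceleration (force per unit mass), ms^-2
h, u = 0.1, 8.0
n_steps = 6                    # x is needed over 6 steps to get v after 5 steps

x = [0.0, u * h]               # first displacement from a single Euler step
for n in range(1, n_steps):
    # Verlet update: x_{n+1} = 2*x_n - x_{n-1} + h^2 * a
    x.append(2.0 * x[n] - x[n - 1] + h * h * a)

# centred-difference velocity lags one step behind the displacement
v = [u] + [(x[n + 1] - x[n - 1]) / (2.0 * h) for n in range(1, n_steps)]
```

The list `x` holds seven displacements (\(x_0\) to \(x_6\)), so the velocity after \(0.5\)s is `v[5]`, close to the constant-acceleration exact values \(v(0.5)=7.9\) and \(x(0.5)=3.975\).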
In this case the acceleration (force per unit mass) is constant, so we’ll store the acceleration as a constant. We can see that the desired solution at \(t=0.5\) is given by the final element in the lists of \(x\) and \(v\), and matches the values we calculated in the lectures.
This equation of motion also has exact solutions: the acceleration is constant, so we can use \(v=u+at\) and \(s=ut+\frac{1}{2}at^2\). With \(u=8{\rm{ms}}^{-1}\) and \(a=-0.2{\rm{ms}}^{-2}\), we should recover
\begin{equation}v(t)=8-0.2t,\qquad x(t)=8t-0.1t^2.\end{equation}
Let’s compare our numerical approximation with the true solutions. This is not a particularly exciting example, but it matches the mathematical steps you were shown in the lectures.

Notebook to implement iterative techniques, supporting elements of the Abertay University undergrad course MAT301 (JT, 2022).

The first numerical techniques we will explore are algorithms which find zeros: these are commonly called root-finding algorithms. The roots of a function are locations/values which make that function equal zero.
When testing some of these tools, we may use quadratic functions, in order to compare exact and approximate solutions. For a quadratic \(ax^2+bx+c=0\), the quadratic formula
\begin{equation}x_r=\frac{-b\pm\sqrt{b^2-4ac}}{2a}\end{equation}
evaluates the root(s) of the function, \(x_r\), exactly (noting that there may be zero, one or two real solutions of this equation). The quadratic formula is a rare example of root-finding with a simple solution, and can only be applied to equations which are quadratic (i.e. whose highest power in \(x\) is two, \(x^2\)); most other types of function require novel ways to identify roots. As computational power has grown significantly in recent years, so too have iterative ways to identify roots which take advantage of computational resources and procedures. We will discuss several iterative techniques to identify roots of equations, often using quadratic functions to test them: since we already have an exact analytical way to solve these problems, they make a good test of how accurate and efficient our numerical approaches are.

As always, we’ll make use of some clever Python tools for plotting and maths, so our first step is to load in the libraries that store these tools. (Note that the termcolor package, used to change the font colour of some output, doesn’t appear to work in Colab. This doesn’t really matter, as it is only used to highlight some specific values.)

So far, we’ve effectively only used Python as a simple calculator. Python (and other languages) are far more powerful: we can, for example, check the derivative for convergence/divergence each time an iteration occurs. We can also build in a test of the difference between the latest iterate and previous values, in order not to perform the calculation too many times.
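Such a loop might be sketched as follows. The iteration function, starting value and tolerance here are illustrative assumptions, not the lecture's functions.

```python
import math

def iterate(g, x0, tol=1e-8, max_steps=100):
    """Repeat x = g(x), stopping when successive values differ by less
    than tol, or when max_steps is reached (a guard against divergence)."""
    x = x0
    for step in range(1, max_steps + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:   # converged to within tolerance
            return x_new, step
        x = x_new
    return x, max_steps            # gave up: possibly divergent

# Illustrative example (an assumption): x = cos(x) converges to the
# fixed point x ~ 0.739085 from almost any starting value
root, steps = iterate(math.cos, 1.0)
```

Capping the step count means a divergent rearrangement simply returns after `max_steps` rather than looping forever.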
One could, in principle, combine all of these steps together into a single routine.

Newton-Raphson (also sometimes known as Newton’s method) is an iterative technique which rapidly approximates the roots of a real-valued function. It is fast and efficient, yet relatively easy to understand, and is in common use in many different practical applications. The function we want to work with, \(f(x)\), must be smooth and continuous; let us define \(x_r\) as an unknown root of \(f(x)\). We could make an initial guess, \(x_0\), for the value of \(x_r\). Unless \(x_0\) is a very lucky guess, \(f(x_0)\) will not be a root. If we don’t find the root, we want our next guess, \(x_1\), to be an improvement on \(x_0\) (i.e., closer to \(x_r\) than \(x_0\)). If we assume that \(x_0\) is “close enough” to \(x_r\), then we can improve upon it by taking the linear approximation of \(f(x)\) around \(x_0\), which is a line, and finding the intersection of this line with the x-axis. Written out, the linear approximation of \(f(x)\) around \(x_0\) is \(f(x) \approx f(x_0) + f^{\prime}(x_0)(x-x_0)\).
Using this approximation, we find \(x_1\) such that \(f(x_1) = 0\). Plugging these values into the linear approximation results in the equation
\begin{equation}0 = f(x_0) + f^{\prime}(x_0)\left(x_1-x_0\right),\end{equation}
which when solved for \(x_1\) is
\begin{equation}x_1 = x_0 - \frac{f(x_0)}{f^{\prime}(x_0)}.\end{equation}
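A sketch of this iteration in Python follows; the test function \(f(x)=x^2-2\), its derivative and the starting guess are illustrative assumptions.

```python
def newton_raphson(f, fprime, x0, tol=1e-12, max_steps=50):
    """Newton-Raphson: repeatedly apply x_i = x_{i-1} - f(x_{i-1})/f'(x_{i-1})
    until |f(x)| falls below tol, or max_steps is exhausted."""
    x = x0
    for _ in range(max_steps):
        fx = f(x)
        if abs(fx) < tol:          # |f(x)| small enough: accept x as the root
            return x
        x = x - fx / fprime(x)     # one Newton step
    return x

# Illustrative example (an assumption): root of f(x) = x^2 - 2,
# i.e. sqrt(2), starting from x0 = 1
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Note that the method needs the derivative supplied explicitly, and will fail if \(f^{\prime}(x)\) happens to be zero at an iterate.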
An illustration of how this linear approximation improves an initial guess is shown in the following figure. Written generally, a Newton step computes an improved guess, \(x_i\), using a previous guess, \(x_{i-1}\), and is given by the equation
\begin{equation}x_i = x_{i-1} - \frac{f(x_{i-1})}{f^{\prime}(x_{i-1})}.\end{equation}

The next useful mathematical approach that can be applied numerically is numerical integration. Many laws of motion (for example Newton’s Laws) require differentiation or integration in order to extract different information; for example, integrating an acceleration in time yields a velocity. The ability to integrate functions between two points is extremely useful in video games programming.

Given a function \(f(x)\), we want to approximate the integral of \(f(x)\) over the total interval, \([a,b]\). The following figure illustrates the area this generates. To achieve this, we assume that the interval is broken up (“discretised”). Our independent variable, \(x\), will comprise \(N+1\) points with spacing \(h = \dfrac{b - a}{N}\). We denote each point in \(x\) using the subscript \(n\), i.e. \(x_n\) (which is common for this type of algorithm), beginning with \(0\) (\(x_0 = a\)) and ending with \(N\) (\(x_N = b\)). Note: there are \(N+1\) grid points because the count starts at \(x_0\). We also assume we have a function, \(f(x)\), that can be computed for any of the grid points, or that we have been given the function implicitly as \(f(x_n)\). The interval \([x_n, x_{n+1}]\) is referred to as a subinterval.

Several methods exist to approximate the area under the function, i.e. \(\int_a^b f(x)\,dx\). Each method approximates the area under \(f(x)\) for each subinterval by a shape whose exact area is easy to compute, and then sums the area contributions of every subinterval.
In MAT301 we learned about Simpson’s Rule and the Trapezium Rule, which we will detail below. No warning appeared, so we’re good to go. All we need to do is sum the elements that we’ve calculated according to the Simpson’s rule formula. We can be a bit clever here, and let Python decide which numbers are odd and which are even (by picking every second element in cleverly chosen ranges which omit \(f_0\) and \(f_N\)), and add on the remaining elements according to the formula.
4th Order Runge Kutta Algorithm
1. Linear Population Equation¶
Discrete Interval¶
Tailoring our 4th Order Runge Kutta scheme¶
Results and Graph¶
3. Non-Linear Population Equation with an oscillation
Implementing RK4¶
Simple attempt: Euler
This ensures that the error does not accumulate, since the velocity is recalculated from a more accurate displacement at each step.
Example 9.4¶
MAT301 Numerical Methods
Iterative Solutions to Equations¶
Motivation¶
Setting up Libraries¶
Simple Iteration¶
We obtain \(F_1\) when \(x=1\) and \(F_2\) when \(x=3\).
Iteration Using Loops¶
divergent
no. of steps: 1 final x: 3.0
Newton-Raphson Iteration¶
Numerical Integration¶
Motivation¶
Simpson’s Rule¶
The ranges omit \(f_0\) and \(f_N\) (used elsewhere in the formula).
total_simp_area = h / 3.0 * ( f[0] + f[N] + 4.0 * sum(f[1:N:2]) + 2.0 * sum(f[2:N-1:2] ))
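Wrapped in a self-contained sketch, the summation line above looks like this; the integrand \(\sin x\) on \([0,\pi]\) (whose exact integral is 2) is an illustrative assumption.

```python
import numpy as np

a, b = 0.0, np.pi          # integration interval (illustrative assumption)
N = 100                    # number of subintervals; must be even for Simpson
h = (b - a) / N            # grid spacing
x = np.linspace(a, b, N + 1)
f = np.sin(x)              # integrand sampled at the N+1 grid points

# Simpson's rule: odd-index points (f_1, f_3, ...) weighted 4,
# interior even-index points (f_2, f_4, ...) weighted 2
total_simp_area = h / 3.0 * (f[0] + f[N] + 4.0 * sum(f[1:N:2]) + 2.0 * sum(f[2:N-1:2]))
```

The slice `f[1:N:2]` picks out the odd-indexed samples and `f[2:N-1:2]` the interior even-indexed ones, so \(f_0\) and \(f_N\) are only counted once each.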
Symbolic Computation
sin(x)**2 + cos(x)**2
1