<p><span class="math notranslate nohighlight">\((\ddagger)\)</span> The <a class="reference internal" href="cla_utils.html#cla_utils.exercises10.arnoldi" title="cla_utils.exercises10.arnoldi"><code class="xref py py-func docutils literal notranslate"><span class="pre">cla_utils.exercises10.arnoldi()</span></code></a> function has
been left unimplemented. It should implement the Arnoldi algorithm
using Numpy array operations where possible, and return the <span class="math notranslate nohighlight">\(Q\)</span> and
<span class="math notranslate nohighlight">\(H\)</span> matrices after the requested number of iterations is complete.
What is the minimal number of Python for loops possible?</p>
<p>The test
script <code class="docutils literal notranslate"><span class="pre">test_exercises10.py</span></code> in the <code class="docutils literal notranslate"><span class="pre">test</span></code> directory will test
this function.</p>
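For orientation, the standard Arnoldi iteration can be sketched as below. This is a hypothetical illustration, not the `cla_utils` reference implementation: the projections and the orthogonalisation are done as whole-array Numpy operations, so only the single loop over the iteration count remains.

```python
import numpy as np

def arnoldi_sketch(A, b, k):
    """Arnoldi iteration sketch: returns Q (n x (k+1)) with orthonormal
    columns and upper Hessenberg H ((k+1) x k) satisfying
    A Q[:, :k] = Q H. Assumes no breakdown (H[j+1, j] != 0)."""
    n = b.shape[0]
    Q = np.zeros((n, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):                              # the one loop over iterations
        v = A @ Q[:, j]
        H[:j + 1, j] = Q[:, :j + 1].conj().T @ v    # all inner products at once
        v = v - Q[:, :j + 1] @ H[:j + 1, j]         # orthogonalise in one operation
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H
```

Since both the projection and the subtraction are single matrix-vector products, one Python for loop (over the iteration count) suffices.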
<p><span class="math notranslate nohighlight">\((\ddagger)\)</span> The <a class="reference internal" href="cla_utils.html#cla_utils.exercises10.GMRES" title="cla_utils.exercises10.GMRES"><code class="xref py py-func docutils literal notranslate"><span class="pre">cla_utils.exercises10.GMRES()</span></code></a> function has
been left unimplemented. It should implement the basic GMRES
algorithm above, using one loop over the iteration count. You can
paste code from your <a class="reference internal" href="cla_utils.html#cla_utils.exercises10.arnoldi" title="cla_utils.exercises10.arnoldi"><code class="xref py py-func docutils literal notranslate"><span class="pre">cla_utils.exercises10.arnoldi()</span></code></a>
implementation, and you should use your least squares code to solve
the least squares problem. The test script <code class="docutils literal notranslate"><span class="pre">test_exercises10.py</span></code>
in the <code class="docutils literal notranslate"><span class="pre">test</span></code> directory will test this function.</p>
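A minimal sketch of the basic algorithm is given below, as a hypothetical illustration only. In the exercise you should use your own least squares routine; here <code class="docutils literal notranslate"><span class="pre">np.linalg.lstsq</span></code> stands in for it.

```python
import numpy as np

def gmres_sketch(A, b, maxit, tol=1e-8):
    """Basic (unrestarted, unpreconditioned) GMRES sketch with one Python
    loop over the iteration count. np.linalg.lstsq stands in for the
    least squares solver from the earlier exercises."""
    n = b.shape[0]
    Q = np.zeros((n, maxit + 1))
    H = np.zeros((maxit + 1, maxit))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for k in range(maxit):
        # Arnoldi step: extend the Krylov basis by one vector.
        v = A @ Q[:, k]
        H[:k + 1, k] = Q[:, :k + 1].T @ v
        v = v - Q[:, :k + 1] @ H[:k + 1, k]
        H[k + 1, k] = np.linalg.norm(v)
        Q[:, k + 1] = v / H[k + 1, k]
        # Least squares problem: minimise || beta e1 - H_k y ||.
        rhs = np.zeros(k + 2)
        rhs[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], rhs, rcond=None)
        rnorm = np.linalg.norm(rhs - H[:k + 2, :k + 1] @ y)
        if rnorm < tol * beta:
            break
    return Q[:, :k + 1] @ y
```

The returned iterate is the Krylov basis times the least squares solution; the residual norm of the small least squares problem equals the GMRES residual norm, which gives the stopping test for free.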
<p><span class="math notranslate nohighlight">\((\ddagger)\)</span> The least squares problem in GMRES requires the QR
factorisation of <span class="math notranslate nohighlight">\(H_k\)</span>. It is wasteful to rebuild this from scratch
given that we just computed the QR factorisation of
<span class="math notranslate nohighlight">\(H_{k-1}\)</span>. Modify your code so that it recycles the QR
factorisation, applying just one extra Householder rotation per
GMRES iteration. Don’t forget to check that it still passes the
test.</p>
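In practice this recycling step is often written with 2x2 Givens rotations acting on pairs of rows, which play the role of the rotation described above. The following is a hypothetical sketch of one update, assuming the rotations accumulated so far are stored in the lists `cs`, `sn` and the rotated least squares right-hand side in the list `g` (all names invented for this illustration):

```python
import numpy as np

def givens_update(h, cs, sn, g):
    """One recycling step of the incremental QR of the Hessenberg matrix.
    h is the new column of H (length k+2, where k = len(cs)). The stored
    rotations are applied to h, then one new rotation is computed to zero
    its subdiagonal entry; cs, sn and g are updated in place."""
    k = len(cs)
    for i in range(k):                  # apply the recycled rotations
        t = cs[i] * h[i] + sn[i] * h[i + 1]
        h[i + 1] = -sn[i] * h[i] + cs[i] * h[i + 1]
        h[i] = t
    r = np.hypot(h[k], h[k + 1])        # one new rotation zeroes h[k+1]
    c, s = h[k] / r, h[k + 1] / r
    h[k], h[k + 1] = r, 0.0
    cs.append(c)
    sn.append(s)
    # the same rotation applied to the right-hand side; |g[-1]| is then
    # the current least squares residual norm
    g.append(-s * g[-1])
    g[-2] = c * g[-2]
    return h
```

Each iteration then costs O(k) work for the triangular factor instead of a full refactorisation, and the residual norm falls out of the last entry of `g` with no extra computation.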
</div></div><div class="admonition hint">
<p class="admonition-title">Hint</p>
<p>Don’t get confused by the two Q matrices involved in GMRES! There
<h2><span class="section-number">6.4. </span>Convergence of GMRES<a class="headerlink" href="#convergence-of-gmres" title="Permalink to this headline">¶</a></h2>
well-conditioned, and <span class="math notranslate nohighlight">\(p(x)\)</span> is small for all <span class="math notranslate nohighlight">\(x\in \Lambda(A)\)</span>. This
latter condition is not trivial due to the <span class="math notranslate nohighlight">\(p(0)=1\)</span> requirement. One
way it can happen is if <span class="math notranslate nohighlight">\(A\)</span> has all eigenvalues clustered in a small
number of groups, away from <span class="math notranslate nohighlight">\(0\)</span>. Then we can find a low degree
polynomial that passes through 1 at <span class="math notranslate nohighlight">\(x=0\)</span>, and 0 near each of the
clusters. Then GMRES will essentially converge in a small number of
iterations (equal to the degree of the polynomial). There are problems
if the eigenvalues are scattered over a wide region of the complex
plane: we need a very high degree polynomial to make <span class="math notranslate nohighlight">\(p(x)\)</span> small at
all the eigenvalues and hence we need a very large number of
iterations. Similarly there are problems if eigenvalues are very close
to zero.</p>
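The clustering argument can be made concrete with a small numerical check (the cluster centres 1 and 2 below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Eigenvalues in two tight clusters, near 1 and near 2, away from zero.
lam = np.concatenate([1.0 + 0.01 * rng.standard_normal(50),
                      2.0 + 0.01 * rng.standard_normal(50)])

# Degree-2 polynomial with p(0) = 1 and a root at each cluster centre:
#   p(x) = (1 - x)(2 - x)/2
p = (1 - lam) * (2 - lam) / 2

# max |p(lambda)| is of the order of the cluster radius, so (up to the
# eigenvector conditioning factor) GMRES reduces the residual by this
# factor in just 2 iterations.
print(np.max(np.abs(p)))
```

With only two clusters a degree-2 polynomial already makes the bound tiny; the same matrix with eigenvalues spread uniformly over [1, 2] would need a much higher degree for the same reduction.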
<div class="proof proof-type-exercise" id="id4">
<div class="proof-title">
What do you observe? What is it about the three matrices that
causes this different behaviour?</p>
</div></div></section>
<section id="preconditioned-gmres">
<h2><span class="section-number">6.5. </span>Preconditioned GMRES<a class="headerlink" href="#preconditioned-gmres" title="Permalink to this headline">¶</a></h2>
over the last 30 years. Typically, the matrices that we want to solve
do not have eigenvalues clustered in a small number of groups, and so
GMRES is slow. The solution (and the challenge) is to find a matrix
<span class="math notranslate nohighlight">\(\hat{A}\)</span> such that <span class="math notranslate nohighlight">\(\hat{A}x = y\)</span> is cheap to solve (diagonal, or triangular, or
something else) and such that <span class="math notranslate nohighlight">\(\hat{A}^{-1}A\)</span> <em>does</em> have eigenvalues clustered
in a small number of groups (e.g. <span class="math notranslate nohighlight">\(\hat{A}\)</span> is a good approximation of <span class="math notranslate nohighlight">\(A\)</span>, so
that <span class="math notranslate nohighlight">\(\hat{A}^{-1}A\approx I\)</span> which has eigenvalues all equal to 1). We call
<span class="math notranslate nohighlight">\(\hat{A}\)</span> the preconditioning matrix, and the idea is to apply GMRES to
the (left) preconditioned system</p>
<blockquote>
<div><div class="math notranslate nohighlight">
\[\hat{A}^{-1}Ax = \hat{A}^{-1}b.\]</div>
</div></blockquote>
<p>GMRES on this preconditioned system is equivalent to the following algorithm,
<p>Show that this algorithm is equivalent to GMRES applied to the
preconditioned system.</p>
</div></div><p>The art and science of finding preconditioning matrices <span class="math notranslate nohighlight">\(\hat{A}\)</span> (or
matrix-free procedures for solving <span class="math notranslate nohighlight">\(\hat{A}x=y\)</span>) for specific problems
arising in data science, engineering, physics, biology etc. can
involve ideas from linear algebra, functional analysis, asymptotics,
physics, etc., and represents a major activity in scientific computing
today.</p>
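As one small illustration of the clustering effect, the diagonal ("Jacobi") choice of preconditioner can be checked numerically; the diagonally dominant test matrix below is invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50

# A test matrix whose diagonal is a good approximation to the whole matrix.
A = np.diag(np.linspace(1.0, 50.0, n)) + 0.01 * rng.standard_normal((n, n))

# Diagonal ("Jacobi") preconditioner: solving with diag(A) costs only
# n divisions, so it is certainly cheap.
Ahat_inv = np.diag(1.0 / np.diag(A))

ev_orig = np.linalg.eigvals(A)             # spread over roughly [1, 50]
ev_prec = np.linalg.eigvals(Ahat_inv @ A)  # clustered around 1

print(np.ptp(ev_orig.real))          # large spread: slow GMRES
print(np.max(np.abs(ev_prec - 1)))   # small: eigenvalues cluster near 1
```

The preconditioned spectrum sits in a single tight cluster near 1, which is exactly the situation in which the polynomial argument of the previous section predicts fast GMRES convergence.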
</section>
<section id="knowing-when-to-stop">
<h2><span class="section-number">6.6. </span>Knowing when to stop<a class="headerlink" href="#knowing-when-to-stop" title="Permalink to this headline">¶</a></h2>
<p>We should stop an iterative method when the error is sufficiently small.
But, we don’t have access to the exact solution, so we can’t compute the
which both tend to zero as <span class="math notranslate nohighlight">\({x}^k\to{x}^*\)</span> provided that <span class="math notranslate nohighlight">\(A\)</span> is
invertible.</p></li>
</ul>
<p>How do their sizes relate to the size of <span class="math notranslate nohighlight">\({e}^k={x}^k-{x}^*\)</span>?</p>
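One way to explore this question numerically is through the standard relation between residual and error via the condition number; the matrix and the iterate below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
A = rng.standard_normal((n, n)) + 5 * np.eye(n)  # a well-conditioned test matrix
x_star = rng.standard_normal(n)                  # pretend exact solution
b = A @ x_star

x_k = x_star + 1e-6 * rng.standard_normal(n)     # a pretend iterate
r_k = b - A @ x_k                                # computable residual
e_k = x_k - x_star                               # error (not computable in practice)

# Since r^k = -A e^k, we have e^k = -A^{-1} r^k, and hence
#   ||e^k|| / ||x^*||  <=  cond(A) * ||r^k|| / ||b||,
# i.e. the computable relative residual bounds the relative error,
# up to the condition number of A.
rel_err = np.linalg.norm(e_k) / np.linalg.norm(x_star)
bound = np.linalg.cond(A) * np.linalg.norm(r_k) / np.linalg.norm(b)
print(rel_err <= bound)  # True
```

A small relative residual therefore guarantees a small relative error only when the matrix is well conditioned; for ill-conditioned matrices the residual can be misleadingly small.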