<section id="iterative-krylov-methods-for-ax-b">
<h1><span class="section-number">6. </span>Iterative Krylov methods for <span class="math notranslate nohighlight">\(Ax=b\)</span><a class="headerlink" href="#iterative-krylov-methods-for-ax-b" title="Permalink to this headline">¶</a></h1>
<div class="admonition hint">
<p class="admonition-title">Hint</p>
<p>A video recording for this material is available <a class="reference external" href="https://player.vimeo.com/video/454126320">here</a>.</p>
</div>
<p>In the previous section we saw how iterative methods are necessary
(but can also be fast) for eigenvalue problems <span class="math notranslate nohighlight">\(Ax=\lambda x\)</span>.
Iterative methods can also be useful for solving linear systems
<span class="math notranslate nohighlight">\(Ax=b\)</span>, generating a sequence of vectors <span class="math notranslate nohighlight">\(x^k\)</span> that converge to the
<h2><span class="section-number">6.1. </span>Krylov subspace methods<a class="headerlink" href="#krylov-subspace-methods" title="Permalink to this headline">¶</a></h2>
<div class="admonition hint">
<p class="admonition-title">Hint</p>
<p>A video recording for this material is available <a class="reference external" href="https://player.vimeo.com/video/454126582">here</a>.</p>
</div>
<p>In this section we will introduce Krylov subspace methods for solving
<p>then we would get <span class="math notranslate nohighlight">\(Q=Q_n\)</span>. Importantly, in the Arnoldi iteration, we
never form <span class="math notranslate nohighlight">\(K_n\)</span> or <span class="math notranslate nohighlight">\(R_n\)</span> explicitly, since these are very
ill-conditioned and not useful numerically.</p>
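<p>As an illustration, the Arnoldi iteration can be sketched in a few lines of NumPy. This is only a sketch (the function name and interface are illustrative, not taken from the notes): the orthonormal Krylov basis is built one column at a time by modified Gram–Schmidt, and <span class="math notranslate nohighlight">\(K_n\)</span> itself never appears.</p>

```python
import numpy as np

def arnoldi(A, b, n):
    """Illustrative Arnoldi iteration: build an orthonormal basis Q of
    span(b, Ab, ..., A^(n-1) b) and the (n+1) x n upper Hessenberg
    matrix H satisfying A @ Q[:, :n] == Q @ H."""
    m = A.shape[0]
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(n):
        v = A @ Q[:, k]                 # next Krylov direction
        for j in range(k + 1):          # modified Gram-Schmidt sweep
            H[j, k] = Q[:, j] @ v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        Q[:, k + 1] = v / H[k + 1, k]
    return Q, H
```

<p>Note that only matrix-vector products with <span class="math notranslate nohighlight">\(A\)</span> are needed, which is what makes the method attractive for large sparse matrices.</p>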
<div class="admonition hint">
<p class="admonition-title">Hint</p>
<p>A video recording for this material is available <a class="reference external" href="https://player.vimeo.com/video/454136990">here</a>.</p>
</div>
<p>But what is the use of the <span class="math notranslate nohighlight">\(\tilde{H}_n\)</span> matrix? Applying
<p>where <span class="math notranslate nohighlight">\(H_n\)</span> is the <span class="math notranslate nohighlight">\(n\times n\)</span> top left-hand corner of <span class="math notranslate nohighlight">\(H\)</span>.</p>
<div class="admonition hint">
<p class="admonition-title">Hint</p>
<p>A video recording for this material is available <a class="reference external" href="https://player.vimeo.com/video/454171516">here</a>.</p>
</div>
<p>The interpretation of this is that <span class="math notranslate nohighlight">\(H_n\)</span> is the orthogonal projection
of <span class="math notranslate nohighlight">\(A\)</span> onto the Krylov subspace <span class="math notranslate nohighlight">\(\mathrm{span}(K_n)\)</span>. To see this, take any vector <span class="math notranslate nohighlight">\(v\)</span>,
and project <span class="math notranslate nohighlight">\(Av\)</span> onto the Krylov subspace <span class="math notranslate nohighlight">\(\mathrm{span}(K_n)\)</span>.</p>
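<p>This projection property is easy to check numerically. The short script below (the sizes and the random test matrix are arbitrary choices for illustration, not from the notes) runs an inline Arnoldi loop and verifies that the top <span class="math notranslate nohighlight">\(n\times n\)</span> block of <span class="math notranslate nohighlight">\(H\)</span> agrees with <span class="math notranslate nohighlight">\(Q_n^TAQ_n\)</span>.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 5
A = rng.standard_normal((m, m))
b = rng.standard_normal(m)

# Compact Arnoldi loop: columns of Q are an orthonormal basis of K_n.
Q = np.zeros((m, n + 1))
H = np.zeros((n + 1, n))
Q[:, 0] = b / np.linalg.norm(b)
for k in range(n):
    v = A @ Q[:, k]
    for j in range(k + 1):
        H[j, k] = Q[:, j] @ v
        v = v - H[j, k] * Q[:, j]
    H[k + 1, k] = np.linalg.norm(v)
    Q[:, k + 1] = v / H[k + 1, k]

# H_n (top n x n block of H) equals Q_n^T A Q_n: the orthogonal
# projection of A onto the Krylov subspace, written in the basis Q_n.
Qn = Q[:, :n]
assert np.allclose(H[:n, :], Qn.T @ A @ Qn)
```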
<p>Finding <span class="math notranslate nohighlight">\(y\)</span> to minimise <span class="math notranslate nohighlight">\(\mathcal{R}_n\)</span> requires the solution of a
least squares problem, which can be computed via QR factorisation
as we saw much earlier in the course.</p>
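<p>As a reminder of that earlier material, a least squares problem <span class="math notranslate nohighlight">\(\min_y\|Ay-b\|_2\)</span> can be solved from a reduced QR factorisation <span class="math notranslate nohighlight">\(A=QR\)</span> by solving <span class="math notranslate nohighlight">\(Ry=Q^Tb\)</span>. The sketch below uses NumPy's built-in QR for illustration rather than the course's own implementation.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 4))    # tall matrix, full column rank
b = rng.standard_normal(10)

Q, R = np.linalg.qr(A)              # reduced QR: Q is 10x4, R is 4x4
# R is upper triangular, so in practice this is a back-substitution.
y = np.linalg.solve(R, Q.T @ b)
```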
<div class="admonition hint">
<p class="admonition-title">Hint</p>
<p>A video recording for this material is available <a class="reference external" href="https://player.vimeo.com/video/454171921">here</a>.</p>
</div>
<p>We are now in a position to present the GMRES algorithm as pseudo-code.</p>
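<p>For orientation, one possible NumPy realisation of such an algorithm is sketched below. This is an illustrative translation, not the notes' own pseudo-code; in particular it calls NumPy's least squares routine, whereas the exercise asks you to use your own least squares code.</p>

```python
import numpy as np

def gmres(A, b, n):
    """Illustrative GMRES sketch: after n Arnoldi steps,
    x_n = Q_n y where y minimises ||beta e_1 - Htilde_n y||_2."""
    m = A.shape[0]
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    k_used = n
    for k in range(n):
        v = A @ Q[:, k]
        for j in range(k + 1):          # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] < 1e-12 * beta:  # "happy breakdown": exact solution
            k_used = k + 1
            break
        Q[:, k + 1] = v / H[k + 1, k]
    Ht = H[:k_used + 1, :k_used]        # the (k+1) x k Hessenberg block
    e1 = np.zeros(k_used + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(Ht, e1, rcond=None)
    return Q[:, :k_used] @ y
```

<p>Each iteration extends the Arnoldi basis by one vector and then solves a small least squares problem in the Hessenberg matrix, which is what makes the per-iteration cost modest.</p>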
should use your least squares code to solve the least squares
problem. The test script <code class="docutils literal notranslate"><span class="pre">test_exercises10.py</span></code> in the <code class="docutils literal notranslate"><span class="pre">test</span></code>
<h2><span class="section-number">6.4. </span>Convergence of GMRES<a class="headerlink" href="#convergence-of-gmres" title="Permalink to this headline">¶</a></h2>
<div class="admonition hint">
<p class="admonition-title">Hint</p>
<p>A video recording for this material is available <a class="reference external" href="https://player.vimeo.com/video/454198706">here</a>.</p>
</div>
<p>The algorithm presented as pseudocode is the way to implement GMRES