fix some typos and typography
UlrikBuchholtz committed Feb 13, 2024
1 parent b9cad0f commit 5f9f040
Showing 11 changed files with 41 additions and 37 deletions.
8 changes: 4 additions & 4 deletions src/cplx-eigenvals.xml
@@ -639,7 +639,7 @@ A-\frac{4+3i}5I_3 =
<p>
Let <m>A</m> be a <m>2\times 2</m> real matrix with a complex (non-real) eigenvalue <m>\lambda</m>, and let <m>v</m> be an eigenvector. Then <m>A = CBC\inv</m> for
<me>
- C = \mat{| |; \Re(v) \Im(v); | |}
+ C = \mat[c]{| |; \Re(v) \Im(v); | |}
\sptxt{and}
B = \mat{\Re(\lambda) \Im(\lambda); -\Im(\lambda) \Re(\lambda)}.
</me>
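The factorization in this hunk is easy to spot-check numerically. A minimal sketch, assuming numpy; the matrix `A` below is an arbitrary illustration (eigenvalues 2 ± i), not an example from the text:

```python
import numpy as np

# Arbitrary 2x2 real matrix with non-real eigenvalues (lambda = 2 +/- i)
A = np.array([[1.0, -2.0],
              [1.0,  3.0]])

lams, vecs = np.linalg.eig(A)
lam, v = lams[0], vecs[:, 0]              # pick one conjugate eigenvalue/eigenvector

C = np.column_stack([v.real, v.imag])     # C = [ Re(v) | Im(v) ]
B = np.array([[ lam.real, lam.imag],
              [-lam.imag, lam.real]])     # rotation-scaling matrix

assert np.allclose(C @ B @ np.linalg.inv(C), A)   # A = C B C^{-1}
```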
@@ -751,7 +751,7 @@ A-\frac{4+3i}5I_3 =
<li>
Then <m>A=CBC\inv</m> for
<me>
- C = \mat{| |; \Re(v) \Im(v); | |}
+ C = \mat[c]{| |; \Re(v) \Im(v); | |}
\sptxt{and}
B = \mat{\Re(\lambda) \Im(\lambda); -\Im(\lambda) \Re(\lambda)}.
</me>
@@ -1058,7 +1058,7 @@ A-\frac{4+3i}5I_3 =
<idx><h>Complex eigenvalue</h><h><m>2\times 2</m> matrices</h><h>different rotation-scaling matrices</h></idx>
We saw in the above examples that the <xref ref="cplx-diagonalization-thm"/> can be applied in two different ways to any given matrix: one has to choose one of the two conjugate eigenvalues to work with. Replacing <m>\lambda</m> by <m>\bar\lambda</m> has the effect of replacing <m>v</m> by <m>\bar v</m>, which just negates all imaginary parts, so we also have <m>A=C'B'(C')\inv</m> for
<me>
- C' = \mat{| |; \Re(v) -\Im(v); | |}
+ C' = \mat[c]{| |; \Re(v) -\Im(v); | |}
\sptxt{and}
B' = \mat{\Re(\lambda) -\Im(\lambda); \Im(\lambda) \Re(\lambda)}.
</me>
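The claim that the conjugate choice yields the same matrix can be checked the same way; a sketch with the same illustrative `A` as above, assuming numpy:

```python
import numpy as np

A = np.array([[1.0, -2.0],                # same illustrative matrix as before
              [1.0,  3.0]])
lams, vecs = np.linalg.eig(A)
lam, v = lams[0], vecs[:, 0]

Cp = np.column_stack([v.real, -v.imag])   # C' = [ Re(v) | -Im(v) ]
Bp = np.array([[lam.real, -lam.imag],
               [lam.imag,  lam.real]])    # B' rotates in the opposite sense

assert np.allclose(Cp @ Bp @ np.linalg.inv(Cp), A)   # A = C'B'(C')^{-1} as well
```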
@@ -1439,7 +1439,7 @@ B =
According to the <xref ref="block-diag-thm"/>, we have <m>A=CBC\inv</m> for
<me>
\begin{split}
- C \amp= \mat{| | |; \Re(v_1) \Im(v_1) v_2; | | |}
+ C \amp= \mat[c]{| | |; \Re(v_1) \Im(v_1) v_2; | | |}
= \mat{-7 -1 2; 2 -9 -1; 5 0 3} \\
B \amp= \mat{\Re(\lambda_1) \Im(\lambda_1) 0;
-\Im(\lambda_1) \Re(\lambda_1) 0; 0 0 2}
2 changes: 1 addition & 1 deletion src/determinant-cofactors.xml
@@ -1148,7 +1148,7 @@ the license is included in gfdl.xml.
</me>
and thus
<me>
- A\inv = \mat{| | ,, |; x_1 x_2 \cdots, x_n; | | ,, |}
+ A\inv = \mat[c]{| | ,, |; x_1 x_2 \cdots, x_n; | | ,, |}
= \frac 1{\det(A)}\mat{C_{11} C_{21} \cdots, C_{n-1,1} C_{n1};
C_{12} C_{22} \cdots, C_{n-1,2} C_{n2};
\vdots, \vdots, \ddots, \vdots, \vdots;
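This hunk is part of the adjugate formula `A^{-1} = adj(A)/det(A)`, whose (i, j) entry is the cofactor `C_{ji}` divided by the determinant. A small sketch of that formula, assuming numpy; the 3×3 matrix is an arbitrary invertible example:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])    # arbitrary invertible 3x3 matrix
n = A.shape[0]

# Cofactor C_ij = (-1)^(i+j) * det(A with row i and column j deleted)
C = np.empty_like(A)
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

A_inv = C.T / np.linalg.det(A)     # adj(A) is the TRANSPOSE of the cofactor matrix
assert np.allclose(A_inv, np.linalg.inv(A))
```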
8 changes: 4 additions & 4 deletions src/determinant-definitions-properties.xml
@@ -876,7 +876,7 @@ the license is included in gfdl.xml.
<me>
AB =
\mat[c]{ \matrow{r_1}; \matrow{r_2}; \vdots ; \matrow{r_m}}
- \mat{| | ,, |; c_1 c_2 \cdots, c_p; | | ,, |}
+ \mat[c]{| | ,, |; c_1 c_2 \cdots, c_p; | | ,, |}
= \mat{ r_1c_1 r_1c_2 \cdots, r_1c_p;
r_2c_1 r_2c_2 \cdots, r_2c_p;
\vdots, \vdots, , \vdots;
@@ -895,7 +895,7 @@ the license is included in gfdl.xml.
\vdots, \vdots, , \vdots;
c_p^Tr_1^T c_p^Tr_2^T \cdots, c_p^Tr_m^T} \\
\amp= \mat[c]{ \matrow{c_1^T}; \matrow{c_2^T}; \vdots ; \matrow{c_p^T}}
- \mat{| | ,, |; r_1^T r_2^T \cdots, r_m^T; | | ,, |}
+ \mat[c]{| | ,, |; r_1^T r_2^T \cdots, r_m^T; | | ,, |}
= B^TA^T.
\end{split}
</me>
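The identity manipulated in this hunk, `(AB)^T = B^T A^T`, admits a quick numerical spot-check; a sketch with random matrices, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))    # m x n
B = rng.standard_normal((4, 2))    # n x p

assert np.allclose((A @ B).T, B.T @ A.T)   # (AB)^T = B^T A^T
```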
@@ -1133,8 +1133,8 @@ the license is included in gfdl.xml.
</me>
By the <xref ref="det-defn-trans-prop">transpose property</xref>, the determinant is also multilinear in the <em>columns</em> of a matrix:
<me>
- \det\mat{| | |; v_1 av+bw v_3; | | |}
- = a\det\mat{| | |; v_1 v v_3; | | |} + b\det\mat{| | |; v_1 w v_3; | | |}.
+ \det\mat[c]{| | |; v_1 av+bw v_3; | | |}
+ = a\det\mat[c]{| | |; v_1 v v_3; | | |} + b\det\mat[c]{| | |; v_1 w v_3; | | |}.
</me>
</p>
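Multilinearity in the columns is likewise easy to spot-check; a sketch with random vectors, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
v1, v3, v, w = (rng.standard_normal(3) for _ in range(4))
a, b = 2.0, -3.0

lhs = np.linalg.det(np.column_stack([v1, a * v + b * w, v3]))
rhs = (a * np.linalg.det(np.column_stack([v1, v, v3]))
       + b * np.linalg.det(np.column_stack([v1, w, v3])))
assert np.isclose(lhs, rhs)        # det is linear in the middle column
```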

4 changes: 2 additions & 2 deletions src/determinant-volume.xml
@@ -749,7 +749,7 @@ $\quad\xrightarrow{\phantom{MMMM}}\quad$

\draw[->, thick] (a) to[bend left]
node[above=2pt] {$T$}
- node[below=4pt] {$\mat{| |; v_1 v_2; | |}$}
+ node[below=4pt] {$\mat[c]{| |; v_1 v_2; | |}$}
($(a) + (2.8cm,0)$);

\begin{scope}[xshift=5cm]
@@ -814,7 +814,7 @@ $\quad\xrightarrow{\phantom{MMMM}}\quad$

\draw[->, thick] (a) to[bend left]
node[above=2pt] {$T$}
- node[below=4pt] {$\mat{| |; v_1 v_2; | |}$}
+ node[below=4pt] {$\mat[c]{| |; v_1 v_2; | |}$}
($(a) + (2.8cm,0)$);

\begin{scope}[xshift=5cm]
18 changes: 9 additions & 9 deletions src/diagonalization.xml
@@ -260,25 +260,25 @@ the license is included in gfdl.xml.
We saw in the above example that changing the order of the eigenvalues and eigenvectors produces a different diagonalization of the same matrix. There are generally many different ways to diagonalize a matrix, corresponding to different orderings of the eigenvalues of that matrix. The important thing is that the eigenvalues and eigenvectors have to be listed in the same order.
<me>
\begin{split}
- A \amp= \mat{| | |; v_1 v_2 v_3; | | |}
+ A \amp= \mat[c]{| | |; v_1 v_2 v_3; | | |}
\mat{\lambda_1 0 0; 0 \lambda_2 0; 0 0 \lambda_3}
- \mat{| | |; v_1 v_2 v_3; | | |}\inv \\
- \amp= \mat{| | |; v_3 v_2 v_1; | | |}
+ \mat[c]{| | |; v_1 v_2 v_3; | | |}\inv \\
+ \amp= \mat[c]{| | |; v_3 v_2 v_1; | | |}
\mat{\lambda_3 0 0; 0 \lambda_2 0; 0 0 \lambda_1}
- \mat{| | |; v_3 v_2 v_1; | | |}\inv.
+ \mat[c]{| | |; v_3 v_2 v_1; | | |}\inv.
\end{split}
</me>
</p>
<p>
There are other ways of finding different diagonalizations of the same matrix. For instance, you can scale one of the eigenvectors by a constant <m>c</m>:
<me>
\begin{split}
- A \amp= \mat{| | |; v_1 v_2 v_3; | | |}
+ A \amp= \mat[c]{| | |; v_1 v_2 v_3; | | |}
\mat{\lambda_1 0 0; 0 \lambda_2 0; 0 0 \lambda_3}
- \mat{| | |; v_1 v_2 v_3; | | |}\inv \\
- \amp= \mat{| | |; cv_1 v_2 v_3; | | |}
+ \mat[c]{| | |; v_1 v_2 v_3; | | |}\inv \\
+ \amp= \mat[c]{| | |; cv_1 v_2 v_3; | | |}
\mat{\lambda_1 0 0; 0 \lambda_2 0; 0 0 \lambda_3}
- \mat{| | |; cv_1 v_2 v_3; | | |}\inv,
+ \mat[c]{| | |; cv_1 v_2 v_3; | | |}\inv,
\end{split}
</me>
you can find a different basis entirely for an eigenspace of dimension at least <m>2</m>, etc.
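Both observations (reordering the eigenvalue/eigenvector pairs and rescaling an eigenvector) can be confirmed numerically; a sketch assuming numpy, with an arbitrary invertible `C` and eigenvalues 1, 2, 3 as illustrative choices:

```python
import numpy as np

# Build a diagonalizable A = C D C^{-1} from arbitrary illustrative choices
C = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
D = np.diag([1.0, 2.0, 3.0])
A = C @ D @ np.linalg.inv(C)

# Reversing the order of eigenvectors and eigenvalues gives the same A ...
Cr, Dr = C[:, ::-1], np.diag([3.0, 2.0, 1.0])
assert np.allclose(Cr @ Dr @ np.linalg.inv(Cr), A)

# ... and so does rescaling an eigenvector (here v1, by c = 5)
Cs = C.copy()
Cs[:, 0] *= 5.0
assert np.allclose(Cs @ D @ np.linalg.inv(Cs), A)
```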
@@ -474,7 +474,7 @@ The eigenvectors <m>v_1,v_2,v_3</m> are linearly independent: <m>v_1,v_2</m> for
<li>
Otherwise, the <m>n</m> vectors <m>v_1,v_2,\ldots,v_n</m> in the eigenspace bases are linearly independent, and <m>A = CDC\inv</m> for
<me>
- C = \mat{| |,, |; v_1 v_2 \cdots, v_n; | | ,, |} \sptxt{and}
+ C = \mat[c]{| |,, |; v_1 v_2 \cdots, v_n; | | ,, |} \sptxt{and}
D = \mat{\lambda_1 0 \cdots, 0;
0 \lambda_2 \cdots, 0;
\vdots, \vdots, \ddots, \vdots;
2 changes: 1 addition & 1 deletion src/dimension.xml
@@ -305,7 +305,7 @@ the license is included in gfdl.xml.
<idx><h>Basis</h><h>of a span</h></idx>
<idx><h>Span</h><h>basis of</h><see>Basis</see></idx>
Computing a basis for a span is the same as computing a basis for a column space. Indeed, the span of finitely many vectors <m>v_1,v_2,\ldots,v_m</m> <em>is</em> the column space of a matrix, namely, the matrix <m>A</m> whose columns are <m>v_1,v_2,\ldots,v_m</m>:
- <me>A = \mat{| | ,, |; v_1 v_2 \cdots, v_m; | | ,, |}.</me>
+ <me>A = \mat[c]{| | ,, |; v_1 v_2 \cdots, v_m; | | ,, |}.</me>
</p>
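In code, this observation says a basis for the span can be read off from the pivot columns of the original matrix; a sketch assuming sympy, with an illustrative matrix (not the example from the text):

```python
import sympy as sp

# Columns are the spanning vectors; their span is the column space of A
A = sp.Matrix([[1, 2, 0, 2],
               [1, 2, 1, 3],
               [0, 0, 1, 1]])
_, pivots = A.rref()                # indices of the pivot columns
basis = [A.col(j) for j in pivots]  # basis taken from the ORIGINAL matrix
print(pivots)                       # (0, 2): columns 1 and 3 form a basis
```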

<example xml:id="dimension-eg-basis-span">
14 changes: 9 additions & 5 deletions src/linindep.xml
@@ -2,6 +2,7 @@

<!--********************************************************************
Copyright 2019 Dan Margalit and Joseph Rabinoff
+ Copyright 2024 Ulrik Buchholtz
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
@@ -20,6 +21,7 @@ the license is included in gfdl.xml.
<li><em>Recipe:</em> test if a set of vectors is linearly independent / find an equation of linear dependence.</li>
<li><em>Picture:</em> whether a set of vectors in <m>\R^2</m> or <m>\R^3</m> is linearly independent or not.</li>
<li><em>Vocabulary:</em> <term>linear dependence relation</term> / <term>equation of linear dependence</term>.</li>
+ <li><em>Theorem:</em> “pivotal” theorem.</li>
<li><em>Essential Vocabulary:</em> <term>linearly independent</term>, <term>linearly dependent</term>.</li>
</ol>
</objectives>
@@ -205,7 +207,7 @@ Let's explain why the vectors <m>(1,1,0)</m> and <m>(-2,0,1)</m> are linearly in
has only the trivial solution, if and only if the matrix equation
<m>Ax=0</m>
has only the trivial solution, where <m>A</m> is the matrix with columns <m>v_1,v_2,\ldots,v_k</m>:
- <me>A = \mat{| |, , |; v_1 v_2 \cdots, v_k; | |, , |}.</me>
+ <me>A = \mat[c]{| |, , |; v_1 v_2 \cdots, v_k; | |, , |}.</me>
This is true if and only if <m>A</m> has a <xref ref="defn-pivot-pos" text="title">pivot position</xref> in every column.
</p>
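A sketch of this pivot criterion, assuming sympy, applied to the vectors (1,1,0) and (-2,0,1) discussed in this section:

```python
import sympy as sp

# Columns are the vectors (1,1,0) and (-2,0,1)
A = sp.Matrix([[1, -2],
               [1,  0],
               [0,  1]])
_, pivots = A.rref()
print(len(pivots) == A.cols)   # True: a pivot in every column, so independent
```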
<p>
@@ -762,7 +764,7 @@ Let's explain why the vectors <m>(1,1,0)</m> and <m>(-2,0,1)</m> are linearly in

<p>The pivot columns are linearly independent, so we cannot delete any more columns without changing the span.</p>

- <p>Every non-pivot column belong to the span of the pivot columns to its left,
+ <p>Every non-pivot column belongs to the span of the pivot columns to its left,
more precisely, it is the linear combination with coefficients given by the non-pivot column
itself in the reduced row echelon form.</p>
</statement>
@@ -771,7 +773,8 @@ Let's explain why the vectors <m>(1,1,0)</m> and <m>(-2,0,1)</m> are linearly in
<p>
If the matrix is in reduced row echelon form:
<me>A = \mat{1 0 2 0; 0 1 3 0; 0 0 0 1}</me>
- then the column without a pivot is visibly in the span of the pivot columns:
+ then the column without a pivot is visibly in the span of the pivot columns,
+ with nonzero coefficients only for the pivot columns to its left:
<me>\vec{2 3 0} = 2\vec{1 0 0} + 3\vec{0 1 0} + 0\vec{0 0 1},</me>
and the pivot columns are linearly independent:
<me>
@@ -805,8 +808,9 @@ Let's explain why the vectors <m>(1,1,0)</m> and <m>(-2,0,1)</m> are linearly in

<p>
As a corollary, we note the uniqueness of the reduced row echelon form,
- already stated in <xref ref="row-reduction-works"/>, by uniqueness of the coefficients
- expressing a non--pivot column as a linear combination of pivot columns to its left.</p>
+ already stated in this <xref ref="row-reduction-works"/>, by uniqueness of the coefficients
+ expressing a non-pivot column as a linear combination of pivot columns to its left;
+ see this <xref ref="dimension-unique-coeff"/>.</p>

<p>
Note that it is necessary to row reduce <m>A</m> to find which are its <xref ref="defn-pivot-pos" text="title">pivot columns</xref>. However, the span of the columns of the row reduced matrix is generally <em>not</em> equal to the span of the columns of <m>A</m>: one must use the pivot columns of the <em>original</em> matrix. See <xref ref="dimension-basis-colspace"/> for a restatement of the above theorem.
12 changes: 6 additions & 6 deletions src/matrix-mult.xml
@@ -336,9 +336,9 @@ the license is included in gfdl.xml.
<statement>
<p>
Let <m>A</m> be an <m>m\times n</m> matrix and let <m>B</m> be an <m>n\times p</m> matrix. Denote the columns of <m>B</m> by <m>v_1,v_2,\ldots,v_p</m>:
- <me>B = \mat{| | ,, |; v_1 v_2 \cdots, v_p; | | ,, |}.</me>
+ <me>B = \mat[c]{| | ,, |; v_1 v_2 \cdots, v_p; | | ,, |}.</me>
The <term>product</term> <m>AB</m> is the <m>m\times p</m> matrix with columns <m>Av_1,Av_2,\ldots,Av_p</m>:
- <me>AB = \mat{| | ,, |; Av_1 Av_2 \cdots, Av_p; | | ,, |}.</me>
+ <me>AB = \mat[c]{| | ,, |; Av_1 Av_2 \cdots, Av_p; | | ,, |}.</me>
</p>
</statement>
</definition>
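A numerical illustration of this definition, and of the row-column rule derived from it below, with random matrices (a sketch assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))    # m x n
B = rng.standard_normal((4, 2))    # n x p
AB = A @ B                         # m x p

for j in range(B.shape[1]):        # column j of AB is A times column j of B
    assert np.allclose(AB[:, j], A @ B[:, j])

i, j = 1, 0                        # entry (i,j) is (row i of A) dot (column j of B)
assert np.isclose(AB[i, j], A[i, :] @ B[:, j])
```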
@@ -475,15 +475,15 @@ the license is included in gfdl.xml.
= \vec{r_1x r_2x \vdots, r_mx}.
</me>
The <xref ref="matrix-mult-defn-of"/> of matrix multiplication is
- <me>A\mat{| | ,, |; c_1 c_2 \cdots, c_p; | | ,, |} =
- \mat{| | ,, |; Ac_1 Ac_2 \cdots, Ac_p; | | ,, |}.</me>
+ <me>A\mat[c]{| | ,, |; c_1 c_2 \cdots, c_p; | | ,, |} =
+ \mat[c]{| | ,, |; Ac_1 Ac_2 \cdots, Ac_p; | | ,, |}.</me>
It follows that
<me>
\mat[c]{ \matrow{r_1};
\matrow{r_2};
\vdots ;
\matrow{r_m}}
- \mat{| | ,, |; c_1 c_2 \cdots, c_p; | | ,, |}
+ \mat[c]{| | ,, |; c_1 c_2 \cdots, c_p; | | ,, |}
= \mat{ r_1c_1 r_1c_2 \cdots, r_1c_p;
r_2c_1 r_2c_2 \cdots, r_2c_p;
\vdots, \vdots, , \vdots;
@@ -756,7 +756,7 @@ $\xrightarrow{\text{reflect $xy$}}$
Since <m>e_3</m> is perpendicular to the <m>xy</m>-plane, reflecting over the <m>xy</m>-plane takes <m>e_3</m> to its negative:
<me>U(e_3) = -e_3 = \vec{0 0 -1}.</me>
We have computed all of the columns of <m>B</m>:
- <me>B = \mat{| | |; U(e_1) U(e_2) U(e_3); | | |}
+ <me>B = \mat[c]{| | |; U(e_1) U(e_2) U(e_3); | | |}
= \mat{1 0 0; 0 1 0; 0 0 -1}.</me>
By a similar method, we find
<me>A = \mat{0 0 0; 0 1 0; 0 0 1}.</me>
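The method of this example, reading off the columns of the standard matrix as the images of e1, e2, e3, translates directly into code; a sketch assuming numpy, with `U` written out as the reflection over the xy-plane:

```python
import numpy as np

def U(x):                          # reflection over the xy-plane
    return np.array([x[0], x[1], -x[2]])

e = np.eye(3)                      # columns are e1, e2, e3
B = np.column_stack([U(e[:, j]) for j in range(3)])
assert np.allclose(B, np.diag([1.0, 1.0, -1.0]))   # matches the matrix above
```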
2 changes: 1 addition & 1 deletion src/matrix-trans.xml
@@ -763,7 +763,7 @@ the license is included in gfdl.xml.
<p>
Suppose that <m>A</m> has columns <m>v_1,v_2,\ldots,v_n</m>. If we multiply <m>A</m> by a general vector <m>x</m>, we get
<me>
- Ax = \mat{| | ,, |; v_1 v_2 \cdots, v_n; | | ,, |}\vec{x_1 x_2 \vdots, x_n}
+ Ax = \mat[c]{| | ,, |; v_1 v_2 \cdots, v_n; | | ,, |}\vec{x_1 x_2 \vdots, x_n}
= x_1v_1 + x_2v_2 + \cdots + x_nv_n.
</me>
This is just a general linear combination of <m>v_1,v_2,\ldots,v_n</m>. Therefore, the outputs of <m>T(x) = Ax</m> are exactly the linear combinations of the columns of <m>A</m>: the <em>range</em> of <m>T</m> is the column space of <m>A</m>. See this <xref ref="matrixeq-spans-consistency"/>.
4 changes: 2 additions & 2 deletions src/matrixeq.xml
@@ -46,9 +46,9 @@ the license is included in gfdl.xml.
<idx><h>Vector</h><h>product with matrix</h><see>Matrix-vector product</see></idx>
<statement>
<p>Let <m>A</m> be an <m>m\times n</m> matrix with columns <m>v_1,v_2,\ldots,v_n</m>:
- <me>A = \mat{| | ,{}, |; v_1 v_2 \cdots, v_n;| | ,{}, | }</me>
+ <me>A = \mat[c]{| | ,{}, |; v_1 v_2 \cdots, v_n;| | ,{}, | }</me>
The <term>product</term> of <m>A</m> with a vector <m>x</m> in <m>\R^n</m> is the linear combination
- <me>Ax = \mat{| | ,{}, |; v_1 v_2 \cdots, v_n;| | ,{}, | }
+ <me>Ax = \mat[c]{| | ,{}, |; v_1 v_2 \cdots, v_n;| | ,{}, | }
\vec{x_1 x_2 \vdots, x_n} = x_1v_1 + x_2v_2 + \cdots + x_nv_n.</me>
This is a vector in <m>\R^m</m>.
</p>
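A sketch of this definition in numpy; the matrix and vector below are arbitrary illustrations:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])    # columns v1, v2, v3
x = np.array([2.0, -1.0, 4.0])

combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))  # x1*v1 + x2*v2 + x3*v3
assert np.allclose(A @ x, combo)
```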
4 changes: 2 additions & 2 deletions src/projections.xml
@@ -942,15 +942,15 @@ So, for example, if <m>x=(1,0,0)</m>, this formula tells us that <m>x_W = (2,1,-
<p>
As we saw in this <xref ref="projections-onto-plane4"/>, if you are willing to compute bases for <m>W</m> and <m>W^\perp</m>, then this provides a third way of finding the standard matrix <m>B</m> for projection onto <m>W</m>: indeed, if <m>\{v_1,v_2,\ldots,v_m\}</m> is a basis for <m>W</m> and <m>\{v_{m+1},v_{m+2},\ldots,v_n\}</m> is a basis for <m>W^\perp</m>, then
<me>
- B = \mat{| | ,, |; v_1 v_1 \cdots, v_n; | | ,, |}
+ B = \mat[c]{| | ,, |; v_1 v_1 \cdots, v_n; | | ,, |}
\mat{
1 \cdots, 0 0 \cdots, 0;
\vdots, \ddots, \vdots, \vdots, \ddots, \vdots;
0 \cdots, 1 0 \cdots, 0;
0 \cdots, 0 0 \cdots, 0;
\vdots, \ddots, \vdots, \vdots, \ddots, \vdots;
0 \cdots, 0 0 \cdots, 0}
- \mat{| | ,, |; v_1 v_1 \cdots, v_n; | | ,, |}\inv,
+ \mat[c]{| | ,, |; v_1 v_1 \cdots, v_n; | | ,, |}\inv,
</me>
where the middle matrix in the product is the diagonal matrix with <m>m</m> ones and <m>n-m</m> zeros on the diagonal. However, since you already have a basis for <m>W</m>, it is faster to multiply out the expression <m>A(A^TA)\inv A^T</m> as in the <xref ref="projections-ATA-formula2"/>.
</p>
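The two routes to the projection matrix `B` can be compared numerically; a sketch assuming numpy, with an arbitrary illustrative plane `W` in R^3 (not an example from the text):

```python
import numpy as np

# W = span{v1, v2} in R^3; v3 = v1 x v2 spans the orthogonal complement
v1, v2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
v3 = np.cross(v1, v2)

C = np.column_stack([v1, v2, v3])
E = np.diag([1.0, 1.0, 0.0])              # m ones, n - m zeros on the diagonal
B_basis = C @ E @ np.linalg.inv(C)        # projection via the combined basis

A = np.column_stack([v1, v2])
B_ata = A @ np.linalg.inv(A.T @ A) @ A.T  # the A(A^T A)^{-1} A^T formula
assert np.allclose(B_basis, B_ata)
```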