Singular Value Decomposition

For any matrix A ∈ ℝ^(m×n) there exist orthogonal matrices U ∈ ℝ^(m×m) and V ∈ ℝ^(n×n) and a 'diagonal' matrix Σ ∈ ℝ^(m×n) with diagonal entries
σ_1 ≥ ⋯ ≥ σ_r > σ_{r+1} = ⋯ = σ_{min(m,n)} = 0
such that A = UΣV^T.

The decomposition A = UΣV^T is called the Singular Value Decomposition (SVD). It is a fundamentally important factorization: it tells us a lot about the structure of A and appears in many linear algebra applications.
The diagonal entries σ_i of Σ are called the singular values of A. The columns of U are called the left singular vectors and the columns of V the right singular vectors.
Using the orthogonality of V we can rewrite the decomposition as AV = UΣ. We can interpret this as follows: there exists a special orthonormal set of vectors (the columns of V) that is mapped by the matrix A onto scalar multiples of another orthonormal set (the columns of U), i.e. Av_i = σ_i u_i.

In core.matrix, the linear/svd function performs SVD and returns a map with the keys :U, :S and :V*, where :U and :V* are the orthogonal matrices containing the left and right singular vectors (:V* is the transpose of V, i.e. V^T) and :S is a vector of the singular values of A, generally in decreasing order.
If you are only interested in some of these results and would like to avoid wasting space and time, the API lets you specify which parts to return by passing a :return vector in the options map.

Intended usage:

(require '[clojure.core.matrix.linear :refer [svd]])

(let [{:keys [U S V*]} (svd M)] ....)
; => the returned map contains all three results: U, S and V*
(let [{:keys [S]} (svd M {:return [:S]})] ....)
; => the returned map contains just S
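
As a concrete sketch of the API (the matrix A below, the namespace aliases and the choice of a backend that implements SVD, such as vectorz-clj, are illustrative assumptions), the returned factors can be multiplied back together to recover the original matrix:

(require '[clojure.core.matrix :as m]
         '[clojure.core.matrix.linear :as lin])

;; assumes a backend that implements SVD, e.g. vectorz-clj
(def A (m/matrix [[3 1]
                  [1 3]]))

(let [{:keys [U S V*]} (lin/svd A)]
  ;; V* is already V^T, so A = U * diag(S) * V*
  (m/mmul (m/mmul U (m/diagonal-matrix S)) V*))
; => approximately [[3 1] [1 3]]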

Given an SVD A = UΣV^T with σ_1 ≥ ⋯ ≥ σ_r > σ_{r+1} = ⋯ = σ_{min(m,n)} = 0, if we denote the columns of U by u_1, ..., u_m and the columns of V by v_1, ..., v_n, we have (here R(·) denotes the range, i.e. column space, and N(·) the null space):

  • rank(A) = r
  • R(A) = R([u_1, ..., u_r])
  • N(A) = R([v_{r+1}, ..., v_n])
  • R(A^T) = R([v_1, ..., v_r])
  • N(A^T) = R([u_{r+1}, ..., u_m])
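
In floating point arithmetic, rank(A) is usually estimated by counting the singular values that lie above a small tolerance. A minimal sketch (the name rank-via-svd and the tolerance eps are hypothetical, continuing the aliases from the example above):

(defn rank-via-svd
  "Estimates rank(A) as the number of singular values larger than eps."
  [A eps]
  (let [{:keys [S]} (lin/svd A {:return [:S]})]
    (count (filter #(> % eps) (m/eseq S)))))

(rank-via-svd A 1e-10)
; => 2 for the example matrix above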

Also, if U_r = [u_1, ..., u_r], Σ_r = diag(σ_1, ..., σ_r) and V_r = [v_1, ..., v_r], then
A = U_r Σ_r V_r^T = Σ_{i=1}^{r} σ_i u_i v_i^T.
This is called the dyadic decomposition of A; it decomposes the matrix A of rank r into a sum of r matrices of rank 1. A sketch of this expansion in code follows below.
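
Continuing the illustrative bindings above (get-column, get-row, outer-product, scale and add are core.matrix functions), the rank-1 expansion can be written out as:

(let [{:keys [U S V*]} (lin/svd A)
      rank-1-terms (map (fn [i]
                          ;; sigma_i * u_i * v_i^T as a scaled outer product
                          (m/scale (m/mget S i)
                                   (m/outer-product (m/get-column U i)
                                                    (m/get-row V* i))))
                        (range (m/ecount S)))]
  (reduce m/add rank-1-terms))
; => approximately A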

The Frobenius norm of a matrix can be computed easily from the SVD:
||A||_F = sqrt(Σ_{i=1}^{m} Σ_{j=1}^{n} a_ij^2) = sqrt(σ_1^2 + ⋯ + σ_p^2), where p = min(m, n)
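
A quick numerical check of this identity, continuing the bindings above (mul is element-wise multiplication and esum sums all elements):

(let [{:keys [S]} (lin/svd A {:return [:S]})]
  [(Math/sqrt (m/esum (m/mul A A)))   ; sqrt of the sum of squared entries of A
   (Math/sqrt (m/esum (m/mul S S)))]) ; sqrt of the sum of squared singular values
; => the two numbers agree up to rounding error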

From the SVD of A it also follows that A^T A = V Σ^T Σ V^T and A A^T = U Σ Σ^T U^T.
Thus σ_i^2, i = 1, ..., p are the eigenvalues of the symmetric matrices A^T A and A A^T, and the v_i and u_i are the corresponding eigenvectors.
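
This too can be verified numerically. Recalling that V = (V*)^T, a sketch continuing the bindings above compares A^T A with V diag(σ_i^2) V^T:

(let [{:keys [S V*]} (lin/svd A)
      sigma-sq (m/diagonal-matrix (mapv #(* % %) (m/eseq S)))]
  [(m/mmul (m/transpose A) A)                          ; A^T A computed directly
   (m/mmul (m/mmul (m/transpose V*) sigma-sq) V*)])    ; V diag(sigma_i^2) V^T
; => the two matrices agree up to rounding error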

Another important application of SVD is solving the linear least squares problem. Given A ∈ ℝ^(m×n) and b ∈ ℝ^m, with m ≥ n ≥ 1, the problem of finding an x ∈ ℝ^n that minimizes ||Ax - b||_2 is called the least squares problem.
A minimizing vector x is called a least squares solution of Ax = b.
Let A = UΣV^T be the SVD of A ∈ ℝ^(m×n). Then
||Ax - b||_2^2 = ||AVV^Tx - b||_2^2                      (VV^T = I)
               = ||U^T(AVV^Tx - b)||_2^2                 (multiplication by the orthogonal matrix U^T preserves the 2-norm)
               = ||ΣV^Tx - U^Tb||_2^2                    (A = UΣV^T)
Writing z = V^Tx, this becomes
               = Σ_{i=1}^{r} (σ_i z_i - u_i^T b)^2 + Σ_{i=r+1}^{m} (u_i^T b)^2

Thus we need to find the x for which
                Σ_{i=1}^{r} (σ_i z_i - u_i^T b)^2 + Σ_{i=r+1}^{m} (u_i^T b)^2
is minimal. Therefore,
                z_i = u_i^T b / σ_i,  i = 1, ..., r, and
                z_i = arbitrary,      i = r+1, ..., n.
As a result, the minimum value is
                ||Ax - b||_2^2 = Σ_{i=r+1}^{m} (u_i^T b)^2

Since z = V^Tx and V is orthogonal,
                ||x||_2 = ||VV^Tx||_2 = ||V^Tx||_2 = ||z||_2

The minimum norm solution of the linear least squares problem is therefore obtained by setting the arbitrary components to zero, z_i = 0 for i = r+1, ..., n, and taking
                x = Vz.
Written out, the minimum norm solution is
                x = Σ_{i=1}^{r} (u_i^T b / σ_i) v_i
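
A sketch of this formula in core.matrix (the function name least-squares-via-svd and the tolerance eps are hypothetical; it continues the aliases introduced earlier and simply sums the terms (u_i^T b / σ_i) v_i over the singular values above eps):

(defn least-squares-via-svd
  "Minimum norm least squares solution x = sum_i (u_i^T b / sigma_i) v_i,
  taking only singular values larger than eps."
  [A b eps]
  (let [{:keys [U S V*]} (lin/svd A)
        r (count (filter #(> % eps) (m/eseq S)))
        terms (map (fn [i]
                     (m/scale (/ (m/dot (m/get-column U i) b)
                                 (m/mget S i))
                              (m/get-row V* i)))       ; v_i is the i-th row of V*
                   (range r))]
    (reduce m/add (m/zero-vector (m/column-count A)) terms)))

(least-squares-via-svd A [1 2] 1e-10)
; => the least squares solution of Ax = b for b = [1 2]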