We now introduce the conjugate gradient method for solving the linear system $Ax = b$, where $A \in \mathbb{R}^{n \times n}$ is symmetric positive definite. Before presenting the iterative algorithms, let us look at the direct method, which gives a better understanding of the iterative ones.
Suppose that
$$p_i^\top A p_j = 0 \quad \text{for all } i \neq j,$$
where $\{p_0, p_1, \ldots, p_{n-1}\}$ is a set of conjugate vectors with respect to the matrix $A$.
Let $x_*$ denote the solution. We can express it as
$$x_* = \sum_{i=0}^{n-1} \alpha_i p_i.$$
Therefore, we have
$$A x_* = \sum_{i=0}^{n-1} \alpha_i A p_i = b.$$
Multiplying both sides on the left by $p_k^\top$ and using conjugacy, we have
$$p_k^\top b = \sum_{i=0}^{n-1} \alpha_i \, p_k^\top A p_i = \alpha_k \, p_k^\top A p_k, \quad\text{i.e.,}\quad \alpha_k = \frac{p_k^\top b}{p_k^\top A p_k},$$
which implies that we can get $\alpha_k$ without knowing $x_*$.
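The direct method above can be sketched in a few lines of Python. This is a minimal illustration rather than a production solver: the $2 \times 2$ system, the two conjugate directions, and the helper names (`dot`, `matvec`) are all assumptions made for the example.

```python
# Direct method: solve A x = b using a precomputed set of A-conjugate
# vectors, with no matrix inversion involved.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite (example)
b = [1.0, 2.0]

# Two A-conjugate directions; p2 is built by A-orthogonalizing e2 against p1:
# p2 = e2 - (p1^T A e2 / p1^T A p1) p1, which gives p1^T A p2 = 0.
p1 = [1.0, 0.0]
p2 = [-0.25, 1.0]

# x_* = sum_k alpha_k p_k with alpha_k = p_k^T b / (p_k^T A p_k)
x = [0.0, 0.0]
for p in (p1, p2):
    alpha = dot(p, b) / dot(p, matvec(A, p))
    x = [xi + alpha * pi for xi, pi in zip(x, p)]

print(x)  # the exact solution (1/11, 7/11), up to rounding
```

Note that each direction contributes its coefficient independently, which is exactly why the sum can be accumulated term by term as an iteration.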
Observing the above expression, we find that we do not need to calculate the matrix inverse $A^{-1}$. Furthermore, the expression can be regarded as an iterative process, wherein the $k$-th term $\alpha_k p_k$ is added at the $k$-th iteration.
As mentioned above, the direct method is costly when $n$ is large. To avoid this cost, we generate the conjugate vectors dynamically instead of finding them all in advance.
Let $x_0$ denote the initial guess. The update rule is as follows. At the $k$-th iteration:
$$p_k = r_k - \sum_{i=0}^{k-1} \frac{p_i^\top A r_k}{p_i^\top A p_i}\, p_i, \qquad \alpha_k = \frac{p_k^\top r_k}{p_k^\top A p_k}, \qquad x_{k+1} = x_k + \alpha_k p_k,$$
where $r_k = b - A x_k$ is the residual at the $k$-th iteration.
Then, after the $n$-th iteration, we have $x_n = x_*$.
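The update rule above, where each new direction is obtained by A-orthogonalizing the current residual against all previously stored directions, can be sketched as follows. The small SPD system and helper names are illustrative assumptions.

```python
# Basic iterative method: every step A-orthogonalizes the current residual
# against ALL previous directions, which is why they must all be stored.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def axpy(a, x, y):                           # returns a*x + y
    return [a * xi + yi for xi, yi in zip(x, y)]

def conjugate_directions(A, b, x0):
    n = len(b)
    x = list(x0)
    dirs = []                                # all previous p_i (the costly part)
    for _ in range(n):
        r = axpy(-1.0, matvec(A, x), b)      # r_k = b - A x_k
        p = list(r)
        for q in dirs:                       # p_k = r_k - sum_i (q^T A r_k / q^T A q) q
            p = axpy(-dot(q, matvec(A, r)) / dot(q, matvec(A, q)), q, p)
        alpha = dot(p, r) / dot(p, matvec(A, p))
        x = axpy(alpha, p, x)                # x_{k+1} = x_k + alpha_k p_k
        dirs.append(p)
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_directions(A, b, [0.0, 0.0])
print(x)  # after n = 2 iterations, x matches the exact solution (1/11, 7/11)
```

The growing `dirs` list is the storage cost that the refinement below removes.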
The proof is as follows.
First, we express $x_* - x_0$ as
$$x_* - x_0 = \sum_{i=0}^{n-1} \sigma_i p_i.$$
Therefore, we have
$$p_k^\top A (x_* - x_0) = \sum_{i=0}^{n-1} \sigma_i \, p_k^\top A p_i = \sigma_k \, p_k^\top A p_k.$$
Therefore, we have
$$\sigma_k = \frac{p_k^\top A (x_* - x_0)}{p_k^\top A p_k} = \frac{p_k^\top (b - A x_0)}{p_k^\top A p_k} = \frac{p_k^\top r_0}{p_k^\top A p_k}.$$
Also, we have $x_k - x_0 = \sum_{i=0}^{k-1} \alpha_i p_i$, so $p_k^\top A (x_k - x_0) = 0$ by conjugacy, and hence
$$p_k^\top r_k = p_k^\top (b - A x_k) = p_k^\top (b - A x_0) = p_k^\top r_0.$$
Therefore, we have
$$\alpha_k = \frac{p_k^\top r_k}{p_k^\top A p_k} = \frac{p_k^\top r_0}{p_k^\top A p_k} = \sigma_k,$$
which implies $x_n = x_0 + \sum_{i=0}^{n-1} \alpha_i p_i = x_*$.
However, the above basic iterative method is still computationally expensive, because it has to store all previous conjugate directions. A promising approach to avoid this cost is to generate the new conjugate vector using only the previous one. Specifically, we determine the new conjugate vector by the following formula:
$$p_{k+1} = r_{k+1} + \beta_k p_k,$$
where
$$\beta_k = -\frac{r_{k+1}^\top A p_k}{p_k^\top A p_k}$$
and, in simplified form,
$$\beta_k = \frac{r_{k+1}^\top r_{k+1}}{r_k^\top r_k}.$$
The proof of the simplified expression is as follows. Since $x_{k+1} = x_k + \alpha_k p_k$, we have
$$r_{k+1} = r_k - \alpha_k A p_k, \quad\text{i.e.,}\quad A p_k = \frac{1}{\alpha_k}\,(r_k - r_{k+1}).$$
Therefore, using the orthogonality of successive residuals ($r_{k+1}^\top r_k = 0$) together with $p_k^\top r_{k+1} = 0$ and $p_k^\top r_k = r_k^\top r_k$, the numerator and denominator can be expressed as
$$r_{k+1}^\top A p_k = \frac{1}{\alpha_k}\, r_{k+1}^\top (r_k - r_{k+1}) = -\frac{1}{\alpha_k}\, r_{k+1}^\top r_{k+1}$$
and
$$p_k^\top A p_k = \frac{1}{\alpha_k}\, p_k^\top (r_k - r_{k+1}) = \frac{1}{\alpha_k}\, r_k^\top r_k,$$
respectively. Therefore, we can express $\beta_k$ as above.
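As a quick numerical sanity check (not part of the derivation itself), one step on a small SPD system shows that the Gram-Schmidt form and the residual-norm form of $\beta_k$ coincide; the matrix and helper names are illustrative assumptions.

```python
# Compare the two expressions for beta after a single step p_0 = r_0.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

x = [0.0, 0.0]
r = list(b)                     # r_0 = b - A x_0 = b
p = list(r)                     # p_0 = r_0
Ap = matvec(A, p)
alpha = dot(r, r) / dot(p, Ap)
r_next = [ri - alpha * Api for ri, Api in zip(r, Ap)]  # r_1 = r_0 - alpha_0 A p_0

beta_gs   = -dot(r_next, Ap) / dot(p, Ap)     # -r_1^T A p_0 / p_0^T A p_0
beta_norm = dot(r_next, r_next) / dot(r, r)   #  r_1^T r_1  / r_0^T r_0
print(beta_gs, beta_norm)  # → 0.0625 0.0625
```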
Finally, the algorithm is as follows. Let $x_0$ denote the initial guess, and set
$$r_0 = b - A x_0, \qquad p_0 = r_0.$$
At the $k$-th iteration:
$$\alpha_k = \frac{r_k^\top r_k}{p_k^\top A p_k}, \quad x_{k+1} = x_k + \alpha_k p_k, \quad r_{k+1} = r_k - \alpha_k A p_k, \quad \beta_k = \frac{r_{k+1}^\top r_{k+1}}{r_k^\top r_k}, \quad p_{k+1} = r_{k+1} + \beta_k p_k.$$
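The final algorithm can be sketched in pure Python as below. This is a minimal teaching sketch, keeping only the current residual and direction; a real implementation would use an optimized linear algebra library, and the helper names and test system are assumptions for the example.

```python
# Conjugate gradient: only r_k and p_k are kept between iterations.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    x = list(x0)
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]   # r_0 = b - A x_0
    p = list(r)                                          # p_0 = r_0
    rs = dot(r, r)
    for _ in range(max_iter if max_iter is not None else len(b)):
        if rs ** 0.5 < tol:
            break
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)                          # alpha_k = r_k^T r_k / p_k^T A p_k
        x = [xi + alpha * pi for xi, pi in zip(x, p)]    # x_{k+1} = x_k + alpha_k p_k
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)] # r_{k+1} = r_k - alpha_k A p_k
        rs_next = dot(r, r)
        beta = rs_next / rs                              # beta_k = r_{k+1}^T r_{k+1} / r_k^T r_k
        p = [ri + beta * pi for ri, pi in zip(r, p)]     # p_{k+1} = r_{k+1} + beta_k p_k
        rs = rs_next
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [2.0, 1.0])
print(x)  # converges to the exact solution (1/11, 7/11) in n = 2 steps
```

In exact arithmetic the loop terminates in at most $n$ iterations, which is why `len(b)` is a natural default iteration cap.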