Conjugate Gradient Method

We will introduce the conjugate gradient method in the following. Before we introduce the iterative algorithms, let us look at the direct method, which gives us a better understanding of the iterative algorithms.

Direct Method of Conjugate Gradient

Consider the linear system $Ax = b$, where $A$ is a symmetric positive-definite $n \times n$ matrix. Suppose that

$$\{p_1, p_2, \ldots, p_n\}$$

is a set of conjugate vectors with respect to the matrix $A$, i.e.,

$$p_i^\top A p_j = 0 \quad \text{for all } i \neq j.$$

Let $x^*$ denote the solution. Since the conjugate vectors form a basis of $\mathbb{R}^n$, we can express it as

$$x^* = \sum_{i=1}^{n} \alpha_i p_i.$$

Multiplying both sides on the left by $p_k^\top A$ and using conjugacy, we have

$$p_k^\top A x^* = \sum_{i=1}^{n} \alpha_i\, p_k^\top A p_i = \alpha_k\, p_k^\top A p_k.$$

Since $A x^* = b$, we have

$$\alpha_k = \frac{p_k^\top b}{p_k^\top A p_k},$$

which implies that we can compute $\alpha_k$ without knowing $x^*$.

Substituting $\alpha_k$ into the expression for $x^*$, we have

$$x^* = \sum_{i=1}^{n} \frac{p_i^\top b}{p_i^\top A p_i}\, p_i.$$

Observing the above expression, we find that we do not need to calculate any matrix inversion. Furthermore, the expression can be regarded as an iterative process, wherein the $i$-th term is added at the $i$-th iteration.
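The direct method can be sketched as follows. The 2×2 system and the precomputed conjugate basis below are illustrative choices, not from the text; $p_2$ is obtained by $A$-orthogonalizing the second unit vector against $p_1$.

```python
# Direct method sketch: given an A-conjugate set {p_i}, the solution of
# A x = b is x* = sum_i (p_i^T b / p_i^T A p_i) p_i -- no matrix inversion.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def solve_direct(A, b, ps):
    """Accumulate x* term by term, one conjugate direction per step."""
    x = [0.0] * len(b)
    for p in ps:
        alpha = dot(p, b) / dot(p, matvec(A, p))  # alpha_i = p_i^T b / p_i^T A p_i
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
    return x

# Illustrative SPD system (assumed example, not from the text).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
p1 = [1.0, 0.0]
p2 = [-0.25, 1.0]  # e2 A-orthogonalized against p1, so p1^T A p2 = 0
x_star = solve_direct(A, b, [p1, p2])
print(x_star)  # close to [1/11, 7/11]
```

Note that each direction contributes its coefficient independently, which is exactly why the sum can be evaluated one term per iteration.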

Basic Iterative Method of Conjugate Gradient

As we mentioned above, the direct method is costly when $n$ is large, since a full set of conjugate vectors must be constructed in advance. To avoid this cost, we generate the conjugate vectors dynamically instead of finding them via the direct method.

Let $x_1$ denote the initial guess. The update rule is as follows.

At the $k$-th iteration:

$$p_k = r_k - \sum_{i=1}^{k-1} \frac{p_i^\top A r_k}{p_i^\top A p_i}\, p_i, \qquad \alpha_k = \frac{p_k^\top r_k}{p_k^\top A p_k}, \qquad x_{k+1} = x_k + \alpha_k p_k,$$

where $r_k = b - A x_k$ is the residual at the $k$-th iteration.

Then, after the $n$-th iteration, we have $x_{n+1} = x^*$.

The proof is as follows. First, since the conjugate vectors form a basis, we can express $x^* - x_1$ as

$$x^* - x_1 = \sum_{i=1}^{n} \alpha_i p_i.$$

Multiplying both sides on the left by $p_k^\top A$ and using conjugacy, we have

$$p_k^\top A (x^* - x_1) = \alpha_k\, p_k^\top A p_k.$$

Since $A x^* = b$, we have

$$p_k^\top A (x^* - x_1) = p_k^\top (b - A x_1),$$

and we can express $\alpha_k$ as

$$\alpha_k = \frac{p_k^\top (b - A x_1)}{p_k^\top A p_k}.$$

Also, since $x_k - x_1 = \sum_{i=1}^{k-1} \alpha_i p_i$, we have

$$p_k^\top A (x_k - x_1) = \sum_{i=1}^{k-1} \alpha_i\, p_k^\top A p_i = 0.$$

Therefore, we have

$$p_k^\top (b - A x_1) = p_k^\top (b - A x_k) + p_k^\top A (x_k - x_1) = p_k^\top r_k.$$

Finally, we can express $\alpha_k$ as

$$\alpha_k = \frac{p_k^\top r_k}{p_k^\top A p_k},$$

which is exactly the step size used in the update rule, so $x_{n+1} = x_1 + \sum_{i=1}^{n} \alpha_i p_i = x^*$.
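The basic iterative method can be sketched as follows. Each new direction is the current residual $A$-orthogonalized against all previously generated directions, which is precisely why this variant must store every $p_i$. The 3×3 SPD test system is an illustrative choice, not from the text.

```python
# Basic iterative CG sketch: p_k is built by Gram-Schmidt in the A-inner
# product against *all* previous directions (hence the storage cost).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def basic_cg(A, b, x):
    n = len(b)
    ps = []  # every previous conjugate direction must be kept
    for _ in range(n):
        r = [bi - axi for bi, axi in zip(b, matvec(A, x))]  # r_k = b - A x_k
        Ar = matvec(A, r)
        p = list(r)
        for q in ps:  # A-orthogonalize r_k against p_1, ..., p_{k-1}
            coef = dot(q, Ar) / dot(q, matvec(A, q))
            p = [pi - coef * qi for pi, qi in zip(p, q)]
        alpha = dot(p, r) / dot(p, matvec(A, p))  # alpha_k = p_k^T r_k / p_k^T A p_k
        x = [xi + alpha * pi for xi, pi in zip(x, p)]  # x_{k+1} = x_k + alpha_k p_k
        ps.append(p)
    return x  # after n iterations, x equals the exact solution

# Illustrative SPD system (assumed example).
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = basic_cg(A, b, [0.0, 0.0, 0.0])
residual = [bi - axi for bi, axi in zip(b, matvec(A, x))]
print(residual)  # essentially zero after n = 3 iterations
```

The inner loop over `ps` grows with $k$, which motivates the improved method below that keeps only the most recent direction.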

Improved Iterative Method of Conjugate Gradient

However, the above basic iterative method is still computationally expensive, because generating each new direction requires storing all previous conjugate vectors and orthogonalizing against every one of them. A promising approach to avoid this cost is to generate a new conjugate vector using only the previous one. Specifically, we determine the new conjugate vector by the following formula:

$$p_{k+1} = r_{k+1} + \beta_k p_k,$$

where

$$r_{k+1} = b - A x_{k+1} = r_k - \alpha_k A p_k$$

and

$$\beta_k = \frac{r_{k+1}^\top r_{k+1}}{r_k^\top r_k}.$$

Note that the proof of the above expression for $\beta_k$ is as follows. Imposing the conjugacy condition $p_{k+1}^\top A p_k = 0$ on $p_{k+1} = r_{k+1} + \beta_k p_k$ gives

$$\beta_k = -\frac{r_{k+1}^\top A p_k}{p_k^\top A p_k}.$$

Since $r_{k+1} = r_k - \alpha_k A p_k$, we have

$$A p_k = \frac{r_k - r_{k+1}}{\alpha_k}.$$

Therefore, using $r_{k+1}^\top r_k = 0$, $p_k^\top r_{k+1} = 0$, and $p_k^\top r_k = r_k^\top r_k$, the numerator and denominator can be expressed as

$$r_{k+1}^\top A p_k = \frac{r_{k+1}^\top (r_k - r_{k+1})}{\alpha_k} = -\frac{r_{k+1}^\top r_{k+1}}{\alpha_k}$$

and

$$p_k^\top A p_k = \frac{p_k^\top (r_k - r_{k+1})}{\alpha_k} = \frac{r_k^\top r_k}{\alpha_k},$$

respectively. Therefore, we can express $\beta_k$ as above.

Finally, the algorithm is as follows. Let $x_1$ denote the initial guess, and set $r_1 = b - A x_1$ and $p_1 = r_1$.

At the $k$-th iteration:

$$\alpha_k = \frac{r_k^\top r_k}{p_k^\top A p_k}, \qquad x_{k+1} = x_k + \alpha_k p_k, \qquad r_{k+1} = r_k - \alpha_k A p_k, \qquad \beta_k = \frac{r_{k+1}^\top r_{k+1}}{r_k^\top r_k}, \qquad p_{k+1} = r_{k+1} + \beta_k p_k.$$
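The complete improved algorithm can be sketched as follows. Only the latest residual and direction are kept, so memory is constant per iteration. The test system and the tolerance `tol` are illustrative choices, not from the text.

```python
# Improved CG sketch: one matrix-vector product per iteration, and only the
# most recent r_k and p_k are stored, using beta_k = r_{k+1}^T r_{k+1} / r_k^T r_k.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def conjugate_gradient(A, b, x, tol=1e-12):
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]  # r_1 = b - A x_1
    p = list(r)                                         # p_1 = r_1
    rr = dot(r, r)
    for _ in range(len(b)):
        if rr < tol:  # early exit once the residual is negligible
            break
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)                  # alpha_k = r_k^T r_k / p_k^T A p_k
        x = [xi + alpha * pi for xi, pi in zip(x, p)]      # x_{k+1}
        r = [ri - alpha * api for ri, api in zip(r, Ap)]   # r_{k+1} = r_k - alpha_k A p_k
        rr_new = dot(r, r)
        beta = rr_new / rr                       # beta_k = r_{k+1}^T r_{k+1} / r_k^T r_k
        p = [ri + beta * pi for ri, pi in zip(r, p)]       # p_{k+1} = r_{k+1} + beta_k p_k
        rr = rr_new
    return x

# Illustrative SPD system (assumed example).
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = conjugate_gradient(A, b, [0.0, 0.0, 0.0])
print(x)  # close to [2/9, 1/9, 13/9]
```

Compared with the basic variant, the Gram-Schmidt sum over all previous directions has collapsed into the single `beta * p` term, which is the entire point of the improved method.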