
Numerical Methods for Computational Science and Engineering
Lecture 10, Oct 21, 2013: Linear systems: iterative methods I

Peter Arbenz
Computer Science Department, ETH Zürich
E-mail: arbenz@inf.ethz.ch

Introduction

Topics of today
- Learn simple and effective iterative methods for problems where direct methods are ineffective.
- Analyze the methods to see when and where they can be applied and how effective they are.
- Understand modern algorithms, specifically preconditioned conjugate gradients.

References
- U. Ascher & C. Greif: Numerical Methods. Chapter 7.
- Y. Saad: Iterative Methods for Sparse Linear Systems. 2nd ed. SIAM, 2003.
- Barrett et al.: Templates for the Solution of Linear Systems. SIAM, 1994. Online at http://www.netlib.org/linalg/html_templates/Templates.html.

Why not use a direct method?

Ax = b,   A nonsingular, n × n.

Iterative method: starting from an initial guess x0, generate iterates x1, x2, ..., xk, ..., hopefully converging to the solution x.

But why not simply use LU decomposition, or x = A \ b in Matlab?

The matrix size n must be large, and the matrix A must be somehow special, for iterative methods to be worth considering.

Comparison of direct and iterative linear solvers

Direct solvers
+ Computation is numerically stable in many relevant cases.
+ Can solve economically for several right-hand sides.
+ Accuracy can be improved via 'iterative refinement.'
+ 'Essentially' a black box.
− But: fill-in limits usefulness (memory, flops).

Iterative solvers
+ Only a rough approximation to x may be required.
+ A good x0 approximating x may be known (warm start).
+ The matrix is often needed only implicitly, via matrix-vector (matvec) products.
−− A good preconditioner is often necessary for convergence.
−− Quality often depends on the 'right' choice of parameters, e.g. start vector, basis size, restart (more on that later).
Typical scenarios

Direct solvers
- Inverse iteration
- Determinants
- Many linear systems with the same matrix A
- 'Difficult' applications (e.g. circuit simulation)

Iterative solvers
- Inexact Newton methods
- Many linear systems with 'slightly changing' matrices
- Matrix-free applications (e.g. matvec product via FFT)
- Very large problems

The ubiquitous Poisson problem

−∆u(x) = f(x) in a domain Ω, together with boundary conditions.

- Stationary temperature distribution.
- Electrical potential due to a charge distribution (particles).
- Gravitational potential due to a mass distribution.
Somewhat similar problems

- Linear elasticity problems (forces → displacements → stresses/strains)
- Maxwell equations (electric/magnetic fields → acceleration of particles)

Test problem: Poisson equation on square

Let us consider the Poisson equation on the rectangular domain Ω = [0, 1]²:

−∆u(x, y) = −(∂²/∂x²) u(x, y) − (∂²/∂y²) u(x, y) = f(x, y) in Ω,
u(x, y) = p(x, y) on ∂Ω.

We define a rectangular grid with grid points that are a distance h apart. At each grid point the Laplacian of u can be expanded as

−∆u(x, y) = −(∂²/∂x²) u(x, y) − (∂²/∂y²) u(x, y)
          = (1/h²) (4u(x, y) − u(x + h, y) − u(x − h, y) − u(x, y + h) − u(x, y − h)) + O(h²)
Test problem: Poisson equation on square (cont.)

Let xi = ih, yj = jh, 0 ≤ i, j ≤ N + 1, where (N + 1)h = 1. Then

−∆u(xi, yj) ≈ (1/h²) (4u(xi, yj) − u(xi+1, yj) − u(xi−1, yj) − u(xi, yj+1) − u(xi, yj−1))

for 0 < i, j < N + 1. The discretized equation now becomes

(1/h²) (4u(xi, yj) − u(xi+1, yj) − u(xi−1, yj) − u(xi, yj+1) − u(xi, yj−1)) = f(xi, yj),

or

4u(xi, yj) − u(xi+1, yj) − u(xi−1, yj) − u(xi, yj+1) − u(xi, yj−1) = h² f(xi, yj).
Test problem: Poisson equation on square (cont.)

At the near-boundary points, the prescribed boundary values are plugged in. If, e.g., i = 1 and 1 ≤ j ≤ N, we have

(1/h²) (4u(x1, yj) − u(x2, yj) − p(x0, yj) − u(x1, yj+1) − u(x1, yj−1)) = f(x1, yj).

We arrange the unknowns u(x1, y1), u(x1, y2), ..., u(xN, yN) 'row-wise' in a vector u ∈ Rⁿ, n = N²:

u(xi, yj) ∼ u_(i−1)N+j,   1 ≤ i, j ≤ N.

Similarly for f: f_(i−1)N+j ∼ h² f(xi, yj). With

u = [u1,1, u2,1, ..., uN,1, u1,2, u2,2, ..., uN,2, u1,3, ..., uN,N]ᵀ,
b = [b1,1, b2,1, ..., bN,1, b1,2, b2,2, ..., bN,2, b1,3, ..., bN,N]ᵀ,   bi,j = h² f(xi, yj),

the discretized problem becomes the linear system

Au = b.
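To make this concrete, here is a small Matlab sketch (our own illustration, not from the slides) that assembles A via Kronecker products of the 1D matrix tridiag[−1, 2, −1]; gallery('poisson',N) builds the same matrix:

N = 10;  e = ones(N,1);
T = spdiags([-e 2*e -e], -1:1, N, N);     % 1D tridiag(-1,2,-1)
I = speye(N);
A = kron(I,T) + kron(T,I);                % 2D five-point Laplacian, n = N^2
norm(A - gallery('poisson',N), 'fro')     % should be 0
h = 1/(N+1);
f = @(x,y) ones(size(x));                 % sample right-hand side (our choice)
[Yg,Xg] = ndgrid(h*(1:N));                % Xg(j,i) = x_i, Yg(j,i) = y_j
b = h^2 * reshape(f(Xg,Yg), N^2, 1);      % b index (i-1)*N + j, as above
u = A\b;                                  % direct solve, fine for small N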
Structure of a Poisson matrix

For any size N, the matrix A is diagonally dominant and nonsingular. It is also block Toeplitz.

[Figure: Matlab spy plot of the matrix discretization of the Poisson equation −∆u = f in Ω = (0, 1)², u = 0 on ∂Ω, with finite differences on a 10 × 10 grid (N = 10).]

Eigenvalues of the Poisson matrix

It can be verified directly that the n = N² eigenvalues of A are given by (recall (N + 1)h = 1)

λi,j = 4 − 2 (cos(iπh) + cos(jπh)),   1 ≤ i, j ≤ N,

so that

λmin ≈ 2π²h²,   λmax ≈ 8 − 2π²h² ≈ 8.

Thus λi,j > 0 for all 1 ≤ i, j ≤ N, and we see that the matrix A is also positive definite.

Knowing the eigenvalues explicitly is helpful in understanding performance, convergence, and accuracy issues related to linear solvers.
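A quick Matlab check of the eigenvalue formula and the two extreme values (our own sketch):

N = 10;  h = 1/(N+1);
A = gallery('poisson',N);
[i,j] = ndgrid(1:N);
lam = 4 - 2*(cos(i*pi*h) + cos(j*pi*h));    % eigenvalue formula
norm(sort(lam(:)) - sort(eig(full(A))))     % should be ~ 1e-14
[min(lam(:)), 2*pi^2*h^2]                   % lambda_min vs. 2 pi^2 h^2
[max(lam(:)), 8 - 2*pi^2*h^2]               % lambda_max vs. 8 - 2 pi^2 h^2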
Stationary iterative methods: Motivation

Let A be so large that it is impossible to compute its LU factorization (fill-in). Then we try to solve Ax = b iteratively.

Let xk be an approximation to the solution x∗ of Ax = b. Then, with the error ek and the residual rk,

x∗ = xk + ek,
A ek = A x∗ − A xk = b − A xk =: rk,

so that

x∗ = xk + A⁻¹ rk.    (1)

Of course, the above assumption prevents us from solving (1) for the error ek.

Stationary iterative methods: Motivation (cont.)

Let M be an invertible matrix with the following properties:
1. M is a good approximation to A.
2. Mz = r can be solved (relatively) cheaply.
3. M can be formed (relatively) cheaply.

Then, instead of solving (1), we iterate according to

rk = b − A xk,
xk+1 = xk + M⁻¹ rk.    (2)

M is called a preconditioner.
Stationary iterative methods: Motivation (cont.)

Practical procedure:

rk = b − A xk          (compute residual)
M zk = rk              (solve with preconditioner)
xk+1 = xk + zk         (update approximation)

Remarks:
- In step 1, we have to multiply a matrix with a vector. In practice we need a procedure that computes y ← Ax.
- In step 2, we have to solve a system of equations with the preconditioner, which is "by definition" relatively easy to do. This step is usually the most expensive one.

Stationary iterative methods: Theory

Definition (Matrix splitting)
Let A be nonsingular. Then A = M − N, with M nonsingular, is called a matrix splitting.

We consider the iteration

xk+1 = xk + M⁻¹ rk   ⟺   M xk+1 = N xk + b.

Note:
- If M⁻¹ = A⁻¹, then we have one-step convergence: xk+1 = A⁻¹ b.
- In practice, M⁻¹ should be a good (but cheaper to invert) approximation to A⁻¹ (a 'preconditioner').
Formulation as fixed-point iteration

xk+1 = xk + M⁻¹ rk = M⁻¹ (M xk + rk)
     = M⁻¹ ((M − A) xk + b)
     = M⁻¹ N xk + M⁻¹ b =: G xk + c,        N = M − A.

⇒ xk+1 = G xk + c  (a fixed point iteration)

G = M⁻¹ N = I − M⁻¹ A is called the iteration matrix.

Theorem. The error ek = x∗ − xk satisfies ek+1 = G ek.
Proof: The fixed point x∗ = A⁻¹ b satisfies x∗ = G x∗ + c; subtracting xk+1 = G xk + c gives ek+1 = x∗ − xk+1 = G (x∗ − xk) = G ek.

Theorem on convergence

Theorem (Convergence theorem for stationary iterations)
Let A = M − N be a matrix splitting with M invertible, and let G = M⁻¹ N = I − M⁻¹ A. Then:
(1) The iteration
        M xk+1 = N xk + b        (+)
    converges for any x0 if and only if ρ(G) < 1.
(2) If ‖G‖ < 1 for some induced matrix norm, then iteration (+) converges.
Convergence rate

- Earlier we derived a convergence rate for fixed point iterations xk+1 = g(xk) for solving nonlinear systems.
- Here the situation is similar: xk+1 = g(xk) = G xk + c.
- How many iteration steps k are required to reduce the error norm by an order of magnitude? From
      0.1 ‖e0‖ = ρ(G)^k ‖e0‖
  we get
      k ≈ −1 / log10 ρ(G),    rate = 1/k ≈ −log10 ρ(G).
- The rate is higher (faster convergence), the smaller ρ. For example, ρ(G) = 0.9 gives rate ≈ 0.046, i.e., k ≈ 22 iterations per decimal digit.

Remarks

- Statement (1) in the convergence theorem is based entirely on the eigenvalues of the iteration matrix G.
- There is, however, a big difference between the convergence behavior of normal (diagonalizable by a unitary matrix) and nonnormal matrices: compare in Matlab the behavior of the error norms ‖ek‖ (e0 = [√(1/2), √(1/2)]ᵀ) for the two matrices
      [0.9   0 ]          [0.9  10 ]
      [ 0   0.9]   and    [ 0   0.9].
- Pay attention to the norm! E.g., for
      G = [ 0.1  −0.4]
          [−0.4   0.8]
  we have ‖G‖∞ = 1.2 but ‖G‖2 = maxi |λi(G)| = 0.9815 (G is symmetric).
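The comparison suggested above, as a minimal Matlab sketch (the number of steps K is our choice):

G1 = [0.9  0; 0 0.9];                    % normal, rho = 0.9
G2 = [0.9 10; 0 0.9];                    % nonnormal, same rho
e1 = [sqrt(1/2); sqrt(1/2)];  e2 = e1;   % e_0
K = 60;  nrm1 = zeros(1,K);  nrm2 = zeros(1,K);
for k = 1:K
    e1 = G1*e1;  e2 = G2*e2;             % e_{k+1} = G e_k
    nrm1(k) = norm(e1);  nrm2(k) = norm(e2);
end
semilogy(1:K, nrm1, 1:K, nrm2);  legend('normal','nonnormal')
% The normal case decays like 0.9^k from the start; the nonnormal case
% first grows by more than an order of magnitude before it decays.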
Practicalities

Starting vector, iterates & convergence
Improve the initial vector x0 so that xk → x∗ = A⁻¹b in fewer steps.

Where to get x0 from? There are various possibilities (some better than others):
- Zero/random vector.
- Insights into the underlying problem.
- Solution of a 'similar' previously-solved problem.

Stopping criterion: a few practical possibilities:
- ‖rk‖ ≤ τ ‖b‖.
- ‖rk‖ ≤ τ ‖r0‖.

Practical schemes: Jacobi and Gauss–Seidel iteration

Let A = L + D + U, where D is diagonal, L is strictly lower triangular, and U is strictly upper triangular. Let E = D + L. In Matlab:

D = diag(diag(A));  E = tril(A);

Example:

A = [7 3 1; −3 10 2; 1 7 −15]  ⇒  D = [7 0 0; 0 10 0; 0 0 −15],  E = [7 0 0; −3 10 0; 1 7 −15].

Jacobi iteration:         xk+1 = xk + D⁻¹ rk
Gauss–Seidel iteration:   xk+1 = xk + E⁻¹ rk
Jacobi example

Write the system Ax = b with the matrix A above and b = [3; 4; 2], keeping the diagonal term on the left:

7x1 = 3 − 3x2 − x3,
10x2 = 4 + 3x1 − 2x3,
−15x3 = 2 − x1 − 7x2.

Evaluate the right-hand side at the current iterate k and the left-hand side as the unknown at step k + 1:

x1^(k+1) = (1/7) (3 − 3x2^(k) − x3^(k)),
x2^(k+1) = (1/10)(4 + 3x1^(k) − 2x3^(k)),
x3^(k+1) = (−1/15)(2 − x1^(k) − 7x2^(k)).

Gauss–Seidel example

7x1 = 3 − 3x2 − x3,
10x2 − 3x1 = 4 − 2x3,
−15x3 + x1 + 7x2 = 2.

Evaluate the right-hand side at the current iterate k and the left-hand side as the unknown at step k + 1:

x1^(k+1) = (1/7) (3 − 3x2^(k) − x3^(k)),
x2^(k+1) = (1/10)(4 + 3x1^(k+1) − 2x3^(k)),
x3^(k+1) = (−1/15)(2 − x1^(k+1) − 7x2^(k+1)).

Always use the most recent information.
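A compact Matlab sketch running both iterations on this example (the tolerance is our choice):

A = [7 3 1; -3 10 2; 1 7 -15];  b = [3; 4; 2];
P = {diag(diag(A)), tril(A)};            % Jacobi's D and Gauss-Seidel's E
for s = 1:2
    x = zeros(3,1);  r = b;  k = 0;
    while norm(r) > 1e-10*norm(b)
        x = x + P{s}\r;                  % x_{k+1} = x_k + M^{-1} r_k
        r = b - A*x;  k = k + 1;
    end
    fprintf('solver %d: %d steps\n', s, k);
end
% Gauss-Seidel should need noticeably fewer steps than Jacobi here.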
Properties of Jacobi and Gauss-Seidel relaxation

- Jacobi is more easily parallelized.
- The Jacobi matrix M = D is symmetric.
- GS converges whenever Jacobi converges, and often (but not always) twice as fast.
- Both methods are simple but slow. They are used as building blocks for faster, more complex methods.

Over-relaxation and under-relaxation

Let xk+1 be obtained from xk by Jacobi or GS. Modify it further by

xk+1 ← ω xk+1 + (1 − ω) xk,

where ω is a parameter. Two useful variants:
- Based on Gauss-Seidel (GS) with 1 < ω < 2, we obtain the faster successive over-relaxation (SOR):
      xk+1 = xk + ω[(1 − ω)D + ωE]⁻¹ rk ≡ xk + Mω⁻¹ rk.
- Based on Jacobi with ω ≈ 0.8, we obtain slower under-relaxation, which is a good smoother in some applications:
      xk+1 = xk + ω D⁻¹ rk.
Optimal parameter for SOR

1. ω = 1: back to Gauss–Seidel.
2. For many matrices there is a known optimal parameter ωopt for SOR.
3. For a large class of matrices, the optimal parameter is given by
       ωopt = 2 / (1 + √(1 − ρJ²)) > 1,
   with ρJ the spectral radius of the Jacobi iteration matrix.
4. For matrices of the form tridiag[−1, 2, −1], the above formula holds, and ωopt simplifies to
       ωopt = 2 / (1 + sin(π/(n + 1))),
   where n is the dimension of the matrix.
5. For this class of matrices, ρ(G(ωopt)) = ωopt − 1.
6. Poisson matrix: gallery('poisson',m) is the n × n matrix with n = m²; here
       ωopt = 2 / (1 + sin(π/(√n + 1))) = 2 / (1 + sin(π/(m + 1))).
7. With the optimal parameter, convergence is dramatically faster than Jacobi or Gauss–Seidel.
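A minimal Matlab sketch of SOR with the optimal parameter on the Poisson matrix (loop bound and tolerance are our choices):

m = 31;  A = gallery('poisson',m);  n = m^2;
b = A*ones(n,1);
omega = 2/(1 + sin(pi/(m+1)));           % optimal SOR parameter from item 6
M = diag(diag(A))/omega + tril(A,-1);    % M_omega = D/omega + L
x = zeros(n,1);  r = b;  rnrm0 = norm(r);
for k = 1:10000
    x = x + M\r;  r = b - A*x;
    if norm(r) <= 1e-6*rnrm0, break, end
end
fprintf('SOR(omega = %.3f): %d steps\n', omega, k)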
Symmetric Successive Over-Relaxation (SSOR)

Combination of standard forward and 'backward' SOR:

Mω x^(k+1/2) = Nω x^(k) + b,
M̃ω x^(k+1) = Ñω x^(k+1/2) + b,

with 'backward' SOR

M̃ω = (1/ω) D + U,   Ñω = ((1 − ω)/ω) D − L.

Note:
- Usually only used when A symmetric, i.e. U = Lᵀ.
- Symmetric Gauss–Seidel: SSOR(ω = 1).

Practical schemes: Block iterations

Let

A = [A11 A12 ··· A1m ]
    [A21 A22 ··· A2m ]
    [ :    :      :  ]
    [Am1 Am2 ··· Amm ]

Block Jacobi iteration:

M = diag(A11, A22, ..., Amm)

Block Gauss–Seidel iteration:

M = [A11             ]
    [A21 A22         ]
    [ :    :         ]
    [Am1 Am2 ··· Amm ]
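For the Poisson matrix with the row-wise ordering used earlier, the diagonal blocks Aii are tridiagonal, so the block Jacobi M is simply the tridiagonal part of A; a brief Matlab sketch (this mirrors the M used in the experiments below):

m = 31;  A = gallery('poisson',m);
M = triu(tril(A,1),-1);        % keeps the diagonal blocks A11, ..., Amm
b = ones(m^2,1);  x = zeros(m^2,1);
x = x + M\(b - A*x);           % one block Jacobi step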
MATLAB experiment

function [x,k] = statit(A,M,M2,b,x,tol)
%STATIT Stationary Iteration
%    x^{k+1} = x^{k} + M \ r^{k},   r^{k} = b - A x^{k}
%    for solving A x = b
%
%    [x,k] = statit(A,M,M2,b,x,tol)
%    Input:  A     system matrix
%            M,M2  M*M2 'preconditioner'
%                  (M2 = [] indicates M2 = identity)
%            b     right hand side
%            x     initial vector x^{0} (default x = 0)
%            tol   (default tol = eps)
%    Output: x     approximate solution
%            k     number of iterations until convergence
%    Convergence criterion:
%            norm(b - A*x) <= tol*norm(b - A*x0)

if (nargin < 6), tol = eps; end
if (nargin < 5), x = zeros(size(A,1),1); end

r = b - A*x;
rnrm0 = norm(r); rnrm = rnrm0;
for k=1:5000
    if isempty(M2)
        x = x + M\r;
    else
        x = x + M2\(M\r);
    end
    r = b - A*x;
    rnrm = norm(r);
    if rnrm < tol*rnrm0, return, end
end
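For example, the first row of the table below should be reproducible with a call like (our reconstruction):

m = 11;  A = gallery('poisson',m);  b = A*[1:m^2]';
[x,k] = statit(A, diag(diag(A)), [], b, zeros(m^2,1), 1e-6)
% Jacobi; the table reports k = 341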
Poisson equation on square, 11 × 11 grid

m = 11; A = gallery('poisson',m);
b = A*[1:m^2]'; tol = 1e-6; x = zeros(m^2,1);

Solver                M in MATLAB                                   nit
Jacobi                M = D = diag(diag(A))                         341
Block Jacobi          M = DB = triu(tril(A,1),-1)                   176
Gauss–Seidel          M = tril(A)                                   174
Block Gauss–Seidel    M = tril(A,1)                                  90
SGS (A s.p.d.)        M = tril(A)/sqrt(D); M2 = M'                   90
Block SGS             M = tril(A,1)/chol(DB); M2 = M'                48
SOR(ω = 1.6)          M = D/omega + tril(A,-1)                       32
Block SOR(ω = 1.5)    M = triu(tril(A,1),-1)/omega + tril(A,-m)      24

Poisson equation on slightly larger grids

Solver                  n = 31²   n = 63²
Jacobi                     2157      7787
Block Jacobi               1093      3943
Gauss–Seidel               1085      3905
Block Gauss–Seidel          547      1959
SSOR (ω = 1.8)               85       238
Block SSOR (ω = 1.8)         61       132

Table 1: Iteration steps for solving the Poisson equation on a 31-by-31 and on a 63-by-63 grid with a relative residual accuracy of 10⁻⁶.

Note: The generation of M for the blocked versions is specific to this problem!
Iterative refinement/improvement

Let Ax = b be solved via GEPP, PA = LU. We wish to improve the accuracy of the computed solution x̂. If we execute

r = b − A x̂
solve L y = P r
solve U z = y
x̂' = x̂ + z

then, in exact arithmetic,

A x̂' = A x̂ + A z = (b − r) + r = b.

Attention: the residual r must be computed with higher precision! Otherwise there may be no improvement.

Heuristic: In a computation with d decimal digits and κ∞(A) = 10^α, d − α digits can be gained per step. (At most d correct digits.)
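A hedged Matlab sketch of one refinement step that reuses the LU factors (our own illustration; the experiment below simply uses \ instead):

[L,U,P] = lu(A);          % GEPP: P*A = L*U
xhat = U\(L\(P*b));       % computed solution
r = b - A*xhat;           % residual: compute in higher precision if possible
z = U\(L\(P*r));          % one refinement step: solve A z = r via the factors
xhat = xhat + z;          % improved solution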
Iterative refinement/improvement (cont.)

% Ex. 2.7 from Schwarz-Köckler on iterative improvement
A = [0.29412, 0.41176, 0.52941, 0.58824;
     0.42857, 0.57143, 0.71429, 0.64286;
     0.36842, 0.52632, 0.42105, 0.36842;
     0.38462, 0.53846, 0.46154, 0.38462]
b = [0.17642, 0.21431, 0.15792, 0.15380]'

x = single(A)\single(b);  % solve A*x=b in single precision
r = b - A*x               % residual in single precision
r = b - A*double(x)       % residual in double precision

X = A\b;                  % solve A*x=b in double precision
R = b - A*X               % residual in double precision

x-X, r-R

z = single(A)\r;          % correct single prec. x using double
x = x + z                 %   prec. residual