Smoothed Aggregation
Seminar Multigrid Methods
Tingting Sun
Fakultät für Informatik
Technische Universität München
Email: sun@in.tum.de
Abstract— Multigrid methods are very efficient for solving large linear systems of equations. Among MG methods, the most familiar is the geometric multigrid method (GMG). GMG is intuitive and achieves satisfying performance when applied to isotropic problems on structured grids and regular geometries. When it comes to unstructured grids, irregular geometries and anisotropic problems, things are no longer so simple. It is not feasible to predefine a grid hierarchy on an unstructured grid; also, for anisotropic problems, the smooth errors produced by the smoother are not "smooth" in all directions. If we insist on GMG, the non-smoothness in the direction of anisotropy can introduce errors into the restriction and the corresponding prolongation, and the convergence slows down. Compared with GMG, AMG is independent of any knowledge of the geometry: the coarsening strategy is generated automatically from the information in the system matrix. In this paper we develop a special AMG based on smoothed aggregation (SA). In the end, the specific algorithm for SA is demonstrated; we compare different parameter choices and look for the optimal performance.
Index Terms— Algebraic Multigrid Method (AMG), Smoothed Aggregation (SA), Smooth Error
I. INTRODUCTION
Ordinary iterative methods for linear systems of equations eliminate high-frequency errors in each iteration while leaving the low-frequency ones. The multigrid method complements this by solving the problem on a hierarchy of coarser grid levels, and it is very efficient when applied to large linear systems of equations. Different ways of creating the grid hierarchy yield different MG approaches. The coarse grid selection in the geometric multigrid method uses geometric information, whereas in the algebraic multigrid method the coarse grid selection is achieved automatically from the implicit knowledge in the system matrix. The geometric multigrid method is good enough on structured grids and for isotropic problems; for problems on unstructured grids or anisotropic problems, it is not so satisfying any more.
The algebraic multigrid method (AMG) was developed by Brandt, McCormick, Ruge and Stüben in 1982. It was explored early on by Stüben in 1983 and popularized by Ruge and Stüben in 1987. [1] [2] [3] [4] [5]
Different from GMG, which fixes the coarsening approach and selects the smoothers, AMG fixes the relaxation and selects the coarsening approach. The approach we discuss in this paper is AMG by smoothed aggregation. It was introduced by Petr Vaněk [6] [7] and explained further in [8]. It exploits the information hidden in the system matrix and aggregates all the points on the current level. The representatives of the aggregates give a good interpolation to all the other points, and also realize semi-coarsening in the direction of weak connections.
We first take a look at the basic geometric multigrid method, then discuss its advantages and disadvantages and see how the algebraic multigrid method improves on it. After that, we introduce the coarse grid selection approach used in smoothed aggregation AMG and test its performance, both against GMG and for different parameters of the algorithm itself.
In this paper, all discussions concern linear systems characterized by a symmetric positive definite matrix. The example problem we use for the implementation is a simple 2D Poisson equation on the domain Ω = ]0, 1[²:
\[
\alpha\,\frac{\partial^2 u}{\partial x^2} + \beta\,\frac{\partial^2 u}{\partial y^2} = -2\pi^2 \sin(\pi x)\sin(\pi y), \qquad u = 0 \text{ on } \partial\Omega \tag{1}
\]
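For concreteness, here is a minimal MATLAB sketch of assembling the 5-point finite-difference system for (1); this is our own illustration, not part of the paper's implementation, and the function name and the SPD sign convention (both sides of (1) negated) are assumptions:

function [A, b] = assemble_poisson(N, alpha, beta)
    % Hypothetical 5-point FD assembly for (1); both sides negated so A is SPD.
    h = 1/(N + 1);                               % mesh width
    e = ones(N, 1);
    T = spdiags([e, -2*e, e], -1:1, N, N);       % 1D second-difference matrix
    I = speye(N);
    A = -(alpha*kron(T, I) + beta*kron(I, T)) / h^2;
    [X, Y] = meshgrid(h:h:1-h);                  % interior grid points
    b = 2*pi^2 * sin(pi*X) .* sin(pi*Y);
    b = b(:);                                    % stack columns into one vector
end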
II. GEOMETRIC MULTIGRID METHOD
The basic multigrid algorithm to solve a large system of
linear equations Ax = b is demonstrated as follows:
1) Presmoothing: or relaxation, iterations (GS, Jacobi,
SOR, etc) on fine grids.
2) Coarsening/Restriction: restrict residual to coarse level.
3) Solve the coarse grid equation: by a direct solver or by recursion of the basic MG algorithm.
4) Prolongation/Interpolation: interpolate the error to fine
level, and correct the solution.
5) Postsmoothing: iterations on fine grids.
Different coarse grid selection methods give different MG variants. The geometric multigrid method (GMG), which is mostly used for solving linear systems arising on regular geometries, achieves coarsening through geometric knowledge. In our example PDE (1) with α = 1, β = 1, the domain is the square Ω. We discretize it with N equidistant points along each dimension, that is, N² grid points in total. We select every second point in each dimension as a coarse grid point (see Fig. 1; red points denote points on the coarser level), that is, (N−1)²/4 grid points on the coarser level (supposing N = 2^k − 1 with k an integer).
Fig. 1. Coarse grids selection

Both bilinear interpolation and full weighting interpolation can be chosen as the interpolation in GMG. The restriction operator, according to the Galerkin principle, is the transpose of the interpolation operator. In our implementation we use the MATLAB function interp2() for restriction and interpolation (see Fig. 2), since we know how to generate the coarser grid as described above. We use a V-cycle in our algorithm and perform 2 Gauss-Seidel iterations as the smoothing method. When the number of grid points in each dimension is less than 4, the lowest level is reached; the direct solver on the lowest level is also Gauss-Seidel. We trace the residual after each MG iteration and use the ratio of the current residual to the first one as the termination criterion. For PDE (1), the domain is discretized by a mesh with N equidistant points in each dimension. With a matrix T storing the values at all points and an initial guess of 1 on the whole domain, the GMG algorithm is:
Algorithm 1 GMG
set convergence factor cf = res_current / res_first = 1, solution matrix T = zeros(N^2, 1)
while cf > 1e−8 do
  V-cycle iteration:
  if N < 4 then
    use the direct solver (Gauss-Seidel)
  else
    presmoothing: perform 2 Gauss-Seidel iterations on T
    compute the residual res_fine
    restrict res_fine to the coarser grid
    do a V-cycle iteration on the coarser level and get correction_coarse
    interpolate correction_coarse to the fine level to get correction_fine, T = T + correction_fine
    postsmoothing: perform 2 Gauss-Seidel iterations on T
    return the result as a correction to the finer level
  end if
  compute the residual and update cf
end while
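For reference, the V-cycle of Algorithm 1 can be written compactly in MATLAB. This is only a sketch: gauss_seidel, restrict, interpolate and coarse_operator are assumed helper functions (the middle two wrapping interp2() as described above), not names from the original code:

function x = vcycle(A, b, x, N)
    % One V-cycle of Algorithm 1 on an N-by-N grid (sketch with assumed helpers).
    if N < 4
        x = gauss_seidel(A, b, x, 50);        % "direct" solver on the lowest level
        return;
    end
    x  = gauss_seidel(A, b, x, 2);            % presmoothing: 2 G-S iterations
    r  = b - A*x;                             % fine-grid residual
    Nc = (N - 1)/2;                           % coarse grid: every second point
    rc = restrict(r, N);                      % e.g. via interp2()
    Ac = coarse_operator(Nc);                 % rediscretized coarse operator
    ec = vcycle(Ac, rc, zeros(Nc^2, 1), Nc);  % recursion on the error equation
    x  = x + interpolate(ec, Nc);             % prolongate and correct
    x  = gauss_seidel(A, b, x, 2);            % postsmoothing
end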
GMG is intuitive and straightforward, and for isotropic problems its performance is outstanding. Also, an advantage of geometric multigrid over algebraic multigrid is that the former should perform better for non-linear problems, since non-linearities in the system are carried down to the coarse levels through the re-discretization. [14] The following graphs illustrate the performance of GMG solving the isotropic example PDE (1) (α = 1, β = 1). Fig. 3 shows the solution generated by GMG. Fig. 4 and Fig. 5 show the convergence factors for N=31 and N=63, respectively. The algorithm needs only 5 MG iterations, and the convergence factors are quite small, which indicates fast convergence.
Fig. 2. Restriction and Interpolation in GMG [13]
Fig. 3. Solution of isotropic problem by GMG, N=63
III. ALGEBRAIC MULTIGRID METHOD(AMG)
A. Availability of geometric knowledge
As we have seen, GMG does a good job on structured grids. In the problem above, we solve the Poisson equation on a square with N · N nodes on the fine level and choose every second node in both directions for coarsening. Obviously, this is not feasible for unstructured meshes or irregular geometries. In AMG, coarsening is achieved automatically without any knowledge of the geometry; only the implicit information in the original operator matrix is exploited.
Fig. 4. Convergence Factors by GMG on isotropic problem, N=31
Fig. 5. Convergence Factors by GMG on isotropic problem, N=63
B. Oscillatory smooth error
Another thing we need to reconsider about GMG is that if
it could work well for the anisotropic problem as well?
Fig.6 and Fig.7 plot the convergence factor curve when GMG
is applied on anisotropic problem:(Fig.6 with β = 0.01, Fig.7
with β = 0.00001)
Uxx + 0.01Uyy = −2π2
sin(πx) sin(πy)
We can see that compared with on isotropic problem, much
more MG iterations are needed and the convergence factors
are remarkablely increased. So the error damping and the
convergence slow down. Meanwhile compare Fig.6 with Fig.7,
obviously the more anisotropic the problem is, the more slowly
the errors converge. GMG becomes more and more inefficient
when the anisotropy increases. Now we discuss what causes
this problem.
For the isotropic problem, in our example α = β = 1, the 5-point stencil is

\[
\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}
\]

An isotropic diffusion problem means that the speeds along all directions are the same, and the smooth error stays "smooth" in each dimension. For an anisotropic problem, where α ≪ β or α ≫ β, the stencil is

\[
\begin{bmatrix} 0 & \beta & 0 \\ \alpha & -2\alpha - 2\beta & \alpha \\ 0 & \beta & 0 \end{bmatrix}
\]
Fig. 6. Convergence Factors by GMG on Anisotropic problem, α = 1, β =
0.01, N=31
Fig. 7. Convergence Factors by GMG on anisotropic problem, α = 1, β = 0.00001, N=31
then the speeds along different directions are not the same, and the smooth error in some dimension may oscillate. Anisotropy is the property of being directionally dependent, as opposed to isotropy, which implies identical properties in all directions. [9] The directional dependence in an anisotropic Poisson equation appears in the difference between the coefficients of the partial derivatives:

\[
\alpha\,\frac{\partial^2 u}{\partial x^2} + \beta\,\frac{\partial^2 u}{\partial y^2} = -2\pi^2 \sin(\pi x)\sin(\pi y), \qquad \text{with } \alpha \gg \beta \text{ or } \beta \gg \alpha
\]

For simplicity, we suppose α = 1 and β ≪ α, and use this as our example in the following discussion.
Now let us discuss the effect brought about by the anisotropy of these coefficients. Given the system matrix A, the system equation is Ax = b. We can decompose A into three parts: the diagonal matrix D, the strictly lower triangular matrix L and the strictly upper triangular matrix U, that is, A = D + L + U. The following inference is based on [11]. Suppose we use pointwise Gauss-Seidel iteration as the smoother; then for the kth iteration:

\[
\begin{aligned}
A &= D + L + U \\
\Rightarrow\; (D + L)x &= b - Ux \\
\Rightarrow\; x^{(k+1)} &= x^{(k)} + (D + L)^{-1}\bigl(b - Ax^{(k)}\bigr) = x^{(k)} + (D + L)^{-1}Ae^{(k)}
\end{aligned}
\]

Subtracting both sides from the exact solution, we obtain the error recurrence

\[
e^{(k+1)} = \bigl(I - (D + L)^{-1}A\bigr)\,e^{(k)}
\]

One observes that after several iterations of the relaxation, the error starts to converge very slowly. That implies e^{(k+1)} ≈ e^{(k)}, or, from a better viewpoint,

\[
\bigl\langle (D + L)^{-1}Ae,\, Ae \bigr\rangle \ll \langle e,\, Ae \rangle
\]

For most iterations (e.g., Jacobi or Gauss-Seidel), this holds if [11]

\[
\bigl\langle D^{-1}Ae,\, Ae \bigr\rangle \ll \langle e,\, Ae \rangle \tag{2}
\]
It is easy to show from (2) that smooth error satisfies ⟨Ae, e⟩ ≪ ⟨De, e⟩, and from this we have [11]

\[
\frac{1}{2}\sum_{i \neq j}\left(\frac{-a_{ij}}{2a_{ii}}\right)\left(\frac{e_i - e_j}{e_i}\right)^2 \ll 1 \tag{3}
\]
Since we expect slow convergence after presmoothing, the relation above must be satisfied. It suggests that when a_ij is "big", (e_i − e_j) must be quite small in order to fulfill the inequality. So e_i and e_j are close, or we could say e_i depends on e_j, and the error is smooth in the direction from point i to point j. Vice versa, when a_ij is "small", e_i and e_j may differ a lot, so the error may oscillate along this direction.
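This stalling behavior is easy to check numerically. The following sketch (our own illustration, reusing the hypothetical assemble_poisson from Section I) applies the Gauss-Seidel error recurrence e^{(k+1)} = (I − (D+L)^{−1}A)e^{(k)} to a random error:

[A, ~] = assemble_poisson(31, 1, 0.01);       % anisotropic example, N = 31
n = size(A, 1);
E = tril(A);                                  % E = D + L
e = rand(n, 1);                               % random initial error
for k = 1:15
    e = e - E \ (A*e);                        % e <- (I - (D+L)^{-1} A) e
    fprintf('k = %2d, ||e|| = %.3e\n', k, norm(e));
end

The error norm drops quickly for the first few sweeps and then barely changes, exactly the slow-convergence regime the derivation above describes.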
Hence, the components of the system matrix A affect the smoothness of the error. What determines those components? Obviously, the coefficients in the PDE. So what do the components of the system matrix A actually indicate? The relative magnitude of a component a_ij indicates how strongly point i and point j are connected. If a_ij is relatively large compared to the other off-diagonal components, then point i and point j have a strong connection, and the value at point i plays quite a role in determining the value at point j.
Since isotropic coefficients generate the same off-diagonal components in A while anisotropic ones do not, we can expect different connections in the two cases. For the isotropic problem, the connections among points along both directions (x and y in our example) are similar. For the anisotropic problem, connections along the direction with the larger coefficient are stronger than along the other one. So we can draw a short conclusion here. The anisotropy of the coefficients in the PDE gives rise to different off-diagonal components in the system matrix A: the large coefficient causes relatively larger a_ij in the direction with strong connections, which we call the direction of dependence. From (3) we know that with "big" a_ij the errors at the two points do not vary much, whereas with "small" a_ij the errors at the two points vary quickly.
Thus, unlike in the isotropic case, the smooth error in anisotropic problems is not "smooth" in both directions. In our example it is smooth in the x-direction and oscillatory in the y-direction; see Fig. 8. The anisotropic coefficients indicate that the points have strong connections in the x-direction and weak connections in the y-direction. Fig. 9 shows the error components along the two directions in an extreme example. With strong connections, that is, in the direction of dependence, the smooth error varies slowly (upper plot in Fig. 9). This implies a continuity, and coarsening in this direction can later lead to a good interpolation (see the points and line in red). Meanwhile, with weak connections, that is, in the direction of independence, the error may vary quite quickly (lower plot in Fig. 9). Due to the oscillation of the errors, coarsening in this direction to the same extent as in the direction of dependence will probably give rise to a bad interpolation later, which can introduce and amplify errors in the final solution: the red line connecting the red points in the lower plot of Fig. 9 clearly does not keep the shape of the green one at all. One solution to this problem is semi-coarsening, which means coarsening only in the direction of dependence (see Fig. 10: keep the coarsening of the upper plot, while along the direction indicated in the lower plot select more coarse grid points).
Hence we expect AMG to improve on both aspects: it should not only generate the hierarchy of coarse grids independently of geometric knowledge, but also handle the oscillation of the smooth error in the direction of independence through automatic semi-coarsening.
IV. AMG BY SMOOTHED AGGREGATION
A. Aggregations on each level
As mentioned above, the system matrix carries implicit information about the connections among points. All the points can be decomposed into disjoint subsets according to their connections: in each subset, all the points are strongly connected and can be represented by one of them, and the other points must be interpolable from these "representatives".
Now we need to define a strong connection between points, or what is called strong coupling in [12]: if |a_ij| ≥ ε√(a_ii a_jj), where ε < 1 is a threshold for strong coupling, then point i and point j are strongly coupled.
Fig. 8. Smooth error after 5 G-S iterations for anisotropic problem with
α = 1, β = 0.0001
Fig. 9. Smooth error components. Upper: along the direction with strong
connections; lower: along the direction with weak connections
Then we define the strongly-coupled neighborhood of point i as [12]

\[
N_i(\varepsilon) = \{\, j : |a_{ij}| \ge \varepsilon\sqrt{a_{ii}a_{jj}} \,\}
\]
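A small MATLAB sketch of this test (our own helper, not from the paper; it returns the neighborhoods as the rows of a sparse logical matrix):

function S = strong_neighborhoods(A, epsilon)
    % S(i,j) = true iff j lies in the strongly-coupled neighborhood N_i(epsilon).
    n = size(A, 1);
    d = sqrt(abs(full(diag(A))));                % sqrt(a_ii)
    [i, j, v] = find(A);                         % nonzero entries a_ij
    keep = abs(v) >= epsilon * d(i) .* d(j);     % |a_ij| >= eps*sqrt(a_ii*a_jj)
    S = sparse(i(keep), j(keep), true, n, n);    % diagonal survives for epsilon < 1
end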
The way to aggregate:
1) Traverse all the points in the set and generate their strongly-coupled neighborhoods N_i(ε).
2) Set R = {i ∈ 1, ..., n : N_i(ε) ≠ {i}} and num = 0. Notice that the isolated points, whose strongly-coupled neighborhood contains only themselves, are excluded.
3) Loop over all i ∈ R:
   • if N_i ⊂ R, then set C_i = N_i, R = R \ N_i, num = num + 1.
   Here C_i is used to store the aggregates. After the loop, num is the number of aggregates, and there may still be some points left in R.
4) Handle the remnants:
   • Duplicate C_i → Ĉ_i.
   • Traverse all points i in R; if N_i(ε) intersects some Ĉ_j, add i to C_j. In case there is more than one such j, take the one with the strongest coupling. Then R = R \ {i}.
A MATLAB sketch of this two-pass aggregation follows.

Fig. 10. Semicoarsening
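The sketch below is our own rendering of the two passes, under the simplifying assumption that a remnant point joins the first aggregate its neighborhood touches (the text instead picks the most strongly coupled one):

function agg = aggregate(S)
    % Two-pass aggregation; agg(i) is the aggregate index of point i (0 if none).
    n = size(S, 1);
    agg = zeros(n, 1);
    isolated = (full(sum(S, 2)) == 1);           % N_i = {i}: excluded from R
    num = 0;
    for i = 1:n                                  % pass 1: whole neighborhood free
        Ni = find(S(i, :));
        if ~isolated(i) && all(agg(Ni) == 0) && ~any(isolated(Ni))
            num = num + 1;
            agg(Ni) = num;                       % C_num = N_i, remove N_i from R
        end
    end
    for i = 1:n                                  % pass 2: attach the remnants
        if agg(i) == 0 && ~isolated(i)
            hit = agg(find(S(i, :)));            % aggregates touching N_i
            hit = hit(hit > 0);
            if ~isempty(hit)
                agg(i) = hit(1);                 % simplest choice; the text picks
            end                                  % the most strongly coupled one
        end
    end
end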
Fig. 11, Fig. 12 and Fig. 13 illustrate the aggregation from the first level (finest grid) to the third level. Here for the threshold we use ε = 0.08 · 0.5^{l−1}. Blue dots represent the finest grid points, black ones the points on the second level, and red circles the points on the third level.
B. Prolongator
Based on the aggregates we can construct the prolongator. We first construct a tentative prolongator P^h_{2h}:

\[
(P^h_{2h})_{ij} =
\begin{cases}
1 & \text{if } i \in C_j, \\
0 & \text{otherwise}
\end{cases} \tag{4}
\]

With this tentative prolongator, all points in the same aggregate are assigned the value of the representative of that aggregate; in other words, it performs a piecewise constant prolongation. We improve this further using a damped Jacobi smoother

\[
S = I - \omega D^{-1} A^F
\]

and then

\[
I^h_{2h} = S\,P^h_{2h}
\]
Fig. 11. Aggregations of points, N=15, α = 1, β = 0.01
Fig. 12. Aggregations of points, N=15, α = 0.01, β = 1
where A^F = (a^F_{ij}) is the filtered matrix given by [12]

\[
a^F_{ij} =
\begin{cases}
a_{ij} & \text{if } j \in N_i(\varepsilon), \\
0 & \text{otherwise}
\end{cases}
\quad (i \neq j), \qquad
a^F_{ii} = a_{ii} - \sum_{j=1,\, j\neq i}^{n} \bigl(a_{ij} - a^F_{ij}\bigr)
\]
The smoother S smooths the "edges" of the aggregates, so that the value over the whole domain varies continuously. The filtering prevents undesired overlaps of the coarse-space basis functions. By construction, A^F typically makes the nonzero pattern of the operator A on coarse grids follow the 9-point stencil. [12]
Usually we take ω = 2/3, as in damped Jacobi; in most cases this damps the error very efficiently. ε is the threshold that determines strong connection. In our implementation we will try the effect of different choices of ε on the solution; see Table II.

Fig. 13. Aggregations of points, N=31, α = 1, β = 0.01
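Putting the pieces together, a sketch of the smoothed prolongator under the same assumptions (agg comes from the aggregation sketch above):

function P = smoothed_prolongator(A, S, agg, omega)
    % Tentative prolongator (4) from the aggregates, filtered matrix A^F,
    % and one damped Jacobi step: P = (I - omega*D^{-1}*A^F) * P_tent.
    n  = size(A, 1);
    in = agg > 0;                                     % points inside an aggregate
    Pt = sparse(find(in), agg(in), 1, n, max(agg));   % tentative prolongator
    AF = A .* S;                                      % keep a_ij for j in N_i(eps)
    AF = AF - spdiags(full(sum(A - AF, 2)), 0, n, n); % lump dropped entries: a^F_ii
    D  = spdiags(full(diag(AF)), 0, n, n);
    P  = Pt - omega * (D \ (AF * Pt));
end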
C. The Coarse Grid Operator
Here we use the Galerkin principle to define the coarse grid operator A_coarse. For the restrictor we use the transpose of the prolongator, and the operator on the new coarse grid is obtained by

\[
A_{\text{coarse}} = (I^h_{2h})^T A\, (I^h_{2h})
\]
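On top of the sketches above, this amounts to two MATLAB lines (names are ours):

Ih2h = smoothed_prolongator(A, S, agg, 2/3);   % prolongator with omega = 2/3
Ac   = Ih2h' * A * Ih2h;                       % Galerkin coarse-grid operator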
V. IMPLEMENTATION
A. Algorithm for One Iteration of SA-AMG
We are solving the linear system Ax = b. The following is the algorithm for one SA-AMG iteration on level l.
1) Pre-smoothing: relaxation with v1 Jacobi or Gauss-Seidel iterations. With G-S, M = −(L + D)^{−1}U and N = (L + D)^{−1}, where D is the diagonal matrix of A and L, U are respectively the strictly lower and upper triangular parts of A.
   • x = Mx + Nb
2) Coarse grid correction:
   • Compute operators:
     a) compute the tentative prolongator P
     b) compute the smoother S
     c) compute the prolongator I^h_{2h} and the restrictor R: I^h_{2h} = SP, R = (I^h_{2h})^T
   • Restriction:
     a) system matrix on level l + 1: A_coarse = R A I^h_{2h}
     b) restriction of the residual: b_coarse = R(b − Ax)
   • Coarse solve:
     a) if the lowest level is reached, solve A_coarse x_coarse = b_coarse with a direct solver and return x_coarse
     b) if not, recurse the algorithm on level l + 1
   • Prolongation/Interpolation & correction:
     a) x = x + I^h_{2h} x_coarse
3) Post-smoothing: v2 Jacobi or Gauss-Seidel iterations
A recursive MATLAB sketch of this cycle follows.
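This sketch builds on the helpers above; smooth is an assumed relaxation helper (v sweeps of Jacobi or Gauss-Seidel), and the coarsest-level cutoff of 10 unknowns is an arbitrary illustrative choice:

function x = sa_amg_cycle(A, b, x, epsilon, v1, v2)
    % One SA-AMG cycle on the current level (Section V-A).
    x    = smooth(A, b, x, v1);                    % pre-smoothing
    S    = strong_neighborhoods(A, epsilon);
    agg  = aggregate(S);
    Ih2h = smoothed_prolongator(A, S, agg, 2/3);
    Ac   = Ih2h' * A * Ih2h;                       % Galerkin coarse operator
    bc   = Ih2h' * (b - A*x);                      % restricted residual
    if size(Ac, 1) < 10                            % illustrative coarsest-level cutoff
        xc = Ac \ bc;                              % direct solve on the lowest level
    else                                           % halving epsilon per level gives
        xc = sa_amg_cycle(Ac, bc, zeros(size(bc)), epsilon/2, v1, v2);
    end                                            % epsilon = 0.08*0.5^(l-1)
    x = x + Ih2h*xc;                               % coarse-grid correction
    x = smooth(A, b, x, v2);                       % post-smoothing
end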
B. Algorithm for Computing the Tentative Prolongator P
1) Generate the aggregates (C_i).
2) Create a zero matrix P; then for i = 1, ..., n_l, if i ∈ C_j, set P_ij = 1.
C. Algorithm of Aggregation
1) For i = 1, ..., n_l, compute the corresponding strongly-coupled neighborhood and store it in the corresponding row of the matrix NGBmatrix.
2) Create an array RR to store the number of elements in each strongly-coupled neighborhood, and an array R to record the status of all points: 0 for isolated points, −1 for points already in an aggregate, >0 for the remaining points in R.
3) For i = 1, ..., n_l, if its N^l_i is a subset of the set of points in R, then C(i, :) ← N^l_i.
4) Traverse all points left in R and search for the aggregates that their N^l_i intersects. Add each such point to the aggregate with whose representative it is most strongly coupled.
D. Other details
• In the complete SA-AMG algorithm, the stopping criterion for terminating the SA-AMG iterations is current residual / initial residual < 10^{−8}.
• The initial guess can be 0 everywhere or some random values.
• ε can vary with the level or be kept constant.
A hypothetical driver loop implementing these choices is sketched below.
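This driver is our own illustration (the initial guess of 1 matches the experiments below):

x  = ones(size(b));                       % initial guess: 1 everywhere
r0 = norm(b - A*x);                       % first residual
while norm(b - A*x) / r0 > 1e-8           % stopping criterion from above
    x = sa_amg_cycle(A, b, x, 0.08, 2, 2);
end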
E. Experiments
The following tables show the performance comparison (main iterations, final errors, average convergence factor, coarsening ratio) for different parameters. In my implementation I assign 1 to all grid points as the initial guess and set the stopping criterion to be that the ratio of the current residual to the first residual is less than 10^{−8}.
Table I shows the coarsening ratios for different N. From the table we can see that the coarsening ratios from level 1 to level 2, as well as from level 2 to level 3, are around (less than) 3. But the ratios of the last several levels for large N jump: for example, when N=63, the coarsening ratio from level 4 to level 5 is 9, much bigger than 3. That is because, after several levels of semi-coarsening, the error component in the direction with weak connections reaches a frequency "similar" to that of the current error component in the strongly connected direction; then coarsening starts in that direction too, and the coarsening ratios increase remarkably.
Table II shows how the threshold ε affects the performance (the number of MG iterations and the average convergence factor). In the test we also take the coefficients and the value of N into account.
TABLE I
COARSENING RATIO IN THE IMPLEMENTATION WITH DIFFERENT N, α = 1, β = 0.01, ε = 0.08 · 0.5^{l−1}

N    level 1   level 2   level 3   level 4   level 5   ratio
7    49        21        7         -         -         ≈ 2.67
15   225       75        30        5         -         ≈ 3.8333
31   961       341       124       22        4         ≈ 4.1761
63   3969      1323      441       63        7         ≈ 5.5
From Table II we can see that ε = 0.1, ε = 0.05 and ε = 0.08 · 0.5^{l−1} make no big difference in performance. But ε = 0.5 does not work well, judging by the high numbers of iterations and convergence factors; for N=31 the program needs many more iterations. In the table, acf is short for average convergence factor and ni for number of iterations.
TABLE II
CONVERGENCE FACTORS AND NUMBER OF ITERATIONS IN THE IMPLEMENTATION WITH DIFFERENT ε

                         N    ε = 0.5   ε = 0.1   ε = 0.05   ε = 0.08 · 0.5^{l−1}
α=1, β=0.01      ni      7    14        5         5          5
                 acf     7    0.253     0.005     0.005      0.005
                 ni      15   49        9         9          9
                 acf     15   0.704     0.0998    0.0998     0.0998
                 ni      31   -         7         7          7
                 acf     31   -         0.039     0.039      0.039
α=1, β=1e−5      ni      31   -         7         7          7
                 acf     31   -         0.0405    0.040474   0.0405
α=0.01, β=1      ni      31   -         7         7          7
                 acf     31   -         0.039     0.039      0.039
Then we test different numbers of iterations in the smoothers (pre and post) and show the results in Table III and Table IV. The tests are run with N=15 and N=31, respectively. Obviously, the more iterations we use in the smoother, the fewer multigrid (MG) iterations we need and the lower the average convergence factor is. This is reasonable, since more smoother iterations give a better relaxation result.
Now let us test how the degree of anisotropy affects the performance.
TABLE III
DIFFERENT SMOOTHER ITERATIONS, α = 1, β = 0.01, ε = 0.08 · 0.5^{l−1}, N=15

smoother iterations           2          4          8
MG iterations                 10         9          8
average convergence factor    0.130504   0.099797   0.068058
TABLE IV
DIFFERENT SMOOTHER ITERATIONS, α = 1, β = 0.01, ε = 0.08 · 0.5^{l−1}, N=31

smoother iterations           2          4          8
MG iterations                 8          7          6
average convergence factor    0.066292   0.039603   0.021967
We keep α = 1, N = 31 and let β range over 1e−5, 1e−4, 1e−3, ..., 1e4, 1e5. Fig. 14 and Fig. 15 respectively show the number of MG iterations and the average convergence factor for each β. Apparently, the performance is better (fewer MG iterations and lower acf) the stronger the anisotropy is. Fig. 16 and Fig. 17 plot these two quantities for GMG. Obviously, GMG does well when β = 1, that is, for the isotropic problem. Compared with Fig. 14 and Fig. 15, it is clear that SA works much better for anisotropic problems, and the more anisotropic the problem is, the more obvious the advantage of SA.
Fig. 14. Number of MG iterations for anisotropic problem with different
β, N = 31
Fig. 15. Average convergence factors for anisotropic problem with different β, N = 31
VI. CONCLUSION
We have now obtained some performance comparisons for the AMG algorithm based on smoothed aggregation. Several factors influence the performance: the number of iterations in the smoother, the degree of anisotropy of the problem, and the threshold parameter ε. The more iterations in the smoother, the fewer MG iterations are needed and the lower the average convergence factors are. Compared with GMG, SA has great advantages when applied to anisotropic problems, and we expect ε to be considerably small to generate good results. For isotropic problems, GMG is straightforward and its performance excellent. However, for anisotropic problems, AMG can deal with the oscillations occurring in the smooth error, which leads to its outstanding performance there.

Fig. 16. Number of MG iterations for anisotropic problem with different β, by GMG, N=31
Fig. 17. Average convergence factors for anisotropic problem with different β, by GMG, N=31
REFERENCES
[1] J. W. Ruge and K. Stüben, Efficient solution of finite difference and finite element equations by algebraic multigrid (AMG), in Multigrid Methods for Mathematics and its Applications Conference Series, Clarendon Press, Oxford, 1985.
[2] K. Stüben, Algebraic multigrid (AMG): experiences and comparisons, Appl. Math. Comput., 13 (1983).
[3] A. Brandt, S. F. McCormick, and J. W. Ruge, Algebraic multigrid (AMG) for sparse matrix equations, in Sparsity and Its Applications, D. J. Evans, ed., Cambridge Univ. Press, Cambridge, 1984.
[4] J. W. Ruge and K. Stüben, Algebraic multigrid (AMG), in Multigrid Methods, S. F. McCormick, ed., vol. 3 of Frontiers in Applied Mathematics, SIAM, Philadelphia, PA, 1987.
[5] J. W. Ruge, Algebraic multigrid (AMG) for geodetic survey problems, in Preliminary Proc. Internat. Multigrid Conference, Fort Collins, CO, 1983, Institute for Computational Studies at Colorado State University.
[6] P. Vaněk, Acceleration of convergence of a two-level algorithm by smoothing transfer operator, Applications of Mathematics, 37 (1992).
[7] P. Vaněk, Fast multigrid solver, Applications of Mathematics, to appear.
[8] P. Vaněk, J. Mandel, and M. Brezina, Algebraic Multigrid by Smoothed Aggregation for Second and Fourth Order Elliptic Problems, 1995.
[9] Wikipedia, http://en.wikipedia.org/wiki/Anisotropy
[10] K. Stüben, An Introduction to Algebraic Multigrid, Appendix A.
[11] Van Emden Henson, An Algebraic Multigrid Tutorial, April 1999.
[12] P. Vaněk, J. Mandel, and M. Brezina, Algebraic Multigrid by Smoothed Aggregation for Second and Fourth Order Elliptic Problems, Jan 1995.
[13] K. Stüben, Algebraic Multigrid (AMG): Multigrid Methods and Parallel Computing, 2nd International FEFLOW User Conference, Potsdam, September 14-16, 2009.
[14] CFD-Online, http://www.cfd-online.com/Wiki/Geometric_multigrid_-_FAS

More Related Content

What's hot

FINAL DEFENCE
FINAL DEFENCEFINAL DEFENCE
FINAL DEFENCE
Rahul KC
 

What's hot (17)

Civil engineering mock test
Civil engineering mock testCivil engineering mock test
Civil engineering mock test
 
FINAL DEFENCE
FINAL DEFENCEFINAL DEFENCE
FINAL DEFENCE
 
International Journal of Mathematics and Statistics Invention (IJMSI)
International Journal of Mathematics and Statistics Invention (IJMSI) International Journal of Mathematics and Statistics Invention (IJMSI)
International Journal of Mathematics and Statistics Invention (IJMSI)
 
Bsc it summer 2015 solved assignments
Bsc it summer   2015 solved assignmentsBsc it summer   2015 solved assignments
Bsc it summer 2015 solved assignments
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)
 
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...
 
Block Hybrid Method for the Solution of General Second Order Ordinary Differe...
Block Hybrid Method for the Solution of General Second Order Ordinary Differe...Block Hybrid Method for the Solution of General Second Order Ordinary Differe...
Block Hybrid Method for the Solution of General Second Order Ordinary Differe...
 
495Poster
495Poster495Poster
495Poster
 
FEA Quiz - worksheets
FEA Quiz - worksheetsFEA Quiz - worksheets
FEA Quiz - worksheets
 
A Deconvolution Approach to the Three Dimensional Identification of Cracks in...
A Deconvolution Approach to the Three Dimensional Identification of Cracks in...A Deconvolution Approach to the Three Dimensional Identification of Cracks in...
A Deconvolution Approach to the Three Dimensional Identification of Cracks in...
 
Fourth order improved finite difference approach to pure bending analysis o...
Fourth   order improved finite difference approach to pure bending analysis o...Fourth   order improved finite difference approach to pure bending analysis o...
Fourth order improved finite difference approach to pure bending analysis o...
 
U36123129
U36123129U36123129
U36123129
 
Solving the Poisson Equation
Solving the Poisson EquationSolving the Poisson Equation
Solving the Poisson Equation
 
BEM Solution for the Radiation BC Thermal Problem with Adaptive Basis Functions
BEM Solution for the Radiation BC Thermal Problem with Adaptive Basis FunctionsBEM Solution for the Radiation BC Thermal Problem with Adaptive Basis Functions
BEM Solution for the Radiation BC Thermal Problem with Adaptive Basis Functions
 
B04310408
B04310408B04310408
B04310408
 
Introduction to FEA
Introduction to FEAIntroduction to FEA
Introduction to FEA
 
IRJET- 5th Order Shear Deformation Theory for Fixed Deep Beam
IRJET- 5th Order Shear Deformation Theory for Fixed Deep BeamIRJET- 5th Order Shear Deformation Theory for Fixed Deep Beam
IRJET- 5th Order Shear Deformation Theory for Fixed Deep Beam
 

Viewers also liked

статья всесоюзная рыбалка
статья всесоюзная рыбалкастатья всесоюзная рыбалка
статья всесоюзная рыбалка
anisimoff
 
Mme Moulin SFR Environmental Labeling of Mobile Phones
Mme Moulin SFR Environmental Labeling of Mobile PhonesMme Moulin SFR Environmental Labeling of Mobile Phones
Mme Moulin SFR Environmental Labeling of Mobile Phones
IDATE DigiWorld
 
tamara nicole stylist
tamara nicole stylisttamara nicole stylist
tamara nicole stylist
tamara1
 
Соревнования по футболу среди 3-5 классов
Соревнования по футболу среди 3-5 классовСоревнования по футболу среди 3-5 классов
Соревнования по футболу среди 3-5 классов
anisimoff
 
Cпортивный калейдоскоп апрель-май 2016г..
Cпортивный калейдоскоп апрель-май 2016г..Cпортивный калейдоскоп апрель-май 2016г..
Cпортивный калейдоскоп апрель-май 2016г..
anisimoff
 
L'ECONOMIE TUNISIENNE ET LA PROMOTION DE LA CREATION D'ENTREPRISES CONJONCTUR...
L'ECONOMIE TUNISIENNE ET LA PROMOTION DE LA CREATION D'ENTREPRISES CONJONCTUR...L'ECONOMIE TUNISIENNE ET LA PROMOTION DE LA CREATION D'ENTREPRISES CONJONCTUR...
L'ECONOMIE TUNISIENNE ET LA PROMOTION DE LA CREATION D'ENTREPRISES CONJONCTUR...
yamen
 
Arc1126 project 2a brief observation deck @ cape rachado
Arc1126 project 2a brief observation deck @ cape rachadoArc1126 project 2a brief observation deck @ cape rachado
Arc1126 project 2a brief observation deck @ cape rachado
Darshiini Vig
 

Viewers also liked (20)

Hsp秋合宿記録271122
Hsp秋合宿記録271122Hsp秋合宿記録271122
Hsp秋合宿記録271122
 
статья всесоюзная рыбалка
статья всесоюзная рыбалкастатья всесоюзная рыбалка
статья всесоюзная рыбалка
 
18
1818
18
 
web design
web designweb design
web design
 
Mme Moulin SFR Environmental Labeling of Mobile Phones
Mme Moulin SFR Environmental Labeling of Mobile PhonesMme Moulin SFR Environmental Labeling of Mobile Phones
Mme Moulin SFR Environmental Labeling of Mobile Phones
 
G092 Iida, T., Ito, T., & Inoue, T. (2008). HIV-related knowledge and attitu...
G092  Iida, T., Ito, T., & Inoue, T. (2008). HIV-related knowledge and attitu...G092  Iida, T., Ito, T., & Inoue, T. (2008). HIV-related knowledge and attitu...
G092 Iida, T., Ito, T., & Inoue, T. (2008). HIV-related knowledge and attitu...
 
tamara nicole stylist
tamara nicole stylisttamara nicole stylist
tamara nicole stylist
 
Соревнования по футболу среди 3-5 классов
Соревнования по футболу среди 3-5 классовСоревнования по футболу среди 3-5 классов
Соревнования по футболу среди 3-5 классов
 
Pic1
Pic1Pic1
Pic1
 
Le Sport
Le SportLe Sport
Le Sport
 
Cпортивный калейдоскоп апрель-май 2016г..
Cпортивный калейдоскоп апрель-май 2016г..Cпортивный калейдоскоп апрель-май 2016г..
Cпортивный калейдоскоп апрель-май 2016г..
 
L'ECONOMIE TUNISIENNE ET LA PROMOTION DE LA CREATION D'ENTREPRISES CONJONCTUR...
L'ECONOMIE TUNISIENNE ET LA PROMOTION DE LA CREATION D'ENTREPRISES CONJONCTUR...L'ECONOMIE TUNISIENNE ET LA PROMOTION DE LA CREATION D'ENTREPRISES CONJONCTUR...
L'ECONOMIE TUNISIENNE ET LA PROMOTION DE LA CREATION D'ENTREPRISES CONJONCTUR...
 
日本在宅医療学会 クラウド型地域医療連携システムの活用 20110625
日本在宅医療学会 クラウド型地域医療連携システムの活用 20110625日本在宅医療学会 クラウド型地域医療連携システムの活用 20110625
日本在宅医療学会 クラウド型地域医療連携システムの活用 20110625
 
Web Squared (Web²) : Principes et exemples
Web Squared (Web²) : Principes et exemplesWeb Squared (Web²) : Principes et exemples
Web Squared (Web²) : Principes et exemples
 
достижение результатов
достижение результатовдостижение результатов
достижение результатов
 
Les filtres RSS dans Inoreader : détail de la syntaxe à utiliser (MAJ : mai 2...
Les filtres RSS dans Inoreader : détail de la syntaxe à utiliser (MAJ : mai 2...Les filtres RSS dans Inoreader : détail de la syntaxe à utiliser (MAJ : mai 2...
Les filtres RSS dans Inoreader : détail de la syntaxe à utiliser (MAJ : mai 2...
 
Arc1126 project 2a brief observation deck @ cape rachado
Arc1126 project 2a brief observation deck @ cape rachadoArc1126 project 2a brief observation deck @ cape rachado
Arc1126 project 2a brief observation deck @ cape rachado
 
Quels chiffres ajouter à son cv ?
Quels chiffres ajouter à son cv ?Quels chiffres ajouter à son cv ?
Quels chiffres ajouter à son cv ?
 
Mémoire sur la publicité et les médias sociaux Lisa HOCH
Mémoire sur la publicité et les médias sociaux  Lisa HOCHMémoire sur la publicité et les médias sociaux  Lisa HOCH
Mémoire sur la publicité et les médias sociaux Lisa HOCH
 
Indian Missiles
Indian Missiles Indian Missiles
Indian Missiles
 

Similar to SmoothedAggregation

2007 santiago marchi_cobem_2007
2007 santiago marchi_cobem_20072007 santiago marchi_cobem_2007
2007 santiago marchi_cobem_2007
CosmoSantiago
 
Grds international conference on pure and applied science (6)
Grds international conference on pure and applied science (6)Grds international conference on pure and applied science (6)
Grds international conference on pure and applied science (6)
Global R & D Services
 
Development of stereo matching algorithm based on sum of absolute RGB color d...
Development of stereo matching algorithm based on sum of absolute RGB color d...Development of stereo matching algorithm based on sum of absolute RGB color d...
Development of stereo matching algorithm based on sum of absolute RGB color d...
IJECEIAES
 

Similar to SmoothedAggregation (20)

10.1.1.34.7361
10.1.1.34.736110.1.1.34.7361
10.1.1.34.7361
 
MARGINAL PERCEPTRON FOR NON-LINEAR AND MULTI CLASS CLASSIFICATION
MARGINAL PERCEPTRON FOR NON-LINEAR AND MULTI CLASS CLASSIFICATION MARGINAL PERCEPTRON FOR NON-LINEAR AND MULTI CLASS CLASSIFICATION
MARGINAL PERCEPTRON FOR NON-LINEAR AND MULTI CLASS CLASSIFICATION
 
2007 santiago marchi_cobem_2007
2007 santiago marchi_cobem_20072007 santiago marchi_cobem_2007
2007 santiago marchi_cobem_2007
 
Randomized algorithm min cut problem and its solution using karger's algorithm
Randomized algorithm min cut problem and its solution using karger's algorithmRandomized algorithm min cut problem and its solution using karger's algorithm
Randomized algorithm min cut problem and its solution using karger's algorithm
 
An Intelligent Method for Accelerating the Convergence of Different Versions ...
An Intelligent Method for Accelerating the Convergence of Different Versions ...An Intelligent Method for Accelerating the Convergence of Different Versions ...
An Intelligent Method for Accelerating the Convergence of Different Versions ...
 
CHAPTER 3 numer.pdf
CHAPTER 3 numer.pdfCHAPTER 3 numer.pdf
CHAPTER 3 numer.pdf
 
Histogram Gabor Phase Pattern and Adaptive Binning Technique in Feature Selec...
Histogram Gabor Phase Pattern and Adaptive Binning Technique in Feature Selec...Histogram Gabor Phase Pattern and Adaptive Binning Technique in Feature Selec...
Histogram Gabor Phase Pattern and Adaptive Binning Technique in Feature Selec...
 
Hierarchical Approach for Total Variation Digital Image Inpainting
Hierarchical Approach for Total Variation Digital Image InpaintingHierarchical Approach for Total Variation Digital Image Inpainting
Hierarchical Approach for Total Variation Digital Image Inpainting
 
Lego like spheres and tori, enumeration and drawings
Lego like spheres and tori, enumeration and drawingsLego like spheres and tori, enumeration and drawings
Lego like spheres and tori, enumeration and drawings
 
Grds international conference on pure and applied science (6)
Grds international conference on pure and applied science (6)Grds international conference on pure and applied science (6)
Grds international conference on pure and applied science (6)
 
Genetic operators
Genetic operatorsGenetic operators
Genetic operators
 
Fast Incremental Community Detection on Dynamic Graphs : NOTES
Fast Incremental Community Detection on Dynamic Graphs : NOTESFast Incremental Community Detection on Dynamic Graphs : NOTES
Fast Incremental Community Detection on Dynamic Graphs : NOTES
 
SIFT/SURF can achieve scale, rotation and illumination invariant during image...
SIFT/SURF can achieve scale, rotation and illumination invariant during image...SIFT/SURF can achieve scale, rotation and illumination invariant during image...
SIFT/SURF can achieve scale, rotation and illumination invariant during image...
 
Ds33717725
Ds33717725Ds33717725
Ds33717725
 
Ds33717725
Ds33717725Ds33717725
Ds33717725
 
Graph Matching Algorithm-Through Isomorphism Detection
Graph Matching Algorithm-Through Isomorphism DetectionGraph Matching Algorithm-Through Isomorphism Detection
Graph Matching Algorithm-Through Isomorphism Detection
 
A Regularized Simplex Method
A Regularized Simplex MethodA Regularized Simplex Method
A Regularized Simplex Method
 
Joint3DShapeMatching
Joint3DShapeMatchingJoint3DShapeMatching
Joint3DShapeMatching
 
Image segmentation using normalized graph cut
Image segmentation using normalized graph cutImage segmentation using normalized graph cut
Image segmentation using normalized graph cut
 
Development of stereo matching algorithm based on sum of absolute RGB color d...
Development of stereo matching algorithm based on sum of absolute RGB color d...Development of stereo matching algorithm based on sum of absolute RGB color d...
Development of stereo matching algorithm based on sum of absolute RGB color d...
 

SmoothedAggregation

  • 1. Smoothed Aggregation Seminar Multigrid Methods Tingting Sun Fakult¨at f¨ur Informatik Technische Universit¨at M¨unchen Email: sun@in.tum.de Abstract— Multigrid methods are very efficient when solving large linear system equations. Among MG the most familiar approach with us is geometric multigrid method (GMG). GMG is intuitive and can achieve a satisfying performance when applied to isotropic problems on structured grids and regular geometries. When it comes to unstructured grids, irregular geometries, as well as anisotropic problems, things are not so simple any more. It is not feasible to predefine a grids hierarchy on a unstructured grid; also, in the cases of anisotropic problems, the smooth errors generated from smoother are not ”smooth” in all the directions. If we insist on GMG, the unsmoothness in the direction of anisotropy might arise some incorrectness in the restriction and the corresponding prolongation. Then the convergence would be slowed down. Compared with GMG, AMG is independent of the knowledge of the geometry. The coarsening strategy is generated automatically from the information in system matrix. In this paper we will develop a special AMG based on smoothed aggregation(SA). In the end, the specific algorithm for SA will be demonstrated. We will make comparisons with different parameters, and find the optimal performance. Index Terms— Algebraic Multigrid Method(AMG), Smoothed Aggregation(SA), Smooth Error I. INTRODUCTION Normal iterative method for linear system of equations can eliminate high frequency errors in each iteration while still keeping low frequency ones. Multigrid Method could complement this by solving the problems on a hierarchy of coarse grid levels. It is very efficient when applied to solve large linear system of equations. With different ways of creating the hierarchy of grids one can obtain different MG approaches. The coarse grids selection in geometric multigrid method utilizes the geometric informations, meanwhile, in algebraic multigrid method the coarse grids selection can be achieved automatically by the implicit knowledge in the system matrix. Geometric multigrid method is good enough on structured grids and isotropic problems. For other problems on unstuctured grids or anisotropic problems, it is not so satisfying any more . The algebraic multigrid method(AMG), was developed by Brandt, McCormick, Ruge and St¨uben in 1982. It was explored early on by Stueben in 1983, and polularized by Ruge and Stuben in 1987.. [1] [2] [3] [4] [5]˙ Different from GMG which fixes the coarsening approaches and select smoothers, AMG fixes the relaxation and selects the coarsening approaches. The approach we will discuss in the paper is AMG by smoothed aggregation. It is introduced by Petr V anek [6] [7], and made further explaination in [8].. It exploits the information hidden in the system matrix and then aggregates all the points on the current level. The representatives from the aggregations gives a good interpolation to all the other points, and also realize semi-coarsening in the direction with weak connections. We will firstly have a look at the basic geometric multigrid method. Then we will discuss its advantages and disadvantages, and see how algebraic multigrid method could make improvements. After that, we will introduce the coarse grids selection approach used in Smoothed Aggregation AMG, and test the performance compared with GMG, and compared with different parameters in the algorithm itself. 
In this paper, all the discussions are around the linear system characterized by a symmetric, positive, definite matrix. The example problem we will use for implementation is a simple 2D Poisson equation on the domain Ω =]0, 1[2 α ∂2 u ∂x2 +β ∂2 u ∂y2 = −2π2 sin (πx) sin (πy), u = 0 on ∂Ω (1) II. GEOMETRIC MULTIGRID METHOD The basic multigrid algorithm to solve a large system of linear equations Ax = b is demonstrated as follows: 1) Presmoothing: or relaxation, iterations (GS, Jacobi, SOR, etc) on fine grids. 2) Coarsening/Restriction: restrict residual to coarse level. 3) Solve coarse grid equation: by a direct solver or recur- sion of BMG. 4) Prolongation/Interpolation: interpolate the error to fine level, and correct the solution. 5) Postsmoothing: iterations on fine grids. With different coarse grids selection methods we can obtain different MGs. Geometric multigrid method(GMG), which is mostly used for solving linear systems produced on regular geometries, achieves coarsening by the geometric knowledge. In our example PDE (1) with α = 1, β = 1, the domain is the square Ω. We discrete it with N equidistant points along each dimension, that is, in total N2 grid points. We select every second points in each dimension as the coarse grid points (see Fig.1, red points denote points on coarser level), that is, totally (N−1)2 4 (suppose N = 2k − 1, k is an integer) grid points in the coarser level. Both bilinear interpolation and full weighting interpolation
  • 2. Fig. 1. Coarse grids selection can be chosen as the interpolation in GMG. The restriction operator, according to Galerkin Principle, is the transpose of the interpolation operator. In our implementation, we use matlab function interp2() for restriction and interpolation( see Fig.2), as we know how to generate coarser grid according to what is mentioned above. We use a V-cycle in our algorithm, and performs 2 Gauss-Seidel iterations as smoothing method. When the number of grid points in each dimension is less than 4, the lowest level is reached. The direct solver on the lowest level is also Gauss-Seidel. We trace the residual after each MG iteration and use the the ratio of the current residual to the first one as the criteria of teminating the algorithm. For PDE(1), the domain is discretized by a mesh with N equidistant points in each dimension. With a matrix T stores values in all the points, and the initial guess sets 1 on the whole domain, the GMG algorithm is: Algorithm 1 GMG set convergence factor cf = res current res first = 1, solution matrix T = zeros(N2 , 1); while cf > 1e − 8 do V-cycle iteration: if N < 4 then use direct solver Gauss-Seidel else presmoothing: perform 2 Gauss-Seidel iterations on T compute residual res fine restrict res fine to coarser grid do V-cycle iteration on coarser level and get correc- tion coarse interpolate correction to fine level and get correc- tion fine,T = T + correction finex postsmoothing: perform 2 Gauss-Seidel iterations on T as correction and return this correction to finer level. end if compute residual matrix res end while=0 GMG is intuitive and straightforward, and for isotropic problems its performance is outstanding. Also, The advantage of geometric multigrid over algebraic multigrid is that the former should perform better for non-linear problems since non-linearities in the system are carried down to the coarse levels through the re-discretization. [14] The following graphs illustrate the performance of GMG solving the example PDE (1) of isotropic problem( α = 1, β = 1). Fig.3 shows the solution generated by GMG. Fig.4 and Fig.5 demonstrate the convergence factors respectively with N=31 and N=63. The algorithm operates only 5 MG iterations. Convergences factors are quite small, which indicates a fast convergence. Fig. 2. Restriction and Interpolation in GMG [13] Fig. 3. Solution of isotropic problem by GMG, N=63 III. ALGEBRAIC MULTIGRID METHOD(AMG) A. Availability of geometric knowledge As we could see, GMG does a good job on structured grids. In the problem above, we solve the Poisson equation on a square with N · N nodes on the fine level, and we will choose every second nodes in both directions for coarsening. Obviously, this is not feasible for unstructured mesh or irregu- lar geometries. In AMG, coarsening is achieved automatically without knowledge on geometries. Only implicit information in the original operator matrix is exploited.
  • 3. Fig. 4. Convergence Factors by GMG on isotropic problem, N=31 Fig. 5. Convergence Factors by GMG on isotropic problem, N=63 B. Oscillatory smooth error Another thing we need to reconsider about GMG is that if it could work well for the anisotropic problem as well? Fig.6 and Fig.7 plot the convergence factor curve when GMG is applied on anisotropic problem:(Fig.6 with β = 0.01, Fig.7 with β = 0.00001) Uxx + 0.01Uyy = −2π2 sin(πx) sin(πy) We can see that compared with on isotropic problem, much more MG iterations are needed and the convergence factors are remarkablely increased. So the error damping and the convergence slow down. Meanwhile compare Fig.6 with Fig.7, obviously the more anisotropic the problem is, the more slowly the errors converge. GMG becomes more and more inefficient when the anisotropy increases. Now we discuss what causes this problem. For isotropic problem, in our example α = β = 1, the 5-point stencil is:   0 1 0 1 −4 1 0 1 0   Diffusion problem with isotropy means speeds along all directions are the same and the smooth error in each dimension keeps ”smooth”. For anisotropic problem, where α << β or α >> β, the stencil is:   0 β 0 α −2α − 2β α 0 β 0   Fig. 6. Convergence Factors by GMG on Anisotropic problem, α = 1, β = 0.01, N=31 Fig. 7. Convergence Factors by GMG on isotropic problem, α = 1, β = 0.00001,N=31 then the speeds along different directions are not the same and the smooth error in some dimension might have oscillations. Anisotropy is the property of being directionally dependent, as opposed to isotropy, which implies identical properties in all directions. [9] The directional dependence in an anisotropic Poisson equation appears in the difference between coefficients of the partial derivatives: α ∂2 u ∂x2 + β ∂2 u ∂y2 = −2π2 sin (πx) sin (πy) with α β or β α For the simplicity, we suppose α = 1 and β α. We will use this as our example in the following discussion. Now let us discuss the effect brought by the anisotropy of these coefficients. Given the system matrix A, the system equation is Ax = b. We can decompose A to 3 parts: diagonal matrix D, lower triangular matrix L, upper triangular matrix U, that is, A = D + L + U. The following inference is based on [11]. Suppose we use point wise Gauss-Seidel iterations as
  • 4. smoother, then for the kth iteration: A = D + L + U =⇒(D + L)x = b − Ux =⇒x(k+1) = (D + L)−1 (b − Ax(k) ) + x(k) =⇒x(k+1) = (D + L)−1 Ae(k) + x(k) Subtract both sides from the exact solution, we could obtain the error recurence: e(k+1) = (Id + (L + D)−1 A)e(k) One could observe that after several iterations of the relax- ation, the error starts to converge very slowly. That inplies e(k+1) ≈ e(k) ,or as a better view point (D + L)−1 Ae, Ae e, Ae For most iterations(e.g., Jacobi or Gauss Seidel), this holds if [11] D−1 Ae, Ae e, Ae (2) It is easy to show from(1) that smooth error satisfies Ae, e De, e , and we have from this [11] 1 2 i=j ( −aij 2aii )( ei − ej ei )2 1 (3) Since we expect a slow convergence after presmoothing, the relation above must be satisfied. It suggests that, when aij is ”big”, in order to fulfill this inequality, (ei − ej) must be quite small. So ei and ej are close, or we could say ei depends on ej and the error is kind of smooth in the direction from point i to point j. And vise verse, when aij is ”small”, ei and ej might differ a lot so that the error might oscillates along this direction. Hence, we know the components of the system matrix A have effects on the smoothness of the error. Then what determines those components? Obviously it is the coefficients in the PDE. So what do these components in the system matrix A actually indicate? The relative magnitude of a component aij implies how strong the connection between point i and point j. If aij is relatively large compared to the other off-diagonal components, then point i and point j have strong connection and the value at point i plays quite a role in determining the value of point j. As the isotropic coefficients generate the same off-diagonal components in A, while the anisotropic ones do not, we could expect different connections in the two cases. For the isotropic problem the connections among points along both directions( x and y in our example) are similar. For the anisotropic problem, connections along the direction with the larger coefficient are stronger than the other one. So now we could make a short conclusion here. The anisotropy of the coefficients in the PDE give rise to different off-diagonal components in the system matrix A: the large coefficient causes relative larger aij in the direction with strong connections, or we call direction of dependence. From (2) we know that with ”big” aij the errors at the two points don’t vary too much whereas with ”small” aij the errors at the two points varies quickly. Thus, not like isotropic coefficients, in anisotropic problems, the smooth error is not ”smooth” in both direction. In our example it is smooth in x-direction, and oscillatory in y-direction. See Fig.8. The anisotropic coefficients indicates that the points in x- direction have strong connections while in y-direction have weak connection. Fig.9 shows the error components along two directions in an extrme example. With strong connections, that is , in the direction of dependence, smooth error varies slowly, see upper plot in Fig.9. This implies a continuity and coarsening in this direction could later lead to a good interpolation(see the points and line in red). Meanwhile, with weak connections, that is, in the direction of independence, the error might varies quite quickly, see lower plot in Fig.9. 
Due to the oscillation of the errors, coarsening in this direction, to the same extent as in the direction of dependence, could probably arise a bad interpolation later, which might cause and accelerate errors in the final solution. Like the red line connected by the red points in the lower plot in Fig.9, obviously it doesn’t keep the shape of the green one at all. One solution of this problem is semi-coarsening, which suggest coarsening only in the direction of dependence.(See Fig.10, keep coarsening approach in the upper plot, when along the direction which is indicated in the lower plot, select more coarse grid points). Hence we expect AMG could have improvements in both aspects. It should not only generate the hierarchy of coarse grids independent of the geometric knowledge, but also damp the oscillation of the smooth error in the direction of independence by an automatically semi-coarsening. IV. AMG BY SMOOTHED AGGREGATION A. Aggregations on each level As we mentioned above, the system matrix has implicit information about the connections among points in it. All the points could be decomposed into disjoint subsets according to their connections. In each subset, all the points have strong connections and could be represented by one of them. The other points must be able to be interpolated by these ”representatives”. Now we need to define strong connection between points, or what is called strongly-coupling in [12] : if| aij |≥ √ aiiajj, < 1, ( is a threshold for strongly coupling) then point i and point j are strongly coupled.
  • 5. Fig. 8. Smooth error after 5 G-S iterations for anisotropic problem with α = 1, β = 0.0001 Fig. 9. Smooth error components. Upper: along the direction with strong connections; lower: along the direction with weak connections Then we define strongly-coupled neighborhood of point i as: [12] Ni( ) = {j :| aij |≥ √ aiiajj} The way to aggregate: 1) Travel all the points in the set, generate their strongly- coupled neighborhoods Ni( ). 2) Set R = {i ∈ 1, ..., n : Ni( ) = {i}} and num = 0. Notice here the isolated points, which contains only itself in the strongly-coupled neighborhood, are excluded. 3) Loop on all i ∈ R Fig. 10. Semicoarsening • if Ni ⊂ R, then set Ci = Ni, R = RNi, num = num + 1. Here Ci is used to store the aggregations. After the loop num show the number of aggregations and there might be still some points left in R. 4) Handling the remnants. • Duplicate Ci → ˆCi • Travel all the points i in R, if there exists some j so that Nj intersects ˆCi, then add this i to Ci. In case there are more than one such j, take the one with strongest coupling. Then R = Ri. Fig.11, Fig.12, and Fig.13 illustrate the aggregation from the first level (finest grid) to the third level. Here for the threshold we use = 0.08 · 0.5l−1 . Blue dots represent the finest grid points, black ones for points on the second level and red circles for points on the third level. B. Prolongator Based on the aggregations we could construct the prolon- gator. We will firstly construct a tentative prolongator Ph 2h: (Ph 2h)ij = 1 if i ∈ Cj, 0 otherwise (4) With this tentative prolongator, points in a same aggregation will be assigned the same value as the representative of this aggregation. In other words, it makes a piecewise constant prolongation. We make a further improvement by using a damped Jacobi smoother: S = (Id − ωD−1 AF ) and then Ih 2h = SPh 2h
  • 6. Fig. 11. Aggregations of points, N=15, α = 1, β = 0.01 Fig. 12. Aggregations of points, N=15, α = 0.01, β = 1 where AF = (aF ij) is the filtered matrix given by: [12] aF ij = aij if j ∈ Ni( ), 0 otherwise if i = j, aF ii = aii− n j=1,j=i (aij−aF ij) The smoother S smooths the ”edge” of the aggregations. Then the value on the whole domain will vary continuously. Here the filtration prevents the undesired overlaps of the coarse space basis functions. By construction, AF typically makes the nonzero pattern of operator A on coarse grids follow the 9-point stencil. [12] Usually we take ω = 2 3 , like in Jacobi, in most cases this tends to damp the error very efficiently. is a threshold to determine the strong connection. In our implementation we Fig. 13. Aggregations of points, N=31, α = 1, β = 0.01 will try the effects on the solution by different choices of . See Table II. C. The Coarse Grid Operator Here we use Galerkin principle to define the coarse grid operator Acoarse. For the restrictor we use the transpose of the prolongator;and the operator on the new coarse grids is obtained by: Acoarse = (Ih 2h)T A(Ih 2h) V. IMPLEMENTATION A. Algorithm for One iteration of SA-AMG We are trying to solve linear system equations. The follow- ing is the algorithm for a SA-AMG iteration on level l. Ax = b 1) Pre-smoothing: relaxation with v1 Jacobi or Gauss Sei- del iterations. With G-S, M = −(L + D)−1 U, N = (L + D)−1 ,D is the diagonal matrix of A, and L, U respectively are the lower and upper triangular of A. • x = Mx + Nb 2) Coarse grid correction: • Compute operator a) Compute the tentative prolongator P b) Compute the smoother S c) Compute the prolongator Ih 2h and restrictor R: Ih 2h = SP, R = (Ih 2h)T • Restriction a) System matrix on level l + 1:Acoarse = RAIh 2h b) Restriction on residual: bcoarse = R(bl − Ax) • Smoothing a) If it reaches the lowest level, Acoarsexcoarse = bcoarse with direct solver and return xcoarse
V. IMPLEMENTATION

A. Algorithm for One Iteration of SA-AMG

We want to solve the linear system Ax = b. The following is the algorithm for one SA-AMG iteration on level l; a code sketch of the complete cycle is given after the algorithm.
1) Pre-smoothing: relax with ν1 Jacobi or Gauss-Seidel iterations. With Gauss-Seidel, M = −(L + D)^{−1} U and N = (L + D)^{−1}, where D is the diagonal matrix of A and L, U are the strictly lower and upper triangular parts of A:
   • x = Mx + Nb
2) Coarse-grid correction:
   • Compute the operators:
     a) compute the tentative prolongator P;
     b) compute the smoother S;
     c) compute the prolongator I^h_{2h} = SP and the restrictor R = (I^h_{2h})^T.
   • Restriction:
     a) system matrix on level l + 1: A_coarse = R A I^h_{2h};
     b) restriction of the residual: b_coarse = R(b − Ax).
   • Coarse solve:
     a) if the lowest level is reached, solve A_coarse x_coarse = b_coarse with a direct solver and return x_coarse;
     b) if not, recurse with this algorithm on level l + 1.
   • Prolongation/interpolation and correction:
     a) x = x + I^h_{2h} x_coarse.
3) Post-smoothing: ν2 Jacobi or Gauss-Seidel iterations.
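Putting the pieces together, one V-cycle might look like the sketch below. The recursion cutoff (direct_size), the halving of ε per level (matching ε = 0.08 · 0.5^{l−1}), and the defaults ν1 = ν2 = 2 are illustrative assumptions, not prescriptions from the paper.

```python
from scipy.sparse import tril, triu
from scipy.sparse.linalg import spsolve, spsolve_triangular

def gauss_seidel(A, x, b, sweeps):
    """Forward Gauss-Seidel: x <- (L + D)^{-1} (b - U x), `sweeps` times."""
    LD = tril(A, k=0).tocsr()        # L + D
    U = triu(A, k=1).tocsr()         # strictly upper triangular part
    for _ in range(sweeps):
        x = spsolve_triangular(LD, b - U @ x, lower=True)
    return x

def vcycle(A, b, x, eps, nu1=2, nu2=2, direct_size=20):
    """One SA-AMG iteration (V-cycle) on the current level."""
    if A.shape[0] <= direct_size:            # lowest level: direct solver
        return spsolve(A.tocsc(), b)
    x = gauss_seidel(A, x, b, nu1)           # 1) pre-smoothing
    agg, num = aggregate(A, eps)             # 2) build aggregates on this level
    P = tentative_prolongator(agg, num)
    I = smoothed_prolongator(A, P, eps)      #    I^h_2h = S P
    R = I.T                                  #    restrictor = prolongator transpose
    Ac = galerkin(A, I)                      #    A_coarse = I^T A I
    bc = R @ (b - A @ x)                     #    restricted residual
    xc = vcycle(Ac, bc, np.zeros(num), 0.5 * eps, nu1, nu2, direct_size)
    x = x + I @ xc                           #    coarse-grid correction
    return gauss_seidel(A, x, b, nu2)        # 3) post-smoothing
```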
B. Algorithm for Computing the Tentative Prolongator P
1) Generate the aggregates (C_i).
2) Create a zero matrix P. Then, for i = 1, ..., n_l, if i ∈ C_j, set P_ij = 1.

C. Algorithm for the Aggregation
1) For i = 1, ..., n_l, compute the corresponding strongly-coupled neighborhood and store it in the corresponding row of the matrix NGBmatrix.
2) Create an array RR to store the number of elements in each strongly-coupled neighborhood, and an array R to record the status of all points: 0 for isolated points, −1 for points already assigned to an aggregate, and >0 for the points remaining in R.
3) For i = 1, ..., n_l, if its N_i^l is a subset of the set of points remaining in R, then C(i, :) ← N_i^l.
4) Visit all points left in R and find the aggregates their N_i^l intersects. Add each such point to the aggregate with which its coupling is strongest.

D. Other Details
• In the complete SA-AMG algorithm, the stopping criterion for the SA-AMG iterations is: current residual / initial residual < 10^{−8}.
• The initial guess can be zero everywhere or random values.
• ε can be changed along with the levels or kept constant.

E. Experiments
The following tables compare the performance (number of main iterations, final errors, average convergence factor, coarsening ratio) for different parameters. In my implementation I assign the value 1 to all grid points as the initial guess, and the iteration stops when the ratio of the current residual to the initial residual is less than 10^{−8}.

Table I shows the coarsening ratios for different N. From the table we see that the coarsening ratio from level 1 to level 2, as well as from level 2 to level 3, is around (at most) 3. But the ratios between the last few levels jump considerably for large N; for example, for N = 63 the coarsening ratio from level 4 to level 5 is 9, much larger than 3. The reason is that after several levels of semicoarsening, the error component in the direction of weak connections reaches frequencies similar to those of the error component in the strongly connected direction; from then on, coarsening also takes place in the former direction, and the coarsening ratios increase markedly.

TABLE I
COARSENING RATIO IN THE IMPLEMENTATION WITH DIFFERENT N; α = 1, β = 0.01, ε = 0.08 · 0.5^{l−1}

N    level 1   level 2   level 3   level 4   level 5   ratio
7    49        21        7         -         -         ≈ 2.67
15   225       75        30        5         -         ≈ 3.8333
31   961       341       124       22        4         ≈ 4.1761
63   3969      1323      441       63        7         ≈ 5.5

Table II shows how the threshold ε affects the performance (the number of MG iterations and the average convergence factor); the coefficients and the value of N are also varied. From the table we see that ε = 0.1, ε = 0.05, and ε = 0.08 · 0.5^{l−1} make no big difference in the performance. But ε = 0.5 does not work well: the number of iterations and the convergence factor are much worse, and for N = 31 no result was obtained (marked "-"). In the tables, acf is short for average convergence factor and ni for number of iterations.

TABLE II
CONVERGENCE FACTORS AND NUMBER OF ITERATIONS IN THE IMPLEMENTATION WITH DIFFERENT ε

α = 1, β = 0.01:
  N          ε = 0.5   ε = 0.1   ε = 0.05   ε = 0.08 · 0.5^{l−1}
  7    ni    14        5         5          5
       acf   0.253     0.005     0.005      0.005
  15   ni    49        9         9          9
       acf   0.704     0.0998    0.0998     0.0998
  31   ni    -         7         7          7
       acf   -         0.039     0.039      0.039
α = 1, β = 1e−5:
  31   ni    -         7         7          7
       acf   -         0.0405    0.040474   0.0405
α = 0.01, β = 1:
  31   ni    -         7         7          7
       acf   -         0.039     0.039      0.039

Then we test different numbers of iterations in the pre- and post-smoothers and show the results in Table III and Table IV, for N = 15 and N = 31 respectively. Obviously, the more iterations we use in the smoother, the fewer multigrid (MG) iterations we need and the lower the average convergence factor is. This is reasonable, since more smoother iterations give a better relaxation result.

TABLE III
DIFFERENT SMOOTHER ITERATIONS; α = 1, β = 0.01, ε = 0.08 · 0.5^{l−1}, N = 15

smoother iterations           2          4          8
MG iterations                 10         9          8
average convergence factor    0.130504   0.099797   0.068058

TABLE IV
DIFFERENT SMOOTHER ITERATIONS; α = 1, β = 0.01, ε = 0.08 · 0.5^{l−1}, N = 31

smoother iterations           2          4          8
MG iterations                 8          7          6
average convergence factor    0.066292   0.039603   0.021967
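For reference, the stopping criterion from subsection D and the average convergence factor reported in these tables can be computed as in the following sketch. I assume here that the average convergence factor is the geometric mean of the per-iteration residual reduction, i.e. acf = (||r_k|| / ||r_0||)^{1/k}; the paper does not spell out its definition.

```python
def solve(A, b, eps=0.08, tol=1e-8, max_iter=100):
    """Run V-cycles until ||r_k|| / ||r_0|| < tol; report the number of
    MG iterations and the average convergence factor."""
    x = np.ones_like(b)                   # initial guess: 1 on all grid points
    r0 = np.linalg.norm(b - A @ x)
    k, rel = 0, 1.0
    while rel >= tol and k < max_iter:
        x = vcycle(A, b, x, eps)
        k += 1
        rel = np.linalg.norm(b - A @ x) / r0
    acf = rel ** (1.0 / k)                # geometric mean of reduction per step
    return x, k, acf
```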
Now let us test how the degree of anisotropy affects the performance. We keep α = 1, N = 31 and let β run over 1e−5, 1e−4, 1e−3, ..., 1e4, 1e5. Fig. 14 and Fig. 15 respectively show the number of MG iterations and the average convergence factor for each β. Apparently, the stronger the anisotropy, the better the performance (fewer MG iterations and a lower acf). Fig. 16 and Fig. 17 plot the same two quantities for GMG. Obviously, GMG performs well when β = 1, that is, for the isotropic problem. Compared with Fig. 14 and Fig. 15, it is clear that SA works much better for anisotropic problems, and the stronger the anisotropy, the more obvious the advantage of SA.

Fig. 14. Number of MG iterations for the anisotropic problem with different β, N = 31

Fig. 15. Average convergence factors for the anisotropic problem with different β, N = 31

Fig. 16. Number of MG iterations for the anisotropic problem with different β, by GMG, N = 31

Fig. 17. Average convergence factors for the anisotropic problem with different β, by GMG, N = 31

VI. CONCLUSION

We have now obtained performance comparisons for the AMG algorithm based on smoothed aggregation. Several factors influence the performance: the number of iterations in the smoother, the degree of anisotropy of the problem, and the threshold parameter ε. The more iterations in the smoother, the fewer MG iterations are needed and the lower the average convergence factors are. Compared with GMG, SA has clear advantages when applied to anisotropic problems, and ε should be chosen fairly small to obtain good results. For isotropic problems, GMG is straightforward and its performance is excellent. For anisotropic problems, however, AMG can deal with the oscillation that occurs in the smooth error, which leads to its outstanding performance.

REFERENCES
[1] J. W. Ruge and K. Stüben, "Efficient solution of finite difference and finite element equations by algebraic multigrid (AMG)," in Multigrid Methods for Integral and Differential Equations, Institute of Mathematics and its Applications Conference Series, Clarendon Press, Oxford, 1985.
[2] K. Stüben, "Algebraic multigrid (AMG): experiences and comparisons," Appl. Math. Comput., 13 (1983).
[3] A. Brandt, S. F. McCormick, and J. W. Ruge, "Algebraic multigrid (AMG) for sparse matrix equations," in Sparsity and Its Applications, D. J. Evans, ed., Cambridge Univ. Press, Cambridge, 1984.
[4] J. W. Ruge and K. Stüben, "Algebraic multigrid (AMG)," in Multigrid Methods, S. F. McCormick, ed., vol. 3 of Frontiers in Applied Mathematics, SIAM, Philadelphia, PA, 1987.
[5] J. W. Ruge, "Algebraic multigrid (AMG) for geodetic survey problems," in Preliminary Proc. Internat. Multigrid Conference, Fort Collins, CO, 1983, Institute for Computational Studies at Colorado State University.
[6] P. Vaněk, "Acceleration of convergence of a two-level algorithm by smoothing transfer operator," Applications of Mathematics, 37 (1992).
[7] P. Vaněk, "Fast multigrid solver," Applications of Mathematics, to appear.
[8] P. Vaněk, J. Mandel, and M. Brezina, "Algebraic Multigrid by Smoothed Aggregation for Second and Fourth Order Elliptic Problems," 1995.
[9] Wikipedia, "Anisotropy," http://en.wikipedia.org/wiki/Anisotropy
[10] K. Stüben, "An Introduction to Algebraic Multigrid," Appendix A.
[11] V. E. Henson, "An Algebraic Multigrid Tutorial," April 1999.
[12] P. Vaněk, J. Mandel, and M. Brezina, "Algebraic Multigrid by Smoothed Aggregation for Second and Fourth Order Elliptic Problems," Jan. 1995.
[13] K. Stüben, "Algebraic Multigrid (AMG): Multigrid Methods and Parallel Computing," 2nd International FEFLOW User Conference, Potsdam, September 14-16, 2009.
[14] CFD-Online, "Geometric multigrid - FAS," http://www.cfd-online.com/Wiki/Geometric_multigrid_-_FAS