Solution of System of Linear Equations
&
Eigen Values and Eigen Vectors
P. Sam Johnson
March 30, 2015
Overview
Linear systems
Ax = b (1)
occur widely in applied mathematics.
They occur as direct formulations of “real world” problems ; but more
often, they occur as a part of numerical analysis.
There are two general categories of numerical methods for solving (1).
Direct methods are methods with a finite number of steps ; and they end
with the exact solution provided that all arithmetic operations are exact.
Iterative methods are used in solving all types of linear systems, but they
are most commonly used for large systems.
Overview
For large systems, iterative methods are often faster than direct methods, and their round-off errors are smaller. In fact, an iterative method is a self-correcting process: any error made at any stage of the computation gets automatically corrected in the subsequent steps.
We start with a simple example of a linear system and we discuss
consistency of system of linear equations, using the concept of
determinant.
Two iterative methods – Gauss-Jacobi and Gauss-Seidel methods are
discussed, to solve a given linear system numerically.
Overview
Eigen values are a special set of scalars associated with a linear system of
equations. The determination of the eigen values and eigen vectors of a
system is extremely important in physics and engineering, where it is
equivalent to matrix diagonalization and arises in such common
applications as stability analysis, the physics of rotating bodies, and small
oscillations of vibrating systems, to name only a few.
Each eigen value is paired with a corresponding vector, called an eigen vector.
We discuss properties of eigen values and eigen vectors in the lecture. We also give a method, called Rayleigh's Power Method, to find the largest eigen value in magnitude (also called the dominant eigen value). A special advantage of the power method is that the eigen vector corresponding to the dominant eigen value is calculated at the same time.
An Example
A shopkeeper offers two standard packets because he is convinced that North Indians eat more wheat than rice and South Indians eat more rice than wheat. Packet one P1 : 5 kg wheat and 2 kg rice ; Packet two P2 : 2 kg wheat and 5 kg rice. Notation. (m, n) : m kg wheat and n kg rice.
Suppose I need 19 kg of wheat and 16 kg of rice. Then I need to buy x packets of P1 and y packets of P2 so that x(5, 2) + y(2, 5) = (19, 16). Hence I have to solve the following system of linear equations

5x + 2y = 19
2x + 5y = 16.
Suppose I need 34 kg of wheat and 1 kg of rice. Then I must buy 8
packets of P1 and −3 packets of P2. What does this mean? I buy 8
packets of P1 and from these I make three packets of P2 and give them
back to the shopkeeper.
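As a quick numerical check of the first system, here is a minimal sketch (NumPy is our choice of tool, not the lecture's):

```python
import numpy as np

# Coefficients of 5x + 2y = 19 and 2x + 5y = 16.
A = np.array([[5.0, 2.0],
              [2.0, 5.0]])
b = np.array([19.0, 16.0])

x, y = np.linalg.solve(A, b)
print(x, y)  # 3.0 2.0, i.e. buy 3 packets of P1 and 2 packets of P2
```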
Consistency of System of Linear Equations – Graphically
(1 Equation, 2 Variables)
The solution set of the equation

ax + by = c

is the set of all points satisfying the equation; it forms a straight line in the plane through the point (0, c/b) with slope −a/b (assuming b ≠ 0).
Two lines
parallel – no solution
intersect – unique solution
same – infinitely many solutions.
Consistency of System of Linear Equations
Consider the system of m linear equations in n unknowns

a11x1 + a12x2 + · · · + a1nxn = b1
a21x1 + a22x2 + · · · + a2nxn = b2
⋮
am1x1 + am2x2 + · · · + amnxn = bm.
Its matrix form is

$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix} = \begin{bmatrix} b_1\\ b_2\\ \vdots\\ b_m \end{bmatrix}.$$

That is, Ax = b, where A is called the coefficient matrix.
To determine whether these equations are consistent or not, we find the
ranks of the coefficient matrix A and the augmented matrix
$$K = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1\\ a_{21} & a_{22} & \cdots & a_{2n} & b_2\\ \vdots & \vdots & & \vdots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{bmatrix} = [A \;\; b].$$
We denote the rank of A by r(A).
1. If r(A) ≠ r(K), then the linear system Ax = b is inconsistent and has no solution.
2. If r(A) = r(K) = n, then the linear system Ax = b is consistent and has a unique solution.
3. If r(A) = r(K) < n, then the linear system Ax = b is consistent and has an infinite number of solutions.
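This rank test is easy to run numerically; a minimal sketch using NumPy (the helper name classify_system is ours, not the lecture's):

```python
import numpy as np

def classify_system(A, b):
    """Classify Ax = b by comparing r(A) with r(K), where K = [A b]."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    K = np.hstack([A, b])                # augmented matrix
    rA = np.linalg.matrix_rank(A)
    rK = np.linalg.matrix_rank(K)
    n = A.shape[1]                       # number of unknowns
    if rA != rK:
        return "inconsistent: no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

print(classify_system([[5, 2], [2, 5]], [19, 16]))  # unique solution
print(classify_system([[1, 2], [2, 4]], [1, 3]))    # inconsistent: no solution
```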
Solution of Linear Simultaneous Equations
Simultaneous linear equations occur quite often in engineering and science.
The analysis of electronic circuits consisting of invariant elements, analysis
of a network under sinusoidal steady-state conditions, determination of the
output of a chemical plant, finding the cost of chemical reactions are some
of the problems which depend on the solution of simultaneous linear
algebraic equations. The solution of such equations can be obtained by direct or iterative methods (successive approximation methods).
Some direct methods are as follows:
1. Method of determinants – Cramer’s rule
2. Matrix inversion method
3. Gauss elimination method
4. Gauss-Jordan method
5. Factorization (triangulization) method.
Iterative Methods
Direct methods yield the solution after a certain amount of computation.
On the other hand, an iterative method is one in which we start from an
approximation to the true solution and obtain better and better
approximations from a computation cycle repeated as often as may be
necessary for achieving a desired accuracy.
Thus in an iterative method, the amount of computation depends on the
degree of accuracy required.
For large systems, iterative methods are faster than the direct methods.
Even the round-off errors in iterative methods are smaller. In fact,
iteration is a self correcting process and any error made at any stage of
computation gets automatically corrected in the subsequent steps.
Diagonally Dominant
Definition
An n × n matrix A is said to be diagonally dominant if, in each row, the absolute value of the leading diagonal element is greater than or equal to the sum of the absolute values of the remaining elements in that row.
The matrix

$$A = \begin{bmatrix} 10 & -5 & -2\\ 4 & -10 & 3\\ 1 & 6 & 10 \end{bmatrix}$$

is diagonally dominant, and the matrix

$$A = \begin{bmatrix} 2 & 3 & -1\\ 5 & 8 & -4\\ 1 & 1 & 1 \end{bmatrix}$$

is not a diagonally dominant matrix.
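A small sketch of this check in NumPy (our own helper name; it tests every row of the matrix):

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if, in every row, |diagonal entry| >= sum of |off-diagonal entries|."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag = A.sum(axis=1) - diag
    return bool(np.all(diag >= off_diag))

print(is_diagonally_dominant([[10, -5, -2], [4, -10, 3], [1, 6, 10]]))  # True
print(is_diagonally_dominant([[2, 3, -1], [5, 8, -4], [1, 1, 1]]))      # False
```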
Diagonal System
Definition
In the system of simultaneous linear equations in n unknowns AX = B, if A is diagonally dominant, then the system is said to be a diagonal system.
Thus the system of linear equations
a1x + b1y + c1z = d1
a2x + b2y + c2z = d2
a3x + b3y + c3z = d3
is a diagonal system if
|a1| ≥ |b1| + |c1|
|b2| ≥ |a2| + |c2|
|c3| ≥ |a3| + |b3|.
The process of iteration in solving AX = B will converge quickly if the coefficient matrix A is diagonally dominant.
If the coefficient matrix is not diagonally dominant, we must rearrange the equations in such a way that the resulting coefficient matrix becomes diagonally dominant, if possible, before we apply the iteration method.
Exercises
1. Is the system of equations diagonally dominant? If not, make it diagonally dominant.
3x + 9y − 2z = 10
4x + 2y + 13z = 19
4x − 2y + z = 3.
2. Check whether the system of equations
x + 6y − 2z = 5
4x + y + z = 6
−3x + y + 7z = 5.
is a diagonal system. If not, make it a diagonal system.
Jacobi Iteration Method (also known as Gauss-Jacobi)
Simple iterative methods can be designed for systems in which the coefficients of the leading diagonal are large as compared to the others. We now describe three such methods.

Gauss-Jacobi Iteration Method

Consider the system of linear equations
a1x + b1y + c1z = d1
a2x + b2y + c2z = d2
a3x + b3y + c3z = d3.
If a1, b2, c3 are large as compared to the other coefficients (if not, rearrange the given linear equations), solve for x, y, z respectively. That is, the coefficient matrix is diagonally dominant.
Then the system can be written as

$$x = \frac{1}{a_1}(d_1 - b_1 y - c_1 z), \qquad y = \frac{1}{b_2}(d_2 - a_2 x - c_2 z), \qquad z = \frac{1}{c_3}(d_3 - a_3 x - b_3 y). \tag{2}$$
Let us start with the initial approximations x0, y0, z0 for the values of x, y, z respectively. In the absence of any better estimates for x0, y0, z0, these may each be taken as zero.
Substituting these on the right sides of (2), we get the first approximations.
This process is repeated till the difference between two consecutive
approximations is negligible.
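A minimal sketch of the iteration in NumPy (our own helper name gauss_jacobi; the stopping test mirrors the "negligible difference between consecutive approximations" rule):

```python
import numpy as np

def gauss_jacobi(A, b, tol=1e-10, max_iter=500):
    """Gauss-Jacobi iteration: every update uses only the previous iterate."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    D = np.diag(A)                 # leading-diagonal entries a1, b2, c3, ...
    R = A - np.diagflat(D)         # off-diagonal part of A
    x = np.zeros_like(b)           # initial approximation, each taken as zero
    for _ in range(max_iter):
        x_new = (b - R @ x) / D    # solve each equation for its own unknown
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Exercise 3's system; the iteration converges to x = 1, y = -1, z = 1.
print(gauss_jacobi([[20, 1, -2], [3, 20, -1], [2, -3, 20]], [17, -18, 25]))
```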
Exercises
3. Solve, by Jacobi’s iterative method, the equations
20x + y − 2z = 17
3x + 20y − z = −18
2x − 3y + 20z = 25.
4. Solve, by Jacobi’s iterative method correct to 2 decimal places, the equations
10x + y − z = 11.19
x + 10y + z = 28.08
−x + y + 10z = 35.61.
5. Solve the equations, by Gauss-Jacobi iterative method
10x1 − 2x2 − x3 − x4 = 3
−2x1 + 10x2 − x3 − x4 = 15
−x1 − x2 + 10x3 − 2x4 = 27
−x1 − x2 − 2x3 + 10x4 = −9.
Gauss-Seidel Iteration Method
This is a modification of Jacobi’s Iterative Method.
Consider the system of linear equations
a1x + b1y + c1z = d1
a2x + b2y + c2z = d2
a3x + b3y + c3z = d3.
If a1, b2, c3 are large as compared to other coefficients (if not, rearrange
the given linear equations), solve for x, y, z respectively.
Then the system can be written as

$$x = \frac{1}{a_1}(d_1 - b_1 y - c_1 z), \qquad y = \frac{1}{b_2}(d_2 - a_2 x - c_2 z), \qquad z = \frac{1}{c_3}(d_3 - a_3 x - b_3 y). \tag{3}$$
Here also we start with the initial approximations x0, y0, z0 for x, y, z respectively (each may be taken as zero).
Substituting y = y0, z = z0 in the first of the equations (3), we get

$$x_1 = \frac{1}{a_1}(d_1 - b_1 y_0 - c_1 z_0).$$

Then putting x = x1, z = z0 in the second of the equations (3), we get

$$y_1 = \frac{1}{b_2}(d_2 - a_2 x_1 - c_2 z_0).$$

Next, substituting x = x1, y = y1 in the third of the equations (3), we get

$$z_1 = \frac{1}{c_3}(d_3 - a_3 x_1 - b_3 y_1)$$

and so on.
That is, as soon as a new approximation for an unknown is found, it is
immediately used in the next step.
This process of iteration is repeated till the values of x, y, z are obtained to the desired degree of accuracy.
Since the most recent approximations of the unknowns are used while proceeding to the next step, the convergence in the Gauss-Seidel method is roughly twice as fast as in the Gauss-Jacobi method.
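For comparison, a sketch of the Gauss-Seidel loop (same assumptions: NumPy, our own helper name); the only change from the Jacobi sketch is that each freshly computed component is used at once:

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration: each new component is used immediately."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    n = len(b)
    x = np.zeros(n)                # initial approximations, each taken as zero
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds this sweep's new values, x[i+1:] the old ones.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Same system as Exercise 6; converges to x = 1, y = -1, z = 1 in fewer sweeps.
print(gauss_seidel([[20, 1, -2], [3, 20, -1], [2, -3, 20]], [17, -18, 25]))
```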
Convergence Criteria
The condition for convergence of Gauss-Jacobi and Gauss-Seidel methods
is given by the following rule:
The process of iteration will converge if in each equation of the system,
the absolute value of the largest co-efficient is greater than the sum of the
absolute values of all the remaining coefficients.
That is, if the coefficient matrix is a diagonally dominant matrix, then the process of iteration will converge.
For a given system of linear equations, before finding approximations by using the Gauss-Jacobi / Gauss-Seidel methods, the convergence criteria should be verified. If the coefficient matrix is not a diagonally dominant matrix, rearrange the equations so that the new coefficient matrix is diagonally dominant. Note that not all systems of linear equations are diagonally dominant.
Exercises
6. Apply Gauss-Seidel iterative method to solve the equations
20x + y − 2z = 17
3x + 20y − z = −18
2x − 3y + 20z = 25.
7. Solve the equations, by Gauss-Jacobi and Gauss-Seidel methods (and compare the values)
27x + 6y − z = 85
x + y + 54z = 110
6x + 15y + 2z = 72.
8. Apply Gauss-Seidel iterative method to solve the equations
10x1 − 2x2 − x3 − x4 = 3
−2x1 + 10x2 − x3 − x4 = 15
−x1 − x2 + 10x3 − 2x4 = 27
−x1 − x2 − 2x3 + 10x4 = −9.
Relaxation Method
Consider the system of linear equations
a1x + b1y + c1z = d1
a2x + b2y + c2z = d2
a3x + b3y + c3z = d3.
We define the residuals Rx , Ry , Rz by the relations
Rx = d1 − a1x − b1y − c1z
Ry = d2 − a2x − b2y − c2z (4)
Rz = d3 − a3x − b3y − c3z.
To start with we assume that x = y = z = 0 and calculate the initial
residuals.
Then the residuals are reduced step by step, by giving increments to
the variables.
We note from the equations (4) that if x is increased by 1 (keeping y and
z constant), Rx , Ry and Rz decrease by a1, a2, a3 respectively. This is
shown in the following table along with the effects on the residuals when y
and z are given unit increments.
         δRx   δRy   δRz
δx = 1   −a1   −a2   −a3
δy = 1   −b1   −b2   −b3
δz = 1   −c1   −c2   −c3

The table is the negative of the transpose of the coefficient matrix.
At each step, the numerically largest residual is reduced to almost zero.
How can one reduce a particular residual?
To reduce a particular residual, the value of the corresponding variable is
changed.
For example, to reduce Rx by p, x should be increased by p/a1.
When all the residuals have been reduced to almost zero, the increments
in x, y, z are added separately to give the desired solution.
Verification Process : If the residuals are not all negligible when the computed values of x, y, z are substituted in (4), then there is some mistake and the entire process should be rechecked.
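A sketch that automates this hand procedure (our own helper name relaxation, assuming NumPy; at each step it liquidates the numerically largest residual, as described above):

```python
import numpy as np

def relaxation(A, b, tol=1e-8, max_iter=1000):
    """Relaxation: repeatedly cancel the residual of largest magnitude."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residuals with x = y = z = 0
    for _ in range(max_iter):
        i = int(np.argmax(np.abs(r)))  # index of numerically largest residual
        if abs(r[i]) < tol:
            break
        dx = r[i] / A[i, i]            # increment that makes residual i zero
        x[i] += dx
        r -= dx * A[:, i]              # update every residual
    return x

# Exercise 10's system; the increments accumulate to x = 32, y = 26, z = 21.
print(relaxation([[10, -2, -3], [-2, 10, -2], [-2, -1, 10]], [205, 154, 120]))
```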
Convergence of the Relaxation Method
The relaxation method can be applied successfully only if there is at least one row in which the diagonal element of the coefficient matrix dominates the other coefficients in the corresponding row.
That is,
|a1| ≥ |b1| + |c1|
|b2| ≥ |a2| + |c2|
|c3| ≥ |a3| + |b3|
should be valid for at least one row.
Exercises
9. Solve by relaxation method, the equations
9x − 2y + z = 50
x + 5y − 3z = 18
−2x + 2y + 7z = 19.
10. Solve by relaxation method, the equations
10x − 2y − 3z = 205
−2x + 10y − 2z = 154
−2x − y + 10z = 120.
Eigen Values and Eigen Vectors
For any n × n matrix A, consider the polynomial

$$\chi_A(\lambda) := |\lambda I - A| = \begin{vmatrix} \lambda - a_{11} & -a_{12} & \cdots & -a_{1n}\\ -a_{21} & \lambda - a_{22} & \cdots & -a_{2n}\\ \vdots & \vdots & & \vdots\\ -a_{n1} & -a_{n2} & \cdots & \lambda - a_{nn} \end{vmatrix}. \tag{5}$$
Clearly this is a monic polynomial of degree n. By the fundamental theorem of algebra, χA(λ) has exactly n (not necessarily distinct) roots.

χA(λ) : the characteristic polynomial of A
χA(λ) = 0 : the characteristic equation of A
the roots of χA(λ) : the characteristic roots of A
the distinct roots of χA(λ) : the spectrum of A
Few Observations on Characteristic Polynomials
The constant term and the coefficient of λ^{n−1} in χA(λ) are (−1)^n |A| and −tr(A), respectively.
The sum of the characteristic roots of A is tr(A) and the product of
the characteristic roots of A is |A|.
Since λI − A^T = (λI − A)^T, the characteristic polynomials of A and A^T are the same.
Since λI − P^{−1}AP = P^{−1}(λI − A)P, similar matrices have the same characteristic polynomials.
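These identities are easy to verify numerically; a small sketch with NumPy and our own example matrix (np.poly returns the characteristic-polynomial coefficients of a square array):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

coeffs = np.poly(A)                  # monic char. polynomial, highest power first
print(coeffs)                        # [ 1. -5.  5.], i.e. lambda^2 - 5*lambda + 5

print(-np.trace(A))                  # -5.0: coefficient of lambda^(n-1) is -tr(A)
print((-1) ** 2 * np.linalg.det(A))  # 5.0: constant term is (-1)^n |A|

print(np.poly(A.T))                  # same polynomial for A and its transpose
```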
Eigen Values and Eigen Vectors
Definition
A complex number λ is an eigen value of A if there exists x ≠ 0 in C^n such that Ax = λx. Any such (non-zero) x is an eigen vector of A corresponding to the eigen value λ.
When we say that x is an eigen vector of A we mean that x is an eigen
vector of A corresponding to some eigen value of A.
λ is an eigen value of A iff the system (λI − A)x = 0 has a non-trivial
solution.
λ is a characteristic root of A iff λI − A is singular.
Theorem
A number λ is an eigen value of A iff λ is a characteristic root of A.
The preceding theorem shows that eigen values are the same as
characteristic roots. However, by ‘the characteristic roots of A’ we mean
the n roots of the characteristic polynomial of A whereas ‘the eigen values
of A’ would mean the distinct characteristic roots of A.
Equivalent names:
Eigen values : proper values, latent roots, etc.
Eigen vectors : characteristic vectors, latent vectors, etc.
Properties of Eigen Values
1. The sum and product of all eigen values of a matrix A are the sum of the principal diagonal entries and the determinant of A, respectively. Hence a matrix is not invertible if and only if it has a zero eigen value.
2. The eigen values of A and A^T (the transpose of A) are the same.
3. If λ1, λ2, . . . , λn are the eigen values of A, then
(a) 1/λ1, 1/λ2, . . . , 1/λn are the eigen values of A^{−1}.
(b) kλ1, kλ2, . . . , kλn are the eigen values of kA.
(c) kλ1 + m, kλ2 + m, . . . , kλn + m are the eigen values of kA + mI.
(d) λ1^p, λ2^p, . . . , λn^p are the eigen values of A^p, for any positive integer p.
4. The transformation of A by a non-singular matrix P to P−1AP is
called a similarity transformation.
Any similarity transformation applied to a matrix leaves its eigen
values unchanged.
5. The eigen values of a real symmetric matrix are real.
Cayley - Hamilton Theorem
Theorem
For every matrix A, the characteristic polynomial of A annihilates A.
That is, every matrix satisfies its own characteristic equation.
Main uses of Cayley-Hamilton theorem
1. To evaluate large powers of A.
2. To evaluate a polynomial in A of large degree, even if A is singular.
3. To express A^{−1} as a polynomial in A when A is non-singular.
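A minimal sketch of use 3 for a 2 × 2 example of our own (assuming NumPy): Cayley-Hamilton gives A^2 + c1·A + c0·I = 0, hence A^{−1} = −(A + c1·I)/c0.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

c = np.poly(A)                   # [1, c1, c0] = [1, -5, 5]
# Cayley-Hamilton: A^2 + c1*A + c0*I = 0  =>  A^{-1} = -(A + c1*I)/c0.
A_inv = -(A + c[1] * np.eye(2)) / c[2]

print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```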
Dominant Eigen Value
We have seen that the eigen values of an n × n matrix A are obtained by
solving its characteristic equation

$$\lambda^n + c_{n-1}\lambda^{n-1} + c_{n-2}\lambda^{n-2} + \cdots + c_0 = 0.$$
For large values of n, polynomial equations like this one are difficult and
time-consuming to solve.
Moreover, numerical techniques for approximating roots of polynomial
equations of high degree are sensitive to rounding errors. We now look at
an alternative method for approximating eigen values.
As presented here, the method can be used only to find the eigen value of A that is largest in absolute value – we call this eigen value the dominant eigen value of A.
Dominant Eigen Vector
Dominant eigen values are of primary interest in many physical
applications.
Let λ1, λ2, . . . , λn be the eigen values of an n × n matrix A. λ1 is called the dominant eigen value of A if

|λ1| > |λi|, i = 2, 3, . . . , n.

For example, the matrix

$$\begin{bmatrix} 1 & 0\\ 0 & -1 \end{bmatrix}$$

has no dominant eigen value.
The eigen vectors corresponding to λ1 are called dominant eigen vectors
of A.
Power Method
In many engineering problems, it is required to compute the numerically
largest eigen values (dominant eigen values) and the corresponding eigen
vectors (dominant eigen vectors).
In such cases, the following iterative method is quite convenient; it is also well-suited for machine computations.
Let x1, x2, . . . , xn be the eigen vectors corresponding to the eigen values
λ1, λ2, . . . , λn.
We assume that the eigen values of A are real and distinct with |λ1| > |λ2| > · · · > |λn|.
As the vectors x1, x2, . . . , xn are linearly independent, any arbitrary column vector can be written as

$$x = k_1 x_1 + k_2 x_2 + \cdots + k_n x_n.$$

We now operate A repeatedly on the vector x. Then

$$Ax = k_1\lambda_1 x_1 + k_2\lambda_2 x_2 + \cdots + k_n\lambda_n x_n.$$

Hence

$$Ax = \lambda_1\left[k_1 x_1 + k_2\frac{\lambda_2}{\lambda_1}x_2 + \cdots + k_n\frac{\lambda_n}{\lambda_1}x_n\right],$$

and through iteration we obtain

$$A^r x = \lambda_1^r\left[k_1 x_1 + k_2\left(\frac{\lambda_2}{\lambda_1}\right)^r x_2 + \cdots + k_n\left(\frac{\lambda_n}{\lambda_1}\right)^r x_n\right] \tag{6}$$

provided λ1 ≠ 0.
Since λ1 is dominant, all terms inside the brackets have limit zero except the first term. That is, for large values of r, the vector

$$k_1 x_1 + k_2\left(\frac{\lambda_2}{\lambda_1}\right)^r x_2 + \cdots + k_n\left(\frac{\lambda_n}{\lambda_1}\right)^r x_n$$

will converge to k1x1, that is, an eigen vector of λ1.
The eigen value is obtained as

$$\lambda_1 = \lim_{r\to\infty}\frac{(A^{r+1}x)_p}{(A^r x)_p}, \qquad p = 1, 2, \ldots, n,$$

where the index p signifies the pth component in the corresponding vector. That is, if we take the ratio of any corresponding components of A^{r+1}x and A^r x, this ratio should have λ1 as its limit.
Linear Convergence
Neglecting the terms of λ3, . . . , λn in (6), we get

$$\frac{A^{r+1}x}{k_1\lambda_1^{r+1}} - x_1 \approx \frac{\lambda_2}{\lambda_1}\left(\frac{A^r x}{k_1\lambda_1^r} - x_1\right),$$

which shows that the convergence is linear with convergence ratio λ2/λ1. Thus, the convergence is faster if this ratio is small.
Working Rule To Find Dominant Eigen Value
Choose x ≠ 0, an arbitrary real vector. Generally, we choose x as

$$\begin{bmatrix}1\\1\\1\end{bmatrix} \quad\text{or}\quad \begin{bmatrix}1\\0\\0\end{bmatrix}.$$

In other words, for the purpose of computation, we generally take x with 1 as the first component.
We start with a column vector x which is as near the solution as possible and evaluate Ax, which is written as

$$\lambda^{(1)} x^{(1)}$$

after normalization. This gives the first approximation λ^{(1)} to the eigen value, and x^{(1)} is the corresponding eigen vector.
Rayleigh's Power Method
Similarly, we evaluate Ax^{(1)} = λ^{(2)}x^{(2)}, which gives the second approximation. We repeat this process till [x^{(r)} − x^{(r−1)}] becomes negligible.
Then λ^{(r)} will be the dominant (largest) eigen value and x^{(r)} the corresponding eigen vector (dominant eigen vector).
The iteration method of finding the dominant eigen value of a square matrix is called Rayleigh's power method.
At each stage of iteration, we take out the largest component from y^{(r)} to find x^{(r)}, where

$$y^{(r+1)} = A x^{(r)}, \qquad r = 0, 1, 2, \ldots.$$
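A compact sketch of this working rule (our own function name power_method, assuming NumPy; y is normalized by its numerically largest component, which serves as the current eigen value estimate):

```python
import numpy as np

def power_method(A, x0=None, tol=1e-10, max_iter=1000):
    """Rayleigh's power method: dominant eigen value and eigen vector."""
    A = np.asarray(A, dtype=float)
    x = np.ones(A.shape[0]) if x0 is None else np.asarray(x0, float)
    for _ in range(max_iter):
        y = A @ x                        # y^(r+1) = A x^(r)
        lam = y[np.argmax(np.abs(y))]    # numerically largest component
        x_new = y / lam                  # normalize: largest entry becomes 1
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return lam, x_new

lam, v = power_method([[5, 4], [1, 2]])  # Exercise 12's matrix
print(lam, v)                            # approx 6.0 and [1.0, 0.25]
```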
To find the Smallest Eigen Value
To find the smallest eigen value of a given matrix A, apply the power method to the matrix A^{−1}: find the dominant eigen value of A^{−1} and then take its reciprocal.
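For instance, a sketch under the same assumptions, using the matrix of Exercise 13 below (its eigen values are 2 − √2, 2, 2 + √2):

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])

B = np.linalg.inv(A)            # power method applied to A^{-1}
x = np.ones(3)
for _ in range(200):
    y = B @ x
    lam = y[np.argmax(np.abs(y))]
    x = y / lam

print(1.0 / lam)                # approx 0.5858 = 2 - sqrt(2), smallest eigen value
```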
Exercise
11. Say true or false with justification: If λ is the dominant eigen value of
A and β is the dominant eigen value of A − λI, then the smallest
eigen value of A is λ + β.
Exercises
12. Determine the largest eigen value and the corresponding eigen vector of the matrix

$$\begin{bmatrix} 5 & 4\\ 1 & 2 \end{bmatrix}.$$

13. Find the largest eigen value and the corresponding eigen vector of the matrix

$$\begin{bmatrix} 2 & -1 & 0\\ -1 & 2 & -1\\ 0 & -1 & 2 \end{bmatrix}$$

using power method. Take [1, 0, 0]^T as an initial eigen vector.

14. Obtain by power method the numerically dominant eigen value and eigen vector of the matrix

$$\begin{bmatrix} 15 & -4 & -3\\ -10 & 12 & -6\\ -20 & 4 & -2 \end{bmatrix}.$$