1
Matrices I
SOLO HERMELIN
Updated: 30.03.11    http://www.solohermelin.com
2
SOLO Matrices I
Table of Content
Introduction to Algebra
Matrices
Vectors and Vector Spaces
Matrix
Operations with Matrices
Domain and Codomain of a Matrix A
Transpose A^T of a Matrix A
Conjugate A* and Conjugate Transpose A^H = (A*)^T of a Matrix A
Sum and Difference of Matrices A and B
Multiplication of a Matrix by a Scalar
Multiplication of a Matrix by a Matrix
Kronecker Multiplication of a Matrix by a Matrix
Partition of a Matrix
Elementary Operations with a Matrix
Rank of a Matrix
Equivalence of Two Matrices
3
SOLO Matrices I
Table of Content (continue – 1)
Matrices
Square Matrices
Trace of a Square Matrix, Diagonal Square Matrix
Identity Matrix, Null Matrix, Triangular Matrices
Hessenberg Matrix
Toeplitz Matrix, Hankel Matrix
Householder Matrix
Vandermonde Matrix
Hermitian Matrix, Skew-Hermitian Matrix, Unitary Matrix
Matrices & Determinants History
L, U Factorization of a Square Matrix A by Elementary Operations
Invertible Matrices
Diagonalization of a Square Matrix A by Elementary Operations
4
SOLO Matrices I
Table of Content (continue – 2)
Matrices
Square Matrices
Determinant of a Square Matrix – det A or |A|
Eigenvalues and Eigenvectors of Square Matrices Anxn
Jordan Normal (Canonical) Form
Cayley-Hamilton Theorem
Matrix Decompositions
Companion Matrix
References
5
SOLO Algebra
Set and Set Operations
A collection of objects sharing a common property is called a Set. We use the notation
1  $S = \{ x : x \text{ has property } P \}$
   We write $x \in S$: x is an element of S.
   $S_1 \subset S = \{ x : \forall x \in S_1 \rightarrow x \in S \}$: S1 is a subset of S if every element of S1 is an element of S.
2  $\emptyset = \{ \text{no elements} \}$  Null (Empty) set
3  $S_1 \cup S_2 = \{ x : x \in S_1 \ \text{or} \ x \in S_2 \}$  Union of sets
4  $S_1 \cap S_2 = \{ x : x \in S_1 \ \text{and} \ x \in S_2 \}$  Intersection of sets
5  $S_1 - S_2 = \{ x : x \in S_1 \ \text{and} \ x \notin S_2 \}$  Difference of sets
6  $\bar{S} = \{ x : x \notin S \ \text{and} \ x \in \Omega \}, \quad S \cup \bar{S} = \Omega$  Complement of S relative to Ω
[Figure: Venn diagrams of S1 ∪ S2, S1 ∩ S2, S1 − S2, and the complement of S in Ω]
7
SOLO Algebra
Group
A nonempty set G is said to be a group if in G there is defined an operation * such that:
1  Closure: $a * b \in G \quad \forall a, b \in G$
2  Associativity: $(a * b) * c = a * (b * c) \quad \forall a, b, c \in G$
3  Identity element: $\exists e \in G \ \text{s.t.} \ e * a = a * e = a \quad \forall a \in G$
4  Inverse element: $\forall a \in G \ \exists b \in G \ \text{s.t.} \ a * b = b * a = e$, written $b = a^{-1}$
Lemma 1: A group G has exactly one identity element.
Proof: If e and f are both identity elements, then
$$e * f = f \ (e \text{ is an identity}) \quad \text{and} \quad e * f = e \ (f \text{ is an identity}) \;\Rightarrow\; e = f$$
Lemma 2: Every element in G has exactly one inverse element.
Proof: If b and c are both inverse elements of x, then $b * x = e = x * c$, so
$$b = b * e = b * (x * c) = (b * x) * c = e * c = c$$
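As an illustration (not from the original slides), here is a minimal Python sketch that checks the four group axioms by brute force on a finite set; the function name `is_group` and the example $(\mathbb{Z}_6, +)$ are my own choices. The `len(ids) == 1` test also exercises Lemma 1 (uniqueness of the identity).

```python
from itertools import product

def is_group(elements, op):
    """Brute-force check of the four group axioms on a finite set."""
    # 1. Closure: a*b stays in the set
    closure = all(op(a, b) in elements for a, b in product(elements, repeat=2))
    # 2. Associativity: (a*b)*c == a*(b*c)
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(elements, repeat=3))
    # 3. Identity: some e with e*a == a*e == a for all a (Lemma 1: exactly one)
    ids = [e for e in elements
           if all(op(e, a) == a and op(a, e) == a for a in elements)]
    # 4. Inverses: every a has some b with a*b == b*a == e
    inv = bool(ids) and all(any(op(a, b) == ids[0] and op(b, a) == ids[0]
                                for b in elements) for a in elements)
    return closure and assoc and len(ids) == 1 and inv

n = 6
Zn = set(range(n))
print(is_group(Zn, lambda a, b: (a + b) % n))   # True: (Z_6, +) is a group
```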
8
SOLO Algebra
Ring
A Ring is a set R equipped with two binary operations +: R × R → R (called addition)
and •: R × R → R (called multiplication), such that:
(R, +) is an Abelian Group with identity element 0:
  Closure: $a + b \in R \quad \forall a, b \in R$
  Associativity: $(a + b) + c = a + (b + c)$
  Identity element: $a + 0 = 0 + a = a$
  Inverse element: $\forall a \in R \ \exists (-a) \in R \ \text{s.t.} \ a + (-a) = (-a) + a = 0$
  Abelian Group property: $a + b = b + a$
(R, •) is associative:
  $(a \bullet b) \bullet c = a \bullet (b \bullet c)$
Multiplication distributes over addition:
  $a \bullet (b + c) = (a \bullet b) + (a \bullet c)$
  $(a + b) \bullet c = (a \bullet c) + (b \bullet c)$
9
SOLO Algebra
Field
A Field is a Ring satisfying two additional conditions:
(1) There also exists an identity element 1 with respect to multiplication, i.e.:
$$1 \bullet a = a \bullet 1 = a$$
(2) All but the zero element have an inverse with respect to multiplication:
$$\forall a \in R, \ a \neq 0, \ \exists a^{-1} \in R \ \text{s.t.} \ a \bullet a^{-1} = a^{-1} \bullet a = 1$$
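For a concrete instance (my own illustration, not in the slides): the integers modulo a prime p form the field GF(p). The sketch below finds the multiplicative inverse of every nonzero element of GF(7) by exhaustive search, verifying condition (2).

```python
p = 7  # a prime, so the integers mod p form the field GF(p)
for a in range(1, p):
    # multiplicative inverse: the unique b with a*b = 1 (mod p)
    b = next(b for b in range(1, p) if (a * b) % p == 1)
    print(f"{a}^-1 = {b} in GF({p})")
```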
10
SOLO  Algebras History
Synthetic Geometry – Euclid, 300 BC (first printing 1482)
Syncopated Algebra – Diophantus, 250 AD
Analytic Geometry – Descartes, 1637
Complex Algebra – Wessel, Gauss, 1798
Quaternions – Hamilton, 1843
Extensive Algebra – Grassmann, 1844
Matrix Algebra – Cayley, 1854
Binary Algebra – Boole, 1854
Determinants – Sylvester, 1878
Clifford Algebra – Clifford, 1878
Vector Calculus – Gibbs, 1881
Tensor Calculus – Ricci, 1890
Differential Forms – E. Cartan, 1908
Spin Algebra – Pauli, Dirac, 1928
Geometric Algebra and Calculus – Hestenes, 1966
http://modelingnts.la.asu.edu/html/evolution.html
Table of Content
11
SOLO Matrices
Vectors and Vector Spaces
Definitions:
Vector: An n-dimensional n-Vector is an ordered set of elements x1, x2, …, xn over a
field F. Another way is to define it as a Row Matrix or a Column Matrix:
$$c = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad r = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}$$
where we have $r = c^T$ & $c = r^T$, and T is the Transpose operation.
Scalar: A one-dimensional Vector with its element a real or a complex number.
Null Vector: An n-dimensional Vector with all elements equal to zero:
$$o_c = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \qquad o_r = \begin{bmatrix} 0 & 0 & \cdots & 0 \end{bmatrix}$$
Equality of two Vectors:
$$x = y \;\Leftrightarrow\; x_i = y_i \quad \text{for } i = 1, \ldots, n$$
12
SOLO Matrices
VECTOR SPACE
Given the complex numbers $\alpha, \beta, \gamma \in C$.
A Vector Space V (Linear Affine Space) over C has elements $x, y, z \in V$ that
satisfy the following conditions:
I. There exists an operation of Addition with the following properties:
1  Commutative (Abelian) Law for Addition: $x + y = y + x$
2  Associative Law for Addition: $(x + y) + z = x + (y + z)$
3  There exists a unique vector 0 such that: $x + 0 = x$
4  Inverse: $\forall x \in V \ \exists y \in V \ \text{s.t.} \ x + y = 0$
II. There exists an operation of Multiplication by a Scalar with the following properties:
5  $1 \cdot x = x$
6  Associative Law for Multiplication: $\alpha (\beta x) = (\alpha \beta) x$
7  Distributive Law for Multiplication: $(\alpha + \beta) x = \alpha x + \beta x$
8  Commutative Law for Multiplication: $\alpha (x + y) = \alpha x + \alpha y$
Using properties 5, 7 and 3 we can write:
$$x + 0 \cdot x \overset{5}{=} 1 \cdot x + 0 \cdot x \overset{7}{=} (1 + 0) x = 1 \cdot x \overset{3}{=} x \;\Rightarrow\; 0 \cdot x = 0$$
13
SOLO Matrices
Vectors and Vector Spaces
Linear Dependence and Independence
Vectors $v_1, v_2, \ldots, v_m$ are said to be Linearly Independent if:
$$\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_m v_m = 0 \quad \text{if and only if} \quad \alpha_1 = \alpha_2 = \cdots = \alpha_m = 0$$
Vectors $v_1, v_2, \ldots, v_m$ are said to be Linearly Dependent if:
$$\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_m v_m = 0 \quad \& \quad \text{some } \alpha_i \neq 0$$
If the vectors $v_1, v_2, \ldots, v_m$ are Linearly Dependent, each vector $v_k$ whose coefficient
$\alpha_k \neq 0$ in $\alpha_1 v_1 + \cdots + \alpha_k v_k + \cdots + \alpha_m v_m = 0$ can be obtained as a Linear
Combination of the other Vectors:
$$v_k = -\left( \sum_{\substack{i=1 \\ i \neq k}}^{m} \alpha_i v_i \right) \Big/ \alpha_k$$
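A quick numerical test of linear dependence (my own illustration, using the common convention of stacking the vectors as matrix columns): the m vectors are independent exactly when the rank of that matrix equals m.

```python
import numpy as np

v1 = np.array([1., 0., 1.])
v2 = np.array([0., 1., 1.])
v3 = np.array([1., 1., 2.])

V = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(V))      # 2 < 3: the vectors are linearly dependent
print(np.allclose(v3, v1 + v2))      # True: v3 = 1*v1 + 1*v2, as the formula above predicts
```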
14
SOLO Matrices
Vectors and Vector Spaces
Linear Dependence and Independence
Theorem
If the Vectors $v_1, v_2, \ldots, v_m$ are Linearly Independent and the vectors
$v_1, v_2, \ldots, v_m, v_{m+1}$ are Linearly Dependent, then $v_{m+1}$ can be expressed as a
Unique Linear Combination of $v_1, v_2, \ldots, v_m$.
Proof
$v_1, \ldots, v_m, v_{m+1}$ Linearly Dependent implies that there exist some $\alpha_i \neq 0$ s.t.
$$\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_m v_m + \alpha_{m+1} v_{m+1} = 0 \quad \& \quad \alpha_{m+1} \neq 0$$
since $\alpha_{m+1} = 0$ would imply that $v_1, v_2, \ldots, v_m$ are Linearly Dependent, and this is a contradiction.
Therefore:
$$v_{m+1} = -\left( \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_m v_m \right) / \alpha_{m+1}$$
To prove Uniqueness suppose that there are two expressions:
$$v_{m+1} = \sum_{i=1}^{m} \beta_i v_i = \sum_{i=1}^{m} \gamma_i v_i \;\Rightarrow\; \sum_{i=1}^{m} \left( \beta_i - \gamma_i \right) v_i = 0 \;\overset{v_1,\ldots,v_m \ \text{Lin. Indep.}}{\Longrightarrow}\; \beta_i = \gamma_i \quad \forall i = 1, \ldots, m$$
q.e.d.
15
SOLO Matrices
Vectors and Vector Spaces
Basis of a Vector Space V
A set of Vectors $v_1, v_2, \ldots, v_n$ of an n-Vector Space is called a Basis of V if these n
Vectors are Linearly Independent and every Vector $y$ can be Uniquely expressed
as a Linear Combination of those Vectors:
$$y = \sum_{i=1}^{n} \alpha_i v_i$$
16
SOLO Matrices
Vectors and Vector Spaces
Relation Between Two Bases of a Vector Space V
If we have Two Bases of Vectors $v_1, \ldots, v_n$ and $w_1, \ldots, w_n$, we can write
$$\begin{cases} w_1 = \alpha_{11} v_1 + \alpha_{12} v_2 + \cdots + \alpha_{1n} v_n \\ w_2 = \alpha_{21} v_1 + \alpha_{22} v_2 + \cdots + \alpha_{2n} v_n \\ \quad \vdots \\ w_n = \alpha_{n1} v_1 + \alpha_{n2} v_2 + \cdots + \alpha_{nn} v_n \end{cases} \;\Rightarrow\; \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} = \underbrace{\begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1n} \\ \alpha_{21} & \alpha_{22} & \cdots & \alpha_{2n} \\ \vdots & & \ddots & \vdots \\ \alpha_{n1} & \alpha_{n2} & \cdots & \alpha_{nn} \end{bmatrix}}_{A_{n\times n}} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$$
In the same way
$$\begin{cases} v_1 = \beta_{11} w_1 + \beta_{12} w_2 + \cdots + \beta_{1n} w_n \\ \quad \vdots \\ v_n = \beta_{n1} w_1 + \beta_{n2} w_2 + \cdots + \beta_{nn} w_n \end{cases} \;\Rightarrow\; \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} = \underbrace{\begin{bmatrix} \beta_{11} & \cdots & \beta_{1n} \\ \vdots & \ddots & \vdots \\ \beta_{n1} & \cdots & \beta_{nn} \end{bmatrix}}_{B_{n\times n}} \begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix}$$
Therefore
$$\begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} = B_{n\times n} \begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix} = B_{n\times n} A_{n\times n} \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} \;\Rightarrow\; B_{n\times n} A_{n\times n} = I_n$$
$$\begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix} = A_{n\times n} \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} = A_{n\times n} B_{n\times n} \begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix} \;\Rightarrow\; A_{n\times n} B_{n\times n} = I_n$$
$B_{n\times n}$ is called the Inverse of the Square Matrix $A_{n\times n}$ and is written as $A_{n\times n}^{-1}$.
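A numeric sketch of the relation above (my own example bases in R^3, not from the slides): with the rows of V and W holding the two bases, the matrix A expressing the w-basis in terms of the v-basis and the matrix B expressing v in terms of w are mutual inverses.

```python
import numpy as np

V = np.array([[1., 0., 0.], [1., 1., 0.], [1., 1., 1.]])  # rows: basis v1, v2, v3
W = np.array([[2., 1., 0.], [0., 1., 1.], [1., 0., 1.]])  # rows: basis w1, w2, w3

A = W @ np.linalg.inv(V)   # w_i = sum_j alpha_ij v_j  =>  W = A V
B = V @ np.linalg.inv(W)   # v_i = sum_j beta_ij  w_j  =>  V = B W
print(np.allclose(A @ B, np.eye(3)))   # True: A B = I
print(np.allclose(B @ A, np.eye(3)))   # True: B A = I, so B = A^-1
```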
17
SOLO Matrices
Vectors and Vector Spaces
Inner Product
If V is a complex Vector Space, the Inner Product (a scalar) $\langle \cdot , \cdot \rangle$
between the elements $x, y, z \in V$ is defined by:
1  Commutative law: $\langle x, y \rangle = \langle y, x \rangle^*$
2  Distributive law: $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$
3  $\langle \lambda x, y \rangle = \lambda \langle x, y \rangle \quad \forall \lambda \in C$
4  $\langle x, x \rangle \geq 0 \quad \& \quad \langle x, x \rangle = 0 \Leftrightarrow x = 0$
Using 1 to 4 we can show that:
$$\langle x, y_1 + y_2 \rangle = \langle y_1 + y_2, x \rangle^* = \langle y_1, x \rangle^* + \langle y_2, x \rangle^* = \langle x, y_1 \rangle + \langle x, y_2 \rangle$$
$$\langle x, \lambda y \rangle = \langle \lambda y, x \rangle^* = \lambda^* \langle y, x \rangle^* = \lambda^* \langle x, y \rangle$$
$$\langle 0, 0 \rangle = \langle 0, 0 + 0 \rangle = \langle 0, 0 \rangle + \langle 0, 0 \rangle \;\Rightarrow\; \langle 0, 0 \rangle = 0 = \langle x, 0 \rangle$$
18
SOLO Matrices
Vectors and Vector Spaces
Inner Product
We can define the Inner Product in a Vector Space as
$$\langle x, y \rangle := x^T y^* = \left( y^T x^* \right)^*$$
therefore, with $x = [x_1 \ x_2 \ \cdots \ x_n]^T$ and $y = [y_1 \ y_2 \ \cdots \ y_n]^T$:
$$\langle x, y \rangle = x_1 y_1^* + x_2 y_2^* + \cdots + x_n y_n^* = \sum_{i=1}^{n} x_i y_i^*$$
Outer Product
$$x\, y^H := x \left( y^* \right)^T = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \begin{bmatrix} y_1^* & y_2^* & \cdots & y_n^* \end{bmatrix} = \begin{bmatrix} x_1 y_1^* & x_1 y_2^* & \cdots & x_1 y_n^* \\ x_2 y_1^* & x_2 y_2^* & \cdots & x_2 y_n^* \\ \vdots & & \ddots & \vdots \\ x_n y_1^* & x_n y_2^* & \cdots & x_n y_n^* \end{bmatrix}$$
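In numpy the two definitions above translate directly (an illustration, using this slide's convention ⟨x, y⟩ = Σ xᵢ yᵢ*); note that the trace of the outer product recovers the inner product.

```python
import numpy as np

x = np.array([1 + 1j, 2, 3j])
y = np.array([2, 1j, 1 - 1j])

inner = np.sum(x * np.conj(y))        # <x, y> = sum_i x_i y_i*
outer = np.outer(x, np.conj(y))       # x y^H: n x n matrix with entries x_i y_j*
print(inner)
print(outer.shape)                    # (3, 3)
print(np.allclose(np.trace(outer), inner))  # True: tr(x y^H) = <x, y>
```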
19
SOLO Matrices
Vectors and Vector Spaces
Norm of a Vector $x$
The Norm of a Vector is defined by the following relations:
1  $\|x\| \geq 0 \quad \forall x \in V$  (Non-negativity)
2  $\|x\| = 0 \Leftrightarrow x = 0$  (Identity)
3  $\left| \|x\| - \|y\| \right| \leq \|x + y\| \leq \|x\| + \|y\| \quad \forall x, y \in V$  (Triangle Inequalities)
4  $\|\lambda x\| = |\lambda| \, \|x\|$
If V is an Inner Product space, then we can induce the norm: $\|x\| = \langle x, x \rangle^{1/2}$
We can see that
$$\langle x, x \rangle^{1/2} = \left( \sum_{i=1}^{n} x_i x_i^* \right)^{1/2} = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2} \geq 0$$
and
$$\|x\| = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2} = 0 \;\Rightarrow\; x_i = 0 \ \forall i = 1, \ldots, n \;\Rightarrow\; x = 0$$
20
SOLO Matrices
Vectors and Vector Spaces
Inner Product
Cauchy, Bunyakovsky, Schwarz Inequality, known as the Schwarz Inequality
Let x, y be elements of an Inner Product space V; then:
$$\left| \langle x, y \rangle \right| \leq \|x\| \, \|y\|$$
Proof: for any scalar α,
$$\langle x + \alpha y, x + \alpha y \rangle = \langle x, x \rangle + \alpha^* \langle x, y \rangle + \alpha \langle y, x \rangle + |\alpha|^2 \langle y, y \rangle \geq 0$$
Assuming $y \neq 0$ (for y = 0 the equality holds), we choose:
$$\alpha = -\frac{\langle x, y \rangle}{\langle y, y \rangle}$$
we have:
$$\langle x, x \rangle - \frac{\langle x, y \rangle^* \langle x, y \rangle}{\langle y, y \rangle} - \frac{\langle x, y \rangle \langle y, x \rangle}{\langle y, y \rangle} + \frac{\left| \langle x, y \rangle \right|^2}{\langle y, y \rangle^2} \langle y, y \rangle \geq 0$$
which reduces to:
$$\langle x, x \rangle - \frac{\left| \langle x, y \rangle \right|^2}{\langle y, y \rangle} - \frac{\left| \langle x, y \rangle \right|^2}{\langle y, y \rangle} + \frac{\left| \langle x, y \rangle \right|^2}{\langle y, y \rangle} \geq 0$$
or:
$$\langle x, x \rangle \langle y, y \rangle - \left| \langle x, y \rangle \right|^2 \geq 0 \;\Leftrightarrow\; \|x\| \, \|y\| \geq \left| \langle x, y \rangle \right|$$
q.e.d.
Augustin Louis Cauchy (1789–1857)
Viktor Yakovlevich Bunyakovsky (1804–1889)
Hermann Amandus Schwarz (1843–1921)
21
SOLO Matrices
Vectors and Vector Spaces
Inner Product
Cauchy Inequality
Let $a_i, b_i$ (i = 1, …, n) be complex numbers; then:
$$\left| \sum_{i=1}^{n} a_i b_i \right|^2 \leq \left( \sum_{i=1}^{n} |a_i|^2 \right) \left( \sum_{i=1}^{n} |b_i|^2 \right)$$
Buniakowsky–Schwarz Inequality
$$\left[ \int f(t)\, g(t) \, dt \right]^2 \leq \left[ \int f^2(t) \, dt \right] \left[ \int g^2(t) \, dt \right]$$
Buniakowsky, V., “Sur quelques inéqualités concernant les intégrales ordinaires et les
intégrales aux différences finies”, Mémoires de l’Acad. de St. Pétersbourg (VII), (1859)
Schwarz, H.A., “Über ein die Flächen kleinsten Flächeninhalts betreffendes Problem der
Variationsrechnung”, Acta Soc. Scient. Fen., 15, 315–362, (1885)
22
SOLO Matrices
Vectors and Vector Spaces
Inner Product
Parallelogram law
Given an Inner Product space V, $\|x\| = \langle x, x \rangle^{1/2}$ is a norm on V.
Moreover, for any x, y ∈ V the parallelogram law
$$\|x + y\|^2 + \|x - y\|^2 = 2 \|x\|^2 + 2 \|y\|^2$$
is valid.
Proof
$$\begin{aligned} \|x+y\|^2 + \|x-y\|^2 &= \langle x+y, x+y \rangle + \langle x-y, x-y \rangle \\ &= \langle x,x \rangle + \langle x,y \rangle + \langle y,x \rangle + \langle y,y \rangle \\ &\quad + \langle x,x \rangle - \langle x,y \rangle - \langle y,x \rangle + \langle y,y \rangle \\ &= 2 \langle x,x \rangle + 2 \langle y,y \rangle = 2 \|x\|^2 + 2 \|y\|^2 \end{aligned}$$
q.e.d.
[Figure: parallelogram with sides x, y and diagonals x + y, x − y]
23
SOLO Matrices
Vectors and Vector Spaces
Inner Product
Let compute:
$$\|x+y\|^2 - \|x-y\|^2 = \langle x+y, x+y \rangle - \langle x-y, x-y \rangle = 2 \langle x, y \rangle + 2 \langle y, x \rangle$$
$$\|x+iy\|^2 - \|x-iy\|^2 = \langle x+iy, x+iy \rangle - \langle x-iy, x-iy \rangle = 2i \langle y, x \rangle - 2i \langle x, y \rangle$$
From this we can see that
$$\|x+y\|^2 - \|x-y\|^2 + i \|x+iy\|^2 - i \|x-iy\|^2 = 4 \langle x, y \rangle$$
$$\|x+y\|^2 - \|x-y\|^2 - i \|x+iy\|^2 + i \|x-iy\|^2 = 4 \langle y, x \rangle = 4 \langle x, y \rangle^*$$
24
SOLO Matrices
Vectors and Vector Spaces
Norm of a Vector $x$
Let use the Norm definition to develop the following relations:
$$\|x+y\|^2 = \langle x+y, x+y \rangle = \langle x,x \rangle + \langle x,y \rangle + \langle y,x \rangle + \langle y,y \rangle = \|x\|^2 + 2\,\mathrm{Re}\langle x,y \rangle + \|y\|^2$$
use the fact that:
$$\left| \langle x,y \rangle \right| = \sqrt{ \mathrm{Re}^2 \langle x,y \rangle + \mathrm{Im}^2 \langle x,y \rangle } \geq \left| \mathrm{Re}\langle x,y \rangle \right|$$
to obtain:
$$\|x\|^2 - 2 \left| \langle x,y \rangle \right| + \|y\|^2 \leq \|x+y\|^2 \leq \|x\|^2 + 2 \left| \langle x,y \rangle \right| + \|y\|^2$$
use the Schwarz Inequality $\|x\| \|y\| \geq \left| \langle x,y \rangle \right|$ to obtain:
$$\|x\|^2 - 2 \|x\| \|y\| + \|y\|^2 \leq \|x+y\|^2 \leq \|x\|^2 + 2 \|x\| \|y\| + \|y\|^2$$
or:
$$\left( \|x\| - \|y\| \right)^2 \leq \|x+y\|^2 \leq \left( \|x\| + \|y\| \right)^2$$
$$\left| \|x\| - \|y\| \right| \leq \|x+y\| \leq \|x\| + \|y\|$$
We obtain the Triangle Inequalities.
25
SOLO Matrices
Vectors and Vector Spaces
Norm of a Vector $x$
Other Definitions of Vector Norms
The following definitions satisfy the Vector Norm Properties:
1  $\|x\|_1 = \sum_{i=1}^{n} |x_i|$
2  $\|x\|_\infty = \max_i \{ |x_i| \}$
3  $\|x\|_T = \left[ (Tx)^H (Tx) \right]^{1/2} = \left[ x^H T^H T x \right]^{1/2} = \left[ x^H Q x \right]^{1/2} = \left( \sum_{i=1}^{n} \sum_{j=1}^{n} q_{ij}\, x_i^* x_j \right)^{1/2}, \qquad Q := T^H T$
Return to
Table of Content
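The three norms above map directly onto numpy (an illustration; the weighting matrix T is my own choice, giving a Hermitian positive definite Q = T^H T):

```python
import numpy as np

x = np.array([3., -4., 12.])
print(np.linalg.norm(x, 1))       # |x|_1   = sum of absolute values = 19
print(np.linalg.norm(x, np.inf))  # |x|_inf = max absolute value     = 12

T = np.diag([1., 2., 0.5])        # any invertible T induces a weighted 2-norm
Q = T.conj().T @ T
print(np.sqrt(x.conj() @ Q @ x))  # [x^H Q x]^(1/2)
print(np.linalg.norm(T @ x))      # same value: |T x|_2
```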
26
SOLO Matrices
Matrix
A Matrix A over a field F is a rectangular array of elements in F.
If A is over a field of real numbers, A is called a Real Matrix.
If A is over a field of complex numbers, A is called a Complex Matrix.
An n rows by m columns Matrix A, an n × m Matrix, is defined as:
$$A_{n\times m} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix} = \begin{bmatrix} c_1 & c_2 & \cdots & c_m \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix} \quad (m \text{ columns}, \ n \text{ rows})$$
$a_{ij}$ (i = 1, …, n; j = 1, …, m) are called the elements of A, and we also use the notation:
$$A_{n\times m} = \{ a_{ij} \}$$
Return to
Table of Content
27
SOLO Matrices
Operations with Matrices
Definitions:
Any complex matrix A with n rows (r1, r2, …, rn) and m columns (c1, c2, …, cm),
$$A_{n\times m} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix} = \begin{bmatrix} c_1, c_2, \ldots, c_m \end{bmatrix}$$
can be considered as a linear function (or mapping or transformation) from an
m-dimensional domain to an n-dimensional codomain:
$$A: \ \left\{ y_{n\times 1} = A_{n\times m}\, x_{m\times 1} ; \ x_{m\times 1} \in \mathrm{dom}(A) \Rightarrow y_{n\times 1} \in \mathrm{codom}(A) \right\}$$
In the same way its conjugate transpose
$$A^H_{m\times n} = \begin{bmatrix} c_1^H \\ c_2^H \\ \vdots \\ c_m^H \end{bmatrix} = \begin{bmatrix} r_1^H, r_2^H, \ldots, r_n^H \end{bmatrix}$$
is a linear function (or mapping or transformation) from the n-dimensional codomain to
the m-dimensional domain:
$$A^H: \ \left\{ x_{m\times 1} = A^H_{m\times n}\, y_{n\times 1} ; \ y_{n\times 1} \in \mathrm{codom}(A) \Rightarrow x_{m\times 1} \in \mathrm{dom}(A) \right\}$$
Operations with Matrices
28
SOLO Matrices
Operations with Matrices
Domain and Codomain of a Matrix A
The domain of A can be decomposed into orthogonal subspaces:
$$\mathrm{dom}(A) = R(A^H) \overset{\perp}{\oplus} N(A)$$
R(A^H) – the row space of A^H (dimension r)
N(A) – the null-space of A (x ∈ N(A) ⇔ A x = 0), or the kernel of A (ker(A)) (dimension m − r)
The codomain of A (domain of A^H) can be decomposed into orthogonal subspaces:
$$\mathrm{codom}(A) = R(A) \overset{\perp}{\oplus} N(A^H)$$
R(A) – the column space of A (dimension r)
N(A^H) – the null-space of A^H (dimension n − r)
[Figure: the mappings y = A x and x₁ = A^H y₁ between dom(A) and codom(A)]
Operations with Matrices
Return to
Table of Content
29
SOLO Matrices
Operations with Matrices
Transpose A^T of a Matrix A
The Transpose A^T of a Matrix A is obtained by interchanging the rows with the columns.
For
$$A_{n\times m} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}$$
the transpose is
$$\left( A_{n\times m} \right)^T = A^T_{m\times n} = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{n1} \\ a_{12} & a_{22} & \cdots & a_{n2} \\ \vdots & & \ddots & \vdots \\ a_{1m} & a_{2m} & \cdots & a_{nm} \end{bmatrix}$$
From the definition it is obvious that $(A^T)^T = A$.
Return to
Table of Content
30
SOLO Matrices
Operations with Matrices
Conjugate A* and Conjugate Transpose A^H = (A*)^T of a Matrix A
The Conjugate A* of a Matrix A is obtained by taking the complex conjugate of each
of the elements of A:
$$A^*_{n\times m} = \begin{bmatrix} a_{11}^* & a_{12}^* & \cdots & a_{1m}^* \\ a_{21}^* & a_{22}^* & \cdots & a_{2m}^* \\ \vdots & & \ddots & \vdots \\ a_{n1}^* & a_{n2}^* & \cdots & a_{nm}^* \end{bmatrix} = \{ a_{ij}^* \}$$
the conjugate transpose is
$$A^H_{m\times n} = \left( A^*_{n\times m} \right)^T = \begin{bmatrix} a_{11}^* & a_{21}^* & \cdots & a_{n1}^* \\ a_{12}^* & a_{22}^* & \cdots & a_{n2}^* \\ \vdots & & \ddots & \vdots \\ a_{1m}^* & a_{2m}^* & \cdots & a_{nm}^* \end{bmatrix}$$
Return to
Table of Content
31
SOLO Matrices
Operations with Matrices
Sum and Difference of Matrices A and B of the same dimensions n × m
The sum/difference of two matrices A and B of the same dimensions n × m is obtained
by adding/subtracting the elements b_ij to/from the elements a_ij:
$$A_{n\times m} \pm B_{n\times m} = \begin{bmatrix} a_{11} \pm b_{11} & a_{12} \pm b_{12} & \cdots & a_{1m} \pm b_{1m} \\ a_{21} \pm b_{21} & a_{22} \pm b_{22} & \cdots & a_{2m} \pm b_{2m} \\ \vdots & & \ddots & \vdots \\ a_{n1} \pm b_{n1} & a_{n2} \pm b_{n2} & \cdots & a_{nm} \pm b_{nm} \end{bmatrix} = \{ a_{ij} \pm b_{ij} \}$$
Given the following transformations
$$y_{n\times 1} = A_{n\times m}\, x_{m\times 1}, \qquad z_{n\times 1} = B_{n\times m}\, x_{m\times 1}$$
$$y_{n\times 1} \pm z_{n\times 1} = A_{n\times m}\, x_{m\times 1} \pm B_{n\times m}\, x_{m\times 1} = \left( A_{n\times m} \pm B_{n\times m} \right) x_{m\times 1}$$
Return to
Table of Content
32
SOLO Matrices
Operations with Matrices
Multiplication of a Matrix by a Scalar
The product of a Matrix by a Scalar is a Matrix in which each Element is multiplied
by the Scalar:
$$\alpha A_{n\times m} = \begin{bmatrix} \alpha a_{11} & \alpha a_{12} & \cdots & \alpha a_{1m} \\ \alpha a_{21} & \alpha a_{22} & \cdots & \alpha a_{2m} \\ \vdots & & \ddots & \vdots \\ \alpha a_{n1} & \alpha a_{n2} & \cdots & \alpha a_{nm} \end{bmatrix} = \{ \alpha a_{ij} \}$$
Given the following operations
$$y_{n\times 1} = A_{n\times m}\, x_{m\times 1}, \qquad z_{n\times 1} = A_{n\times m} \left( \alpha x_{m\times 1} \right) = \alpha A_{n\times m}\, x_{m\times 1} = \alpha\, y_{n\times 1}$$
Return to
Table of Content
33
SOLO Matrices
Operations with Matrices
Multiplication of a Matrix by a Matrix
Consider the two consecutive transformations:
$$x_{m\times 1} = B_{m\times p}\, z_{p\times 1} = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1p} \\ b_{21} & b_{22} & \cdots & b_{2p} \\ \vdots & & \ddots & \vdots \\ b_{m1} & b_{m2} & \cdots & b_{mp} \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_p \end{bmatrix}$$
$$y_{n\times 1} = A_{n\times m}\, x_{m\times 1} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix} x_{m\times 1} = A_{n\times m} B_{m\times p}\, z_{p\times 1}$$
so that
$$A_{n\times m} B_{m\times p} = \begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nm} \end{bmatrix} \begin{bmatrix} b_{11} & \cdots & b_{1p} \\ \vdots & \ddots & \vdots \\ b_{m1} & \cdots & b_{mp} \end{bmatrix} = \begin{bmatrix} c_{11} & \cdots & c_{1p} \\ \vdots & \ddots & \vdots \\ c_{n1} & \cdots & c_{np} \end{bmatrix} = C_{n\times p}$$
34
SOLO Matrices
Operations with Matrices
Multiplication of a Matrix by a Matrix (continue – 1)
The Multiplication of a Matrix by a Matrix is possible between Matrices in which the
number of columns of the first Matrix is equal to the number of rows of the second
Matrix:
$$A_{n\times m} B_{m\times p} = C_{n\times p}$$
where
$$c_{ik} := \sum_{j=1}^{m} a_{ij}\, b_{jk}$$
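The summation $c_{ik} = \sum_j a_{ij} b_{jk}$ translates line by line into a naive triple loop; this sketch (illustrative only, O(nmp) cost) is checked against numpy's built-in `@` operator:

```python
import numpy as np

def matmul(A, B):
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "columns of A must equal rows of B"
    C = np.zeros((n, p))
    for i in range(n):          # row index of A
        for k in range(p):      # column index of B
            for j in range(m):  # c_ik = sum_j a_ij * b_jk
                C[i, k] += A[i, j] * B[j, k]
    return C

A = np.arange(6.).reshape(2, 3)
B = np.arange(12.).reshape(3, 4)
print(np.allclose(matmul(A, B), A @ B))  # True
```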
35
SOLO Matrices
Operations with Matrices
Multiplication of a Matrix by a Matrix (continue – 2)
Matrix multiplication is associative: $A (B C) = (A B) C$
Transpose of Matrix Multiplication: $(A B)^T = B^T A^T$
Matrix product is compatible with scalar multiplication: $\alpha (A B) = (\alpha A) B = A (\alpha B)$
Matrix multiplication is distributive over matrix addition: $A (B + C) = A B + A C, \quad (A + B) C = A C + B C$
In general Matrix Multiplication is not Commutative: $A B \neq B A$
Return to
Table of Content
36
SOLO Matrices
Operations with Matrices
Kronecker Multiplication of a Matrix by a Matrix
$$A_{n\times m} \otimes B_{r\times p} := \begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nm} \end{bmatrix} \otimes \begin{bmatrix} b_{11} & \cdots & b_{1p} \\ \vdots & \ddots & \vdots \\ b_{r1} & \cdots & b_{rp} \end{bmatrix} = \begin{bmatrix} a_{11} B & a_{12} B & \cdots & a_{1m} B \\ a_{21} B & a_{22} B & \cdots & a_{2m} B \\ \vdots & & \ddots & \vdots \\ a_{n1} B & a_{n2} B & \cdots & a_{nm} B \end{bmatrix}_{(n \cdot r) \times (m \cdot p)}$$
Properties
$$\begin{aligned} (A + B) \otimes C &= A \otimes C + B \otimes C \\ A \otimes (B + C) &= A \otimes B + A \otimes C \\ \alpha A \otimes B &= A \otimes \alpha B = \alpha (A \otimes B) \\ (A \otimes B) \otimes C &= A \otimes (B \otimes C) \end{aligned}$$
Leopold Kronecker (1823–1891)
Return to
Table of Content
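numpy implements the Kronecker product directly as `np.kron`; a quick check (my own example matrices) of the block dimensions and of the associativity property listed above:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])        # 2 x 2
B = np.array([[0, 1, 0], [1, 0, 1]])  # 2 x 3
C = np.array([[2, 0], [0, 2]])

K = np.kron(A, B)
print(K.shape)  # (4, 6): (n*r) x (m*p)
print(np.allclose(np.kron(np.kron(A, B), C),
                  np.kron(A, np.kron(B, C))))  # True: associativity
```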
37
SOLO Matrices
Operations with Matrices
Partition of a Matrix
$$A_{n\times m} = \left[ \begin{array}{ccc|ccc} a_{11} & \cdots & a_{1p} & a_{1(p+1)} & \cdots & a_{1m} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{q1} & \cdots & a_{qp} & a_{q(p+1)} & \cdots & a_{qm} \\ \hline a_{(q+1)1} & \cdots & a_{(q+1)p} & a_{(q+1)(p+1)} & \cdots & a_{(q+1)m} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{n1} & \cdots & a_{np} & a_{n(p+1)} & \cdots & a_{nm} \end{array} \right] = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$$
$$A_{11} := \begin{bmatrix} a_{11} & \cdots & a_{1p} \\ \vdots & \ddots & \vdots \\ a_{q1} & \cdots & a_{qp} \end{bmatrix}_{q\times p} \qquad A_{12} := \begin{bmatrix} a_{1(p+1)} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{q(p+1)} & \cdots & a_{qm} \end{bmatrix}_{q\times(m-p)}$$
$$A_{21} := \begin{bmatrix} a_{(q+1)1} & \cdots & a_{(q+1)p} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{np} \end{bmatrix}_{(n-q)\times p} \qquad A_{22} := \begin{bmatrix} a_{(q+1)(p+1)} & \cdots & a_{(q+1)m} \\ \vdots & \ddots & \vdots \\ a_{n(p+1)} & \cdots & a_{nm} \end{bmatrix}_{(n-q)\times(m-p)}$$
38
SOLO Matrices
Operations with Matrices
Partition of a Matrix (continue)
With conformably partitioned blocks (A11 is q×p, B11 is p×s, etc.):
$$A_{n\times m} B_{m\times r} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11} B_{11} + A_{12} B_{21} & A_{11} B_{12} + A_{12} B_{22} \\ A_{21} B_{11} + A_{22} B_{21} & A_{21} B_{12} + A_{22} B_{22} \end{bmatrix}$$
Return to
Table of Content
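A numeric confirmation of the block-multiplication rule (my own illustration; the partition sizes q, p, s are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
B = rng.standard_normal((4, 3))
q, p, s = 2, 2, 1  # split A after row q / column p, and B after row p / column s

A11, A12, A21, A22 = A[:q, :p], A[:q, p:], A[q:, :p], A[q:, p:]
B11, B12, B21, B22 = B[:p, :s], B[:p, s:], B[p:, :s], B[p:, s:]

AB_blocks = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
                      [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
print(np.allclose(AB_blocks, A @ B))  # True
```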
39
SOLO Matrices
Operations with Matrices
Elementary Operations with a Matrix
1. Multiply the elements of a row/column by a nonzero scalar α:
$$E_{\alpha r_i} = E_{\alpha c_i} = \begin{bmatrix} 1 & \cdots & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & & \vdots \\ 0 & \cdots & \alpha & \cdots & 0 \\ \vdots & & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \cdots & 1 \end{bmatrix} \quad (\alpha \text{ in position } (i,i))$$
A row operation acts by left multiplication, $E_{\alpha r_i} A$; a column operation by right multiplication, $A E_{\alpha c_j}$.
The Elementary Operations on rows/columns of a Matrix $A_{n\times m}$ are reversible (invertible).
The reverse operation is to multiply the row/column elements by the scalar inverse:
$$E_{(1/\alpha) r_i} \left( E_{\alpha r_i} A \right) = A \;\Rightarrow\; E_{(1/\alpha) r_i} E_{\alpha r_i} = I_n$$
$$\left( A E_{\alpha c_j} \right) E_{(1/\alpha) c_j} = A \;\Rightarrow\; E_{\alpha c_j} E_{(1/\alpha) c_j} = I_m$$
The reverse operations are written as:
$$E_{(1/\alpha) r_i} = \left( E_{\alpha r_i} \right)^{-1} \quad \& \quad E_{(1/\alpha) c_j} = \left( E_{\alpha c_j} \right)^{-1}$$
40
SOLO Matrices
Operations with Matrices
Elementary Operations with a Matrix (continue – 1)
The Elementary Operations on rows/columns of a Matrix $A_{n\times m}$ are reversible (invertible).
2.a Multiply each element of row i by the scalar α and add to the elements of row j:
$$E_{\alpha r_i + r_j \to r_j} = \begin{bmatrix} 1 & & & \\ & \ddots & & \\ & \alpha & \ddots & \\ & & & 1 \end{bmatrix} \quad (\alpha \text{ in position } (j,i))$$
$$E_{\alpha r_i + r_j \to r_j} A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{in} \\ \vdots & & \vdots \\ \alpha a_{i1} + a_{j1} & \cdots & \alpha a_{in} + a_{jn} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix} \quad (\text{row } j \text{ replaced})$$
The reverse operation is to multiply each element of row i by the scalar (−α) and
add to the elements of row j:
$$E_{-\alpha r_i + r_j \to r_j} \ (-\alpha \text{ in position } (j,i)), \qquad E_{-\alpha r_i + r_j \to r_j} E_{\alpha r_i + r_j \to r_j} = I_n$$
$$E_{-\alpha r_i + r_j \to r_j} E_{\alpha r_i + r_j \to r_j} A = A$$
41
SOLO Matrices
Operations with Matrices
Elementary Operations with a Matrix (continue – 2)
The Elementary Operations on rows/columns of a Matrix $A_{n\times m}$ are reversible (invertible).
2.b Multiply each element of column i by the scalar α and add to the elements of column j:
$$A E_{\alpha c_i + c_j \to c_j} = \begin{bmatrix} a_{11} & \cdots & a_{1i} & \cdots & \alpha a_{1i} + a_{1j} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{ni} & \cdots & \alpha a_{ni} + a_{nj} & \cdots & a_{nn} \end{bmatrix} \quad (\text{column } j \text{ replaced})$$
where $E_{\alpha c_i + c_j \to c_j}$ is the identity with α in position (i, j).
The reverse operation is to multiply each element of column i by the scalar (−α) and
add to the elements of column j:
$$E_{\alpha c_i + c_j \to c_j} E_{-\alpha c_i + c_j \to c_j} = I_n, \qquad A E_{\alpha c_i + c_j \to c_j} E_{-\alpha c_i + c_j \to c_j} = A$$
42
SOLO Matrices
Operations with Matrices
Elementary Operations with a Matrix (continue – 3)
The Elementary Operations on rows/columns of a Matrix $A_{n\times m}$ are reversible (invertible).
3.a Interchange row i with row j:
$$E_{r_i \leftrightarrow r_j} = \begin{bmatrix} 1 & & & \\ & 0 & 1 & \\ & 1 & 0 & \\ & & & 1 \end{bmatrix} \quad (\text{rows } i, j \text{ of the identity swapped})$$
$$E_{r_i \leftrightarrow r_j} A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{j1} & \cdots & a_{jn} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{in} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix} \quad (\text{rows } i, j \text{ interchanged})$$
The reverse operation is again to interchange row j with row i:
$$E_{r_i \leftrightarrow r_j} E_{r_j \leftrightarrow r_i} = I_n, \qquad E_{r_i \leftrightarrow r_j} E_{r_j \leftrightarrow r_i} A = A, \qquad \left( E_{r_i \leftrightarrow r_j} \right)^{-1} = E_{r_j \leftrightarrow r_i}$$
43
SOLO Matrices
Operations with Matrices
Elementary Operations with a Matrix (continue – 4)
The Elementary Operations on rows/columns of a Matrix $A_{n\times m}$ are reversible (invertible).
3.b Interchange column i with column j:
$$A E_{c_i \leftrightarrow c_j} = \begin{bmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1i} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{nj} & \cdots & a_{ni} & \cdots & a_{nn} \end{bmatrix} \quad (\text{columns } i, j \text{ interchanged})$$
The reverse operation is again to interchange column j with column i:
$$E_{c_i \leftrightarrow c_j} E_{c_j \leftrightarrow c_i} = I_n, \qquad A E_{c_i \leftrightarrow c_j} E_{c_j \leftrightarrow c_i} = A, \qquad \left( E_{c_i \leftrightarrow c_j} \right)^{-1} = E_{c_j \leftrightarrow c_i}$$
Return to
Table of Content
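All three elementary operations are the identity matrix with one modification, applied by left multiplication; a short numpy sketch (the helper names are my own):

```python
import numpy as np

def E_scale(n, i, alpha):          # 1. multiply row i by alpha
    E = np.eye(n); E[i, i] = alpha; return E

def E_add(n, i, j, alpha):         # 2. alpha * (row i) added to row j
    E = np.eye(n); E[j, i] = alpha; return E

def E_swap(n, i, j):               # 3. interchange rows i and j
    E = np.eye(n); E[[i, j]] = E[[j, i]]; return E

A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
E = E_add(3, 0, 1, 0.5)            # (1/2) row0 + row1 -> row1
print(E @ A)                       # row1 becomes [0, 1.5, -1]
print(np.allclose(E_add(3, 0, 1, -0.5) @ E @ A, A))  # reverse operation restores A
```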
44
SOLO Matrices
Operations with Matrices
Rank of a Matrix
Given a Matrix $A_{n\times m}$, we want, by using Elementary (reversible) Operations, to reduce it to
a Main Diagonal Unit Matrix with zeros in all other positions.
$$A_{n\times m} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}$$
Assume that a₁₁ ≠ 0. If this is not the case, interchange the first row/column
(an Elementary Operation) until this is satisfied. Divide the elements of the first row by a₁₁.
For i = 2, …, n multiply the first row by (−a_{i1}/a_{11}) and add to row i (an Elementary
Operation) to obtain:
$$E_1 A_{n\times m} = \begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} & \cdots & \dfrac{a_{1m}}{a_{11}} \\ 0 & a_{22} - \dfrac{a_{21}}{a_{11}} a_{12} & \cdots & a_{2m} - \dfrac{a_{21}}{a_{11}} a_{1m} \\ \vdots & & \ddots & \vdots \\ 0 & a_{n2} - \dfrac{a_{n1}}{a_{11}} a_{12} & \cdots & a_{nm} - \dfrac{a_{n1}}{a_{11}} a_{1m} \end{bmatrix}$$
45
SOLO Matrices
Operations with Matrices
Rank of a Matrix (continue – 1)
Repeat this procedure for the second column (starting at the new a₂₂), the third column
(starting at the new a₃₃), and so on, as long as we can obtain non-zero elements on the
main diagonal, using the rows below. At the end we obtain:
$$E_{\text{row}\_r} \cdots E_{\text{row}\_2} E_{\text{row}\_1} A_{n\times m} = \begin{bmatrix} 1 & a'_{12} & \cdots & a'_{1r} & \cdots & a'_{1m} \\ 0 & 1 & \cdots & a'_{2r} & \cdots & a'_{2m} \\ \vdots & & \ddots & & & \vdots \\ 0 & 0 & \cdots & 1 & \cdots & a'_{rm} \\ 0 & 0 & \cdots & 0 & \cdots & 0 \\ \vdots & & & & & \vdots \\ 0 & 0 & \cdots & 0 & \cdots & 0 \end{bmatrix} \quad (r \text{ nonzero rows})$$
Define the product of Elementary Operations as: $P := E_{\text{row}\_r} \cdots E_{\text{row}\_2} E_{\text{row}\_1}$
Those Elementary Operations can be reversed in opposite order to obtain:
$$P^{-1} := \left( E_{\text{row}\_1} \right)^{-1} \left( E_{\text{row}\_2} \right)^{-1} \cdots \left( E_{\text{row}\_r} \right)^{-1}, \qquad P^{-1} P = I_n$$
46
SOLO Matrices
Operations with Matrices
Rank of a Matrix (continue – 2)
Now use column operations, starting with the first column, in order to nullify all the
elements above the Main Unit Diagonal:
$$E_{\text{row}\_r} \cdots E_{\text{row}\_2} E_{\text{row}\_1} A_{n\times m} E_{c\_1} E_{c\_2} \cdots E_{c\_r} = \begin{bmatrix} I_r & 0_{r\times(m-r)} \\ 0_{(n-r)\times r} & 0_{(n-r)\times(m-r)} \end{bmatrix}$$
Define the product of Elementary Operations as: $Q := E_{c\_1} E_{c\_2} \cdots E_{c\_r}$
Those Elementary Operations can be reversed in opposite order to obtain:
$$Q^{-1} := \left( E_{c\_r} \right)^{-1} \cdots \left( E_{c\_2} \right)^{-1} \left( E_{c\_1} \right)^{-1}, \qquad Q Q^{-1} = I_m$$
47
SOLO Matrices
Operations with Matrices
Rank of a Matrix (continue – 3)
We obtained:
$$P A_{n\times m} Q = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} \;\Rightarrow\; A_{n\times m} = P^{-1} \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} Q^{-1}$$
From the relation $P A_{n\times m} Q = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$ we can see that the maximum number of
Linearly Independent Rows and the maximum number of Linearly Independent Columns
of the Matrix P A Q is r.
Since
$$P A_{n\times m} = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} Q^{-1} = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} Q^{-1}_{11} & Q^{-1}_{12} \\ Q^{-1}_{21} & Q^{-1}_{22} \end{bmatrix} = \begin{bmatrix} Q^{-1}_{11} & Q^{-1}_{12} \\ 0 & 0 \end{bmatrix}$$
the maximum number of Linearly Independent Rows of the Matrix P A is also r. But the
Elementary Operations P do not change the number of Linearly Independent Rows of A, therefore:
The maximum number of Linearly Independent Rows of A = r
Since
$$A_{n\times m} Q = P^{-1} \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} P^{-1}_{11} & P^{-1}_{12} \\ P^{-1}_{21} & P^{-1}_{22} \end{bmatrix} \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} P^{-1}_{11} & 0 \\ P^{-1}_{21} & 0 \end{bmatrix}$$
the maximum number of Linearly Independent Columns of the Matrix A Q is also r. But the
Elementary Operations Q do not change the number of Linearly Independent Columns of A, therefore:
The maximum number of Linearly Independent Columns of A = r
48
SOLO Matrices
Operations with Matrices
Rank of a Matrix (continue – 4)
We obtained:
$$P A_{n\times m} Q = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} \;\Leftrightarrow\; A_{n\times m} = P^{-1} \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} Q^{-1}$$
The maximum number of Linearly Independent Rows of $A_{n\times m}$
= the maximum number of Linearly Independent Columns of $A_{n\times m}$
= r ≤ min(m, n)
=: Rank of the Matrix $A_{n\times m}$
Since in the Transpose of A we interchange the columns with the rows of A:
$$A^T_{m\times n} = \left( Q^{-1} \right)^T \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} \left( P^{-1} \right)^T \;\Rightarrow\; \mathrm{Rank}\, A^T_{m\times n} = \mathrm{Rank}\, A_{n\times m}$$
49
SOLO Matrices
Operations with Matrices
Rank of a Matrix (continue – 5)
Rank of A B:
$$\mathrm{Rank}\left( A_{n\times m} B_{m\times p} \right) \leq \mathrm{Rank}\, A_{n\times m}, \qquad \mathrm{Rank}\left( A_{n\times m} B_{m\times p} \right) \leq \mathrm{Rank}\, B_{m\times p}$$
Proof
Assume: $\mathrm{Rank}\, A_{n\times m} = r \leq \min(m, n)$
$$P A_{n\times m} Q = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} \;\Rightarrow\; P A_{n\times m} = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} Q^{-1} = \begin{bmatrix} Q^{-1}_{11} & Q^{-1}_{12} \\ 0 & 0 \end{bmatrix}$$
Partitioning B conformably,
$$P A_{n\times m} B_{m\times p} = \begin{bmatrix} Q^{-1}_{11} & Q^{-1}_{12} \\ 0 & 0 \end{bmatrix} \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} Q^{-1}_{11} B_{11} + Q^{-1}_{12} B_{21} & Q^{-1}_{11} B_{12} + Q^{-1}_{12} B_{22} \\ 0 & 0 \end{bmatrix}$$
Therefore (P A B) has at most r nonzero rows:
$$\mathrm{Rank}\,(A B) \overset{P \ \text{Nonsingular}}{=} \mathrm{Rank}\,(P A B) \leq r = \mathrm{Rank}\, A$$
Since $(A B)^T = B^T A^T$:
$$\mathrm{Rank}\,(A B) = \mathrm{Rank}\,(A B)^T = \mathrm{Rank}\left( B^T A^T \right) \leq \mathrm{Rank}\, B^T = \mathrm{Rank}\, B$$
q.e.d.
50
SOLO Matrices
Operations with Matrices
Rank of a Matrix (continue – 6)
If A and B are Square n×n Matrices then:
$$\mathrm{Rank}\, A_{n\times n} + \mathrm{Rank}\, B_{n\times n} - n \leq \mathrm{Rank}\left( A_{n\times n} B_{n\times n} \right)$$
$$\mathrm{Rank}\left( A_{n\times n} + B_{n\times n} \right) \leq \mathrm{Rank}\, A_{n\times n} + \mathrm{Rank}\, B_{n\times n}$$
[3] K. Ogata, “State Space Analysis of Control Systems”, Prentice Hall, Inc., 1967, p. 104
Sylvester’s Inequality:
$$\mathrm{Rank}\, A_{m\times n} + \mathrm{Rank}\, B_{n\times p} - n \leq \mathrm{Rank}\left( A_{m\times n} B_{n\times p} \right) \leq \min\left( \mathrm{Rank}\, A_{m\times n}, \mathrm{Rank}\, B_{n\times p} \right)$$
[4] T. Kailath, “Linear Systems”, Prentice Hall, Inc., 1980, p. 654
James Joseph Sylvester (1814–1897)
Return to
Table of Content
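A random numeric check of Sylvester's inequality (my own illustration; B is built with a deliberately deficient rank):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))
B = np.zeros((6, 5))
B[:3, :3] = rng.standard_normal((3, 3))   # force Rank B = 3

rA = np.linalg.matrix_rank(A)             # 4 (generic 4 x 6)
rB = np.linalg.matrix_rank(B)             # 3
rAB = np.linalg.matrix_rank(A @ B)
n = A.shape[1]                            # the inner dimension, 6
print(rA + rB - n <= rAB <= min(rA, rB))  # True
```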
51
SOLO Matrices
Operations with Matrices
Equivalence of Two Matrices
Two Matrices $A_{n\times m}$ and $B_{n\times m}$ are said to be Equivalent if and only if there exist a
Nonsingular Matrix $P_{n\times n}$ and a Nonsingular Matrix $Q_{m\times m}$ such that A = P B Q.
This is the same as saying that A and B are Equivalent if and only if they have the same rank.
Proof
Since A and B have the same rank r, we can write:
$$A = G \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} H, \qquad B = S \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} T$$
where G, H, S, T are square invertible matrices.
$$\begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} = S^{-1} B T^{-1} \;\Rightarrow\; A = G S^{-1} B T^{-1} H = \underbrace{G S^{-1}}_{P} B \underbrace{T^{-1} H}_{Q} = P B Q$$
P and Q are square invertible matrices since
$$P := G S^{-1} \Rightarrow P^{-1} = S G^{-1} \quad \& \quad Q := T^{-1} H \Rightarrow Q^{-1} = H^{-1} T$$
q.e.d.
Return to
Table of Content
52
SOLO Matrices
Square Matrices
In a Square Matrix, Number of Rows = Number of Columns = n:
$$A_{n\times n} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$
Trace of a Square Matrix
$$\mathrm{trace\ of}\ A_{n\times n} = \mathrm{tr}\, A_{n\times n} = \sum_{i=1}^{n} a_{ii}$$
Diagonal Square Matrix
$$D_{n\times n} = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix} = \{ a_{ij} \delta_{ij} \}$$
Return to
Table of Content
53
SOLO Matrices
Square Matrices
Identity Matrix
$$I_{n\times n} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = \{ \delta_{ij} \}, \qquad I_{n\times n} A_{n\times n} = A_{n\times n} I_{n\times n} = A_{n\times n}$$
Null Matrix
$$O_{n\times n} = \{ 0 \}, \qquad O_{n\times n} I_{n\times n} = I_{n\times n} O_{n\times n} = O_{n\times n}$$
Triangular Matrices
A Matrix whose elements below or above the main diagonal are all zero is called
a Triangular Matrix.
Upper Triangular Matrix:
$$U_{n\times n} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}$$
Lower Triangular Matrix:
$$L_{n\times n} = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$
Return to
Table of Content
54
SOLO Matrices
Square Matrices
Hessenberg Matrix
An Upper Hessenberg Matrix has zero entries below the first subdiagonal:
$$U_H = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ 0 & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & a_{n(n-1)} & a_{nn} \end{bmatrix}$$
A Lower Hessenberg Matrix has zero entries above the first superdiagonal:
$$L_H = \begin{bmatrix} a_{11} & a_{12} & 0 & \cdots & 0 \\ a_{21} & a_{22} & a_{23} & \cdots & 0 \\ a_{31} & a_{32} & a_{33} & \ddots & \vdots \\ \vdots & & & \ddots & a_{(n-1)n} \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix}$$
A Hessenberg Matrix is an “almost” Triangular Matrix.
Return to
Table of Content
55
SOLO Matrices
Square Matrices
Toeplitz Matrix
A Toeplitz Matrix, or a “Diagonal-constant Matrix”, named after Otto Toeplitz, is a
Matrix in which each descending Diagonal from left to right is constant:
$$T_{n\times n} = \begin{bmatrix} a_0 & a_{-1} & a_{-2} & \cdots & a_{-(n-1)} \\ a_1 & a_0 & a_{-1} & \ddots & \vdots \\ a_2 & a_1 & a_0 & \ddots & a_{-2} \\ \vdots & \ddots & \ddots & \ddots & a_{-1} \\ a_{n-1} & \cdots & a_2 & a_1 & a_0 \end{bmatrix}$$
Otto Toeplitz (1881–1940)
Hankel Matrix
A Hankel Matrix, named after Hermann Hankel, is closely related to a Toeplitz Matrix (a
Hankel Matrix is an upside-down Toeplitz Matrix); it is a Matrix in which each
ascending Diagonal from left to right is constant:
$$H_{n\times n} = \begin{bmatrix} a_0 & a_1 & a_2 & \cdots & a_{n-1} \\ a_1 & a_2 & & \iddots & a_n \\ a_2 & & \iddots & & \vdots \\ \vdots & \iddots & & & a_{2n-3} \\ a_{n-1} & a_n & \cdots & a_{2n-3} & a_{2n-2} \end{bmatrix}$$
Hermann Hankel (1839–1873)
Return to
Table of Content
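scipy provides constructors for both structures; a short illustration of `scipy.linalg.toeplitz` (first column, first row) and `scipy.linalg.hankel` (first column, last row), plus a check that flipping a Hankel matrix left-to-right yields a Toeplitz matrix:

```python
import numpy as np
from scipy.linalg import toeplitz, hankel

T = toeplitz(c=[1, 2, 3, 4], r=[1, 9, 8, 7])  # constant descending diagonals
H = hankel(c=[1, 2, 3, 4], r=[4, 5, 6, 7])    # constant ascending (anti-)diagonals
print(T)
print(H)
# an upside-down relation: fliplr(Hankel) is Toeplitz
print(np.allclose(np.fliplr(H), toeplitz([4, 5, 6, 7], [4, 3, 2, 1])))  # True
```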
56
SOLO Matrices
Square Matrices
Householder Matrix
We want to compute the reflection of $\vec{x}$ over a plane defined by the unit normal $\hat{n}$
($\hat{n}^T \hat{n} = 1$).
From the Figure we can see that:
$$\vec{x}\,' = \vec{x} - 2 \hat{n} \left( \hat{n}^T \vec{x} \right) = \left( I - 2 \hat{n} \hat{n}^T \right) \vec{x} = H \vec{x}$$
$$H := I - 2 \hat{n} \hat{n}^T, \qquad \hat{n}^T \hat{n} = 1$$
We can see that H is symmetric:
$$H^T = \left( I - 2 \hat{n} \hat{n}^T \right)^T = I - 2 \hat{n} \hat{n}^T = H$$
In fact H is also a rotation of $\vec{x}$ around OA, so it must be orthogonal, i.e. $H^T H = H H^T = I$:
$$H H^T = H^T H = \left( I - 2 \hat{n} \hat{n}^T \right) \left( I - 2 \hat{n} \hat{n}^T \right) = I - 4 \hat{n} \hat{n}^T + 4 \hat{n} \underbrace{\hat{n}^T \hat{n}}_{1} \hat{n}^T = I$$
[Figure: reflection of x over the plane through O and A with normal n̂]
Alston Scott Householder (1904–1993)
Return to
Table of Content
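A numeric sketch of the Householder reflection H = I − 2 n̂ n̂^T (my own example normal, not from the slides), verifying the symmetry and orthogonality shown above:

```python
import numpy as np

n_hat = np.array([1., 1., 0.]) / np.sqrt(2.)   # unit normal of the reflecting plane
H = np.eye(3) - 2.0 * np.outer(n_hat, n_hat)   # H = I - 2 n n^T

x = np.array([3., 1., 2.])
print(H @ x)                            # reflection of x: [-1, -3, 2]
print(np.allclose(H, H.T))              # True: H is symmetric
print(np.allclose(H @ H.T, np.eye(3)))  # True: H is orthogonal, H H^T = I
```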
57
SOLO Matrices
Square Matrices
Vandermonde Matrix
A Vandermonde Matrix is an n×n Matrix that has in its j-th row the entries
$x_1^{j-1}, x_2^{j-1}, \ldots, x_n^{j-1}$:
$$V_{n\times n}\left( x_1, x_2, \ldots, x_n \right) = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \\ x_1^2 & x_2^2 & \cdots & x_n^2 \\ \vdots & & \ddots & \vdots \\ x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1} \end{bmatrix}$$
Alexandre-Théophile Vandermonde (1735–1796)
Return to
Table of Content
58
SOLO Matrices
Square Matrices
Hermitian Matrix, Skew-Hermitian Matrix, Unitary Matrix
Definitions:
Adjoint Operation (H): A^H = (A*)^T (* is the complex conjugate and T is the transpose of the matrix)
Hermitian Matrix: A^H = A; Symmetric Matrix: A^T = A
Hermitian = Symmetric if A has real components.
Skew-Hermitian Matrix: A^H = −A; Anti-Symmetric Matrix: A^T = −A
Skew-Hermitian = Anti-Symmetric if A has real components.
Unitary Matrix: U^H = U^{-1}; Orthonormal Matrix: O^T = O^{-1}
Unitary = Orthonormal if A has real components.
Charles Hermite (1822–1901)
Pease, “Methods of Matrix Algebra”, Mathematics in Science and Engineering Vol. 16,
Academic Press, 1965
Return to
Table of Content
59
SOLO Matrices
Square Matrices
Singular, Non-singular and Inverse of a Non-singular Square Matrix $A_{n\times n}$
We obtained:
$$P A_{n\times n} Q = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} \;\Leftrightarrow\; A_{n\times n} = P^{-1} \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} Q^{-1}$$
Singular Square Matrix $A_{n\times n}$: r < n. Only r rows/columns of A are Linearly Independent.
Non-singular Square Matrix $A_{n\times n}$: r = n. The n rows/columns of A are Linearly Independent.
For a Non-singular Matrix (r = n):
$$P A_{n\times n} Q = I_n \;\Rightarrow\; (Q P) A_{n\times n} = Q \underbrace{\left( P A_{n\times n} Q \right)}_{I_n} Q^{-1} = Q Q^{-1} = I_n$$
and:
$$A_{n\times n} (Q P) = P^{-1} \underbrace{\left( P A_{n\times n} Q \right)}_{I_n} P = P^{-1} P = I_n$$
The Matrix (Q P) is the Inverse of the Non-singular Matrix A: $A_{n\times n}^{-1} = Q P$
This result explains the Gauss–Jordan elimination algorithm, which can be used
to determine whether a given square matrix is invertible and to find the inverse.
Return to
Table of Content
60
SOLO Matrices
Invertible Matrices
Matrix Inversion
• Gauss–Jordan elimination is an algorithm that can be used to determine
whether a given matrix is invertible and to find the inverse.
• An alternative is the LU decomposition which generates an upper and a lower
triangular matrices which are easier to invert.
• For special purposes, it may be convenient to invert matrices by treating mn-by-mn
matrices as m-by-m block matrices of n-by-n matrices, and applying one or another
formula recursively (other sized matrices can be padded out with dummy rows and
columns).
• For other purposes, a variant of Newton's method may be convenient
(particularly when dealing with families of related matrices, so inverses of earlier
matrices can be used to seed generating inverses of later matrices).
Square Matrices
61
SOLO Matrices
Invertible Matrices
Square Matrices
Gaussian elimination, which first appeared in the
text Nine Chapters on the Mathematical Art written
in 200 BC, was used by Gauss in his work which
studied the orbit of the asteroid Pallas. Using
observations of Pallas taken between 1803 and 1809,
Gauss obtained a system of six linear equations in six
unknowns. Gauss gave a systematic method for
solving such equations which is precisely Gaussian
elimination on the coefficient matrix.
Sketch of the orbits of Ceres and Pallas, by Gauss
http://www.math.rutgers.edu/~cherlin/History/Papers1999/weiss.html
Gauss published his methods in 1809 as "Theoria motus
corporum coelestium in sectionibus conicus solem ambientium,"
or, "Theory of the motion of heavenly bodies moving about the
sun in conic sections."
62
SOLO Matrices
Invertible Matrices
Gauss-Jordan elimination
In Linear Algebra, Gauss–Jordan elimination is an
algorithm for getting matrices in reduced row echelon form
using elementary row operations. It is a variation of Gaussian
elimination. Gaussian elimination places zeros below each
pivot in the matrix, starting with the top row and working
downwards. Matrices containing zeros below each pivot are
said to be in row echelon form. Gauss–Jordan elimination
goes a step further by placing zeros above and below each
pivot; such matrices are said to be in reduced row echelon
form. Every matrix has a reduced row echelon form, and
Gauss–Jordan elimination is guaranteed to find it.
Carl Friedrich Gauss
(1777–1855)
Wilhelm Jordan
( 1842–1899)
See example
Square Matrices
63
SOLO Matrices
Square Matrices
Invertible Matrices
Gauss-Jordan elimination
If the original square matrix, A, is given by the following expression:
$$A_{3\times 3} = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}$$
Then, after augmenting the A Matrix by the Identity Matrix, the following is obtained:
$$\left[ A \mid I \right] = \left[ \begin{array}{rrr|rrr} 2 & -1 & 0 & 1 & 0 & 0 \\ -1 & 2 & -1 & 0 & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array} \right]$$
Perform the following:
1. row1 + row2 → row1, equivalent to left multiplication by
$$E_{r_1 + r_2 \to r_1} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$E_{r_1 + r_2 \to r_1} \left[ A \mid I \right] = \left[ \begin{array}{rrr|rrr} 1 & 1 & -1 & 1 & 1 & 0 \\ -1 & 2 & -1 & 0 & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array} \right]$$
64
SOLO Matrices
Square Matrices
Invertible Matrices
Gauss-Jordan elimination
2. row1 + row2 → row2, equivalent to left multiplication by
$$E_{r_1 + r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$E_{r_1 + r_2 \to r_2} E_{r_1 + r_2 \to r_1} \left[ A \mid I \right] = \left[ \begin{array}{rrr|rrr} 1 & 1 & -1 & 1 & 1 & 0 \\ 0 & 3 & -2 & 1 & 2 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array} \right]$$
3. (1/3) row2 → row2, equivalent to left multiplication by
$$E_{\frac{1}{3} r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1/3 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$E_{\frac{1}{3} r_2 \to r_2} E_{r_1 + r_2 \to r_2} E_{r_1 + r_2 \to r_1} \left[ A \mid I \right] = \left[ \begin{array}{rrr|rrr} 1 & 1 & -1 & 1 & 1 & 0 \\ 0 & 1 & -2/3 & 1/3 & 2/3 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array} \right]$$
65
SOLO Matrices
Square Matrices
Invertible Matrices
Gauss-Jordan elimination
4. row2 + row3 → row3, equivalent to left multiplication by
$$E_{r_2 + r_3 \to r_3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}$$
$$E_{r_2 + r_3 \to r_3} \cdots E_{r_1 + r_2 \to r_1} \left[ A \mid I \right] = \left[ \begin{array}{rrr|rrr} 1 & 1 & -1 & 1 & 1 & 0 \\ 0 & 1 & -2/3 & 1/3 & 2/3 & 0 \\ 0 & 0 & 4/3 & 1/3 & 2/3 & 1 \end{array} \right]$$
5. row1 − row2 → row1, equivalent to left multiplication by
$$E_{r_1 - r_2 \to r_1} = \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$E_{r_1 - r_2 \to r_1} \cdots E_{r_1 + r_2 \to r_1} \left[ A \mid I \right] = \left[ \begin{array}{rrr|rrr} 1 & 0 & -1/3 & 2/3 & 1/3 & 0 \\ 0 & 1 & -2/3 & 1/3 & 2/3 & 0 \\ 0 & 0 & 4/3 & 1/3 & 2/3 & 1 \end{array} \right]$$
66
SOLO Matrices
Square Matrices
Invertible Matrices
Gauss-Jordan elimination
6. (3/4) row3 → row3, equivalent to left multiplication by
$$E_{\frac{3}{4} r_3 \to r_3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3/4 \end{bmatrix}$$
$$E_{\frac{3}{4} r_3 \to r_3} \cdots E_{r_1 + r_2 \to r_1} \left[ A \mid I \right] = \left[ \begin{array}{rrr|rrr} 1 & 0 & -1/3 & 2/3 & 1/3 & 0 \\ 0 & 1 & -2/3 & 1/3 & 2/3 & 0 \\ 0 & 0 & 1 & 1/4 & 1/2 & 3/4 \end{array} \right]$$
7. (1/3) row3 + row1 → row1, equivalent to left multiplication by
$$E_{\frac{1}{3} r_3 + r_1 \to r_1} = \begin{bmatrix} 1 & 0 & 1/3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$E_{\frac{1}{3} r_3 + r_1 \to r_1} \cdots E_{r_1 + r_2 \to r_1} \left[ A \mid I \right] = \left[ \begin{array}{rrr|rrr} 1 & 0 & 0 & 3/4 & 1/2 & 1/4 \\ 0 & 1 & -2/3 & 1/3 & 2/3 & 0 \\ 0 & 0 & 1 & 1/4 & 1/2 & 3/4 \end{array} \right]$$
67
SOLO Matrices
Square Matrices
Invertible Matrices
Gauss-Jordan elimination
8. (2/3) row3 + row2 → row2, equivalent to left multiplication by
$$E_{\frac{2}{3} r_3 + r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 2/3 \\ 0 & 0 & 1 \end{bmatrix}$$
$$\underbrace{E_{\frac{2}{3} r_3 + r_2 \to r_2} E_{\frac{1}{3} r_3 + r_1 \to r_1} E_{\frac{3}{4} r_3 \to r_3} E_{r_1 - r_2 \to r_1} E_{r_2 + r_3 \to r_3} E_{\frac{1}{3} r_2 \to r_2} E_{r_1 + r_2 \to r_2} E_{r_1 + r_2 \to r_1}}_{B} \left[ A \mid I \right] = \left[ \begin{array}{rrr|rrr} 1 & 0 & 0 & 3/4 & 1/2 & 1/4 \\ 0 & 1 & 0 & 1/2 & 1 & 1/2 \\ 0 & 0 & 1 & 1/4 & 1/2 & 3/4 \end{array} \right] = \left[ I \mid B \right]$$
We found
$$B \left[ A \mid I \right] = \left[ I \mid B \right] \;\Rightarrow\; B A = I \;\Rightarrow\; B = A^{-1}$$
$$B := E_{\frac{2}{3} r_3 + r_2 \to r_2} E_{\frac{1}{3} r_3 + r_1 \to r_1} E_{\frac{3}{4} r_3 \to r_3} E_{r_1 - r_2 \to r_1} E_{r_2 + r_3 \to r_3} E_{\frac{1}{3} r_2 \to r_2} E_{r_1 + r_2 \to r_2} E_{r_1 + r_2 \to r_1} = \begin{bmatrix} 3/4 & 1/2 & 1/4 \\ 1/2 & 1 & 1/2 \\ 1/4 & 1/2 & 3/4 \end{bmatrix} = A^{-1}$$
Therefore, Gauss–Jordan elimination: $\left[ A \mid I \right] \;\longrightarrow\; \left[ I \mid A^{-1} \right]$
Return to
Table of Content
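The whole eight-step hand computation above can be automated; here is a compact Gauss–Jordan inversion sketch (my own implementation, with partial pivoting added for robustness), verified on the slide's 3×3 matrix:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by reducing the augmented matrix [A | I] to [I | A^-1]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])       # augment: [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col])) # partial pivoting
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                         # scale pivot row to 1
        for row in range(n):
            if row != col:                            # zero the rest of the column
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                                   # right half is A^-1

A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
Ainv = gauss_jordan_inverse(A)
print(Ainv)   # [[0.75 0.5 0.25], [0.5 1. 0.5], [0.25 0.5 0.75]] -- matches the slide
print(np.allclose(A @ Ainv, np.eye(3)))  # True
```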
68
The first to use the term 'matrix' was Sylvester in 1850.
Sylvester defined a matrix to be an oblong arrangement of
terms and saw it as something which led to various
determinants from square arrays contained within it. After
leaving America and returning to England in 1851, Sylvester
became a lawyer and met Cayley, a fellow lawyer who shared
his interest in mathematics. Cayley quickly saw the significance
of the matrix concept and by 1853 Cayley had published a note
giving, for the first time, the inverse of a matrix.
Arthur Cayley
1821 - 1895
Cayley in 1858 published “Memoir on the Theory of Matrices”
which is remarkable for containing the first abstract definition of
a matrix. He shows that the coefficient arrays studied earlier for
quadratic forms and for linear transformations are special cases
of his general concept. Cayley gave a matrix algebra defining
addition, multiplication, scalar multiplication and inverses. He
gave an explicit construction of the inverse of a matrix in terms of
the determinant of the matrix. Cayley also proved that, in the case
of 2×2 matrices, a matrix satisfies its own characteristic equation.
James Joseph
Sylvester
1814 - 1897
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86
Return to
Table of Content
69
SOLO Matrices
Square Matrices
L, U Factorization of a Square Matrix A by Elementary Operations
Given a Square Matrix (Number of Rows = Number of Columns = n):
$$A_{n\times n} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$
Consider the following Simple Operations on the rows/columns of A to obtain
a U Triangular Matrix (all elements below the Main Diagonal are 0):
1. Multiply the elements of a row/column by a nonzero scalar: $E_{\alpha r_i} A$ or $A E_{\alpha c_j}$.
2. Multiply each element of row i by the scalar α and add to the elements of row j: $E_{\alpha r_i + r_j \to r_j} A$.
L, U factorization was proposed by Heinz Rutishauser in 1955.
70
SOLO Matrices
Square Matrices
L, U Factorization of a Matrix A by Elementary Operations
Given a Square Matrix, for example:
$$A_{3\times 3} = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}$$
Consider the following Simple Operations on the rows of A to obtain
a U₁ Triangular Matrix (all elements below the Main Diagonal are 0):
1. (1/2) row1 + row2 → row2, equivalent to left multiplication by
$$E_{\frac{1}{2} r_1 + r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 0 \\ 1/2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}: \qquad E_{\frac{1}{2} r_1 + r_2 \to r_2} A = \begin{bmatrix} 2 & -1 & 0 \\ 0 & 3/2 & -1 \\ 0 & -1 & 2 \end{bmatrix}$$
2. (2/3) row2 + row3 → row3, equivalent to left multiplication by
$$E_{\frac{2}{3} r_2 + r_3 \to r_3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 2/3 & 1 \end{bmatrix}: \qquad E_{\frac{2}{3} r_2 + r_3 \to r_3} E_{\frac{1}{2} r_1 + r_2 \to r_2} A = \begin{bmatrix} 2 & -1 & 0 \\ 0 & 3/2 & -1 \\ 0 & 0 & 4/3 \end{bmatrix} = U_1$$
71
SOLO Matrices
Square Matrices
L, U Factorization of a Matrix A by Elementary Operations
We found:
$$E_{\frac{2}{3} r_2 + r_3 \to r_3} E_{\frac{1}{2} r_1 + r_2 \to r_2} A = U_1, \qquad E_{\frac{2}{3} r_2 + r_3 \to r_3} E_{\frac{1}{2} r_1 + r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 0 \\ 1/2 & 1 & 0 \\ 1/3 & 2/3 & 1 \end{bmatrix}$$
To undo the Simple Operations and obtain A again, perform:
1. (−2/3) row2 + row3 → row3, equivalent to left multiplication by
$$E_{-\frac{2}{3} r_2 + r_3 \to r_3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -2/3 & 1 \end{bmatrix}$$
We can see that
$$E_{-\frac{2}{3} r_2 + r_3 \to r_3} E_{\frac{2}{3} r_2 + r_3 \to r_3} = I_3$$
so $E_{-\frac{2}{3} r_2 + r_3 \to r_3}$ is the Inverse Operation to $E_{\frac{2}{3} r_2 + r_3 \to r_3}$ and we write
$$E_{-\frac{2}{3} r_2 + r_3 \to r_3} = \left( E_{\frac{2}{3} r_2 + r_3 \to r_3} \right)^{-1}$$
2. (−1/2) row1 + row2 → row2, equivalent to left multiplication by
$$E_{-\frac{1}{2} r_1 + r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 0 \\ -1/2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$E_{-\frac{1}{2} r_1 + r_2 \to r_2} E_{\frac{1}{2} r_1 + r_2 \to r_2} = I_3 \;\Rightarrow\; E_{-\frac{1}{2} r_1 + r_2 \to r_2} = \left( E_{\frac{1}{2} r_1 + r_2 \to r_2} \right)^{-1}$$
72
SOLO Matrices
Square Matrices
L, U Factorization of a Matrix A by Elementary Operations
We found:
$$\left( E_{\frac{1}{2} r_1 + r_2 \to r_2} \right)^{-1} \left( E_{\frac{2}{3} r_2 + r_3 \to r_3} \right)^{-1} = E_{-\frac{1}{2} r_1 + r_2 \to r_2} E_{-\frac{2}{3} r_2 + r_3 \to r_3} = \begin{bmatrix} 1 & 0 & 0 \\ -1/2 & 1 & 0 \\ 0 & -2/3 & 1 \end{bmatrix} = L$$
$$L \left( E_{\frac{2}{3} r_2 + r_3 \to r_3} E_{\frac{1}{2} r_1 + r_2 \to r_2} A \right) = L\, U_1 = A$$
Therefore we obtained an L U factorization of the Square Matrix A:
$$L\, U_1 = \begin{bmatrix} 1 & 0 & 0 \\ -1/2 & 1 & 0 \\ 0 & -2/3 & 1 \end{bmatrix} \begin{bmatrix} 2 & -1 & 0 \\ 0 & 3/2 & -1 \\ 0 & 0 & 4/3 \end{bmatrix} = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix} = A$$
We can have 1 on the diagonal of the U Matrix by introducing the Diagonal Matrix D:
$$A = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix} = \underbrace{\begin{bmatrix} 1 & 0 & 0 \\ -1/2 & 1 & 0 \\ 0 & -2/3 & 1 \end{bmatrix}}_{L} \underbrace{\begin{bmatrix} 2 & 0 & 0 \\ 0 & 3/2 & 0 \\ 0 & 0 & 4/3 \end{bmatrix}}_{D} \underbrace{\begin{bmatrix} 1 & -1/2 & 0 \\ 0 & 1 & -2/3 \\ 0 & 0 & 1 \end{bmatrix}}_{U} = L\, D\, U$$
Return to
Table of Content
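scipy computes the same kind of factorization (with partial pivoting, so in general a permutation P appears); on this particular matrix no pivoting is triggered, and the factors match the hand computation above (illustration):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
P, L, U = lu(A)      # A = P L U, with L unit lower triangular, U upper triangular
print(P)             # identity here: no pivoting needed for this matrix
print(L)             # [[1 0 0], [-1/2 1 0], [0 -2/3 1]] -- matches the slide's L
print(U)             # [[2 -1 0], [0 3/2 -1], [0 0 4/3]] -- matches U1
print(np.allclose(P @ L @ U, A))  # True
```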
73
SOLO Matrices
Square Matrices
Diagonalization of a Square Matrix A by Elementary Operations
We found:
$$E_{\frac{2}{3} r_2 + r_3 \to r_3} E_{\frac{1}{2} r_1 + r_2 \to r_2} A = U_1 = \begin{bmatrix} 2 & -1 & 0 \\ 0 & 3/2 & -1 \\ 0 & 0 & 4/3 \end{bmatrix}$$
Continue with row operations that nullify the elements above the Main Diagonal:
1. (2/3) row2 + row1 → row1, equivalent to left multiplication by
$$E_{\frac{2}{3} r_2 + r_1 \to r_1} = \begin{bmatrix} 1 & 2/3 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}: \qquad E_{\frac{2}{3} r_2 + r_1 \to r_1} U_1 = \begin{bmatrix} 2 & 0 & -2/3 \\ 0 & 3/2 & -1 \\ 0 & 0 & 4/3 \end{bmatrix}$$
2. (3/4) row3 + row2 → row2 and (1/2) row3 + row1 → row1, equivalent to left multiplication by
$$E_{\frac{1}{2} r_3 + r_1 \to r_1} E_{\frac{3}{4} r_3 + r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 1/2 \\ 0 & 1 & 3/4 \\ 0 & 0 & 1 \end{bmatrix}$$
$$E_{\frac{1}{2} r_3 + r_1 \to r_1} E_{\frac{3}{4} r_3 + r_2 \to r_2} E_{\frac{2}{3} r_2 + r_1 \to r_1} E_{\frac{2}{3} r_2 + r_3 \to r_3} E_{\frac{1}{2} r_1 + r_2 \to r_2} A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3/2 & 0 \\ 0 & 0 & 4/3 \end{bmatrix} = D$$
Return to
Table of Content
74
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Let A be any square n by n matrix over a field F:
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1k} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2k} & \cdots & a_{2n} \\ \vdots & & & \ddots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nk} & \cdots & a_{nn} \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix} = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}$$
To each Matrix A we associate a scalar called the Determinant, i.e. det A or |A|,
defined by the following 4 properties:
1  The Determinant of the Identity Matrix $I_n$ is 1:
$$\det I_n = \det \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = 1$$
2  If the Matrix A has two identical rows/columns, the Determinant of A is zero:
$$\det \begin{bmatrix} r_1 \\ \vdots \\ \alpha \\ \vdots \\ \alpha \\ \vdots \\ r_n \end{bmatrix} = 0, \qquad \det \begin{bmatrix} c_1 & \cdots & \alpha & \cdots & \alpha & \cdots & c_n \end{bmatrix} = 0$$
75
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Let A be any square n by n matrix over a field F.
3  If each element of a row/column of the Matrix A is the sum of two terms, the
Determinant of A is the sum of the two Determinants formed by the separation
of the terms:
$$\det \begin{bmatrix} r_1 \\ \vdots \\ r_k + r_k' \\ \vdots \\ r_n \end{bmatrix} = \det \begin{bmatrix} r_1 \\ \vdots \\ r_k \\ \vdots \\ r_n \end{bmatrix} + \det \begin{bmatrix} r_1 \\ \vdots \\ r_k' \\ \vdots \\ r_n \end{bmatrix}$$
$$\det \begin{bmatrix} c_1 & \cdots & c_k + c_k' & \cdots & c_n \end{bmatrix} = \det \begin{bmatrix} c_1 & \cdots & c_k & \cdots & c_n \end{bmatrix} + \det \begin{bmatrix} c_1 & \cdots & c_k' & \cdots & c_n \end{bmatrix}$$
76
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Let A be any square n by n matrix over a field F.
4  If the elements of a row/column of the Matrix A have a common factor λ, then
the Determinant of A is equal to the product of λ and the Determinant of the
Matrix obtained by dividing the previous row/column by λ:
$$\det \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & & & \vdots \\ \lambda a_{k1} & \lambda a_{k2} & \cdots & \lambda a_{kn} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} = \lambda \det \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & & & \vdots \\ a_{k1} & a_{k2} & \cdots & a_{kn} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$
77
SOLO Matrices & Determinants History
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86
The idea of a determinant appeared in Japan and Europe at almost
exactly the same time although Seki in Japan certainly published first. In
1683 Seki wrote “Method of solving the dissimulated problems “ which
contains matrix methods written as tables. Without having any word which
corresponds to 'determinant' Seki still introduced determinants and gave
general methods for calculating them based on examples. Using his
'determinants' Seki was able to find determinants of 2x2,
3x3, 4x4 and 5x5 matrices and applied them to solving equations but not
systems of linear equations.
Takakazu Shinsuke Seki (1642–1708)
Rather remarkably the first appearance of a determinant in Europe
appeared in exactly the same year 1683. In that year Leibniz wrote to de
l'Hôpital. He explained that the system of equations
10 + 11x + 12y = 0
20 + 21x + 22y = 0
30 + 31x + 32y = 0
had a solution because
$$10 \cdot 21 \cdot 32 + 11 \cdot 22 \cdot 30 + 12 \cdot 20 \cdot 31 = 10 \cdot 22 \cdot 31 + 11 \cdot 20 \cdot 32 + 12 \cdot 21 \cdot 30$$
which is exactly the condition that the coefficient matrix has determinant 0.
Gottfried Wilhelm
von Leibniz
1646 - 1716
78
SOLO Matrices & Determinants History
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86
Leibniz used the word 'resultant' for certain combinatorial sums of terms of a
determinant. He proved various results on resultants including what is
essentially Cramer's rule. He also knew that a determinant could be expanded
using any column - what is now called the Laplace expansion. As well as
studying coefficient systems of equations which led him to determinants, Leibniz
also studied coefficient systems of quadratic forms which led naturally towards
matrix theory.
Gottfried Wilhelm von
Leibniz
1646 - 1716
Gabriel Cramer
(1704-1752)
In the 1730's Maclaurin wrote Treatise of algebra although it was not
published until 1748, two years after his death. It contains the first
published results on determinants proving Cramer's rule for 2x2 and 3x3
systems and indicating how the 4x4 case would work. Cramer gave the
general rule for n×n systems in a paper Introduction to the analysis of
algebraic curves (1750). It arose out of a desire to find the equation of a
plane curve passing through a number of given points.
Cramer does go on to explain precisely how one calculates these terms as
products of certain coefficients in the equations and how one determines the
sign. He also says how the n numerators of the fractions can be found by
replacing certain coefficients in this calculation by constant terms of the
system.
Colin Maclaurin
1698 - 1746
79
An axiomatic definition of a determinant was used by
Weierstrass in his lectures and, after his death, it was
published in 1903 in the note ‘On Determinant Theory‘.
In the same year Kronecker's lectures on determinants were
also published, again after his death. With these two
publications the modern theory of determinants was in
place but matrix theory took slightly longer to become a
fully accepted theory.
Karl Theodor Wilhelm
Weierstrass
1815 - 1897
Leopold Kronecker
1823 - 1891
Determinant
Weierstrass Definition of the Determinant of an n×n Matrix A:
(1) det(A) is linear in the rows of A
(2) Interchanging two rows changes the sign of det(A)
(3) det(I_n) = 1
For each positive integer n, there is exactly one function
with these three properties.
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86
http://www.sandgquinn.org/stonehill/MA251/notes/Weierstrass.pdf
80
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Let A be any square n by n matrix over a field F.
Using the 4 properties that define the Determinant of a Square Matrix, more
properties can be derived:
5  If in a Matrix Determinant we interchange two rows/columns, the sign of the
Determinant will change.
Proof: given $\det \begin{bmatrix} c_1 & \cdots & c_i & \cdots & c_j & \cdots & c_n \end{bmatrix}$, by property 2:
$$0 \overset{(2)}{=} \det \left[ c_1 \cdots (c_i + c_j) \cdots (c_i + c_j) \cdots c_n \right] \overset{(3)}{=} \underbrace{\det \left[ c_1 \cdots c_i \cdots c_i \cdots c_n \right]}_{0 \ \text{by} \ (2)} + \det \left[ c_1 \cdots c_i \cdots c_j \cdots c_n \right] + \det \left[ c_1 \cdots c_j \cdots c_i \cdots c_n \right] + \underbrace{\det \left[ c_1 \cdots c_j \cdots c_j \cdots c_n \right]}_{0 \ \text{by} \ (2)}$$
therefore
$$\det \left[ c_1 \cdots c_i \cdots c_j \cdots c_n \right] = -\det \left[ c_1 \cdots c_j \cdots c_i \cdots c_n \right]$$
q.e.d.
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Using the 4 properties that define the Determinant of a Square Matrix more
properties can be derived
6 The Matrix Determinant is unchanged if we add to a row/column any linear
combination of the other rows/columns.
Proof
[ ]nji cccc 1detgiven
q.e.d.
[ ]
[ ]
( )
[ ]ni
ij
j
by
njjj
nin
ij
j
jji
ccccccc
ccccccc

  


1
20
1
11
detdet
detdet
=+
=










+
∑
∑
≠
≠
λ
λ
[ ]n
nnnnknn
nk
nk
ccc
r
r
r
aaaa
aaaa
aaaa
A 





21
2
1
21
222221
111211
=












=












=
82
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Let A be any square n by n matrix over a field F.
Using the 4 properties that define the Determinant of a Square Matrix, more
properties can be derived:
7  If a row/column is a Linear Combination of the other rows/columns, the
Determinant is zero.
Proof:
$$\det \left[ c_1 \cdots \left( \sum_{j \neq i} \lambda_j c_j \right) \cdots c_n \right] \overset{(3),(4)}{=} \sum_{j \neq i} \lambda_j \underbrace{\det \left[ c_1 \cdots c_j \cdots c_j \cdots c_n \right]}_{0 \ \text{by} \ (2)} = 0$$
q.e.d.
83
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Let A be any square n by n matrix over a field F.
Using the 4 properties that define the Determinant of a Square Matrix, more
properties can be derived:
8  Leibniz formula for determinants:
$$\det A = \sum_{\substack{(i_1, i_2, \ldots, i_n) \\ \text{Permutations of } (1, 2, \ldots, n)}} (-1)^L \, a_{1 i_1} a_{2 i_2} \cdots a_{n i_n}$$
The meaning of this equation is that in each product there are no two elements of the
same row or of the same column, and the sign of each product is a function of the
position of its elements in the Matrix; L is the number of transpositions needed to
bring the permutation (i₁, i₂, …, iₙ) to (1, 2, …, n). The sign attached to each
element position forms the checkerboard pattern
$$\mathrm{sign}\left\{ a_{ij} \right\} = \left\{ (-1)^{i+j} \right\} = \begin{bmatrix} + & - & + & \cdots \\ - & + & - & \cdots \\ + & - & + & \cdots \\ \vdots & & & \ddots \end{bmatrix}$$
Gottfried Wilhelm Leibniz (1646–1716)
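The Leibniz formula is directly computable for small n with `itertools.permutations`; a sketch (my own, O(n!·n) cost, illustration only) that obtains the sign by counting inversions:

```python
import numpy as np
from itertools import permutations

def det_leibniz(A):
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # sign = (-1)^L, with L = number of inversions of the permutation
        L = sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])
        prod = 1.0
        for i in range(n):
            prod *= A[i, perm[i]]   # one element from each row and each column
        total += (-1) ** L * prod
    return total

A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
print(det_leibniz(A), np.linalg.det(A))   # both 4.0
```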
84
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Proof of 8
From properties (3) and (4) of the Determinant, expanding the first row
$r_1 = \sum_{i_1 = 1}^{n} a_{1 i_1} e_{i_1}$, where $e_i = \begin{bmatrix} 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{bmatrix}$ (1 in column i):
$$\det A = \det \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix} = \sum_{i_1 = 1}^{n} a_{1 i_1} \det \begin{bmatrix} e_{i_1} \\ r_2 \\ \vdots \\ r_n \end{bmatrix}$$
From property (2), if two rows are identical the determinant is zero; therefore, when
the second row is expanded in the same way, the summation over i₂ can delete the
case i₂ = i₁.
85
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Proof (continue 1)
8 (continued) Repeating the same expansion for each subsequent row, and using Property (2) to delete from the summation over each later index the values already taken (a determinant with two identical rows vanishes), we obtain:

$$\det A=\sum_{i_1=1}^{n}\;\sum_{i_2\neq i_1}\cdots\sum_{i_n\neq i_1,\dots,i_{n-1}}a_{1i_1}a_{2i_2}\cdots a_{ni_n}\det\begin{bmatrix}e_{i_1}\\e_{i_2}\\\vdots\\e_{i_n}\end{bmatrix}$$
86
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Proof (continue 1)
Let us interchange the positions of the rows to obtain a Unit Matrix, where, according to Property (5), each interchange causes a change of the determinant's sign. We also use Property (1), that the determinant of the Unit Matrix is 1:
$$\det\begin{bmatrix}e_{i_1}\\e_{i_2}\\\vdots\\e_{i_n}\end{bmatrix}=(-1)^{L}\det\begin{bmatrix}e_1\\e_2\\\vdots\\e_n\end{bmatrix}=(-1)^{L}\det I_n=(-1)^{L}$$

where L is the Number of Permutations necessary to go from (i₁, i₂, …, iₙ) to (1, 2, …, n). Therefore

$$\det A=\sum_{i_1=1}^{n}\;\sum_{i_2\neq i_1}\cdots\sum_{i_n\neq i_1,\dots,i_{n-1}}(-1)^{L}\,a_{1i_1}a_{2i_2}\cdots a_{ni_n}$$

q.e.d.
87
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Using the 4 properties that define the Determinant of a Square Matrix more
properties can be derived
9 A Determinant can be expanded along a row or column using Laplace's Formula:
$$\det A=\sum_{k=1}^{n}a_{ik}\,C_{i,k}=\sum_{k=1}^{n}(-1)^{i+k}a_{ik}\,M_{i,k}$$
where C_{i,k} represents the i,k element of the matrix of cofactors, i.e. C_{i,k} is (−1)^{i+k} times the minor M_{i,k}, which is the determinant of the matrix that results from A by removing the i-th row and the k-th column, and n is the order of the matrix.
Pierre-Simon, marquis de Laplace
(1749 – 1827)
$$M_{i,k}=\det\begin{bmatrix}a_{11}&\cdots&a_{1(k-1)}&a_{1(k+1)}&\cdots&a_{1n}\\\vdots& &\vdots&\vdots& &\vdots\\a_{(i-1)1}&\cdots&a_{(i-1)(k-1)}&a_{(i-1)(k+1)}&\cdots&a_{(i-1)n}\\a_{(i+1)1}&\cdots&a_{(i+1)(k-1)}&a_{(i+1)(k+1)}&\cdots&a_{(i+1)n}\\\vdots& &\vdots&\vdots& &\vdots\\a_{n1}&\cdots&a_{n(k-1)}&a_{n(k+1)}&\cdots&a_{nn}\end{bmatrix}$$
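Laplace's Formula also translates directly into a recursive algorithm. A minimal sketch (illustrative only, since like the Leibniz sum it takes exponential time):

```python
import numpy as np

def det_laplace(A):
    """Determinant by Laplace expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for k in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)  # drop row 0, column k
        total += (-1.0) ** k * A[0, k] * det_laplace(minor)    # (-1)^(1+(k+1)) = (-1)^k
    return total

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [5.0, 2.0, 0.0]])
print(det_laplace(A), np.linalg.det(A))  # both -59.0 (up to rounding)
```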
88
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
9 Laplace's Formula:

$$\det A=\sum_{j=1}^{n}a_{ij}\,C_{i,j}=\sum_{j=1}^{n}(-1)^{i+j}a_{ij}\,M_{i,j}$$
Proof
From Properties (3) and (4) of the Determinant, using Row summation, write row i as $r_i=\sum_{k=1}^{n}a_{ik}\,e_k$, where $e_k=[0\;\cdots\;0\;1\;0\;\cdots\;0]$ has the 1 in column k:

$$\det A=\sum_{k=1}^{n}a_{ik}\det\begin{bmatrix}a_{11}&\cdots&a_{1(k-1)}&a_{1k}&a_{1(k+1)}&\cdots&a_{1n}\\\vdots& &\vdots&\vdots&\vdots& &\vdots\\0&\cdots&0&1&0&\cdots&0\\\vdots& &\vdots&\vdots&\vdots& &\vdots\\a_{n1}&\cdots&a_{n(k-1)}&a_{nk}&a_{n(k+1)}&\cdots&a_{nn}\end{bmatrix}\quad\leftarrow\text{row }i\text{ replaced by }e_k$$

From Properties (3) and (5) of the Determinant: subtracting multiples of row i clears the remaining entries of column k without changing the determinant (Property 6); then (i − 1) interchanges of adjacent rows and (k − 1) interchanges of adjacent columns, each changing the sign (Property 5), bring the 1 to the top-left corner. What remains in the lower-right (n−1)×(n−1) block is A with row i and column k removed, so

$$\det\begin{bmatrix}a_{11}&\cdots&0&\cdots&a_{1n}\\\vdots& &\vdots& &\vdots\\0&\cdots&1&\cdots&0\\\vdots& &\vdots& &\vdots\\a_{n1}&\cdots&0&\cdots&a_{nn}\end{bmatrix}=(-1)^{(i-1)+(k-1)}M_{i,k}=(-1)^{i+k}M_{i,k}$$
89
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
9 Laplace's Formula:

$$\det A=\sum_{j=1}^{n}a_{ij}\,C_{i,j}=\sum_{j=1}^{n}(-1)^{i+j}a_{ij}\,M_{i,j}$$
Proof (continue 1)
Therefore the cofactor C_{i,k} is (−1)^{i+k} times the minor M_{i,k}, the determinant of the matrix that results from A by removing the i-th row and the k-th column. We obtain

$$C_{i,k}:=(-1)^{i+k}M_{i,k}\qquad\Rightarrow\qquad\det A=\sum_{k=1}^{n}a_{ik}\,C_{i,k}=\sum_{k=1}^{n}(-1)^{i+k}a_{ik}\,M_{i,k}$$

q.e.d.
In the same way we can use Column summation to obtain
$$\det A=\sum_{i=1}^{n}a_{ij}\,C_{i,j}=\sum_{i=1}^{n}(-1)^{i+j}a_{ij}\,M_{i,j}$$
90
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
10 A-1
the Inverse of Matrix A with det A ≠ 0 is unique and given by:














==−
nnninn
ni
ni
CCCC
CCCC
CCCC
Aadjwhere
A
Aadj
A
,,,2,1
2,2,2,22,1
1,1,1,21,1
1
:
det




Proof
$$A\cdot\operatorname{adj}A=\begin{bmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots& &\vdots\\a_{n1}&a_{n2}&\cdots&a_{nn}\end{bmatrix}\begin{bmatrix}C_{1,1}&C_{2,1}&\cdots&C_{n,1}\\C_{1,2}&C_{2,2}&\cdots&C_{n,2}\\\vdots&\vdots& &\vdots\\C_{1,n}&C_{2,n}&\cdots&C_{n,n}\end{bmatrix}=\begin{bmatrix}\det A&0&\cdots&0\\0&\det A&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\det A\end{bmatrix}=\det A\;I_n$$

since

$$\sum_{j=1}^{n}a_{kj}\,C_{i,j}=\delta_{ki}\det A=\begin{cases}\det A&k=i\\0&k\neq i\end{cases}$$

(for k = i this is the Laplace expansion of det A along row i; for k ≠ i it is the expansion of a determinant with two identical rows, which is zero by Property 2).

Therefore, multiplying A · adj A = det A Iₙ from the left by A⁻¹ and dividing by det A, we obtain

$$A^{-1}=\frac{1}{\det A}\operatorname{adj}A$$

q.e.d.

A⁻¹ exists if and only if det A ≠ 0, i.e., the n rows/columns of A_{n×n} are Linearly Independent.
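A small NumPy sketch of the identity A · adj A = det A · Iₙ, building adj A from cofactors (illustrative only; practical code would never invert a matrix this way):

```python
import numpy as np

def adjugate(A):
    """Adjugate of A: transpose of the cofactor matrix C[i,k] = (-1)^(i+k) M[i,k]."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for k in range(n):
            minor = np.delete(np.delete(A, i, axis=0), k, axis=1)
            C[i, k] = (-1.0) ** (i + k) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [5.0, 2.0, 0.0]])
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3)))     # True
print(np.allclose(adjugate(A) / np.linalg.det(A), np.linalg.inv(A)))  # True
```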
91
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
10 A⁻¹, the Inverse of Matrix A with det A ≠ 0, is unique and given by:

$$A^{-1}=\frac{\operatorname{adj}A}{\det A}$$
Proof (continue – 1)

Uniqueness: Assume that there exists a second Matrix B such that B A = Iₙ and A B = Iₙ. Multiplying A B = Iₙ from the left by A⁻¹:

$$A^{-1}\bigl(A\,B\bigr)=A^{-1}I_n\;\Rightarrow\;\bigl(\underbrace{A^{-1}A}_{I_n}\bigr)B=A^{-1}\;\Rightarrow\;B=A^{-1}$$

q.e.d.
92
SOLO Matrices
Gabriel Cramer
(1704-1752)
Cramer's rule is a theorem, which gives an expression for the
solution of a system of linear equations with as many equations as
unknowns, valid in those cases where there is a unique solution.
The solution is expressed in terms of the determinants of the
(square) coefficient matrix and of matrices obtained from it by
replacing one column by the vector of right hand sides of the
equations.
Given n linear equations with n variables x1, x2,…,xn
$$\begin{aligned}a_{11}x_1+a_{12}x_2+\cdots+a_{1k}x_k+\cdots+a_{1n}x_n&=b_1\\a_{21}x_1+a_{22}x_2+\cdots+a_{2k}x_k+\cdots+a_{2n}x_n&=b_2\\&\;\;\vdots\\a_{n1}x_1+a_{n2}x_2+\cdots+a_{nk}x_k+\cdots+a_{nn}x_n&=b_n\end{aligned}$$
Cramer’s Rule states that the solution of this equation is
$$x_k=\det\begin{bmatrix}a_{11}&\cdots&a_{1(k-1)}&b_1&a_{1(k+1)}&\cdots&a_{1n}\\a_{21}&\cdots&a_{2(k-1)}&b_2&a_{2(k+1)}&\cdots&a_{2n}\\\vdots& &\vdots&\vdots&\vdots& &\vdots\\a_{n1}&\cdots&a_{n(k-1)}&b_n&a_{n(k+1)}&\cdots&a_{nn}\end{bmatrix}\Bigg/\det\begin{bmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots& &\vdots\\a_{n1}&a_{n2}&\cdots&a_{nn}\end{bmatrix},\qquad k=1,2,\dots,n$$

(the k-th column of the numerator determinant is replaced by the right-hand-side vector b)
provided the determinant we divide by is nonzero.
Determinant of a Square Matrix – det A or |A|
11 Cramer's Rule
93
SOLO Matrices
Proof of Cramer's Rule
To prove Cramer's Rule we use just two properties of Determinants:
1. adding to one column a multiple of another column does not change the value of the determinant
2. multiplying every element of one column by a factor multiplies the value of the determinant by the same factor
In the following determinant let us replace b₁, b₂, …, bₙ by their expressions from the equations:
$$\det\begin{bmatrix}a_{11}&\cdots&b_1&\cdots&a_{1n}\\a_{21}&\cdots&b_2&\cdots&a_{2n}\\\vdots& &\vdots& &\vdots\\a_{n1}&\cdots&b_n&\cdots&a_{nn}\end{bmatrix}=\det\begin{bmatrix}a_{11}&\cdots&\bigl(a_{11}x_1+\cdots+a_{1k}x_k+\cdots+a_{1n}x_n\bigr)&\cdots&a_{1n}\\a_{21}&\cdots&\bigl(a_{21}x_1+\cdots+a_{2k}x_k+\cdots+a_{2n}x_n\bigr)&\cdots&a_{2n}\\\vdots& &\vdots& &\vdots\\a_{n1}&\cdots&\bigl(a_{n1}x_1+\cdots+a_{nk}x_k+\cdots+a_{nn}x_n\bigr)&\cdots&a_{nn}\end{bmatrix}$$
By subtracting from the k-th column the first column multiplied by x₁, the second column multiplied by x₂, and so on (skipping column k itself) up to the last column multiplied by xₙ (the value of the determinant does not change, by Rule 1 above), it is found to be equal to

$$\det\begin{bmatrix}a_{11}&\cdots&b_1&\cdots&a_{1n}\\\vdots& &\vdots& &\vdots\\a_{n1}&\cdots&b_n&\cdots&a_{nn}\end{bmatrix}=\det\begin{bmatrix}a_{11}&\cdots&a_{1k}x_k&\cdots&a_{1n}\\\vdots& &\vdots& &\vdots\\a_{n1}&\cdots&a_{nk}x_k&\cdots&a_{nn}\end{bmatrix}\overset{\text{Rule 2}}{=}x_k\det\begin{bmatrix}a_{11}&\cdots&a_{1k}&\cdots&a_{1n}\\\vdots& &\vdots& &\vdots\\a_{n1}&\cdots&a_{nk}&\cdots&a_{nn}\end{bmatrix}$$
q.e.d.
Determinant of a Square Matrix – det A or |A|
11 Cramer's Rule
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Therefore
The Cramer’s Rule can be rewritten as
$$x_k=\det\begin{bmatrix}a_{11}&\cdots&b_1&\cdots&a_{1n}\\a_{21}&\cdots&b_2&\cdots&a_{2n}\\\vdots& &\vdots& &\vdots\\a_{n1}&\cdots&b_n&\cdots&a_{nn}\end{bmatrix}\Big/\det A=\frac{1}{\det A}\sum_{j=1}^{n}C_{j,k}\,b_j,\qquad k=1,2,\dots,n$$
$$x:=\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}=\frac{1}{\det A}\begin{bmatrix}C_{1,1}&C_{2,1}&\cdots&C_{n,1}\\C_{1,2}&C_{2,2}&\cdots&C_{n,2}\\\vdots&\vdots& &\vdots\\C_{1,n}&C_{2,n}&\cdots&C_{n,n}\end{bmatrix}\begin{bmatrix}b_1\\b_2\\\vdots\\b_n\end{bmatrix}=\frac{\operatorname{adj}A}{\det A}\,b=A^{-1}b$$
This result can be derived directly by using

$$A\,x=b,\qquad x=\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix},\quad b=\begin{bmatrix}b_1\\b_2\\\vdots\\b_n\end{bmatrix}$$

Multiplying from the left by A⁻¹:

$$\underbrace{A^{-1}A}_{I_n}\,x=A^{-1}b\qquad\Rightarrow\qquad x=A^{-1}b=\frac{\operatorname{adj}A\cdot b}{\det A}$$
Proof of Cramer's Rule (continue – 1)
11 Cramer's Rule
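Cramer's Rule transcribes directly into code. A minimal NumPy sketch (illustrative; for anything but tiny systems np.linalg.solve is the right tool):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's Rule: x_k = det(A with column k replaced by b) / det A."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b                  # replace the k-th column by the right-hand side
        x[k] = np.linalg.det(Ak) / det_A
    return x

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [5.0, 2.0, 0.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(cramer_solve(A, b), np.linalg.solve(A, b)))  # True
```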
95
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
12 The Determinant of a Triangular Matrix is given by the product of the elements on the Main Diagonal:

$$\det A=\det\begin{bmatrix}a_{11}&0&\cdots&0\\a_{21}&a_{22}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nn}\end{bmatrix}=a_{11}a_{22}\cdots a_{nn}$$

Proof

Use Laplace's Formula, expanding repeatedly along the first row:

$$\det\begin{bmatrix}a_{11}&0&\cdots&0\\a_{21}&a_{22}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nn}\end{bmatrix}=a_{11}\det\begin{bmatrix}a_{22}&0&\cdots&0\\a_{32}&a_{33}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\a_{n2}&a_{n3}&\cdots&a_{nn}\end{bmatrix}=a_{11}a_{22}\det\begin{bmatrix}a_{33}&\cdots&0\\\vdots&\ddots&\vdots\\a_{n3}&\cdots&a_{nn}\end{bmatrix}=\cdots=a_{11}a_{22}\cdots a_{nn}$$

(The same holds for an upper Triangular Matrix, expanding along the first column.)

q.e.d.
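A one-line numeric check (illustrative NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
T = np.tril(rng.standard_normal((5, 5)))                  # lower-triangular test matrix
print(np.isclose(np.linalg.det(T), np.prod(np.diag(T))))  # True
```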
96
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
13 The Determinant of a Matrix Multiplication is equal to the Product of the Determinants:

$$\det\bigl(A\,B\bigr)=\det A\cdot\det B$$

Proof

Start with the Multiplication of a Diagonal Matrix D and any Matrix B:

$$D\,B=\begin{bmatrix}d_{11}&0&\cdots&0\\0&d_{22}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&d_{nn}\end{bmatrix}\begin{bmatrix}b_{11}&b_{12}&\cdots&b_{1n}\\b_{21}&b_{22}&\cdots&b_{2n}\\\vdots&\vdots& &\vdots\\b_{n1}&b_{n2}&\cdots&b_{nn}\end{bmatrix}=\begin{bmatrix}d_{11}\,r_1^B\\d_{22}\,r_2^B\\\vdots\\d_{nn}\,r_n^B\end{bmatrix}$$

where $r_i^B$ denotes the i-th row of B.
In computing the Determinant use Property No. 4
$$\det\bigl(D\,B\bigr)=\det\begin{bmatrix}d_{11}\,r_1^B\\d_{22}\,r_2^B\\\vdots\\d_{nn}\,r_n^B\end{bmatrix}\overset{(4)}{=}d_{11}d_{22}\cdots d_{nn}\det\begin{bmatrix}r_1^B\\r_2^B\\\vdots\\r_n^B\end{bmatrix}=\det D\cdot\det B$$
97
Let A be any square n by n matrix over a field F
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Proof (continue -1)
13 The Determinant of a Matrix Multiplication is equal to the Product of the Determinants:

$$\det\bigl(A\,B\bigr)=\det A\cdot\det B$$
We have shown that by Invertible Elementary operations a Matrix A can be
transformed to a Diagonal Matrix D. Each operation is to add to a given row
one other row multiplied by a scalar (rj+α ri → rj ). According to Property (6)
the value of the Determinant is unchanged by those operations.
$$D=E\,A\;\Rightarrow\;\det D=\det\bigl(E\,A\bigr)=\det A$$
Therefore by doing the same Elementary Operations on (A B) Matrix we have:
$$\det\bigl(A\,B\bigr)=\det\bigl(E\,(A\,B)\bigr)=\det\bigl(\underbrace{(E\,A)}_{D}B\bigr)=\det\bigl(D\,B\bigr)=\det D\cdot\det B=\det A\cdot\det B$$

using det (D B) = det D · det B from the previous slide.
q.e.d.
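A quick numeric confirmation (illustrative NumPy sketch with arbitrary values):

```python
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
```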
98
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Proof
14 Block Matrices Determinants

$$\det\begin{bmatrix}A_{n\times n}&0_{n\times m}\\C_{m\times n}&B_{m\times m}\end{bmatrix}=\det A_{n\times n}\cdot\det B_{m\times m}$$

Proof
Assuming B is invertible, factor the Block Matrix:

$$\begin{bmatrix}A&0\\C&B\end{bmatrix}=\begin{bmatrix}A&0\\0&I_m\end{bmatrix}\begin{bmatrix}I_n&0\\0&B\end{bmatrix}\begin{bmatrix}I_n&0\\B^{-1}C&I_m\end{bmatrix}$$

so that, by Property (13),

$$\det\begin{bmatrix}A&0\\C&B\end{bmatrix}=\det\begin{bmatrix}A&0\\0&I_m\end{bmatrix}\cdot\det\begin{bmatrix}I_n&0\\0&B\end{bmatrix}\cdot\det\begin{bmatrix}I_n&0\\B^{-1}C&I_m\end{bmatrix}$$

Expanding repeatedly by Laplace's Formula (along the last row, respectively the first row):

$$\det\begin{bmatrix}A&0\\0&I_m\end{bmatrix}\overset{\text{Laplace}}{=}\det A,\qquad\det\begin{bmatrix}I_n&0\\0&B\end{bmatrix}\overset{\text{Laplace}}{=}\det B$$

and, since it is a Triangular Matrix with 1s on the Main Diagonal (Property 12),

$$\det\begin{bmatrix}I_n&0\\B^{-1}C&I_m\end{bmatrix}=1$$

Therefore

$$\det\begin{bmatrix}A_{n\times n}&0_{n\times m}\\C_{m\times n}&B_{m\times m}\end{bmatrix}=\det A_{n\times n}\cdot\det B_{m\times m}$$

q.e.d.
99
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Proof
15 Block Matrices Determinants

$$\det\begin{bmatrix}A_{n\times n}&D_{n\times m}\\C_{m\times n}&B_{m\times m}\end{bmatrix}=\begin{cases}\det A\cdot\det\bigl(B-C\,A^{-1}D\bigr)&\text{if }A^{-1}\text{ exists}\\[4pt]\det B\cdot\det\bigl(A-D\,B^{-1}C\bigr)&\text{if }B^{-1}\text{ exists}\end{cases}$$

Proof

$$\begin{bmatrix}A&D\\C&B\end{bmatrix}=\begin{bmatrix}A&0\\C&I_m\end{bmatrix}\begin{bmatrix}I_n&A^{-1}D\\0&B-C\,A^{-1}D\end{bmatrix}\quad\text{if }A^{-1}\text{ exists}$$

$$\begin{bmatrix}A&D\\C&B\end{bmatrix}=\begin{bmatrix}I_n&D\\0&B\end{bmatrix}\begin{bmatrix}A-D\,B^{-1}C&0\\B^{-1}C&I_m\end{bmatrix}\quad\text{if }B^{-1}\text{ exists}$$

Taking determinants and using Properties (13) and (14):

$$\det\begin{bmatrix}A&D\\C&B\end{bmatrix}=\det\begin{bmatrix}A&0\\C&I_m\end{bmatrix}\det\begin{bmatrix}I_n&A^{-1}D\\0&B-C\,A^{-1}D\end{bmatrix}=\det A\cdot\det\bigl(B-C\,A^{-1}D\bigr)$$

$$\det\begin{bmatrix}A&D\\C&B\end{bmatrix}=\det\begin{bmatrix}I_n&D\\0&B\end{bmatrix}\det\begin{bmatrix}A-D\,B^{-1}C&0\\B^{-1}C&I_m\end{bmatrix}=\det B\cdot\det\bigl(A-D\,B^{-1}C\bigr)$$

q.e.d.
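A numeric check of the first (Schur-complement) form (illustrative NumPy sketch with arbitrary blocks):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))   # n x n
B = rng.standard_normal((2, 2))   # m x m
C = rng.standard_normal((2, 3))   # m x n
D = rng.standard_normal((3, 2))   # n x m

M = np.block([[A, D], [C, B]])
lhs = np.linalg.det(M)
rhs = np.linalg.det(A) * np.linalg.det(B - C @ np.linalg.inv(A) @ D)
print(np.isclose(lhs, rhs))  # True
```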
100
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Proof
16 Sylvester's Determinant Theorem

$$\det\bigl(I_n+B_{n\times m}A_{m\times n}\bigr)=\det\bigl(I_m+A_{m\times n}B_{n\times m}\bigr)$$

Proof

Apply Property (15) in both of its forms to the same Block Matrix:

$$\det\begin{bmatrix}I_n&-B\\A&I_m\end{bmatrix}=\det I_n\cdot\det\bigl(I_m-A\,I_n^{-1}(-B)\bigr)=\det\bigl(I_m+A\,B\bigr)$$

$$\det\begin{bmatrix}I_n&-B\\A&I_m\end{bmatrix}=\det I_m\cdot\det\bigl(I_n-(-B)\,I_m^{-1}A\bigr)=\det\bigl(I_n+B\,A\bigr)$$

q.e.d.
James Joseph Sylvester
(1814 – 1897)
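A quick numeric check (illustrative NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((2, 5))                    # m x n
B = rng.standard_normal((5, 2))                    # n x m
lhs = np.linalg.det(np.eye(5) + B @ A)             # det(I_n + B A)
rhs = np.linalg.det(np.eye(2) + A @ B)             # det(I_m + A B)
print(np.isclose(lhs, rhs))  # True
```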
101
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
17 Cauchy - Binet Formula
Jacques Philippe Marie
Binet
(1786 –1856)
Augustin-Louis
Cauchy
(1789 –1857)
Let A be an m×n matrix and B an n×m matrix (m ≤ n). Write [n] for the set { 1, ..., n }, and $\tbinom{[n]}{m}$ for the set of m-combinations of [n] (i.e., subsets of size m; there are $\tbinom{n}{m}$ of them). For $S\in\tbinom{[n]}{m}$, write A_{[m],S} for the m×m matrix whose columns are the columns of A at indices from S, and B_{S,[m]} for the m×m matrix whose rows are the rows of B at indices from S. The Cauchy–Binet formula then states

$$\det\bigl(A\,B\bigr)=\sum_{S\in\binom{[n]}{m}}\det\bigl(A_{[m],S}\bigr)\det\bigl(B_{S,[m]}\bigr)$$

where

$$A_{m\times n}B_{n\times m}=\begin{bmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots& &\vdots\\a_{m1}&a_{m2}&\cdots&a_{mn}\end{bmatrix}\begin{bmatrix}b_{11}&b_{12}&\cdots&b_{1m}\\b_{21}&b_{22}&\cdots&b_{2m}\\\vdots&\vdots& &\vdots\\b_{n1}&b_{n2}&\cdots&b_{nm}\end{bmatrix}$$

If m = n, then $\tbinom{n}{m}=1$ and we recover det (A B) = det A · det B.
102
It was Cauchy in 1812 who used 'determinant' in its modern sense.
Cauchy's work is the most complete of the early works on determinants.
He reproved the earlier results and gave new results of his own on minors
and adjoints. In the 1812 paper the multiplication theorem for
determinants is proved for the first time although, at the same meeting of
the Institut de France, Binet also read a paper which contained a proof of
the multiplication theorem but it was less satisfactory than that given by
Cauchy.
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86
103
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
Example (m = 2, n = 3, so there are $\binom{3}{2}=\frac{3!}{2!\,1!}=3$ subsets S):

$$A=\begin{bmatrix}-1&2&1\\0&1&2\end{bmatrix}\qquad B=\begin{bmatrix}2&1\\1&1\\3&0\end{bmatrix}$$

By multiplying the matrices A and B and computing det (A B), we obtain:

$$A\,B=\begin{bmatrix}-1&2&1\\0&1&2\end{bmatrix}\begin{bmatrix}2&1\\1&1\\3&0\end{bmatrix}=\begin{bmatrix}3&1\\7&1\end{bmatrix},\qquad\det\begin{bmatrix}3&1\\7&1\end{bmatrix}=-4$$

Using the Cauchy–Binet Formula we obtain the same value:

$$\det\begin{bmatrix}-1&2\\0&1\end{bmatrix}\det\begin{bmatrix}2&1\\1&1\end{bmatrix}+\det\begin{bmatrix}-1&1\\0&2\end{bmatrix}\det\begin{bmatrix}2&1\\3&0\end{bmatrix}+\det\begin{bmatrix}2&1\\1&2\end{bmatrix}\det\begin{bmatrix}1&1\\3&0\end{bmatrix}=(-1)(1)+(-2)(-3)+(3)(-3)=-4$$
17 Cauchy - Binet Formula
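The same check in code, enumerating the subsets S with itertools (illustrative sketch using the example's matrices):

```python
import itertools
import numpy as np

A = np.array([[-1.0, 2.0, 1.0],
              [ 0.0, 1.0, 2.0]])     # m x n, m = 2, n = 3
B = np.array([[2.0, 1.0],
              [1.0, 1.0],
              [3.0, 0.0]])           # n x m

m, n = A.shape
cauchy_binet = sum(
    np.linalg.det(A[:, list(S)]) * np.linalg.det(B[list(S), :])
    for S in itertools.combinations(range(n), m)
)
print(cauchy_binet, np.linalg.det(A @ B))  # both -4.0 (up to rounding)
```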
104
SOLO Matrices
Determinant of a Square Matrix – det A or |A|
18 $\det\bigl(A^{-1}\bigr)=\bigl(\det A\bigr)^{-1}$

Proof

Use A A⁻¹ = Iₙ:

$$1=\det I_n=\det\bigl(A\,A^{-1}\bigr)\overset{(13)}{=}\det A\cdot\det\bigl(A^{-1}\bigr)\;\Rightarrow\;\det\bigl(A^{-1}\bigr)=1/\det A$$

q.e.d.
19 $\det\bigl(A^{T}\bigr)=\det A$

Proof

Use the factorization A = L₁ D U₁, where L₁ and U₁ are Triangular Matrices with 1s on the Main Diagonal and D is diagonal:

$$A=L_1\,D\,U_1\;\Rightarrow\;\det A=\det L_1\cdot\det D\cdot\det U_1=d_{11}d_{22}\cdots d_{nn}$$

$$A^{T}=U_1^{T}\,D^{T}\,L_1^{T}\;\Rightarrow\;\det A^{T}=\det U_1^{T}\cdot\det D^{T}\cdot\det L_1^{T}=d_{11}d_{22}\cdots d_{nn}=\det A$$

(det L₁ = det U₁ = det L₁ᵀ = det U₁ᵀ = 1 by Property 12.)

q.e.d.
105
SOLO Matrices
Determinant of the Vandermonde Matrix
Determinant of a Square Matrix – det A or |A|

20 The Vandermonde Matrix is an n×n Matrix that has in its j-th row the entries x₁^{j−1}, x₂^{j−1}, …, xₙ^{j−1}:

$$\det V_{n\times n}\bigl(x_1,x_2,\dots,x_n\bigr)=\det\begin{bmatrix}1&1&\cdots&1\\x_1&x_2&\cdots&x_n\\x_1^{2}&x_2^{2}&\cdots&x_n^{2}\\\vdots&\vdots& &\vdots\\x_1^{n-1}&x_2^{n-1}&\cdots&x_n^{n-1}\end{bmatrix}=\prod_{1\le i<j\le n}\bigl(x_j-x_i\bigr)$$
Proof:
Using elementary operations, let us multiply row (j−1) by −x₁ and add it to row j, starting with j = n, then j = n−1, and so on down to j = 2:

$$E_{-x_1r_{n-1}+r_n\to r_n}\cdots E_{-x_1r_1+r_2\to r_2}\;V_{n\times n}=\begin{bmatrix}1&1&\cdots&1\\0&x_2-x_1&\cdots&x_n-x_1\\0&x_2(x_2-x_1)&\cdots&x_n(x_n-x_1)\\\vdots&\vdots& &\vdots\\0&x_2^{\,n-2}(x_2-x_1)&\cdots&x_n^{\,n-2}(x_n-x_1)\end{bmatrix}$$

where $E_{-x_1r_{j-1}+r_j\to r_j}$ is the elementary matrix, equal to the identity except for the entry −x₁ in row j, column j−1, that adds −x₁ times row j−1 to row j; each such matrix has determinant 1. We also have det V₂ₓ₂(xₙ₋₁, xₙ) = xₙ − xₙ₋₁.
106
SOLO Matrices
Determinant of the Vandermonde Matrix
Determinant of a Square Matrix – det A or |A|
Proof (continue – 1):
Using fact (13), that the determinant of a product of Matrices is the product of their determinants, and since each elementary matrix has determinant 1:

$$\det V_{n\times n}=\det\begin{bmatrix}1&1&\cdots&1\\0&x_2-x_1&\cdots&x_n-x_1\\\vdots&\vdots& &\vdots\\0&x_2^{\,n-2}(x_2-x_1)&\cdots&x_n^{\,n-2}(x_n-x_1)\end{bmatrix}$$

Expanding along the first column (Laplace) and then factoring (xₖ − x₁) out of each column (Property 4):

$$\det V_{n\times n}=\det\begin{bmatrix}x_2-x_1&\cdots&x_n-x_1\\x_2(x_2-x_1)&\cdots&x_n(x_n-x_1)\\\vdots& &\vdots\\x_2^{\,n-2}(x_2-x_1)&\cdots&x_n^{\,n-2}(x_n-x_1)\end{bmatrix}=\bigl(x_2-x_1\bigr)\bigl(x_3-x_1\bigr)\cdots\bigl(x_n-x_1\bigr)\det\begin{bmatrix}1&\cdots&1\\x_2&\cdots&x_n\\\vdots& &\vdots\\x_2^{\,n-2}&\cdots&x_n^{\,n-2}\end{bmatrix}$$

$$=\Bigl[\prod_{j=2}^{n}\bigl(x_j-x_1\bigr)\Bigr]\det V_{(n-1)\times(n-1)}\bigl(x_2,\dots,x_n\bigr)$$
107
SOLO Matrices
Determinant of the Vandermonde Matrix
Determinant of a Square Matrix – det A or |A|
Proof (continue – 2):
$$\det V_{n\times n}\bigl(x_1,\dots,x_n\bigr)=\Bigl[\prod_{j=2}^{n}\bigl(x_j-x_1\bigr)\Bigr]\det V_{(n-1)\times(n-1)}\bigl(x_2,\dots,x_n\bigr)=\Bigl[\prod_{j=2}^{n}\bigl(x_j-x_1\bigr)\Bigr]\Bigl[\prod_{j=3}^{n}\bigl(x_j-x_2\bigr)\Bigr]\det V_{(n-2)\times(n-2)}\bigl(x_3,\dots,x_n\bigr)=\cdots$$
We obtained a recursive relation between the nxn Vandermonde Matrix
V (x1, x2, … , xn) and the (n-1)x(n-1) Matrix V (x2, … ,xn), and by continuing the
procedure, and because det V2x2 (xn-1,xn)=(xn-xn-1), we obtain
$$\det V_{n\times n}\bigl(x_1,x_2,\dots,x_n\bigr)=\prod_{1\le i<j\le n}\bigl(x_j-x_i\bigr)$$

q.e.d.
Here we used Property (4): if the elements of a row/column of the Matrix A have a common factor λ, then the Determinant of A is equal to the product of λ and the Determinant of the Matrix obtained by dividing that row/column by λ.
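Numeric check of the product formula (illustrative NumPy sketch; np.vander with increasing=True gives the transpose of the convention above, which has the same determinant by Property 19):

```python
import itertools
import numpy as np

x = np.array([0.5, 1.0, 2.0, 3.5])
V = np.vander(x, increasing=True)   # rows are (1, x_i, x_i^2, x_i^3)
product = np.prod([x[j] - x[i] for i, j in itertools.combinations(range(len(x)), 2)])
print(np.isclose(np.linalg.det(V), product))  # True
```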
108
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn
The relation $y_{n\times1}=A_{n\times n}\,x_{n\times1}$ represents a Linear Transformation of the vector $x_{n\times1}$ to $y_{n\times1}$.

For the Square Matrix A_{n×n}, a nonzero Vector $v_{n\times1}$ is an Eigenvector if there is a Scalar λ (called the Eigenvalue) such that:

$$A_{n\times n}\,v_{n\times1}=\lambda\,v_{n\times1}$$

To find the Eigenvalues and Eigenvectors we see that

$$\bigl(A_{n\times n}-\lambda\,I_n\bigr)\,v_{n\times1}=0$$

This equation has a solution $v_{n\times1}\neq0$ iff the Matrix (A_{n×n} − λ Iₙ) is singular, or

$$\det\bigl(A_{n\times n}-\lambda\,I_n\bigr)=0$$

This equation may be used to find the Eigenvalues λ.
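In practice eigenvalues and eigenvectors are computed with iterative library routines, e.g. (illustrative NumPy sketch):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)     # columns of eigvecs are the eigenvectors v
print(eigvals)                          # eigenvalues 3 and 1 (order may vary)
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))  # True, True: A v = lambda v
```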
109
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn
The equation that may be used to find the Eigenvalues λ can be written as:

$$\det\bigl(A-\lambda I_n\bigr)=\det\begin{bmatrix}a_{11}-\lambda&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}-\lambda&\cdots&a_{2n}\\\vdots&\vdots&\ddots&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nn}-\lambda\end{bmatrix}\overset{\text{Leibniz' Rule}}{=}(-1)^{n}\bigl[\lambda^{n}+c_1\lambda^{n-1}+\cdots+c_n\bigr]=0$$

The polynomial:

$$p(\lambda):=\lambda^{n}+c_1\lambda^{n-1}+\cdots+c_n=\bigl(\lambda-\lambda_1\bigr)\bigl(\lambda-\lambda_2\bigr)\cdots\bigl(\lambda-\lambda_n\bigr)$$
is called the Characteristic Polynomial of the Square Matrix A_{n×n}; it has degree n and therefore n Eigenvalues λ₁, λ₂, …, λₙ. However, the Characteristic Equation need not have distinct solutions: there may be fewer than n distinct eigenvalues.
If the matrix has real entries, the coefficients of the characteristic polynomial are all real. The roots, however, are not necessarily real; they may include complex numbers with a non-zero imaginary component. In any case there is at least one complex number λ solving the characteristic equation, even if the entries of the matrix A are complex numbers to begin with (the existence of such a solution is the Fundamental Theorem of Algebra). For a complex eigenvalue, the corresponding eigenvectors also have complex components.

By Abel's Theorem (1824) there are no algebraic formulae for the roots of a general polynomial with n > 4, therefore we need an iterative algorithm to find the roots.
110
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn
Theorem: The n Eigenvectors of a Square Matrix Anxn that has distinct Eigenvalues are
Linearly Independent.
Proof:

Let us assume, to the contrary, that we have k (2 ≤ k ≤ n) Linearly Dependent Eigenvectors. Then there exist k nonzero constants αᵢ (i = 1,…,k) such that:

$$\alpha_1v_1+\cdots+\alpha_iv_i+\cdots+\alpha_kv_k=0,\qquad\alpha_i\neq0\;\forall i$$

where:

$$A_{n\times n}\,v_i=\lambda_i\,v_i,\qquad v_i\neq0,\qquad i=1,\dots,k$$

Multiplying the dependence relation by (A_{n×n} − λ₁ Iₙ) and using

$$\bigl(A_{n\times n}-\lambda_1I_n\bigr)v_1=0,\qquad\bigl(A_{n\times n}-\lambda_1I_n\bigr)v_i=\bigl(\lambda_i-\lambda_1\bigr)v_i\neq0\;\text{ if }i\neq1$$

we have:

$$\bigl(A_{n\times n}-\lambda_1I_n\bigr)\bigl(\alpha_1v_1+\cdots+\alpha_kv_k\bigr)=\alpha_2\bigl(\lambda_2-\lambda_1\bigr)v_2+\cdots+\alpha_k\bigl(\lambda_k-\lambda_1\bigr)v_k=0$$

In the same way, multiplying the result by (A_{n×n} − λ₂ Iₙ) we obtain:

$$\alpha_3\bigl(\lambda_3-\lambda_1\bigr)\bigl(\lambda_3-\lambda_2\bigr)v_3+\cdots+\alpha_k\bigl(\lambda_k-\lambda_1\bigr)\bigl(\lambda_k-\lambda_2\bigr)v_k=0$$

Continuing the procedure until, at the end, we multiply by (A_{n×n} − λ_{k−1} Iₙ) to obtain:

$$\alpha_k\underbrace{\prod_{i=1}^{k-1}\bigl(\lambda_k-\lambda_i\bigr)}_{\neq0}\underbrace{v_k}_{\neq0}=0\;\Rightarrow\;\alpha_k=0$$

This contradicts the assumption that αₖ ≠ 0; therefore the k Eigenvectors are Linearly Independent.
111
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices Anxn
Theorem: If the n Eigenvectors of a Square Matrix A_{n×n} corresponding to the n Eigenvalues (not necessarily distinct) are Linearly Independent, then we can write

$$P^{-1}A\,P=\Lambda=\begin{bmatrix}\lambda_1&0&\cdots&0\\0&\lambda_2&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\lambda_n\end{bmatrix}$$

Proof:
Using the n Eigenvectors of a Square Matrix Anxn we can write
$$A\underbrace{\begin{bmatrix}v_1&v_2&\cdots&v_n\end{bmatrix}}_{P}=\begin{bmatrix}\lambda_1v_1&\lambda_2v_2&\cdots&\lambda_nv_n\end{bmatrix}=\begin{bmatrix}v_1&v_2&\cdots&v_n\end{bmatrix}\begin{bmatrix}\lambda_1&0&\cdots&0\\0&\lambda_2&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\lambda_n\end{bmatrix}$$

or A P = P Λ. Since the n Eigenvectors v₁, v₂, …, vₙ of A_{n×n} are Linearly Independent, P is nonsingular and we have

$$P^{-1}A\,P=\Lambda$$

q.e.d.

In this case we say that the Square Matrix A_{n×n} is Diagonalizable.
Two Square Matrices A and B that are related by A = S⁻¹ B S are called Similar Matrices.
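A numerical illustration of P⁻¹ A P = Λ (illustrative NumPy sketch):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, P = np.linalg.eig(A)                 # columns of P are the eigenvectors
Lambda = np.linalg.inv(P) @ A @ P
print(np.allclose(Lambda, np.diag(lam)))  # True
```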
Return to
Matrix Decomposition
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i
Matrices i

Contenu connexe

Tendances

What is analytic functions
What is analytic functionsWhat is analytic functions
What is analytic functions
Tarun Gehlot
 
Lagrange's Theorem
Lagrange's TheoremLagrange's Theorem
Lagrange's Theorem
john1129
 

Tendances (20)

Graceful labelings
Graceful labelingsGraceful labelings
Graceful labelings
 
Analisi degli scostamenti
Analisi degli scostamentiAnalisi degli scostamenti
Analisi degli scostamenti
 
Section 10: Lagrange's Theorem
Section 10: Lagrange's TheoremSection 10: Lagrange's Theorem
Section 10: Lagrange's Theorem
 
Graph Theory: Trees
Graph Theory: TreesGraph Theory: Trees
Graph Theory: Trees
 
SigmaDeltaADC
SigmaDeltaADCSigmaDeltaADC
SigmaDeltaADC
 
Lattices
LatticesLattices
Lattices
 
What is analytic functions
What is analytic functionsWhat is analytic functions
What is analytic functions
 
equivalence and countability
equivalence and countabilityequivalence and countability
equivalence and countability
 
Fuzzy mathematics:An application oriented introduction
Fuzzy mathematics:An application oriented introductionFuzzy mathematics:An application oriented introduction
Fuzzy mathematics:An application oriented introduction
 
Section 9: Equivalence Relations & Cosets
Section 9: Equivalence Relations & CosetsSection 9: Equivalence Relations & Cosets
Section 9: Equivalence Relations & Cosets
 
Math tricks
Math tricksMath tricks
Math tricks
 
CMSC 56 | Lecture 16: Equivalence of Relations & Partial Ordering
CMSC 56 | Lecture 16: Equivalence of Relations & Partial OrderingCMSC 56 | Lecture 16: Equivalence of Relations & Partial Ordering
CMSC 56 | Lecture 16: Equivalence of Relations & Partial Ordering
 
system linear equations and matrices
 system linear equations and matrices system linear equations and matrices
system linear equations and matrices
 
Study Material Numerical Solution of Odinary Differential Equations
Study Material Numerical Solution of Odinary Differential EquationsStudy Material Numerical Solution of Odinary Differential Equations
Study Material Numerical Solution of Odinary Differential Equations
 
Matrices
MatricesMatrices
Matrices
 
Graph theory
Graph theory Graph theory
Graph theory
 
Lagrange's Theorem
Lagrange's TheoremLagrange's Theorem
Lagrange's Theorem
 
B.tech ii unit-2 material beta gamma function
B.tech ii unit-2 material beta gamma functionB.tech ii unit-2 material beta gamma function
B.tech ii unit-2 material beta gamma function
 
Eigen values and eigen vectors
Eigen values and eigen vectorsEigen values and eigen vectors
Eigen values and eigen vectors
 
Gamma function
Gamma functionGamma function
Gamma function
 

En vedette

Lecture notes for s4 b tech Mathematics
Lecture notes for s4 b tech Mathematics  Lecture notes for s4 b tech Mathematics
Lecture notes for s4 b tech Mathematics
Anoop T Vilakkuvettom
 

En vedette (13)

Complex analysis book by iit
Complex analysis book by iitComplex analysis book by iit
Complex analysis book by iit
 
Introduction to Mathematical Probability
Introduction to Mathematical ProbabilityIntroduction to Mathematical Probability
Introduction to Mathematical Probability
 
Golden words of swami vivekananda
Golden words of swami vivekanandaGolden words of swami vivekananda
Golden words of swami vivekananda
 
Complex Analysis - Differentiability and Analyticity (Team 2) - University of...
Complex Analysis - Differentiability and Analyticity (Team 2) - University of...Complex Analysis - Differentiability and Analyticity (Team 2) - University of...
Complex Analysis - Differentiability and Analyticity (Team 2) - University of...
 
Swami Vivekananda Quotes
Swami Vivekananda QuotesSwami Vivekananda Quotes
Swami Vivekananda Quotes
 
Lecture notes for s4 b tech Mathematics
Lecture notes for s4 b tech Mathematics  Lecture notes for s4 b tech Mathematics
Lecture notes for s4 b tech Mathematics
 
Vector analysis
Vector analysisVector analysis
Vector analysis
 
Prime numbers
Prime numbersPrime numbers
Prime numbers
 
Swami vivekananda’s 150 quotes
Swami vivekananda’s 150  quotesSwami vivekananda’s 150  quotes
Swami vivekananda’s 150 quotes
 
Mathematics and History of Complex Variables
Mathematics and History of Complex VariablesMathematics and History of Complex Variables
Mathematics and History of Complex Variables
 
Complex varible
Complex varibleComplex varible
Complex varible
 
Integrating spreadsheets, visualization tools, and computational knowledge en...
Integrating spreadsheets, visualization tools, and computational knowledge en...Integrating spreadsheets, visualization tools, and computational knowledge en...
Integrating spreadsheets, visualization tools, and computational knowledge en...
 
Matrix Groups and Symmetry
Matrix Groups and SymmetryMatrix Groups and Symmetry
Matrix Groups and Symmetry
 

Similaire à Matrices i

Linear Algebra and Matrix
Linear Algebra and MatrixLinear Algebra and Matrix
Linear Algebra and Matrix
itutor
 

Similaire à Matrices i (20)

M01L01 Advance Engineering Mathematics.pptx
M01L01 Advance Engineering Mathematics.pptxM01L01 Advance Engineering Mathematics.pptx
M01L01 Advance Engineering Mathematics.pptx
 
Matrices ppt
Matrices pptMatrices ppt
Matrices ppt
 
1560 mathematics for economists
1560 mathematics for economists1560 mathematics for economists
1560 mathematics for economists
 
Vector space
Vector spaceVector space
Vector space
 
Calculas
CalculasCalculas
Calculas
 
Linear Algebra and Matrix
Linear Algebra and MatrixLinear Algebra and Matrix
Linear Algebra and Matrix
 
Aplicaciones y subespacios y subespacios vectoriales en la
Aplicaciones y subespacios y subespacios vectoriales en laAplicaciones y subespacios y subespacios vectoriales en la
Aplicaciones y subespacios y subespacios vectoriales en la
 
Matrix algebra
Matrix algebraMatrix algebra
Matrix algebra
 
Motion in a plane
Motion in a planeMotion in a plane
Motion in a plane
 
Notes on eigenvalues
Notes on eigenvaluesNotes on eigenvalues
Notes on eigenvalues
 
ALA Solution.pdf
ALA Solution.pdfALA Solution.pdf
ALA Solution.pdf
 
MODULE_05-Matrix Decomposition.pptx
MODULE_05-Matrix Decomposition.pptxMODULE_05-Matrix Decomposition.pptx
MODULE_05-Matrix Decomposition.pptx
 
Vcla ppt ch=vector space
Vcla ppt ch=vector spaceVcla ppt ch=vector space
Vcla ppt ch=vector space
 
Ch07 6
Ch07 6Ch07 6
Ch07 6
 
Partial midterm set7 soln linear algebra
Partial midterm set7 soln linear algebraPartial midterm set7 soln linear algebra
Partial midterm set7 soln linear algebra
 
Mathematical Foundations for Machine Learning and Data Mining
Mathematical Foundations for Machine Learning and Data MiningMathematical Foundations for Machine Learning and Data Mining
Mathematical Foundations for Machine Learning and Data Mining
 
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
 
03_AJMS_166_18_RA.pdf
03_AJMS_166_18_RA.pdf03_AJMS_166_18_RA.pdf
03_AJMS_166_18_RA.pdf
 
03_AJMS_166_18_RA.pdf
03_AJMS_166_18_RA.pdf03_AJMS_166_18_RA.pdf
03_AJMS_166_18_RA.pdf
 
21EC33 BSP Module 1.pdf
21EC33 BSP Module 1.pdf21EC33 BSP Module 1.pdf
21EC33 BSP Module 1.pdf
 

Plus de Solo Hermelin

Plus de Solo Hermelin (20)

5 introduction to quantum mechanics
5 introduction to quantum mechanics5 introduction to quantum mechanics
5 introduction to quantum mechanics
 
Stabilization of linear time invariant systems, Factorization Approach
Stabilization of linear time invariant systems, Factorization ApproachStabilization of linear time invariant systems, Factorization Approach
Stabilization of linear time invariant systems, Factorization Approach
 
Slide Mode Control (S.M.C.)
Slide Mode Control (S.M.C.)Slide Mode Control (S.M.C.)
Slide Mode Control (S.M.C.)
 
Sliding Mode Observers
Sliding Mode ObserversSliding Mode Observers
Sliding Mode Observers
 
Reduced order observers
Reduced order observersReduced order observers
Reduced order observers
 
Inner outer and spectral factorizations
Inner outer and spectral factorizationsInner outer and spectral factorizations
Inner outer and spectral factorizations
 
Keplerian trajectories
Keplerian trajectoriesKeplerian trajectories
Keplerian trajectories
 
Anti ballistic missiles ii
Anti ballistic missiles iiAnti ballistic missiles ii
Anti ballistic missiles ii
 
Anti ballistic missiles i
Anti ballistic missiles iAnti ballistic missiles i
Anti ballistic missiles i
 
Analytic dynamics
Analytic dynamicsAnalytic dynamics
Analytic dynamics
 
12 performance of an aircraft with parabolic polar
12 performance of an aircraft with parabolic polar12 performance of an aircraft with parabolic polar
12 performance of an aircraft with parabolic polar
 
11 fighter aircraft avionics - part iv
11 fighter aircraft avionics - part iv11 fighter aircraft avionics - part iv
11 fighter aircraft avionics - part iv
 
10 fighter aircraft avionics - part iii
10 fighter aircraft avionics - part iii10 fighter aircraft avionics - part iii
10 fighter aircraft avionics - part iii
 
9 fighter aircraft avionics-part ii
9 fighter aircraft avionics-part ii9 fighter aircraft avionics-part ii
9 fighter aircraft avionics-part ii
 
8 fighter aircraft avionics-part i
8 fighter aircraft avionics-part i8 fighter aircraft avionics-part i
8 fighter aircraft avionics-part i
 
6 computing gunsight, hud and hms
6 computing gunsight, hud and hms6 computing gunsight, hud and hms
6 computing gunsight, hud and hms
 
4 navigation systems
4 navigation systems4 navigation systems
4 navigation systems
 
3 earth atmosphere
3 earth atmosphere3 earth atmosphere
3 earth atmosphere
 
2 aircraft flight instruments
2 aircraft flight instruments2 aircraft flight instruments
2 aircraft flight instruments
 
3 modern aircraft cutaway
3 modern aircraft cutaway3 modern aircraft cutaway
3 modern aircraft cutaway
 

Dernier

Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune WaterworldsBiogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Sérgio Sacani
 
Bacterial Identification and Classifications
Bacterial Identification and ClassificationsBacterial Identification and Classifications
Bacterial Identification and Classifications
Areesha Ahmad
 
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 bAsymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Sérgio Sacani
 
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
PirithiRaju
 
GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...
GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...
GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...
Lokesh Kothari
 
Pests of cotton_Borer_Pests_Binomics_Dr.UPR.pdf
Pests of cotton_Borer_Pests_Binomics_Dr.UPR.pdfPests of cotton_Borer_Pests_Binomics_Dr.UPR.pdf
Pests of cotton_Borer_Pests_Binomics_Dr.UPR.pdf
PirithiRaju
 
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Lokesh Kothari
 

Dernier (20)

Green chemistry and Sustainable development.pptx
Green chemistry  and Sustainable development.pptxGreen chemistry  and Sustainable development.pptx
Green chemistry and Sustainable development.pptx
 
High Class Escorts in Hyderabad ₹7.5k Pick Up & Drop With Cash Payment 969456...
High Class Escorts in Hyderabad ₹7.5k Pick Up & Drop With Cash Payment 969456...High Class Escorts in Hyderabad ₹7.5k Pick Up & Drop With Cash Payment 969456...
High Class Escorts in Hyderabad ₹7.5k Pick Up & Drop With Cash Payment 969456...
 
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune WaterworldsBiogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
Biogenic Sulfur Gases as Biosignatures on Temperate Sub-Neptune Waterworlds
 
Bacterial Identification and Classifications
Bacterial Identification and ClassificationsBacterial Identification and Classifications
Bacterial Identification and Classifications
 
Pulmonary drug delivery system M.pharm -2nd sem P'ceutics
Pulmonary drug delivery system M.pharm -2nd sem P'ceuticsPulmonary drug delivery system M.pharm -2nd sem P'ceutics
Pulmonary drug delivery system M.pharm -2nd sem P'ceutics
 
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCRStunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
 
Recombinant DNA technology (Immunological screening)
Recombinant DNA technology (Immunological screening)Recombinant DNA technology (Immunological screening)
Recombinant DNA technology (Immunological screening)
 
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 bAsymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
Asymmetry in the atmosphere of the ultra-hot Jupiter WASP-76 b
 
Animal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptxAnimal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptx
 
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
 
GBSN - Microbiology (Unit 2)
GBSN - Microbiology (Unit 2)GBSN - Microbiology (Unit 2)
GBSN - Microbiology (Unit 2)
 
GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...
GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...
GUIDELINES ON SIMILAR BIOLOGICS Regulatory Requirements for Marketing Authori...
 
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
 
Biological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdfBiological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdf
 
Pests of cotton_Borer_Pests_Binomics_Dr.UPR.pdf
Pests of cotton_Borer_Pests_Binomics_Dr.UPR.pdfPests of cotton_Borer_Pests_Binomics_Dr.UPR.pdf
Pests of cotton_Borer_Pests_Binomics_Dr.UPR.pdf
 
Isotopic evidence of long-lived volcanism on Io
Isotopic evidence of long-lived volcanism on IoIsotopic evidence of long-lived volcanism on Io
Isotopic evidence of long-lived volcanism on Io
 
GBSN - Biochemistry (Unit 1)
GBSN - Biochemistry (Unit 1)GBSN - Biochemistry (Unit 1)
GBSN - Biochemistry (Unit 1)
 
COST ESTIMATION FOR A RESEARCH PROJECT.pptx
COST ESTIMATION FOR A RESEARCH PROJECT.pptxCOST ESTIMATION FOR A RESEARCH PROJECT.pptx
COST ESTIMATION FOR A RESEARCH PROJECT.pptx
 
Zoology 4th semester series (krishna).pdf
Zoology 4th semester series (krishna).pdfZoology 4th semester series (krishna).pdf
Zoology 4th semester series (krishna).pdf
 
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
Labelling Requirements and Label Claims for Dietary Supplements and Recommend...
 

Matrices i

  • 1. 1 Matrices I SOLO HERMELIN Updated: 30.03.11http://www.solohermelin.com
  • 2. 2 SOLO Matrices I Table of Content Introduction to Algebra Matrices Vectors and Vector Spaces Matrix Operations with Matrices Domain and Codomain of a Matrix A Transpose AT of a Matrix A Conjugate A* and Conjugate Transpose AH =(A* )T of a Matrix A Sum and Difference of Matrices A and B Multiplication of a Matrix by a Scalar Multiplication of a Matrix by a Matrix Kronecker Multiplication of a Matrix by a Matrix Partition of a Matrix Elementary Operations with a Matrix Rank of a Matrix Equivalence of Two Matrices
  • 3. 3 SOLO Matrices I Table of Content (continue – 1) Matrices Square Matrices Trace of a Square Matrix, Diagonal Square Matrix Identity Matrix, Null Matrix, Triangular Matrices Hessenberg Matrix Toeplitz Matrix, Hankel Matrix Householder Matrix Vandermonde Matrix Hermitian Matrix, Skew-Hermitian Matrix, Unitary Matrix Matrices & Determinants History L, U Factorization of a Square Matrix A by Elementary Operations Invertible Matrices Diagonalization of a Square Matrix A by Elementary Operations
  • 4. 4 SOLO Matrices I Table of Content (continue – 2) Matrices Square Matrices Determinant of a Square Matrix – det A or |A| Eigenvalues and Eigenvectors of Square Matrices Anxn Jordan Normal (Canonical) Form Cayley-Hamilton Theorem Matrix Decompositions Companion Matrix References
  • 5. 5 SOLO Algebra Set and Set Operations A collection of objects sharing a common property is called a Set. We use the notation { }PpropertyhasxxS := We write Sx ∈ S1 is a subset of S if every element of S1 is an element of S { }SxSxxSS ∈→∈∀=⊂ 11 : x is an element of S 1 2 { }elementsno=∅ Null (Empty) set { }2121 : SxorSxxSS ∈∈= Union of sets3 { }2121 : SxandSxxSS ∈∈= Intersection of sets4 { }2121 : SxandSxxSS ∉∈=− Difference of sets5 { }Ω=Ω∈∉= SSandxandSxxS : Complement of S relative to Ω 6 21 SS  1 S 2 S 21 SS  1 S 2 S 21 SS − 1 S 2 S Ω S S
  • 6. 6 SOLO Algebra Set and Set Operations A collection of objects sharing a common property is called a Set. We use the notation { }PpropertyhasxxS := We write Sx ∈ S1 is a subset of S if every element of S1 is an element of S { }SxSxxSS ∈→∈∀=⊂ 11 : x is an element of S 1 2 { }elementsno=∅ Null (Empty) set { }2121 : SxorSxxSS ∈∈= Union of sets3 { }2121 : SxandSxxSS ∈∈= Intersection of sets4 { }2121 : SxandSxxSS ∉∈=− Difference of sets5 { }Ω=Ω∈∉= SSandxandSxxS : Complement of S relative to Ω 6 21 SS  1 S 2 S 21 SS  1 S 2 S 21 SS − 1 S 2 S Ω S S
  • 7. 7 SOLO Algebra Group A nonempty set G is said to be a group if in G there is defined an operation * such that: GbaGba ∈∀∈ ,* Closure1 ( ) ( ) Gcbacbacba ∈∀= ,,**** Associativity2 3 GaaaeeatsGe ∈∀==∈∃ **.., Identity element 4 eabbatsGbGa ==∈∃∈∀ **..,, Inverse element b = a-1 Lemma1: A group G has exactly one identity element Proof: If e and f are both identity elements, then fe ffeef eeffe =⇒    == == ** ** Lemma2: Every element in G has exactly one inverse element Proof: If b and c are both inverse elements of x, then cxebx ** ==   cxbbxb ee **** = *b → → cebe ** = → cb =
  • 8. 8 SOLO Algebra Ring A Ring is a set R equipped with two binary operations +: R ×R→R (called addition), and •: R ×R→R (called multiplication), such that: (R,+) is an Abelian Group with identity element 0: ( ) ( )cbacba ++=++ aaa =+=+ 00 abba +=+ 0..,, =+−=−+∈−∃∈∀ aaaatsRaRa (R,.) is associative ( ) ( )cbacba ••=•• Multiplication distributes over addition: ( ) ( ) ( )cabacba •+•=+• ( ) ( ) ( )cbcacba •+•=•+ RbaRba ∈∀∈+ ,, Closure Associativity Identity element Inverse element Group Properties Abelian Group property
  • 9. 9 SOLO Algebra Field A Field is a Ring satisfying two additional conditions: (1) There also exists an identity e with respect to multiplication, i.e.: aaa =•=• 11 (2) All but the zero element have inverse with respect to multiplication 1..,,0& 111 =•=•∈=∃≠∈∀ −−− aaaatsRabaRa
  • 10. 10 Synthetic Geometry Euclid 300BC Algebras HistorySOLO Extensive Algebra Grassmann 1844 Binary Algebra Boole 1854 Complex Algebra Wessel, Gauss 1798 Spin Algebra Pauli, Dirac 1928 Syncopated Algebra Diophantes 250AD Quaternions Hamilton 1843 Tensor Calculus Ricci 1890 Vector Calculus Gibbs 1881 Clifford Algebra Clifford 1878 Differential Forms E. Cartan 1908 First Printing 1482 http://modelingnts.la.asu.edu/html/evolution.html Geometric Algebra and Calculus Hestenes 1966 Matrix Algebra Cayley 1854 Determinants Sylvester 1878 Analytic Geometry Descartes 1637 Table of Content
  • 11. 11 SOLO Matrices Definitions: Vectors and Vector Spaces Vector: A n-dimensional n-Vector is an ordered set of elements x1, x2,…,xn over a field F. One other way is to define it as Row Matrix or a Column Matrix [ ]n n xxxr x x x c   21 2 1 , =             = we have where T is the Transpose operation.crrc TT == & Scalar: A one-dimensional Vector with its element a real or a complex number. Null Vector: A n-dimensional Vector with all elements equal zero. Equality of two Vectors: niforyxyx ii ,1==⇔= [ ]000, 0 0 0   =             = rc oo
  • 12. 12 SOLO Matrices 12 VECTOR SPACE Given the complex numbers . A Vector Space V (Linear Affine Space) with elements over C if its elements satisfy the following conditions: I. Exists a operation of Addition with the following properties: Commutative (Abelian) Law for Addition1 Associative Law for Addition2 Exists a unique vector3 II. Exists a operation of Multiplication by a Scalar with the following properties: 4 Inverse 5 Associative Law for Multiplication6 Distributive Law for Multiplication7 Commutative Law for Multiplication8 We can write: ( ) ( ) ( ) ( ) ( ) 00101010 3 575 =⋅→==+=⋅+⋅=⋅+ xxxxxxxx ( ) yxyx βαα +=+ ( ) xxx βαβα +=+ ( ) ( )xx βαβα = xx =⋅1 0.. =+∈∃∈∀ yxtsVyVx xx =+ 0 0 ( ) ( )zyxzyx ++=++ xyyx +=+ Vzyx ∈,, C∈γβα ,,
  • 13. 13 SOLO Matrices Linear Dependence and Independence Vectors and Vector Spaces Vectors are said to be Linear Independent if:mvvv ,,, 21  00 212211 =====+++ mmm ifonlyandifvvv αααααα  Vectors are said to be Linear Dependent if :mvvv ,,, 21  0&02211 ≠=+++ imm somevvv αααα  k m ki i iik vv αα / 1           −= ∑ ≠ = If the vectors are Linear Dependent, the vectors whose coefficients αk ≠ 0 in can be obtained as a Linear Combination of other Vectors mvvv ,,, 21  kv 011 =++++ mmkk vvv ααα 
  • 14. 14 SOLO Matrices Linear Dependence and Independence Vectors and Vector Spaces Theorem If Vectors are said to be Linear Independent and vectors are Linear Dependent, than can be expressed as a Unique Linear Combination of . mvvv ,,, 21  121 ,,,, +mm vvvv  mvvv ,,, 21  1+mv Proof 0&0 1112211 ≠=++++ +++ mmmmm vvvv ααααα  since αm+1 = 0 implies mvvv ,,, 21  are Linear Dependent, and this is a contradiction. therefore: ( ) 122111 / ++ +++−= mmmm vvvv αααα  q.e.d. 121 ,,,, +mm vvvv  Linear Dependent implies that exists some (more than one) αi ≠ 0 s.t. To prove Uniqueness suppose that there are two expressions ( ) nivvvv ii tIndependenLinearvvm i iii m i ii m i iim m ,10 ,, 111 1 1 =∀=⇒=−⇒== ∑∑∑ === + γβγβγβ 
  • 15. 15 SOLO Matrices Basis of a Vector Space V Vectors and Vector Spaces A set of Vectors of a n-Vector Space is called a Basis of V if these n Vectors are Linearly Independent and every Vector can be Uniquely expressed as a Linear Combination of those Vectors: nvvv ,,, 21  y ∑= = n i iivy 1 α
  • 16. 16 SOLO Matrices Vectors and Vector Spaces Relation Between Two Bases of a Vector Space V If we have Two Bases of Vectors , we can writenn wwwandvvv ,,,,,, 2121                              =               ⇒        +++= +++= +++= n A nnnn n n nnnnnnn nn nn v v v w w w vvvw vvvw vvvw nxn              2 1 21 22221 11211 2 1 2211 22221212 12121111 ααα ααα ααα ααα ααα ααα In the same way                             =               ⇒        +++= +++= +++= n B nnnn n n nnnnnnn nn nn w w w v v v wwwv wwwv wwwv nxn              2 1 21 22221 11211 2 1 2211 22221212 12121111 βββ βββ βββ βββ βββ βββ Therefore               =               =               n nxnnxn n nxn n v v v AB w w w B v v v  2 1 2 1 2 1 Bnxn is called the Inverse of the Square Matrix Anxn and is written as Anxn -1 .               =               =               n nxnnxn n nxn n w w w BA v v v A w w w  2 1 2 1 2 1 nnxnnxn IBA = nnxnnxn IAB =
  • 17. 17 SOLO Inner Product If V is a complex Vector Space, for the Inner Product (a scalar) < , > between the elements (complex numbers) is defined by: Vzyx ∈∀ ,, * ,, >>=<< xyyx1 Commutative law ><+>>=<+< zxyxzyx ,,,2 Distributive law Cyxyx ∈∀><>=< λλλ ,,3 00,&0, =⇔=><≥>< xxxxx4 Using to we can show that:1 4 ( ) ( ) ( ) ><+><=><+><=>+<=>+< xyxyyxyxyyxxyy ,,,,,, 21 1 * 2 * 1 2 * 21 1 21 ( ) ( ) ><=><=><=>< yxxyxyyx ,,,, * 2 *** 2 λλλλ ( ) >=<>=<⇒><+><=>+>=<< xxxxxx ,000,0,0,00,0, 2 Matrices Vectors and Vector Spaces
  • 18. 18 SOLO Inner Product ( ) ** :, xyyxyx TT ==>< We can define the Inner Product in a Vector Space as Matrices therefore ( ) ∑= =+++>=<⇒             =             = n i iinn nn yxyxyxyxyx y y y y x x x x 1 ** 2 * 21 * 1 2 1 2 1 ,&   Outer Product ( ) [ ]               =             ==>< ** 2 * 1 * 2 * 22 * 12 * 1 * 21 * 11 ** 2 * 1 2 1 * : nnnn n n n n T yxyxyx yxyxyx yxyxyx yyy x x x yxyx       Vectors and Vector Spaces
  • 19. 19 SOLO (Identity) 00 =⇔= xx2 1 Vxx ∈∀≥ 0 (Non-negativity) xx λλ =4 Norm of a Vector .x Vyxyxyxyx ∈∀+≤+≤− ,3 (Triangle Inequalities) Matrices The Norm of a Vector is defined by the following relations: If V is an Inner Product space, than we can induce the norm: [ ] 2/1 , ><= xxx and We can see that 0, 2/1 1 2 2/1 1 *2/1 ≥      =      =>=< ∑∑ == n i i n i ii xxxxxx 1 0,100 2/1 1 2 =⇒=∀=⇒=      = ∑= xnixxx i n i i 2 Vectors and Vector Spaces
  • 20. 20 SOLO Inner Product yxyx ≤>< , Cauchy, Bunyakovsky, Schwarz Inequality known as Schwarz Inequality Let x, y be the elements of an Inner Product space V, than : 0,,,,, 2* ≥><+><+><+>>=<++< yyxyyxxxyxyx ααααα Assuming that (for which the equality holds) we choose: >< >< −= yy yx , , α we have: 0, , , , ,, , ,, , 2 2* ≥>< >< >< + >< ><>< − >< ><>< −>< yy yy yx yy xyyx yy yxyx xx which reduce to: 0 , , , , , , , 222 ≥ >< >< + >< >< − >< >< −>< yy yx yy yx yy yx xx or: ><≥⇔≥><−><>< yxyxyxyyxx ,0,,, 2 q.e.d. Augustin Louis Cauchy )1789-1857( Viktor Yakovlevich Bunyakovsky 1804 - 1889 Hermann Amandus Schwarz 1843 - 1921 MatricesVectors and Vector Spaces 0≠y
  • 21. 21 SOLO Inner Product Cauchy Inequality Let ai, bi (i = 1,…,n) be complex numbers, than :             ≤ ∑∑∑ === n i i n i i n i ii baba 1 2 1 2 2 1 Augustin Louis Cauchy )1789-1857( Viktor Yakovlevich Bunyakovsky 1804 - 1889 Hermann Amandus Schwarz 1843 - 1921 Buniakowsky-Schwarz Inequality ( ) ( ) ( )[ ] ( )[ ]∫∫∫ ≤ dttgdttfdttgtf 22 2 Buniakowsky, V., “Sur quelques inéqualités concernant Les intégrales ordinaires et les intégrales aux différences finite”, Mémoires de l’Acad. de St. Pétersbourg (VII),(1859) Schwarz, H.A., “Über ein die Flächen kleinstein Flächeninhalts betreffendes Problem der Variationsrechnung”, Acta Soc. Scient. Fen., 15, 315-362, (1885) Matrices Vectors and Vector Spaces
  • 22. 22 SOLO Inner Product [ ] 2/1 , ><= xxx Parallelogram law Given an Inner Product space V, than is a norm on V. Moreover for any x,y є X the parallelogram law 2222 22 yxyxyx +=−++ is valid. Proof q.e.d. x y yx + yx − 22 22 22,2,2 ,,,, ,,,, ,, yxyyxx yyxyyxxx yyxyyxxx yxyxyxyxyxyx +>=<+><= ><+><−><−><+ ><+><+><+>=< >−−<+>++=<−++ Matrices Vectors and Vector Spaces
  • 23. 23 SOLO Inner Product Let compute: From this we can see that ><+><= ><−><+><+><− ><+><+><+>=< >−−<−>++=<−−+ xyyx yyxyyxxx yyxyyxxx yxyxyxyxyxyx ,2,2 ,,,, ,,,, ,, 22 ><+><−= ><−><+><−><− ><+><+><−>=< ><−><+><+><− ><+><+><+>=< >−−<−>++=<−−+ xyiyxi yyxyiyxixx yyxyiyxixx yiyixyiyixxx yiyixyiyixxx yixyixyixyixyixyix ,2,2 ,,,, ,,,, ,,,, ,,,, ,, 22 ><=−−++−−+ yxyixiyixiyxyx ,4 2222 *2222 ,4,4 ><>=<=−++−−−+ yxxyyixiyixiyxyx MatricesVectors and Vector Spaces
  • 24. 24 SOLO Norm of a Vector . Matrices Let use the Norm definition to develop the following relations: yxyx yyxxyx yyyxxyxxyxyxyx ,Re2 ,, ,,,,, 22 22 2 ++= +++= +++=++=+ We obtain the Triangle Inequalities yxyxyxyxyx ,2,2 22222 ++≤+≤−+ ( ) ( ) yxyxyxyx ,Re,Im,Re, 22 ≥+=use the fact that: to obtain: use the Scwarz Inequality: ><≥ yxyx , yxyxyxyxyx 22 22222 ++≤+≤−+to obtain: or: ( ) ( )222 yxyxyx +≤+≤− ( ) ( )yxyxyx +≤+≤− Vectors and Vector Spaces x
  • 25. 25 SOLO Norm of a Vector . Matrices Other Definitions of Vector Norms ∑= = n i ixx 1 The following definitions satisfy Vector Norm Properties: 1 2 { }i i xx max= ( ) ( )[ ] ( )[ ] [ ] ∑∑= = ==== n i n j jiij TT xxqxQxxTTxxTxTx 1 1 *2/1* 2/1 ** 2/1 **3 Vectors and Vector Spaces x Return to Table of Content
  • 26. 26 SOLO Matrices Matrix A Matrix A over a field F is a rectangular array of elements in F. If A is over a field of real numbers, A is called a Real Matrix. If A is over a field of complex numbers, A is called a Complex Matrix. A n rows by m columns Matrix A, n x m Matrix, is defined as: [ ]                       ==             = s w o r n r r r ccc aaa aaa aaa A n columnsm m nmnn m m nxm         2 1 21 21 22221 11211 aij (i=1,n,j=1,m) are called the elements of A, and we use also the notation: { }ijaAnxm = Return to Table of Content
  • 27. 27 SOLO Matrices Definitions: Any complex matrix A with n rows (r1, r2,…,rn) and m columns (c1,c2,…,cm) [ ]m n nxm ccc r r r A ,,, 21 2 1   =               = can be considered as a linear function (or mapping or transformation) for a m-dimensional domain to a n-dimensional codomain. ( ) ( ){ }AcodomyAdomxxAyA nxmxnxm ∈⇒∈= 11;: In the same way its conjugate transpose: [ ]H n HH H m H H H mxn rrr c c c A ,,, 21 2 1   =               = is a linear function (or mapping or transformation) for a n-dimensional codomain to a m-dimensional domain. ( ) ( ){ }AcdomxAcodomyyAxA mxnx HH mxn ∈⇒∈= 111111 ;: Operations with Matrices
  • 28. 28 SOLO Matrices Domain and Codomain of a Matrix A The domain of A can be decomposed into orthogonal subspaces: ( ) ( ) ( )ANARAdom H ⊥ ⊕= ( )H AR ( )AN ( )H AN ( )AR xAy = 11 yAx H = ( )Adomxmx ∈1 11mx x ( )Acodomy nx ∈11 1nx yR (AH ) – is the row space of AH (dimension r) N (A) – is the null-space of A (x∈ N (A) ⇔ A x = 0) or the kernel of A (ker (A)) (dimension m-r) The codomain of A (domain of AH ) can be decomposed into orthogonal subspaces: ( ) ( ) ( )H ANARAcodom ⊥ ⊕= R (A) – is the column space of A (dimension r) N (AH ) – is the null-space of AH (dimension n-r) Operations with Matrices Return to Table of Content
  • 29. 29 SOLO Matrices Operations with Matrices The Transpose AT of a Matrix A is obtained by interchanging the rows with the columns. For             = nmnn m m aaa aaa aaa Anxm     21 22221 11211 Transpose AT of a Matrix A the transpose is ( )             == nmmm n n TT aaa aaa aaa AA mxnnxm     21 22212 12111 From the definition it is obvious that (AT )T = A Return to Table of Content
  • 30. 30 SOLO Matrices Operations with Matrices The Conjugate AT of a Matrix A is obtained by tacking the conjugate complex of each of the elements of A. { }* ** 2 * 1 * 2 * 22 * 21 * 1 * 12 * 11 * ij nmnn m m a aaa aaa aaa A nxm =               =     Conjugate A* of a Matrix A the transpose is ( )               == ** 2 * 1 * 2 * 22 * 12 * 1 * 21 * 11 * nmmm n n TH aaa aaa aaa AA nxmmxn     Conjugate Transpose AH =(A* )T of a Matrix A Return to Table of Content
• 31. 31 SOLO Matrices Operations with Matrices. Sum and Difference of Matrices A and B of the same dimensions n x m. The sum/difference of two matrices A and B of the same dimensions n x m is obtained by adding/subtracting the elements b_ij to/from the elements a_ij:
$$A_{n \times m} \pm B_{n \times m} = \begin{bmatrix} a_{11} \pm b_{11} & a_{12} \pm b_{12} & \cdots & a_{1m} \pm b_{1m} \\ a_{21} \pm b_{21} & a_{22} \pm b_{22} & \cdots & a_{2m} \pm b_{2m} \\ \vdots & & & \vdots \\ a_{n1} \pm b_{n1} & a_{n2} \pm b_{n2} & \cdots & a_{nm} \pm b_{nm} \end{bmatrix} = \{a_{ij} \pm b_{ij}\}$$
Given the following transformations y_{nx1} = A_{nxm} x_{mx1} and z_{nx1} = B_{nxm} x_{mx1}, then
$$y_{n \times 1} \pm z_{n \times 1} = A_{n \times m}\,x_{m \times 1} \pm B_{n \times m}\,x_{m \times 1} = \left(A \pm B\right)_{n \times m} x_{m \times 1}$$
Return to Table of Content
• 32. 32 SOLO Matrices Operations with Matrices. Multiplication of a Matrix by a Scalar. The product of a Matrix by a Scalar is a Matrix in which each element is multiplied by the Scalar:
$$\alpha\,A_{n \times m} = \begin{bmatrix} \alpha\,a_{11} & \alpha\,a_{12} & \cdots & \alpha\,a_{1m} \\ \alpha\,a_{21} & \alpha\,a_{22} & \cdots & \alpha\,a_{2m} \\ \vdots & & & \vdots \\ \alpha\,a_{n1} & \alpha\,a_{n2} & \cdots & \alpha\,a_{nm} \end{bmatrix} = \{\alpha\,a_{ij}\}$$
Given the transformation y_{nx1} = A_{nxm} x_{mx1}, then z_{nx1} = (α A)_{nxm} x_{mx1} = α y_{nx1}. Return to Table of Content
• 33. 33 SOLO Matrices Operations with Matrices. Multiplication of a Matrix by a Matrix. Consider the two consecutive transformations
$$x_{m \times 1} = B_{m \times p}\,z_{p \times 1}, \qquad y_{n \times 1} = A_{n \times m}\,x_{m \times 1} = A_{n \times m}\,B_{m \times p}\,z_{p \times 1}$$
Their composition defines the matrix product
$$A_{n \times m}\,B_{m \times p} = \begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nm} \end{bmatrix} \begin{bmatrix} b_{11} & \cdots & b_{1p} \\ \vdots & & \vdots \\ b_{m1} & \cdots & b_{mp} \end{bmatrix} = \begin{bmatrix} c_{11} & \cdots & c_{1p} \\ \vdots & & \vdots \\ c_{n1} & \cdots & c_{np} \end{bmatrix} = C_{n \times p}$$
• 34. 34 SOLO Matrices Operations with Matrices. Multiplication of a Matrix by a Matrix (continue – 1). The Multiplication of a Matrix by a Matrix is possible only between Matrices in which the number of columns of the first Matrix equals the number of rows of the second Matrix. The elements of C = A B are
$$c_{ik} := \sum_{j=1}^{m} a_{ij}\,b_{jk}$$
• 35. 35 SOLO Matrices Operations with Matrices. Multiplication of a Matrix by a Matrix (continue – 2).
Matrix multiplication is associative: A (B C) = (A B) C
Transpose of a Matrix Multiplication: (A B)^T = B^T A^T
The Matrix product is compatible with scalar multiplication: α (A B) = (α A) B = A (α B)
Matrix multiplication is distributive over matrix addition: A (B + C) = A B + A C, (A + B) C = A C + B C
In general Matrix Multiplication is not Commutative: A B ≠ B A. Return to Table of Content
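A quick numpy illustration of these rules; the three 2x2 matrices are arbitrary examples:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 2]])

print(np.array_equal(A @ (B @ C), (A @ B) @ C))     # True: associativity
print(np.array_equal((A @ B).T, B.T @ A.T))         # True: (AB)^T = B^T A^T
print(np.array_equal(A @ (B + C), A @ B + A @ C))   # True: distributivity
print(np.array_equal(A @ B, B @ A))                 # False: not commutative
```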
• 36. 36 SOLO Matrices Operations with Matrices. Kronecker Multiplication of a Matrix by a Matrix. Leopold Kronecker (1823–1891).
$$A_{n \times m} \otimes B_{r \times p} := \begin{bmatrix} a_{11}\,B & a_{12}\,B & \cdots & a_{1m}\,B \\ a_{21}\,B & a_{22}\,B & \cdots & a_{2m}\,B \\ \vdots & & & \vdots \\ a_{n1}\,B & a_{n2}\,B & \cdots & a_{nm}\,B \end{bmatrix}_{(n \cdot r) \times (m \cdot p)}$$
Properties:
$$(A + B) \otimes C = A \otimes C + B \otimes C$$
$$A \otimes (B + C) = A \otimes B + A \otimes C$$
$$(\alpha A) \otimes B = A \otimes (\alpha B) = \alpha\,(A \otimes B)$$
$$(A \otimes B) \otimes C = A \otimes (B \otimes C)$$
Return to Table of Content
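numpy's kron implements exactly this block construction; a small sketch with arbitrary example matrices:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.eye(2)

K = np.kron(A, B)    # 4x4: each a_ij is replaced by the block a_ij * B
print(K)
# [[1. 0. 2. 0.]
#  [0. 1. 0. 2.]
#  [3. 0. 4. 0.]
#  [0. 3. 0. 4.]]

# Associativity of the Kronecker product:
C = np.array([[0, 1]])
print(np.allclose(np.kron(np.kron(A, B), C), np.kron(A, np.kron(B, C))))  # True
```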
• 37. 37 SOLO Matrices Operations with Matrices. Partition of a Matrix. A Matrix A_{nxm} can be partitioned into blocks by splitting its rows at index q and its columns at index p:
$$A_{n \times m} = \begin{bmatrix} a_{11} & \cdots & a_{1p} & a_{1(p+1)} & \cdots & a_{1m} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{q1} & \cdots & a_{qp} & a_{q(p+1)} & \cdots & a_{qm} \\ a_{(q+1)1} & \cdots & a_{(q+1)p} & a_{(q+1)(p+1)} & \cdots & a_{(q+1)m} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{n1} & \cdots & a_{np} & a_{n(p+1)} & \cdots & a_{nm} \end{bmatrix} = \begin{bmatrix} A_{11\,(q \times p)} & A_{12\,(q \times (m-p))} \\ A_{21\,((n-q) \times p)} & A_{22\,((n-q) \times (m-p))} \end{bmatrix}$$
where
$$A_{11} := \begin{bmatrix} a_{11} & \cdots & a_{1p} \\ \vdots & & \vdots \\ a_{q1} & \cdots & a_{qp} \end{bmatrix}, \quad A_{12} := \begin{bmatrix} a_{1(p+1)} & \cdots & a_{1m} \\ \vdots & & \vdots \\ a_{q(p+1)} & \cdots & a_{qm} \end{bmatrix}, \quad A_{21} := \begin{bmatrix} a_{(q+1)1} & \cdots & a_{(q+1)p} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{np} \end{bmatrix}, \quad A_{22} := \begin{bmatrix} a_{(q+1)(p+1)} & \cdots & a_{(q+1)m} \\ \vdots & & \vdots \\ a_{n(p+1)} & \cdots & a_{nm} \end{bmatrix}$$
• 38. 38 SOLO Matrices Operations with Matrices. Partition of a Matrix (continue). If A_{nxm} and B_{mxr} are partitioned conformably (the column partition of A matches the row partition of B), the product can be computed block-wise:
$$A_{n \times m}\,B_{m \times r} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11} B_{11} + A_{12} B_{21} & A_{11} B_{12} + A_{12} B_{22} \\ A_{21} B_{11} + A_{22} B_{21} & A_{21} B_{12} + A_{22} B_{22} \end{bmatrix}$$
Return to Table of Content
• 39. 39 SOLO Matrices Operations with Matrices. Elementary Operations with a Matrix. The Elementary Operations on rows/columns of a Matrix A_{nxm} are reversible (invertible).
1. Multiply the elements of a row/column by a nonzero scalar. The operation is a left (row) or right (column) multiplication by the identity matrix with α in the i-th diagonal position:
$$E_{\alpha r_i} = E_{\alpha c_i} = \begin{bmatrix} 1 & & & \\ & \ddots & & \\ & & \alpha & \\ & & & 1 \end{bmatrix} \leftarrow i$$
so that E_{α r_i} A scales row i of A, and A E_{α c_j} scales column j of A. The reverse operation is to multiply the same row/column elements by the scalar inverse:
$$E_{(1/\alpha) r_i}\,E_{\alpha r_i}\,A = A \;\Rightarrow\; E_{(1/\alpha) r_i}\,E_{\alpha r_i} = I_n, \qquad A\,E_{\alpha c_j}\,E_{(1/\alpha) c_j} = A \;\Rightarrow\; E_{\alpha c_j}\,E_{(1/\alpha) c_j} = I_m$$
The reverse operations are written as:
$$\left(E_{\alpha r_i}\right)^{-1} = E_{(1/\alpha) r_i}, \qquad \left(E_{\alpha c_j}\right)^{-1} = E_{(1/\alpha) c_j}$$
• 40. 40 SOLO Matrices Operations with Matrices. Elementary Operations with a Matrix (continue – 1). The Elementary Operations on rows/columns of a Matrix A_{nxm} are reversible (invertible).
2.a Multiply each element of row i by the scalar α and add to the elements of row j. The operation is a left multiplication by E_{α r_i + r_j → r_j}, the identity matrix with an extra entry α in position (j, i):
$$E_{\alpha r_i + r_j \to r_j}\,A = \begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{im} \\ \vdots & & \vdots \\ \alpha\,a_{i1} + a_{j1} & \cdots & \alpha\,a_{im} + a_{jm} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nm} \end{bmatrix} \leftarrow j$$
The reverse operation is to multiply each element of row i by (−α) and add to the elements of row j:
$$E_{-\alpha r_i + r_j \to r_j}\,E_{\alpha r_i + r_j \to r_j} = I_n \qquad\Rightarrow\qquad E_{-\alpha r_i + r_j \to r_j}\,E_{\alpha r_i + r_j \to r_j}\,A = A$$
• 41. 41 SOLO Matrices Operations with Matrices. Elementary Operations with a Matrix (continue – 2). The Elementary Operations on rows/columns of a Matrix A_{nxm} are reversible (invertible).
2.b Multiply each element of column i by the scalar α and add to the elements of column j. The operation is a right multiplication by E_{α c_i + c_j → c_j}, the identity matrix with an extra entry α in position (i, j):
$$A\,E_{\alpha c_i + c_j \to c_j} = \begin{bmatrix} a_{11} & \cdots & a_{1i} & \cdots & \alpha\,a_{1i} + a_{1j} & \cdots & a_{1m} \\ \vdots & & \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{ni} & \cdots & \alpha\,a_{ni} + a_{nj} & \cdots & a_{nm} \end{bmatrix}$$
The reverse operation is to multiply each element of column i by (−α) and add to the elements of column j:
$$E_{\alpha c_i + c_j \to c_j}\,E_{-\alpha c_i + c_j \to c_j} = I_m \qquad\Rightarrow\qquad A\,E_{\alpha c_i + c_j \to c_j}\,E_{-\alpha c_i + c_j \to c_j} = A$$
• 42. 42 SOLO Matrices Operations with Matrices. Elementary Operations with a Matrix (continue – 3). The Elementary Operations on rows/columns of a Matrix A_{nxm} are reversible (invertible).
3.a Interchange row i with row j. The operation is a left multiplication by the permutation matrix E_{r_i ↔ r_j}, the identity matrix with rows i and j interchanged:
$$E_{r_i \leftrightarrow r_j}\,A = \begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & & \vdots \\ a_{j1} & \cdots & a_{jm} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{im} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nm} \end{bmatrix} \begin{matrix} \ \\ \ \\ \leftarrow i \\ \ \\ \leftarrow j \\ \ \\ \ \end{matrix}$$
The reverse operation is again to interchange row j with row i:
$$E_{r_j \leftrightarrow r_i}\,E_{r_i \leftrightarrow r_j} = I_n \qquad\Rightarrow\qquad \left(E_{r_i \leftrightarrow r_j}\right)^{-1} = E_{r_j \leftrightarrow r_i}$$
• 43. 43 SOLO Matrices Operations with Matrices. Elementary Operations with a Matrix (continue – 4). The Elementary Operations on rows/columns of a Matrix A_{nxm} are reversible (invertible).
3.b Interchange column i with column j. The operation is a right multiplication by the permutation matrix E_{c_i ↔ c_j}, the identity matrix with columns i and j interchanged:
$$A\,E_{c_i \leftrightarrow c_j} = \begin{bmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1i} & \cdots & a_{1m} \\ \vdots & & \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{nj} & \cdots & a_{ni} & \cdots & a_{nm} \end{bmatrix}$$
The reverse operation is again to interchange column j with column i:
$$E_{c_i \leftrightarrow c_j}\,E_{c_j \leftrightarrow c_i} = I_m \qquad\Rightarrow\qquad \left(E_{c_i \leftrightarrow c_j}\right)^{-1} = E_{c_j \leftrightarrow c_i}$$
Return to Table of Content
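The three elementary row operations written as explicit matrices, in a short numpy sketch; the 3x3 example matrix is arbitrary:

```python
import numpy as np

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])

def E_scale(n, i, alpha):            # type 1: scale row i by alpha
    E = np.eye(n); E[i, i] = alpha; return E

def E_add(n, i, j, alpha):           # type 2: alpha*row_i + row_j -> row_j
    E = np.eye(n); E[j, i] = alpha; return E

def E_swap(n, i, j):                 # type 3: interchange rows i and j
    E = np.eye(n); E[[i, j]] = E[[j, i]]; return E

# Each operation is invertible, and the inverse is itself elementary:
print(np.allclose(E_add(3, 0, 1, -0.5) @ E_add(3, 0, 1, 0.5), np.eye(3)))  # True
print((E_add(3, 0, 1, 0.5) @ A)[1])   # row 1 becomes [0., 1.5, -1.]
```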
• 44. 44 SOLO Matrices Operations with Matrices. Rank of a Matrix. Given a Matrix A_{nxm} we want, by using Elementary (reversible) Operations, to reduce it to a Matrix with a unit Main Diagonal and zeros in all other positions. Assume that a11 ≠ 0. If this is not the case, interchange the first row/column with another (an Elementary Operation) until this is satisfied. Divide the elements of the first row by a11. For i = 2,…,n multiply the first row by (−a_i1/a11) and add to row i (an Elementary Operation) to obtain:
$$E_1\,A_{n \times m} = \begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} & \cdots & \dfrac{a_{1m}}{a_{11}} \\ 0 & a_{22} - \dfrac{a_{21}}{a_{11}}\,a_{12} & \cdots & a_{2m} - \dfrac{a_{21}}{a_{11}}\,a_{1m} \\ \vdots & \vdots & & \vdots \\ 0 & a_{n2} - \dfrac{a_{n1}}{a_{11}}\,a_{12} & \cdots & a_{nm} - \dfrac{a_{n1}}{a_{11}}\,a_{1m} \end{bmatrix}$$
• 45. 45 SOLO Matrices Operations with Matrices. Rank of a Matrix (continue – 1). Repeat this procedure for the second column (starting at the new a22), the third column (starting at the new a33), and so on, as long as we can obtain non-zero elements on the main diagonal using the rows below. At the end we obtain:
$$E_{row\_r} \cdots E_{row\_2}\,E_{row\_1}\,A_{n \times m} = \begin{bmatrix} 1 & a'_{12} & \cdots & a'_{1r} & \cdots & a'_{1m} \\ 0 & 1 & \cdots & a'_{2r} & \cdots & a'_{2m} \\ \vdots & & \ddots & & & \vdots \\ 0 & 0 & \cdots & 1 & \cdots & a'_{rm} \\ 0 & 0 & \cdots & 0 & \cdots & 0 \\ \vdots & & & & & \vdots \\ 0 & 0 & \cdots & 0 & \cdots & 0 \end{bmatrix} \leftarrow r$$
Define the product of these Elementary Operations as P := E_{row_r} ⋯ E_{row_2} E_{row_1}. Those Elementary Operations can be reversed in opposite order to obtain P^{-1} := (E_{row_1})^{-1} (E_{row_2})^{-1} ⋯ (E_{row_r})^{-1}, with P^{-1} P = I_n.
• 46. 46 SOLO Matrices Operations with Matrices. Rank of a Matrix (continue – 2). Now use column operations, starting with the first column, in order to nullify all the elements above the Main Unit Diagonal:
$$E_{row\_r} \cdots E_{row\_2}\,E_{row\_1}\,A_{n \times m}\,E_{c\_1}\,E_{c\_2} \cdots E_{c\_r} = \begin{bmatrix} I_{r \times r} & 0_{r \times (m-r)} \\ 0_{(n-r) \times r} & 0_{(n-r) \times (m-r)} \end{bmatrix} = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$$
Define the product of these Elementary Operations as Q := E_{c_1} E_{c_2} ⋯ E_{c_r}. Those Elementary Operations can be reversed in opposite order to obtain Q^{-1} := (E_{c_r})^{-1} ⋯ (E_{c_2})^{-1} (E_{c_1})^{-1}, with Q Q^{-1} = I_m.
• 47. 47 SOLO Matrices Operations with Matrices. Rank of a Matrix (continue – 3). We obtained:
$$P\,A_{n \times m}\,Q = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} \qquad\Longrightarrow\qquad A_{n \times m} = P^{-1} \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} Q^{-1}$$
From this relation we can see that the maximum number of Linearly Independent Rows (and Columns) of the Matrix P A Q is r.
Since
$$P\,A = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} Q^{-1} = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} Q^{-1}_{11} & Q^{-1}_{12} \\ Q^{-1}_{21} & Q^{-1}_{22} \end{bmatrix} = \begin{bmatrix} Q^{-1}_{11} & Q^{-1}_{12} \\ 0 & 0 \end{bmatrix}$$
the maximum number of Linearly Independent Rows of the Matrix P A is also r. But the Elementary Operations P do not change the number of Linearly Independent Rows of A, therefore: the maximum number of Linearly Independent Rows of A = r.
Since
$$A\,Q = P^{-1} \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} P^{-1}_{11} & P^{-1}_{12} \\ P^{-1}_{21} & P^{-1}_{22} \end{bmatrix} \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} P^{-1}_{11} & 0 \\ P^{-1}_{21} & 0 \end{bmatrix}$$
the maximum number of Linearly Independent Columns of the Matrix A Q is also r. But the Elementary Operations Q do not change the number of Linearly Independent Columns of A, therefore: the maximum number of Linearly Independent Columns of A = r.
• 48. 48 SOLO Matrices Operations with Matrices. Rank of a Matrix (continue – 4). We obtained:
$$P\,A_{n \times m}\,Q = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}, \qquad A_{n \times m} = P^{-1} \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} Q^{-1}$$
The maximum number of Linearly Independent Rows of A_{nxm} = the maximum number of Linearly Independent Columns of A_{nxm} = r ≤ min(m, n) =: Rank of the Matrix A_{nxm}.
Since in the Transpose of A we interchange the columns with the rows of A,
$$\left(A_{n \times m}\right)^T = A^T_{m \times n} = \left(Q^{-1}\right)^T \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} \left(P^{-1}\right)^T \qquad\Rightarrow\qquad \mathrm{Rank}\,A^T_{m \times n} = \mathrm{Rank}\,A_{n \times m}$$
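A one-line check of these statements with numpy, on an arbitrary example matrix:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

r = np.linalg.matrix_rank(A)
print(r)                                   # 2: row 2 is twice row 1
print(np.linalg.matrix_rank(A.T) == r)     # True: Rank A^T = Rank A
```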
• 49. 49 SOLO Matrices Operations with Matrices. Rank of a Matrix (continue – 5). Rank of A B:
$$\mathrm{Rank}\,\left(A_{n \times m}\,B_{m \times p}\right) \le \mathrm{Rank}\,A_{n \times m}, \qquad \mathrm{Rank}\,\left(A_{n \times m}\,B_{m \times p}\right) \le \mathrm{Rank}\,B_{m \times p}$$
Proof: Assume Rank A_{nxm} = r ≤ min(m, n). Then
$$P\,A\,Q = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} \;\Rightarrow\; P\,A = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} Q^{-1} = \begin{bmatrix} Q^{-1}_{11} & Q^{-1}_{12} \\ 0 & 0 \end{bmatrix}$$
and therefore, partitioning B conformably,
$$P\,A\,B = \begin{bmatrix} Q^{-1}_{11} & Q^{-1}_{12} \\ 0 & 0 \end{bmatrix} \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} H_{11} & H_{12} \\ 0 & 0 \end{bmatrix}$$
so (P A B) has at most r nonzero rows:
$$\mathrm{Rank}\,(A\,B) \overset{P\ \text{Nonsingular}}{=} \mathrm{Rank}\,(P\,A\,B) \le r = \mathrm{Rank}\,A$$
Since (A B)^T = B^T A^T, we also have Rank (A B) = Rank (A B)^T = Rank (B^T A^T) ≤ Rank B^T = Rank B. q.e.d.
• 50. 50 SOLO Matrices Operations with Matrices. Rank of a Matrix (continue – 6). If A and B are Square nxn Matrices then:
$$\mathrm{Rank}\,\left(A_{n \times n} + B_{n \times n}\right) \le \mathrm{Rank}\,A_{n \times n} + \mathrm{Rank}\,B_{n \times n}$$
$$\mathrm{Rank}\,\left(A_{n \times n}\,B_{n \times n}\right) \ge \mathrm{Rank}\,A_{n \times n} + \mathrm{Rank}\,B_{n \times n} - n$$
[3] K. Ogata, “State Space Analysis of Control Systems”, Prentice Hall, Inc., 1967, p.104
Sylvester’s Inequality: James Joseph Sylvester (1814–1897)
$$\mathrm{Rank}\,A_{m \times n} + \mathrm{Rank}\,B_{n \times p} - n \;\le\; \mathrm{Rank}\,\left(A_{m \times n}\,B_{n \times p}\right) \;\le\; \min\left(\mathrm{Rank}\,A_{m \times n},\,\mathrm{Rank}\,B_{n \times p}\right)$$
[4] T. Kailath, “Linear Systems”, Prentice Hall, Inc., 1980, p.654
Return to Table of Content
• 51. 51 SOLO Matrices Operations with Matrices. Equivalence of Two Matrices. Two Matrices A_{nxm} and B_{nxm} are said to be Equivalent if and only if there exist a Nonsingular Matrix P_{nxn} and a Nonsingular Matrix Q_{mxm} such that A = P B Q. This is the same as saying that A and B are Equivalent if and only if they have the same rank.
Proof: Since A and B have the same rank r, we can write
$$A = G \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} H, \qquad B = S \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} T$$
where G, H, S, T are square invertible matrices. Then
$$\begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} = S^{-1}\,B\,T^{-1} \qquad\Rightarrow\qquad A = G\,S^{-1}\,B\,T^{-1}\,H = P\,B\,Q$$
P := G S^{-1} and Q := T^{-1} H are square invertible matrices, since P^{-1} = (G S^{-1})^{-1} = S G^{-1} and Q^{-1} = (T^{-1} H)^{-1} = H^{-1} T. q.e.d. Return to Table of Content
• 52. 52 SOLO Matrices Square Matrices. In a Square Matrix, Number of Rows = Number of Columns = n:
$$A_{n \times n} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$
Trace of a Square Matrix:
$$\mathrm{trace\ of}\ A_{n \times n} = \mathrm{tr}\,A_{n \times n} = \sum_{i=1}^{n} a_{ii}$$
Diagonal Square Matrix:
$$D_{n \times n} = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix} = \{a_{ij}\,\delta_{ij}\}$$
Return to Table of Content
• 53. 53 SOLO Matrices Square Matrices. Identity Matrix:
$$I_{n \times n} = I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = \{\delta_{ij}\}, \qquad A_{n \times n}\,I_{n \times n} = I_{n \times n}\,A_{n \times n} = A_{n \times n}$$
Null Matrix: O_{nxn} = {0}, with O I = I O = O.
Triangular Matrices: a Matrix whose elements above or below the main diagonal are all zero is called a Triangular Matrix.
Lower Triangular Matrix:
$$L_{n \times n} = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$
Upper Triangular Matrix:
$$U_{n \times n} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}$$
Return to Table of Content
• 54. 54 SOLO Matrices Square Matrices. Hessenberg Matrix. A Hessenberg Matrix is an “almost” Triangular Matrix. An Upper Hessenberg Matrix has zero entries below the first subdiagonal:
$$U^H_{n \times n} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ 0 & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & a_{n(n-1)} & a_{nn} \end{bmatrix}$$
A Lower Hessenberg Matrix has zero entries above the first superdiagonal:
$$L^H_{n \times n} = \begin{bmatrix} a_{11} & a_{12} & 0 & \cdots & 0 \\ a_{21} & a_{22} & a_{23} & \ddots & \vdots \\ a_{31} & a_{32} & a_{33} & \ddots & 0 \\ \vdots & & & \ddots & a_{(n-1)n} \\ a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} \end{bmatrix}$$
Return to Table of Content
• 55. 55 SOLO Matrices Square Matrices. Toeplitz Matrix. A Toeplitz Matrix, or “Diagonal-constant Matrix”, named after Otto Toeplitz (1881–1940), is a Matrix in which each descending diagonal from left to right is constant:
$$T_{n \times n} = \begin{bmatrix} a_0 & a_{-1} & a_{-2} & \cdots & a_{-(n-1)} \\ a_1 & a_0 & a_{-1} & \ddots & \vdots \\ a_2 & a_1 & a_0 & \ddots & a_{-2} \\ \vdots & \ddots & \ddots & \ddots & a_{-1} \\ a_{n-1} & \cdots & a_2 & a_1 & a_0 \end{bmatrix}$$
Hankel Matrix. A Hankel Matrix, named after Hermann Hankel (1839–1873), is closely related to a Toeplitz Matrix (a Hankel Matrix is an upside-down Toeplitz Matrix): each ascending diagonal from left to right is constant:
$$H_{n \times n} = \begin{bmatrix} a_0 & a_1 & a_2 & \cdots & a_{n-1} \\ a_1 & a_2 & & \iddots & a_n \\ a_2 & & \iddots & & \vdots \\ \vdots & \iddots & & & a_{2n-3} \\ a_{n-1} & a_n & \cdots & a_{2n-3} & a_{2n-2} \end{bmatrix}$$
Return to Table of Content
• 56. 56 SOLO Matrices Square Matrices. Householder Matrix. Alston Scott Householder (1904–1993). We want to compute the reflection x' of a vector x over a plane defined by the unit normal n̂ (n̂ᵀ n̂ = 1). From the Figure we can see that:
$$x' = x - 2\,\hat{n}\,\left(\hat{n}^T x\right) = \left(I - 2\,\hat{n}\,\hat{n}^T\right) x = H\,x, \qquad H := I - 2\,\hat{n}\,\hat{n}^T, \quad \hat{n}^T \hat{n} = 1$$
We can see that H is symmetric:
$$H^T = \left(I - 2\,\hat{n}\,\hat{n}^T\right)^T = I - 2\,\hat{n}\,\hat{n}^T = H$$
In fact H is also a rotation of x around OA, so it must be orthogonal, i.e. H^T H = H H^T = I:
$$H\,H^T = H^T H = \left(I - 2\,\hat{n}\,\hat{n}^T\right)\left(I - 2\,\hat{n}\,\hat{n}^T\right) = I - 4\,\hat{n}\,\hat{n}^T + 4\,\hat{n}\,\underbrace{\hat{n}^T \hat{n}}_{1}\,\hat{n}^T = I$$
Return to Table of Content
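A small numpy sketch constructing a Householder reflector and checking these properties; the normal vector chosen is arbitrary:

```python
import numpy as np

n = np.array([1.0, 1.0, 0.0])
n = n / np.linalg.norm(n)                # unit normal n̂ of the reflecting plane

H = np.eye(3) - 2.0 * np.outer(n, n)     # H = I - 2 n̂ n̂ᵀ

print(np.allclose(H, H.T))               # True: H is symmetric
print(np.allclose(H @ H.T, np.eye(3)))   # True: H is orthogonal

x = np.array([3.0, -1.0, 2.0])
print(H @ x)                             # reflection of x: [-1., -3.? ] -> [1., -3., 2.]
```

Here H x = x − 2 n̂ (n̂ᵀx) = (3, −1, 2) − (2, 2, 0) = (1, −3, 2), i.e. the component of x along n̂ is reversed while the in-plane part is unchanged.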
• 57. 57 SOLO Matrices Square Matrices. Vandermonde Matrix. Alexandre-Théophile Vandermonde (1735–1796). A Vandermonde Matrix is an nxn Matrix that has in its j-th row the entries x1^{j-1}, x2^{j-1}, …, xn^{j-1}:
$$V_{n \times n}(x_1, x_2, \ldots, x_n) = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \\ x_1^2 & x_2^2 & \cdots & x_n^2 \\ \vdots & \vdots & & \vdots \\ x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1} \end{bmatrix}$$
Return to Table of Content
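numpy builds this matrix directly; note that np.vander lays the powers of each sample along a row, so a transpose matches the convention above (sample points are arbitrary):

```python
import numpy as np

x = np.array([2, 3, 5])
V = np.vander(x, increasing=True).T   # row j holds x_i^(j-1), as in the slide
print(V)
# [[ 1  1  1]
#  [ 2  3  5]
#  [ 4  9 25]]
```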
• 58. 58 SOLO Matrices Square Matrices. Hermitian Matrix, Skew-Hermitian Matrix, Unitary Matrix. Charles Hermite (1822–1901). Definitions:
Adjoint Operation (H): A^H = (A*)^T (* is the complex conjugate and T is the transpose of the matrix)
Hermitian Matrix: A^H = A; Symmetric Matrix: A^T = A. Hermitian = Symmetric if A has real components.
Skew-Hermitian Matrix: A^H = −A; Anti-Symmetric Matrix: A^T = −A. Skew-Hermitian = Anti-Symmetric if A has real components.
Unitary Matrix: U^H = U^{-1}; Orthonormal Matrix: O^T = O^{-1}. Unitary = Orthonormal if A has real components.
Pease, “Methods of Matrix Algebra”, Mathematics in Science and Engineering Vol.16, Academic Press 1965
Return to Table of Content
• 59. 59 SOLO Matrices Square Matrices. Singular, Non-singular and Inverse of a Non-singular Square Matrix A_{nxn}. We obtained:
$$P\,A_{n \times n}\,Q = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}, \qquad A_{n \times n} = P^{-1} \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix} Q^{-1}$$
Singular Square Matrix A_{nxn}: r < n. Only r rows/columns of A are Linearly Independent.
Non-singular Square Matrix A_{nxn}: r = n. The n rows/columns of A are Linearly Independent.
For a Non-singular Matrix (r = n):
$$P\,A\,Q = I_n \;\Rightarrow\; (Q\,P)\,A\,\underbrace{Q\,Q^{-1}}_{I_n} = Q\,I_n\,Q^{-1} = I_n \;\Rightarrow\; (Q\,P)\,A = I_n$$
and, in the same way, A (Q P) = I_n. The Matrix (Q P) is the Inverse of the Non-singular Matrix A:
$$A_{n \times n}^{-1} = Q\,P$$
This result explains the Gauss–Jordan elimination algorithm, which can be used to determine whether a given square matrix is invertible and to find the inverse. Return to Table of Content
• 60. 60 SOLO Matrices Invertible Matrices. Square Matrices. Matrix Inversion
• Gauss–Jordan elimination is an algorithm that can be used to determine whether a given matrix is invertible and to find the inverse.
• An alternative is the LU decomposition, which generates upper and lower triangular matrices, which are easier to invert.
• For special purposes, it may be convenient to invert matrices by treating mn-by-mn matrices as m-by-m matrices of n-by-n matrices, and applying one or another formula recursively (other sized matrices can be padded out with dummy rows and columns).
• For other purposes, a variant of Newton's method may be convenient (particularly when dealing with families of related matrices, so inverses of earlier matrices can be used to seed generating inverses of later matrices).
• 61. 61 SOLO Matrices Invertible Matrices. Square Matrices. Gaussian elimination, which first appeared in the text Nine Chapters on the Mathematical Art written around 200 BC, was used by Gauss in his work on the orbit of the asteroid Pallas. Using observations of Pallas taken between 1803 and 1809, Gauss obtained a system of six linear equations in six unknowns. Gauss gave a systematic method for solving such equations, which is precisely Gaussian elimination on the coefficient matrix. Sketch of the orbits of Ceres and Pallas, by Gauss. http://www.math.rutgers.edu/~cherlin/History/Papers1999/weiss.html Gauss published his methods in 1809 as "Theoria motus corporum coelestium in sectionibus conicis solem ambientium," or "Theory of the motion of heavenly bodies moving about the sun in conic sections."
• 62. 62 SOLO Matrices Invertible Matrices. Square Matrices. Gauss-Jordan elimination. Carl Friedrich Gauss (1777–1855), Wilhelm Jordan (1842–1899). In Linear Algebra, Gauss–Jordan elimination is an algorithm for getting matrices into reduced row echelon form using elementary row operations. It is a variation of Gaussian elimination. Gaussian elimination places zeros below each pivot in the matrix, starting with the top row and working downwards. Matrices containing zeros below each pivot are said to be in row echelon form. Gauss–Jordan elimination goes a step further by placing zeros above and below each pivot; such matrices are said to be in reduced row echelon form. Every matrix has a reduced row echelon form, and Gauss–Jordan elimination is guaranteed to find it. See example.
• 63. 63 SOLO Matrices Invertible Matrices. Square Matrices. Gauss-Jordan elimination. If the original square matrix A is given by the following expression:
$$A_{3 \times 3} = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}$$
then, after augmenting the A Matrix by the Identity Matrix, the following is obtained:
$$[A\ |\ I] = \begin{bmatrix} 2 & -1 & 0 & | & 1 & 0 & 0 \\ -1 & 2 & -1 & | & 0 & 1 & 0 \\ 0 & -1 & 2 & | & 0 & 0 & 1 \end{bmatrix}$$
Perform the following:
1. row1 + row2 → row1, equivalent to left multiplication by
$$E_{r_1+r_2 \to r_1} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}: \qquad E_{r_1+r_2 \to r_1}\,[A\ |\ I] = \begin{bmatrix} 1 & 1 & -1 & | & 1 & 1 & 0 \\ -1 & 2 & -1 & | & 0 & 1 & 0 \\ 0 & -1 & 2 & | & 0 & 0 & 1 \end{bmatrix}$$
• 64. 64 SOLO Matrices Invertible Matrices. Square Matrices. Gauss-Jordan elimination.
2. row1 + row2 → row2, equivalent to left multiplication by
$$E_{r_1+r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}: \qquad \begin{bmatrix} 1 & 1 & -1 & | & 1 & 1 & 0 \\ 0 & 3 & -2 & | & 1 & 2 & 0 \\ 0 & -1 & 2 & | & 0 & 0 & 1 \end{bmatrix}$$
3. (1/3) row2 → row2, equivalent to left multiplication by
$$E_{\frac{1}{3} r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1/3 & 0 \\ 0 & 0 & 1 \end{bmatrix}: \qquad \begin{bmatrix} 1 & 1 & -1 & | & 1 & 1 & 0 \\ 0 & 1 & -2/3 & | & 1/3 & 2/3 & 0 \\ 0 & -1 & 2 & | & 0 & 0 & 1 \end{bmatrix}$$
• 65. 65 SOLO Matrices Invertible Matrices. Square Matrices. Gauss-Jordan elimination.
4. row2 + row3 → row3, equivalent to left multiplication by
$$E_{r_2+r_3 \to r_3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}: \qquad \begin{bmatrix} 1 & 1 & -1 & | & 1 & 1 & 0 \\ 0 & 1 & -2/3 & | & 1/3 & 2/3 & 0 \\ 0 & 0 & 4/3 & | & 1/3 & 2/3 & 1 \end{bmatrix}$$
5. row1 − row2 → row1, equivalent to left multiplication by
$$E_{r_1-r_2 \to r_1} = \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}: \qquad \begin{bmatrix} 1 & 0 & -1/3 & | & 2/3 & 1/3 & 0 \\ 0 & 1 & -2/3 & | & 1/3 & 2/3 & 0 \\ 0 & 0 & 4/3 & | & 1/3 & 2/3 & 1 \end{bmatrix}$$
• 66. 66 SOLO Matrices Invertible Matrices. Square Matrices. Gauss-Jordan elimination.
6. (3/4) row3 → row3, equivalent to left multiplication by
$$E_{\frac{3}{4} r_3 \to r_3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3/4 \end{bmatrix}: \qquad \begin{bmatrix} 1 & 0 & -1/3 & | & 2/3 & 1/3 & 0 \\ 0 & 1 & -2/3 & | & 1/3 & 2/3 & 0 \\ 0 & 0 & 1 & | & 1/4 & 1/2 & 3/4 \end{bmatrix}$$
7. (1/3) row3 + row1 → row1, equivalent to left multiplication by
$$E_{\frac{1}{3} r_3+r_1 \to r_1} = \begin{bmatrix} 1 & 0 & 1/3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}: \qquad \begin{bmatrix} 1 & 0 & 0 & | & 3/4 & 1/2 & 1/4 \\ 0 & 1 & -2/3 & | & 1/3 & 2/3 & 0 \\ 0 & 0 & 1 & | & 1/4 & 1/2 & 3/4 \end{bmatrix}$$
• 67. 67 SOLO Matrices Invertible Matrices. Square Matrices. Gauss-Jordan elimination.
8. (2/3) row3 + row2 → row2, equivalent to left multiplication by
$$E_{\frac{2}{3} r_3+r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 2/3 \\ 0 & 0 & 1 \end{bmatrix}: \qquad B\,[A\ |\ I] = [I\ |\ B] = \begin{bmatrix} 1 & 0 & 0 & | & 3/4 & 1/2 & 1/4 \\ 0 & 1 & 0 & | & 1/2 & 1 & 1/2 \\ 0 & 0 & 1 & | & 1/4 & 1/2 & 3/4 \end{bmatrix}$$
where B is the product of all the elementary matrices used:
$$B := E_{\frac{2}{3} r_3+r_2 \to r_2}\,E_{\frac{1}{3} r_3+r_1 \to r_1}\,E_{\frac{3}{4} r_3 \to r_3}\,E_{r_1-r_2 \to r_1}\,E_{r_2+r_3 \to r_3}\,E_{\frac{1}{3} r_2 \to r_2}\,E_{r_1+r_2 \to r_2}\,E_{r_1+r_2 \to r_1}$$
We found B [A | I] = [I | B], i.e. B A = I, therefore B = A^{-1}:
$$A^{-1} = \begin{bmatrix} 3/4 & 1/2 & 1/4 \\ 1/2 & 1 & 1/2 \\ 1/4 & 1/2 & 3/4 \end{bmatrix}$$
Therefore Gauss-Jordan elimination: [A | I] → [I | A^{-1}]. Square Matrices Return to Table of Content
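A short numpy check of this worked example:

```python
import numpy as np

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])

A_inv = np.linalg.inv(A)
print(A_inv)                               # [[0.75 0.5 0.25], [0.5 1. 0.5], [0.25 0.5 0.75]]
print(np.allclose(A @ A_inv, np.eye(3)))   # True
```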
• 68. 68 The first to use the term 'matrix' was Sylvester in 1850. Sylvester defined a matrix to be an oblong arrangement of terms and saw it as something which led to various determinants from square arrays contained within it. After leaving America and returning to England in 1851, Sylvester became a lawyer and met Cayley, a fellow lawyer who shared his interest in mathematics. Cayley quickly saw the significance of the matrix concept, and by 1853 Cayley had published a note giving, for the first time, the inverse of a matrix. Arthur Cayley (1821–1895). Cayley in 1858 published “Memoir on the Theory of Matrices”, which is remarkable for containing the first abstract definition of a matrix. He shows that the coefficient arrays studied earlier for quadratic forms and for linear transformations are special cases of his general concept. Cayley gave a matrix algebra defining addition, multiplication, scalar multiplication and inverses. He gave an explicit construction of the inverse of a matrix in terms of the determinant of the matrix. Cayley also proved that, in the case of 2x2 matrices, a matrix satisfies its own characteristic equation. James Joseph Sylvester (1814–1897). http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86 Return to Table of Content
• 69. 69 SOLO Matrices Square Matrices. L, U Factorization of a Square Matrix A by Elementary Operations. L,U factorization was proposed by Heinz Rutishauser in 1955. Given a Square Matrix (Number of Rows = Number of Columns = n)
$$A_{n \times n} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$
consider the following Elementary Operations on the rows of A to obtain an Upper Triangular Matrix U (all elements below the Main Diagonal are 0):
1. Multiply the elements of a row/column by a nonzero scalar: E_{α r_i} A or A E_{α c_j}.
2. Multiply each element of row i by the scalar α and add to the elements of row j: E_{α r_i + r_j → r_j} A.
• 70. 70 SOLO Matrices Square Matrices. L, U Factorization of a Matrix A by Elementary Operations. For example, given
$$A_{3 \times 3} = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}$$
consider the following Elementary Operations on the rows of A to obtain an Upper Triangular Matrix U1 (all elements below the Main Diagonal are 0):
1. (1/2) row1 + row2 → row2, equivalent to left multiplication by
$$E_{\frac{1}{2} r_1+r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 0 \\ 1/2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}: \qquad E_{\frac{1}{2} r_1+r_2 \to r_2}\,A = \begin{bmatrix} 2 & -1 & 0 \\ 0 & 3/2 & -1 \\ 0 & -1 & 2 \end{bmatrix}$$
2. (2/3) row2 + row3 → row3, equivalent to left multiplication by
$$E_{\frac{2}{3} r_2+r_3 \to r_3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 2/3 & 1 \end{bmatrix}: \qquad E_{\frac{2}{3} r_2+r_3 \to r_3}\,E_{\frac{1}{2} r_1+r_2 \to r_2}\,A = \begin{bmatrix} 2 & -1 & 0 \\ 0 & 3/2 & -1 \\ 0 & 0 & 4/3 \end{bmatrix} = U_1$$
• 71. 71 SOLO Matrices Square Matrices. L, U Factorization of a Matrix A by Elementary Operations. We found:
$$E_{\frac{2}{3} r_2+r_3 \to r_3}\,E_{\frac{1}{2} r_1+r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 2/3 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 1/2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1/2 & 1 & 0 \\ 1/3 & 2/3 & 1 \end{bmatrix}, \qquad E_{\frac{2}{3} r_2+r_3 \to r_3}\,E_{\frac{1}{2} r_1+r_2 \to r_2}\,A = U_1$$
To undo the Elementary Operations and obtain A again, let us perform:
1. (−2/3) row2 + row3 → row3, equivalent to left multiplication by E_{−(2/3) r_2+r_3 → r_3}. We can see that
$$E_{-\frac{2}{3} r_2+r_3 \to r_3}\,E_{\frac{2}{3} r_2+r_3 \to r_3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -2/3 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 2/3 & 1 \end{bmatrix} = I_3$$
so E_{−(2/3) r_2+r_3 → r_3} is the Inverse Operation to E_{(2/3) r_2+r_3 → r_3}, and we write (E_{(2/3) r_2+r_3 → r_3})^{-1} = E_{−(2/3) r_2+r_3 → r_3}.
2. (−1/2) row1 + row2 → row2, equivalent to left multiplication by E_{−(1/2) r_1+r_2 → r_2}, which similarly satisfies
$$E_{-\frac{1}{2} r_1+r_2 \to r_2}\,E_{\frac{1}{2} r_1+r_2 \to r_2} = I_3 \qquad\Rightarrow\qquad \left(E_{\frac{1}{2} r_1+r_2 \to r_2}\right)^{-1} = E_{-\frac{1}{2} r_1+r_2 \to r_2}$$
• 72. 72 SOLO Matrices Square Matrices. L, U Factorization of a Matrix A by Elementary Operations. Therefore
$$L := E_{-\frac{1}{2} r_1+r_2 \to r_2}\,E_{-\frac{2}{3} r_2+r_3 \to r_3} = \begin{bmatrix} 1 & 0 & 0 \\ -1/2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -2/3 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ -1/2 & 1 & 0 \\ 0 & -2/3 & 1 \end{bmatrix}$$
and we obtained an L U factorization of the Square Matrix A:
$$A = L\,U_1 = \begin{bmatrix} 1 & 0 & 0 \\ -1/2 & 1 & 0 \\ 0 & -2/3 & 1 \end{bmatrix} \begin{bmatrix} 2 & -1 & 0 \\ 0 & 3/2 & -1 \\ 0 & 0 & 4/3 \end{bmatrix} = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}$$
We can have 1 on the diagonal of the U Matrix by introducing the Diagonal Matrix D = diag(2, 3/2, 4/3):
$$A = L\,D\,U = \begin{bmatrix} 1 & 0 & 0 \\ -1/2 & 1 & 0 \\ 0 & -2/3 & 1 \end{bmatrix} \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3/2 & 0 \\ 0 & 0 & 4/3 \end{bmatrix} \begin{bmatrix} 1 & -1/2 & 0 \\ 0 & 1 & -2/3 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}$$
Return to Table of Content
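scipy computes this factorization directly; scipy.linalg.lu returns P, L, U with partial pivoting, and for this particular matrix no row swaps are triggered, so P = I and L, U match the hand computation:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])

P, L, U = lu(A)                     # A = P L U
print(L)                            # [[1, 0, 0], [-0.5, 1, 0], [0, -2/3, 1]]
print(U)                            # [[2, -1, 0], [0, 1.5, -1], [0, 0, 4/3]]
print(np.allclose(P @ L @ U, A))    # True
```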
• 73. 73 SOLO Matrices Square Matrices. Diagonalization of a Square Matrix A by Elementary Operations. We found:
$$E_{\frac{2}{3} r_2+r_3 \to r_3}\,E_{\frac{1}{2} r_1+r_2 \to r_2}\,A = U_1 = \begin{bmatrix} 2 & -1 & 0 \\ 0 & 3/2 & -1 \\ 0 & 0 & 4/3 \end{bmatrix}$$
1. (2/3) row2 + row1 → row1, equivalent to left multiplication by
$$E_{\frac{2}{3} r_2+r_1 \to r_1} = \begin{bmatrix} 1 & 2/3 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}: \qquad E_{\frac{2}{3} r_2+r_1 \to r_1}\,U_1 = \begin{bmatrix} 2 & 0 & -2/3 \\ 0 & 3/2 & -1 \\ 0 & 0 & 4/3 \end{bmatrix}$$
2. (3/4) row3 + row2 → row2 and (1/2) row3 + row1 → row1, equivalent to left multiplication by
$$E_{\frac{1}{2} r_3+r_1 \to r_1}\,E_{\frac{3}{4} r_3+r_2 \to r_2} = \begin{bmatrix} 1 & 0 & 1/2 \\ 0 & 1 & 3/4 \\ 0 & 0 & 1 \end{bmatrix}: \qquad E_{\frac{1}{2} r_3+r_1 \to r_1}\,E_{\frac{3}{4} r_3+r_2 \to r_2}\,E_{\frac{2}{3} r_2+r_1 \to r_1}\,E_{\frac{2}{3} r_2+r_3 \to r_3}\,E_{\frac{1}{2} r_1+r_2 \to r_2}\,A = D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3/2 & 0 \\ 0 & 0 & 4/3 \end{bmatrix}$$
Return to Table of Content
• 74. 74 SOLO Matrices Determinant of a Square Matrix – det A or |A|. Let A be any square n by n matrix over a field F:
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1k} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2k} & \cdots & a_{2n} \\ \vdots & & & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nk} & \cdots & a_{nn} \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix} = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}$$
To each Matrix A we associate a scalar called the Determinant, det A or |A|, defined by the following 4 properties:
1. The Determinant of the Identity Matrix I_n is 1:
$$\det I_n = \det \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = 1$$
2. If the Matrix A has two identical rows/columns, the Determinant of A is zero:
$$\det \begin{bmatrix} r_1 \\ \vdots \\ \alpha \\ \vdots \\ \alpha \\ \vdots \\ r_n \end{bmatrix} = 0, \qquad \det \begin{bmatrix} c_1 & \cdots & \alpha & \cdots & \alpha & \cdots & c_n \end{bmatrix} = 0$$
• 75. 75 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
3. If each element of a row/column of the Matrix A is the sum of two terms, the Determinant of A is the sum of the two Determinants formed by the separation of the terms:
$$\det \begin{bmatrix} r_1 \\ \vdots \\ r_k + r'_k \\ \vdots \\ r_n \end{bmatrix} = \det \begin{bmatrix} r_1 \\ \vdots \\ r_k \\ \vdots \\ r_n \end{bmatrix} + \det \begin{bmatrix} r_1 \\ \vdots \\ r'_k \\ \vdots \\ r_n \end{bmatrix}$$
$$\det \begin{bmatrix} c_1 & \cdots & c_k + c'_k & \cdots & c_n \end{bmatrix} = \det \begin{bmatrix} c_1 & \cdots & c_k & \cdots & c_n \end{bmatrix} + \det \begin{bmatrix} c_1 & \cdots & c'_k & \cdots & c_n \end{bmatrix}$$
• 76. 76 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
4. If the elements of a row/column of the Matrix A have a common factor λ, then the Determinant of A is equal to the product of λ and the Determinant of the Matrix obtained by dividing the previous row/column by λ:
$$\det \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & & & \vdots \\ \lambda\,a_{k1} & \lambda\,a_{k2} & \cdots & \lambda\,a_{kn} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} = \lambda \det \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & & & \vdots \\ a_{k1} & a_{k2} & \cdots & a_{kn} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$
• 77. 77 SOLO Matrices & Determinants History. http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86 The idea of a determinant appeared in Japan and Europe at almost exactly the same time, although Seki in Japan certainly published first. In 1683 Seki wrote “Method of solving the dissimulated problems”, which contains matrix methods written as tables. Without having any word which corresponds to 'determinant', Seki still introduced determinants and gave general methods for calculating them based on examples. Using his 'determinants' Seki was able to find determinants of 2x2, 3x3, 4x4 and 5x5 matrices and applied them to solving equations, but not systems of linear equations. Takakazu Shinsuke Seki (1642–1708). Rather remarkably, the first appearance of a determinant in Europe came in exactly the same year, 1683. In that year Leibniz wrote to de l'Hôpital. He explained that the system of equations (the two-digit coefficients are Leibniz's index notation, not numbers)
10 + 11x + 12y = 0
20 + 21x + 22y = 0
30 + 31x + 32y = 0
had a solution because
$$10 \cdot 21 \cdot 32 + 11 \cdot 22 \cdot 30 + 12 \cdot 20 \cdot 31 = 10 \cdot 22 \cdot 31 + 11 \cdot 20 \cdot 32 + 12 \cdot 21 \cdot 30$$
which is exactly the condition that the coefficient matrix has determinant 0. Gottfried Wilhelm von Leibniz (1646–1716).
• 78. 78 SOLO Matrices & Determinants History. http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86 Leibniz used the word 'resultant' for certain combinatorial sums of terms of a determinant. He proved various results on resultants, including what is essentially Cramer's rule. He also knew that a determinant could be expanded using any column – what is now called the Laplace expansion. As well as studying coefficient systems of equations which led him to determinants, Leibniz also studied coefficient systems of quadratic forms which led naturally towards matrix theory. Gottfried Wilhelm von Leibniz (1646–1716). Gabriel Cramer (1704–1752). In the 1730's Maclaurin wrote Treatise of Algebra, although it was not published until 1748, two years after his death. It contains the first published results on determinants, proving Cramer's rule for 2x2 and 3x3 systems and indicating how the 4x4 case would work. Cramer gave the general rule for nxn systems in a paper Introduction to the analysis of algebraic curves (1750). It arose out of a desire to find the equation of a plane curve passing through a number of given points. Cramer does go on to explain precisely how one calculates these terms as products of certain coefficients in the equations and how one determines the sign. He also says how the n numerators of the fractions can be found by replacing certain coefficients in this calculation by constant terms of the system. Colin Maclaurin (1698–1746).
• 79. 79 An axiomatic definition of a determinant was used by Weierstrass in his lectures and, after his death, it was published in 1903 in the note ‘On Determinant Theory‘. In the same year Kronecker's lectures on determinants were also published, again after his death. With these two publications the modern theory of determinants was in place, but matrix theory took slightly longer to become a fully accepted theory. Karl Theodor Wilhelm Weierstrass (1815–1897). Leopold Kronecker (1823–1891). Weierstrass Definition of the Determinant of an nxn Matrix A:
(1) det(A) is linear in the rows of A
(2) Interchanging two rows changes the sign of det(A)
(3) det(I_n) = 1
For each positive integer n, there is exactly one function with these three properties. http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86 http://www.sandgquinn.org/stonehill/MA251/notes/Weierstrass.pdf
• 80. 80 SOLO Matrices Determinant of a Square Matrix – det A or |A|. Using the 4 properties that define the Determinant of a Square Matrix, more properties can be derived.
5. If in a Matrix Determinant we interchange two rows/columns, the sign of the Determinant will change.
Proof: Given det[c_1 ⋯ c_i ⋯ c_j ⋯ c_n], by property (2)
$$0 = \det \begin{bmatrix} c_1 & \cdots & c_i + c_j & \cdots & c_i + c_j & \cdots & c_n \end{bmatrix}$$
and by property (3) this expands to
$$0 = \underbrace{\det[\cdots\,c_i\,\cdots\,c_i\,\cdots]}_{0\ \text{by}\ (2)} + \det[\cdots\,c_i\,\cdots\,c_j\,\cdots] + \det[\cdots\,c_j\,\cdots\,c_i\,\cdots] + \underbrace{\det[\cdots\,c_j\,\cdots\,c_j\,\cdots]}_{0\ \text{by}\ (2)}$$
therefore
$$\det \begin{bmatrix} c_1 & \cdots & c_i & \cdots & c_j & \cdots & c_n \end{bmatrix} = -\det \begin{bmatrix} c_1 & \cdots & c_j & \cdots & c_i & \cdots & c_n \end{bmatrix}$$
q.e.d.
• 81. 81 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
6. The Matrix Determinant is unchanged if we add to a row/column any linear combination of the other rows/columns.
Proof: Given det[c_1 ⋯ c_j ⋯ c_n], by properties (3), (4) and (2)
$$\det \begin{bmatrix} c_1 & \cdots & c_j + \sum_{i \ne j} \lambda_i\,c_i & \cdots & c_n \end{bmatrix} = \det \begin{bmatrix} c_1 & \cdots & c_j & \cdots & c_n \end{bmatrix} + \sum_{i \ne j} \lambda_i \underbrace{\det \begin{bmatrix} c_1 & \cdots & c_i & \cdots & c_n \end{bmatrix}}_{0\ (c_i\ \text{appears twice})} = \det \begin{bmatrix} c_1 & \cdots & c_j & \cdots & c_n \end{bmatrix}$$
q.e.d.
• 82. 82 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
7. If a row/column is a Linear Combination of other rows/columns, the Determinant is zero.
Proof: By properties (3), (4) and (2):
$$\det \begin{bmatrix} c_1 & \cdots & \sum_{j \ne i} \lambda_j\,c_j & \cdots & c_n \end{bmatrix} = \sum_{j \ne i} \lambda_j \underbrace{\det \begin{bmatrix} c_1 & \cdots & c_j & \cdots & c_n \end{bmatrix}}_{0\ (\text{column}\ c_j\ \text{appears twice})} = 0$$
q.e.d.
• 83. 83 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
8. Leibniz formula for determinants. Gottfried Wilhelm Leibniz (1646–1716).
$$\det A = \sum_{i_1=1}^{n}\ \sum_{i_2 \ne i_1} \cdots \sum_{i_n \ne i_1, \ldots, i_{n-1}} (-1)^L\,a_{1 i_1}\,a_{2 i_2} \cdots a_{n i_n}, \qquad L = \text{number of permutations of}\ (i_1, i_2, \ldots, i_n)$$
The meaning of this equation is that in each product there are no two elements of the same row or the same column, and the sign of the product is a function of the position of each element in the Matrix. The sign associated with position (i, j) is given by
$$\left\{\mathrm{sign}\left(a_{ij}\right)\right\} = \left\{(-1)^{i+j}\right\} = \begin{bmatrix} + & - & + & \cdots \\ - & + & - & \cdots \\ + & - & + & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$
• 84. 84 SOLO Matrices Determinant of a Square Matrix – det A or |A|. Proof of 8: From Properties (3) and (4) of the Determinant, writing the first row as $r_1 = \sum_{i_1=1}^{n} a_{1 i_1}\,e_{i_1}$, where $e_i := [\,0\ \cdots\ 0\ 1\ 0\ \cdots\ 0\,]$ (1 in the i-th position):
$$\det A = \det \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix} = \sum_{i_1=1}^{n} a_{1 i_1} \det \begin{bmatrix} e_{i_1} \\ r_2 \\ \vdots \\ r_n \end{bmatrix} = \sum_{i_1=1}^{n} \sum_{i_2=1}^{n} a_{1 i_1}\,a_{2 i_2} \det \begin{bmatrix} e_{i_1} \\ e_{i_2} \\ r_3 \\ \vdots \\ r_n \end{bmatrix}$$
From Property (2), if two rows are identical the determinant is zero; therefore, in the summation over i_2 we can delete the case i_2 = i_1.
• 85. 85 SOLO Matrices Determinant of a Square Matrix – det A or |A|. Proof of 8 (continue 1): Continuing this expansion over all the rows, from Properties (2), (3) and (4):
$$\det A = \sum_{i_1=1}^{n}\ \sum_{i_2 \ne i_1} \cdots \sum_{i_n \ne i_1, \ldots, i_{n-1}} a_{1 i_1}\,a_{2 i_2} \cdots a_{n i_n} \det \begin{bmatrix} e_{i_1} \\ e_{i_2} \\ \vdots \\ e_{i_n} \end{bmatrix}$$
• 86. 86 SOLO Matrices Determinant of a Square Matrix – det A or |A|. Proof of 8 (continue 2): Let us interchange the positions of the rows to obtain a Unit Matrix, where, according to Property (5), each interchange causes a change of the determinant sign. We also use Property (1), that the determinant of the Unit Matrix is 1:
$$\det \begin{bmatrix} e_{i_1} \\ e_{i_2} \\ \vdots \\ e_{i_n} \end{bmatrix} = (-1)^L \det \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix} = (-1)^L$$
where L is the number of permutations necessary to go from (i_1, i_2, …, i_n) to (1, 2, …, n). Therefore
$$\det A = \sum_{i_1=1}^{n}\ \sum_{i_2 \ne i_1} \cdots \sum_{i_n \ne i_1, \ldots, i_{n-1}} (-1)^L\,a_{1 i_1}\,a_{2 i_2} \cdots a_{n i_n}$$
q.e.d.
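A direct (O(n!)) transcription of the Leibniz formula in Python; useful only for tiny matrices, but a faithful check of the definition:

```python
import numpy as np
from itertools import permutations

def parity(p):
    """Sign (-1)^L of a permutation, computed by counting inversions."""
    inv = sum(1 for a in range(len(p)) for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    n = A.shape[0]
    return sum(parity(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
print(det_leibniz(A), np.linalg.det(A))   # 4.0  4.0 (up to rounding)
```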
• 87. 87 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
9. A Determinant can be expanded along a row or column using Laplace's Formula. Pierre-Simon, marquis de Laplace (1749–1827).
$$\det A = \sum_{k=1}^{n} a_{ik}\,C_{i,k} = \sum_{k=1}^{n} a_{ik}\,(-1)^{i+k}\,M_{i,k}$$
where C_{i,k} represents the (i,k) element of the matrix of cofactors, i.e. C_{i,k} is (−1)^{i+k} times the minor M_{i,k}, which is the determinant of the matrix that results from A by removing the i-th row and the k-th column, and n is the dimension of the matrix:
$$M_{i,k} = \det \begin{bmatrix} a_{11} & \cdots & a_{1(k-1)} & a_{1(k+1)} & \cdots & a_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{(i-1)1} & \cdots & a_{(i-1)(k-1)} & a_{(i-1)(k+1)} & \cdots & a_{(i-1)n} \\ a_{(i+1)1} & \cdots & a_{(i+1)(k-1)} & a_{(i+1)(k+1)} & \cdots & a_{(i+1)n} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{n1} & \cdots & a_{n(k-1)} & a_{n(k+1)} & \cdots & a_{nn} \end{bmatrix}$$
• 88. 88 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
9. Laplace's Formula (proof): From Properties (3) and (4) of the Determinant, expanding row i into unit rows:
$$\det A = \sum_{k=1}^{n} a_{ik} \det \begin{bmatrix} a_{11} & \cdots & a_{1(k-1)} & a_{1k} & a_{1(k+1)} & \cdots & a_{1n} \\ \vdots & & & \vdots & & & \vdots \\ 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\ \vdots & & & \vdots & & & \vdots \\ a_{n1} & \cdots & a_{n(k-1)} & a_{nk} & a_{n(k+1)} & \cdots & a_{nn} \end{bmatrix} \leftarrow i$$
From Properties (3), (5) and (6): subtracting multiples of the unit row i from the other rows nullifies the rest of column k without changing the determinant; then moving row i to the first position and column k to the first position by successive interchanges, each of which changes the sign by Property (5), leaves
$$(-1)^{(i-1)+(k-1)} \det \begin{bmatrix} 1 & 0 \\ 0 & M \end{bmatrix} = (-1)^{i+k}\,M_{i,k}$$
where M is the matrix that results from A by removing the i-th row and the k-th column.
• 89. 89 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
9. Laplace's Formula (proof, continue 1): Therefore, defining the cofactor
$$C_{i,k} := (-1)^{i+k}\,M_{i,k}$$
we obtain the row expansion
$$\det A = \sum_{k=1}^{n} a_{ik}\,C_{i,k} = \sum_{k=1}^{n} a_{ik}\,(-1)^{i+k}\,M_{i,k}$$
In the same way we can use column summation to obtain
$$\det A = \sum_{i=1}^{n} a_{ik}\,C_{i,k} = \sum_{i=1}^{n} a_{ik}\,(-1)^{i+k}\,M_{i,k}$$
q.e.d.
• 90. 90 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
10. A^{-1}, the Inverse of a Matrix A with det A ≠ 0, is unique and given by:
$$A^{-1} = \frac{\mathrm{adj}\,A}{\det A}, \qquad \mathrm{adj}\,A := \begin{bmatrix} C_{1,1} & C_{2,1} & \cdots & C_{n,1} \\ C_{1,2} & C_{2,2} & \cdots & C_{n,2} \\ \vdots & & & \vdots \\ C_{1,n} & C_{2,n} & \cdots & C_{n,n} \end{bmatrix}$$
adj A is the adjugate of A (the transpose of the matrix of cofactors). A^{-1} exists if and only if det A ≠ 0, i.e., the n rows/columns of A_{nxn} are Linearly Independent.
Proof: Since
$$\sum_{j=1}^{n} a_{kj}\,C_{i,j} = \delta_{ki}\,\det A = \begin{cases} \det A & k = i \\ 0 & k \ne i \end{cases}$$
(for k ≠ i the sum is the Laplace expansion of a determinant with two identical rows), we obtain
$$A \cdot \mathrm{adj}\,A = \begin{bmatrix} \det A & 0 & \cdots & 0 \\ 0 & \det A & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \det A \end{bmatrix} = \left(\det A\right) I_n$$
Therefore, multiplying by A^{-1} and dividing by det A, we obtain A^{-1} = adj A / det A. q.e.d. Return to Characteristic Polynomial. Return to Cayley-Hamilton.
• 91. 91 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
10. (continue – 1) Uniqueness: Assume that there exists a second Matrix B such that B A = I_n. Then, right-multiplying by A^{-1}:
$$B\,A = I_n \;\Rightarrow\; B\,\underbrace{A\,A^{-1}}_{I_n} = I_n\,A^{-1} \;\Rightarrow\; B = A^{-1}$$
q.e.d.
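A small Python sketch of the adjugate construction, with the cofactors computed as determinants of the deleted-row/column submatrices:

```python
import numpy as np

def adjugate(A):
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for k in range(n):
            minor = np.delete(np.delete(A, i, axis=0), k, axis=1)
            C[i, k] = (-1) ** (i + k) * np.linalg.det(minor)   # cofactor C_{i,k}
    return C.T                                # adjugate = transpose of cofactor matrix

A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3)))     # True
print(np.allclose(adjugate(A) / np.linalg.det(A), np.linalg.inv(A)))  # True
```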
• 92. 92 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
11. Cramer’s Rule. Gabriel Cramer (1704–1752). Cramer's rule is a theorem which gives an expression for the solution of a system of linear equations with as many equations as unknowns, valid in those cases where there is a unique solution. The solution is expressed in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the vector of right-hand sides of the equations. Given n linear equations with n variables x1, x2,…,xn:
$$\begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1k} x_k + \cdots + a_{1n} x_n &= b_1 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2k} x_k + \cdots + a_{2n} x_n &= b_2 \\ &\;\;\vdots \\ a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nk} x_k + \cdots + a_{nn} x_n &= b_n \end{aligned}$$
Cramer’s Rule states that the solution of this system is
$$x_k = \det \begin{bmatrix} a_{11} & \cdots & b_1 & \cdots & a_{1n} \\ a_{21} & \cdots & b_2 & \cdots & a_{2n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & b_n & \cdots & a_{nn} \end{bmatrix} \Bigg/ \det \begin{bmatrix} a_{11} & \cdots & a_{1k} & \cdots & a_{1n} \\ a_{21} & \cdots & a_{2k} & \cdots & a_{2n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{nk} & \cdots & a_{nn} \end{bmatrix}, \qquad k = 1, 2, \ldots, n$$
(b replacing column k in the numerator), if the determinant that we divide by is not equal to zero.
• 93. 93 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
11. Cramer’s Rule. Proof of Cramer's Rule: To prove Cramer’s Rule we use just two properties of Determinants:
1. adding one column to another does not change the value of the determinant
2. multiplying every element of one column by a factor will multiply the value of the determinant by the same factor
In the following determinant let us replace b1, b2,…,bn by their expressions from the equations:
$$\det \begin{bmatrix} a_{11} & \cdots & b_1 & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & b_n & \cdots & a_{nn} \end{bmatrix} = \det \begin{bmatrix} a_{11} & \cdots & a_{11} x_1 + \cdots + a_{1k} x_k + \cdots + a_{1n} x_n & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{n1} x_1 + \cdots + a_{nk} x_k + \cdots + a_{nn} x_n & \cdots & a_{nn} \end{bmatrix}$$
By subtracting from the k-th column the first column multiplied by x1, the second column multiplied by x2, and so on, until the last column multiplied by xn (the value of the determinant will not change, by Rule 1 above), it is found to be equal to
$$\det \begin{bmatrix} a_{11} & \cdots & a_{1k}\,x_k & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{nk}\,x_k & \cdots & a_{nn} \end{bmatrix} \overset{\text{Rule 2}}{=} x_k \det \begin{bmatrix} a_{11} & \cdots & a_{1k} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{nk} & \cdots & a_{nn} \end{bmatrix}$$
q.e.d.
• 94. SOLO Matrices Determinant of a Square Matrix – det A or |A|.
11. Cramer’s Rule. Proof of Cramer's Rule (continue – 1): Therefore Cramer’s Rule can be rewritten as
$$x_k = \frac{1}{\det A} \sum_{j=1}^{n} C_{j,k}\,b_j, \qquad k = 1, 2, \ldots, n$$
This result can be derived directly by using A x = b, with
$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$$
Multiplying from the left by A^{-1}:
$$x = A^{-1}\,b = \frac{\mathrm{adj}\,A}{\det A}\,b = \frac{1}{\det A} \begin{bmatrix} C_{1,1} & C_{2,1} & \cdots & C_{n,1} \\ C_{1,2} & C_{2,2} & \cdots & C_{n,2} \\ \vdots & & & \vdots \\ C_{1,n} & C_{2,n} & \cdots & C_{n,n} \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$$
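Cramer's rule transcribed in a few lines of numpy; fine for small systems, though for large ones np.linalg.solve is the practical choice:

```python
import numpy as np

def cramer_solve(A, b):
    detA = np.linalg.det(A)
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b                 # replace column k by the right-hand side
        x[k] = np.linalg.det(Ak) / detA
    return x

A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
b = np.array([1., 0., 1.])
print(cramer_solve(A, b))            # [1. 1. 1.]
print(np.linalg.solve(A, b))         # same result
```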
• 95. 95 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
12. The Determinant of a Triangular Matrix is given by the product of the elements on the Main Diagonal.
Proof: Use Laplace’s Formula repeatedly along the first column:
$$\det \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix} = a_{11} \det \begin{bmatrix} a_{22} & \cdots & a_{2n} \\ \vdots & \ddots & \vdots \\ 0 & \cdots & a_{nn} \end{bmatrix} = a_{11}\,a_{22} \det \begin{bmatrix} a_{33} & \cdots & a_{3n} \\ \vdots & \ddots & \vdots \\ 0 & \cdots & a_{nn} \end{bmatrix} = \cdots = a_{11}\,a_{22} \cdots a_{nn}$$
q.e.d.
• 96. 96 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
13. The Determinant of a Matrix Multiplication is equal to the product of the Determinants: det(A B) = det A · det B.
Proof: Start with the multiplication of a Diagonal Matrix D and any Matrix B (with rows r_{1B},…,r_{nB}):
$$D\,B = \begin{bmatrix} d_{11} & 0 & \cdots & 0 \\ 0 & d_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & d_{nn} \end{bmatrix} \begin{bmatrix} r_{1B} \\ r_{2B} \\ \vdots \\ r_{nB} \end{bmatrix} = \begin{bmatrix} d_{11}\,r_{1B} \\ d_{22}\,r_{2B} \\ \vdots \\ d_{nn}\,r_{nB} \end{bmatrix}$$
In computing the Determinant use Property (4):
$$\det(D\,B) = \det \begin{bmatrix} d_{11}\,r_{1B} \\ d_{22}\,r_{2B} \\ \vdots \\ d_{nn}\,r_{nB} \end{bmatrix} \overset{(4)}{=} d_{11}\,d_{22} \cdots d_{nn} \det \begin{bmatrix} r_{1B} \\ r_{2B} \\ \vdots \\ r_{nB} \end{bmatrix} = \det D \cdot \det B$$
• 97. 97 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
13. (continue – 1) We have shown that by invertible Elementary Operations a Matrix A can be transformed to a Diagonal Matrix D (see Diagonalization of A). Each operation adds to a given row one other row multiplied by a scalar (r_j + α r_i → r_j). According to Property (6) the value of the Determinant is unchanged by those operations:
$$E\,A = D \qquad\Rightarrow\qquad \det D = \det(E\,A) = \det A$$
Therefore, doing the same Elementary Operations on the (A B) Matrix:
$$\det(A\,B) = \det\left(E\,(A\,B)\right) = \det\left((E\,A)\,B\right) = \det(D\,B) = \det D \cdot \det B = \det A \cdot \det B$$
q.e.d.
• 98. 98 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
14. Block Matrices Determinants:
$$\det \begin{bmatrix} A_{n \times n} & 0_{n \times m} \\ C_{m \times n} & B_{m \times m} \end{bmatrix} = \det A_{n \times n} \cdot \det B_{m \times m}$$
Proof: Factor the block matrix as
$$\begin{bmatrix} A & 0 \\ C & B \end{bmatrix} = \begin{bmatrix} A & 0 \\ 0 & I_m \end{bmatrix} \begin{bmatrix} I_n & 0 \\ 0 & B \end{bmatrix} \begin{bmatrix} I_n & 0 \\ B^{-1} C & I_m \end{bmatrix}$$
By Laplace expansion,
$$\det \begin{bmatrix} A & 0 \\ 0 & I_m \end{bmatrix} = \det A, \qquad \det \begin{bmatrix} I_n & 0 \\ 0 & B \end{bmatrix} = \det B$$
and, being a Triangular Matrix with a unit diagonal,
$$\det \begin{bmatrix} I_n & 0 \\ B^{-1} C & I_m \end{bmatrix} = 1$$
Therefore, using (13), det [[A, 0],[C, B]] = det A · det B. q.e.d.
• 99. 99 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
15. Block Matrices Determinants:
$$\det \begin{bmatrix} A_{n \times n} & D_{n \times m} \\ C_{m \times n} & B_{m \times m} \end{bmatrix} = \begin{cases} \det A \cdot \det\left(B - C\,A^{-1}\,D\right) & \text{if}\ A^{-1}\ \text{exists} \\ \det B \cdot \det\left(A - D\,B^{-1}\,C\right) & \text{if}\ B^{-1}\ \text{exists} \end{cases}$$
Proof:
$$\begin{bmatrix} A & D \\ C & B \end{bmatrix} = \begin{cases} \begin{bmatrix} A & 0 \\ C & I_m \end{bmatrix} \begin{bmatrix} I_n & A^{-1} D \\ 0 & B - C\,A^{-1}\,D \end{bmatrix} & \text{if}\ A^{-1}\ \text{exists} \\[2ex] \begin{bmatrix} I_n & D \\ 0 & B \end{bmatrix} \begin{bmatrix} A - D\,B^{-1}\,C & 0 \\ B^{-1} C & I_m \end{bmatrix} & \text{if}\ B^{-1}\ \text{exists} \end{cases}$$
and taking determinants, using (13) and (14), gives the result. q.e.d.
• 100. 100 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
16. Sylvester's Determinant Theorem. James Joseph Sylvester (1814–1897):
$$\det\left(I_n + A_{n \times m}\,B_{m \times n}\right) = \det\left(I_m + B_{m \times n}\,A_{n \times m}\right)$$
Proof: Apply Property (15) twice to the same block matrix:
$$\det \begin{bmatrix} I_n & A \\ -B & I_m \end{bmatrix} \overset{(15)}{=} \det I_n \cdot \det\left(I_m + B\,A\right) = \det I_m \cdot \det\left(I_n + A\,B\right)$$
q.e.d.
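A quick numerical check of the theorem, with random rectangular factors:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))   # n = 4, m = 2
B = rng.standard_normal((2, 4))

lhs = np.linalg.det(np.eye(4) + A @ B)
rhs = np.linalg.det(np.eye(2) + B @ A)
print(np.isclose(lhs, rhs))       # True
```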
• 101. 101 SOLO Matrices Determinant of a Square Matrix – det A or |A|.
17. Cauchy–Binet Formula. Augustin-Louis Cauchy (1789–1857), Jacques Philippe Marie Binet (1786–1856). Let A be an m×n matrix and B an n×m matrix (m ≤ n). Write [n] for the set {1, …, n}, and $\binom{[n]}{m}$ for the set of m-combinations of [n] (i.e., subsets of size m; there are $\binom{n}{m}$ of them). For $S \in \binom{[n]}{m}$, write A_{[m],S} for the m×m matrix whose columns are the columns of A at indices from S, and B_{S,[m]} for the m×m matrix whose rows are the rows of B at indices from S. The Cauchy–Binet formula then states:
$$\det(A\,B) = \sum_{S \in \binom{[n]}{m}} \det\left(A_{[m],S}\right) \det\left(B_{S,[m]}\right)$$
If m = n, then $\binom{n}{n} = 1$ and we recover det(A B) = det A · det B.
102
It was Cauchy in 1812 who used 'determinant' in its modern sense. Cauchy's work is the most complete of the early works on determinants. He reproved the earlier results and gave new results of his own on minors and adjoints. In the 1812 paper the multiplication theorem for determinants was proved for the first time, although at the same meeting of the Institut de France, Binet also read a paper which contained a proof of the multiplication theorem, less satisfactory than the one given by Cauchy.
http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html#86
103
SOLO Matrices
Determinant of a Square Matrix – det A or |A|

17 Cauchy – Binet Formula
Example:
\[ A = \begin{pmatrix} -1 & 2 & 1\\ 0 & 1 & 2 \end{pmatrix}, \qquad B = \begin{pmatrix} 2 & 1\\ 1 & 1\\ 3 & 0 \end{pmatrix} \]
Here
\[ m = 2,\; n = 3, \qquad \binom{n}{m} = \binom{3}{2} = \frac{3!}{2!\,1!} = 3 \]
so the sum runs over the subsets S = {1,2}, {1,3}, {2,3}. Using the Cauchy – Binet formula we obtain:
\[ \det(A\,B) = \det\begin{pmatrix} -1 & 2\\ 0 & 1 \end{pmatrix} \det\begin{pmatrix} 2 & 1\\ 1 & 1 \end{pmatrix} + \det\begin{pmatrix} -1 & 1\\ 0 & 2 \end{pmatrix} \det\begin{pmatrix} 2 & 1\\ 3 & 0 \end{pmatrix} + \det\begin{pmatrix} 2 & 1\\ 1 & 2 \end{pmatrix} \det\begin{pmatrix} 1 & 1\\ 3 & 0 \end{pmatrix} \]
\[ = (-1)(1) + (-2)(-3) + (3)(-3) = -4 \]
By multiplying the matrices A and B and computing det (AB) directly, we obtain:
\[ A\,B = \begin{pmatrix} -1 & 2 & 1\\ 0 & 1 & 2 \end{pmatrix} \begin{pmatrix} 2 & 1\\ 1 & 1\\ 3 & 0 \end{pmatrix} = \begin{pmatrix} 3 & 1\\ 7 & 1 \end{pmatrix}, \qquad \det(A\,B) = 3 - 7 = -4 \]
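The expansion above is mechanical enough to automate. The sketch below is an added illustration (the helper name cauchy_binet_det is hypothetical, not from the slides); it implements the Cauchy – Binet sum with itertools.combinations and confirms the value -4 for the example's matrices:

import numpy as np
from itertools import combinations

def cauchy_binet_det(A, B):
    """det(A B) via the Cauchy-Binet sum; A is m x n, B is n x m, with m <= n."""
    m, n = A.shape
    total = 0.0
    for S in combinations(range(n), m):   # all m-element subsets of {0, ..., n-1}
        cols = list(S)
        total += np.linalg.det(A[:, cols]) * np.linalg.det(B[cols, :])
    return total

A = np.array([[-1.0, 2.0, 1.0],
              [ 0.0, 1.0, 2.0]])
B = np.array([[2.0, 1.0],
              [1.0, 1.0],
              [3.0, 0.0]])

print(cauchy_binet_det(A, B))   # -4.0, matching the expansion above
print(np.linalg.det(A @ B))     # -4.0, direct computation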
104
SOLO Matrices
Determinant of a Square Matrix – det A or |A|

18 \det\left(A^{-1}\right) = \left(\det A\right)^{-1}
Proof: use A\,A^{-1} = I_n and Property (13):
\[ 1 = \det I_n = \det\left(A\,A^{-1}\right) = \det A \cdot \det\left(A^{-1}\right) \;\Rightarrow\; \det\left(A^{-1}\right) = 1/\det A \]
q.e.d.

19 \det\left(A^T\right) = \det A
Proof: factor A = L_1\,D\,U_1, where L_1 and U_1 are lower and upper triangular matrices with 1 on the main diagonal, and D is diagonal:
\[ A = L_1\,D\,U_1 \;\Rightarrow\; \det A = \det L_1 \cdot \det D \cdot \det U_1 = \det D = d_{11}\,d_{22}\cdots d_{nn} \]
\[ A^T = U_1^T\,D^T\,L_1^T \;\Rightarrow\; \det A^T = \det U_1^T \cdot \det D^T \cdot \det L_1^T = d_{11}\,d_{22}\cdots d_{nn} = \det A \]
q.e.d.
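Both properties are easy to confirm numerically; a minimal added sketch (an arbitrary random matrix, assumed nonsingular, which holds for a generic random draw):

import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))

# Property (18): det(A^{-1}) = 1 / det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))
# Property (19): det(A^T) = det(A)
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))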
105
SOLO Matrices
Determinant of a Square Matrix – det A or |A|

20 Determinant of the Vandermonde Matrix
The Vandermonde Matrix is an n×n matrix that has in its j-th row the entries x_1^{j-1}, x_2^{j-1}, ..., x_n^{j-1}:
\[ \det V_{n\times n}(x_1, x_2, \dots, x_n) = \det \begin{pmatrix} 1 & 1 & \cdots & 1\\ x_1 & x_2 & \cdots & x_n\\ x_1^2 & x_2^2 & \cdots & x_n^2\\ \vdots & & & \vdots\\ x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1} \end{pmatrix} = \prod_{1 \le i < j \le n} (x_j - x_i) \]

Proof: Using elementary operations, multiply row (j−1) by −x_1 and add it to row j, starting with j = n, then (n−1), down to j = 2:
\[ E_{r_n - x_1 r_{n-1} \to r_n} \cdots E_{r_2 - x_1 r_1 \to r_2}\,V_{n\times n} \]
where E_{r_j - x_1 r_{j-1} \to r_j} is the identity matrix with the entry −x_1 inserted at position (j, j−1):
\[ E_{r_j - x_1 r_{j-1} \to r_j} = \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & 1 & & \\ & & -x_1 & 1 & \\ & & & & \ddots \end{pmatrix} \]
We have:
\[ \det V_{2\times 2}(x_{n-1}, x_n) = \det \begin{pmatrix} 1 & 1\\ x_{n-1} & x_n \end{pmatrix} = x_n - x_{n-1} \]
106
SOLO Matrices
Determinant of the Vandermonde Matrix
Determinant of a Square Matrix – det A or |A|

Proof (continue – 1): After these operations the first column becomes (1, 0, ..., 0)^T:
\[ E_{r_n - x_1 r_{n-1} \to r_n} \cdots E_{r_2 - x_1 r_1 \to r_2} \begin{pmatrix} 1 & 1 & \cdots & 1\\ x_1 & x_2 & \cdots & x_n\\ \vdots & & & \vdots\\ x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1} \end{pmatrix} = \begin{pmatrix} 1 & 1 & \cdots & 1\\ 0 & x_2 - x_1 & \cdots & x_n - x_1\\ 0 & x_2(x_2 - x_1) & \cdots & x_n(x_n - x_1)\\ \vdots & & & \vdots\\ 0 & x_2^{n-2}(x_2 - x_1) & \cdots & x_n^{n-2}(x_n - x_1) \end{pmatrix} \]
Using fact (13), that the determinant of a product of matrices is the product of their determinants, and \det E_{r_j - x_1 r_{j-1} \to r_j} = 1 (a triangular matrix with unit diagonal), we get
\[ \det V_{n\times n}(x_1, \dots, x_n) = \det \begin{pmatrix} 1 & 1 & \cdots & 1\\ 0 & x_2 - x_1 & \cdots & x_n - x_1\\ \vdots & & & \vdots\\ 0 & x_2^{n-2}(x_2 - x_1) & \cdots & x_n^{n-2}(x_n - x_1) \end{pmatrix} \]
107
SOLO Matrices
Determinant of the Vandermonde Matrix
Determinant of a Square Matrix – det A or |A|

Proof (continue – 2): Expand along the first column (Laplace), then use Property (4), which states that if the elements of a row/column of the matrix A have a common factor λ then the determinant of A is equal to the product of λ and the determinant of the matrix obtained by dividing that row/column by λ, to pull the factor (x_i − x_1) out of each column:
\[ \det V_{n\times n}(x_1, \dots, x_n) = \det \begin{pmatrix} x_2 - x_1 & \cdots & x_n - x_1\\ x_2(x_2 - x_1) & \cdots & x_n(x_n - x_1)\\ \vdots & & \vdots\\ x_2^{n-2}(x_2 - x_1) & \cdots & x_n^{n-2}(x_n - x_1) \end{pmatrix} = (x_2 - x_1)(x_3 - x_1)\cdots(x_n - x_1) \det \begin{pmatrix} 1 & \cdots & 1\\ x_2 & \cdots & x_n\\ \vdots & & \vdots\\ x_2^{n-2} & \cdots & x_n^{n-2} \end{pmatrix} \]
\[ = (x_2 - x_1)(x_3 - x_1)\cdots(x_n - x_1) \cdot \det V_{(n-1)\times(n-1)}(x_2, \dots, x_n) \]
We obtained a recursive relation between the n×n Vandermonde matrix V(x_1, x_2, ..., x_n) and the (n−1)×(n−1) matrix V(x_2, ..., x_n). Continuing the procedure, and because \det V_{2\times 2}(x_{n-1}, x_n) = x_n - x_{n-1}, we obtain
\[ \det V_{n\times n}(x_1, x_2, \dots, x_n) = \prod_{1 \le i < j \le n} (x_j - x_i) \]
q.e.d.
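The product formula can be checked against a direct determinant computation. The sketch below is an added illustration (the helper name vandermonde_det is hypothetical); np.vander builds the matrix, transposed so that row j holds the (j−1)-th powers as in the definition above:

import numpy as np
from itertools import combinations

def vandermonde_det(x):
    """Vandermonde determinant via the product formula: prod over i < j of (x_j - x_i)."""
    prod = 1.0
    for i, j in combinations(range(len(x)), 2):   # all index pairs with i < j
        prod *= x[j] - x[i]
    return prod

x = np.array([1.0, 2.0, 4.0, 7.0])
V = np.vander(x, increasing=True).T   # row j contains x_1^{j-1}, ..., x_n^{j-1}
print(np.linalg.det(V))               # direct computation (approximately 540)
print(vandermonde_det(x))             # product formula: 540.0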
108
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices A_{n×n}

The relation y_{n×1} = A_{n×n}\,x_{n×1} represents a Linear Transformation of the vector x_{n×1} to y_{n×1}.
For the Square Matrix A_{n×n} a nonzero vector v_{n×1} is an Eigenvector if there is a scalar λ (called the Eigenvalue) such that:
\[ A_{n\times n}\,v_{n\times 1} = \lambda\,v_{n\times 1} \]
To find the Eigenvalues and Eigenvectors we see that
\[ \left(A_{n\times n} - \lambda\,I_n\right) v_{n\times 1} = 0 \]
This equation has a solution v_{n×1} ≠ 0 iff the matrix (A_{n×n} − λ I_n) is singular, i.e.
\[ \det\left(A_{n\times n} - \lambda\,I_n\right) = 0 \]
This equation may be used to find the Eigenvalues λ.
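In practice the eigenpairs are computed with a library routine. The added sketch below (a small symmetric matrix chosen as an assumption for the illustration) verifies A v = λ v for each pair returned by NumPy:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are the eigenvectors

for k in range(len(eigvals)):
    lam, v = eigvals[k], eigvecs[:, k]
    assert np.allclose(A @ v, lam * v)   # A v = lambda v
print(eigvals)   # eigenvalues 3 and 1 (order may vary)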
109
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices A_{n×n}

The equation that may be used to find the Eigenvalues λ can be written, using Leibniz' rule, as:
\[ \det \begin{pmatrix} a_{11} - \lambda & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} - \lambda & \cdots & a_{2n}\\ \vdots & & & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn} - \lambda \end{pmatrix} = (-1)^n \left(\lambda^n + c_1\,\lambda^{n-1} + \cdots + c_n\right) = 0 \]
The polynomial
\[ p(\lambda) := \lambda^n + c_1\,\lambda^{n-1} + \cdots + c_n = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_n) \]
is called the Characteristic Polynomial of the Square Matrix A_{n×n}; it has degree n and therefore n Eigenvalues λ_1, λ_2, ..., λ_n. However, the characteristic equation need not have distinct solutions: there may be fewer than n distinct eigenvalues. If the matrix has real entries, the coefficients of the characteristic polynomial are all real, but the roots are not necessarily real; they may include complex numbers with a nonzero imaginary component. There is always at least one complex number λ solving the characteristic equation, even if the entries of the matrix A are complex to begin with (the existence of such a solution is the Fundamental Theorem of Algebra). For a complex eigenvalue, the corresponding eigenvectors also have complex components.
By Abel's Theorem (1824) there are no algebraic formulae for the roots of a general polynomial of degree n > 4, therefore we need an iterative algorithm to find the roots.
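An added sketch showing the characteristic-polynomial coefficients and the fact that its roots are the eigenvalues; np.poly and np.roots are standard NumPy helpers, and the 2×2 matrix is an assumed example:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(A)          # coefficients of p(lambda) = lambda^2 + c1*lambda + c2
print(coeffs)                # [ 1. -4.  3.]  ->  p(lambda) = lambda^2 - 4*lambda + 3
print(np.roots(coeffs))      # roots 3 and 1 ...
print(np.linalg.eigvals(A))  # ... which match the eigenvalues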
110
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices A_{n×n}

Theorem: The n Eigenvectors of a Square Matrix A_{n×n} that has distinct Eigenvalues are Linearly Independent.
Proof: Assume, to the contrary, that k (2 ≤ k ≤ n) of the Eigenvectors are Linearly Dependent. Then there exist k nonzero constants α_i (i = 1, ..., k) such that
\[ \alpha_1 v_1 + \cdots + \alpha_i v_i + \cdots + \alpha_k v_k = 0, \qquad v_i \ne 0\ \forall i \]
where A_{n×n} v_i = λ_i v_i, v_i ≠ 0, i = 1, ..., k. We have:
\[ \left(A_{n\times n} - \lambda_1 I_n\right) v_1 = A\,v_1 - \lambda_1 v_1 = 0 \]
\[ \left(A_{n\times n} - \lambda_1 I_n\right) v_i = A\,v_i - \lambda_1 v_i = (\lambda_i - \lambda_1)\,v_i \ne 0 \quad \text{if } i \ne 1 \]
Applying (A_{n×n} − λ_1 I_n) to the dependence relation:
\[ \left(A_{n\times n} - \lambda_1 I_n\right)\left(\alpha_1 v_1 + \cdots + \alpha_k v_k\right) = \alpha_2(\lambda_2 - \lambda_1)v_2 + \cdots + \alpha_k(\lambda_k - \lambda_1)v_k = 0 \]
In the same way, multiplying the result by (A_{n×n} − λ_2 I_n) we obtain:
\[ \alpha_3(\lambda_3 - \lambda_2)(\lambda_3 - \lambda_1)v_3 + \cdots + \alpha_k(\lambda_k - \lambda_2)(\lambda_k - \lambda_1)v_k = 0 \]
Continuing the procedure until, at the end, we multiply by (A_{n×n} − λ_{k−1} I_n) to obtain:
\[ \alpha_k \underbrace{\prod_{i=1}^{k-1}(\lambda_k - \lambda_i)}_{\ne 0} \underbrace{v_k}_{\ne 0} = 0 \;\Rightarrow\; \alpha_k = 0 \]
This contradicts the assumption that α_k ≠ 0, therefore the k Eigenvectors are Linearly Independent.
q.e.d.
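Numerically, the theorem can be observed by checking the rank of the matrix whose columns are the eigenvectors; the added sketch below uses an assumed 3×3 example with distinct eigenvalues:

import numpy as np

A = np.array([[1.0,  2.0, 0.0],
              [0.0,  3.0, 0.0],
              [2.0, -4.0, 2.0]])   # eigenvalues 1, 3, 2: all distinct

eigvals, P = np.linalg.eig(A)      # columns of P are the eigenvectors
print(eigvals)
assert np.linalg.matrix_rank(P) == A.shape[0]   # full rank: eigenvectors linearly independent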
111
SOLO Matrices
Eigenvalues and Eigenvectors of Square Matrices A_{n×n}

Theorem: If the n Eigenvectors v_1, v_2, ..., v_n of a Square Matrix A_{n×n}, corresponding to the n Eigenvalues (not necessarily distinct), are Linearly Independent, then we can write
\[ P^{-1} A\,P = \Lambda = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n \end{pmatrix} \]
and we say that the Square Matrix A_{n×n} is Diagonalizable.
Proof: Using the n Eigenvectors of the Square Matrix A_{n×n} we can write
\[ A \begin{pmatrix} v_1 & v_2 & \cdots & v_n \end{pmatrix} = \begin{pmatrix} \lambda_1 v_1 & \lambda_2 v_2 & \cdots & \lambda_n v_n \end{pmatrix} = \underbrace{\begin{pmatrix} v_1 & v_2 & \cdots & v_n \end{pmatrix}}_{P} \begin{pmatrix} \lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n \end{pmatrix} \]
or A P = P Λ. Since the n Eigenvectors of A_{n×n} are Linearly Independent, P is nonsingular and we have
\[ P^{-1} A\,P = \Lambda \]
q.e.d.
Two Square Matrices A and B that are related by A = S^{-1} B S are called Similar Matrices.
Return to Matrix Decomposition
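The theorem translates directly into code: with P the matrix of eigenvectors, P^{-1} A P should come out diagonal. A minimal added sketch (the same assumed example matrix as above):

import numpy as np

A = np.array([[1.0,  2.0, 0.0],
              [0.0,  3.0, 0.0],
              [2.0, -4.0, 2.0]])

eigvals, P = np.linalg.eig(A)        # P = [v1 v2 v3]
Lambda = np.linalg.inv(P) @ A @ P    # P^{-1} A P

assert np.allclose(Lambda, np.diag(eigvals))   # diagonal, with the eigenvalues on the diagonal
print(np.diag(Lambda))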
