Monika Agarwal et al Int. Journal of Engineering Research and Applications www.ijera.com
ISSN : 2248-9622, Vol. 4, Issue 1( Version 4), January 2014, pp.90-93
www.ijera.com, pp. 90-93
Review of Matrix Decomposition Techniques for Signal
Processing Applications
Monika Agarwal*, Rajesh Mehra**
*(Department of Electronics and Communication, NITTTR, Chandigarh)
**(Department of Electronics and Communication, NITTTR, Chandigarh)
ABSTRACT
Matrix decomposition is a vital part of many scientific and engineering applications. It is a technique that breaks a numeric matrix into a product of simpler matrices and is the basis for efficiently solving systems of equations, which in turn is the basis for inverting a matrix. Matrix inversion is a part of many important algorithms. Matrix factorizations have wide applications in numerical linear algebra, in solving linear systems, computing inertia, and rank estimation. This paper presents a review of the matrix decomposition techniques used in signal processing applications on the basis of their computational complexity, advantages, and disadvantages. Decomposition techniques such as LU decomposition, QR decomposition, and Cholesky decomposition are discussed here.
Keywords – Cholesky decomposition, LU decomposition, Matrix factorization, QR decomposition.
I. INTRODUCTION
Signal processing is an area of systems engineering, electrical engineering, and applied mathematics that deals with operations on, or analysis of, analog as well as digitized signals representing time-varying or spatial physical quantities. Various mathematical models are applied in signal processing, such as linear time-invariant systems, system identification, optimization, estimation theory, and numerical methods [1]. In computational applications, matrix decomposition is a basic operation. "Matrix decomposition refers to the transformation of a given matrix into a given canonical form." Matrix factorization is the process of producing the decomposition in which the given matrix is expressed as a product of canonical matrices [2]. Matrix decomposition is a fundamental theme in linear algebra and applied statistics, with both scientific and engineering significance. Computational convenience and analytic simplicity are the two main purposes of matrix decomposition. In the real world it is not feasible for most matrix computations, such as matrix inversion, matrix determinants, solving linear systems, and least-squares fitting, to be carried out in an optimal explicit way; converting a difficult matrix computation into several easier tasks, such as solving triangular or diagonal systems, greatly facilitates the calculations. Data matrices, such as proximity or correlation matrices, represent numerical observations and are often huge and hard to analyze; decomposing such data matrices into lower-rank canonical forms therefore reveals their inherent characteristics. Since matrix computation algorithms are expensive computational tasks, hardware implementation of these algorithms requires substantial time and effort. There is an increasing demand for domain-specific tools for matrix computation algorithms that provide fast and highly efficient hardware production. This paper reviews the matrix decomposition techniques that are used in signal processing.
The paper is organized as follows: Section II discusses the matrix decomposition models, and Section III concludes the paper.
II. MATRIX DECOMPOSITION
MODELS
This paper discusses four matrix decomposition models:
QR decomposition
SVD decomposition
LU decomposition
Cholesky Decomposition
2.1 QR decomposition
In linear algebra, a QR decomposition A = QR factors a matrix A into an orthogonal matrix Q and a right (upper) triangular matrix R. QR decompositions are generally used to solve linear least-squares problems. If the matrix A is non-singular, then the QR factorization of A is unique if we require the diagonal elements of R to be positive. More generally, an m x n rectangular matrix can be factored into an m x m unitary matrix and an m x n upper triangular matrix. There are two standard methods to evaluate the QR factorization of A: Gram-Schmidt triangularization and Householder triangularization. Householder triangularization is much more efficient than Gram-Schmidt triangularization because of the operation counts involved. The number of operations at the k-th step is: multiplications = 2(n-k+1)^2, additions = (n-k+1)^2 + (n-k+1)(n-k) + 2, divisions = 1, and square roots = 1. Summing these operations over (n-1) steps gives a complexity of O(n^3), but this involves much less computation time than the other triangularization methods [3]. QR factorization is an important tool for solving linear systems of equations because of its good error propagation properties and the invertibility of unitary matrices [4]. The main application of QR decomposition is solving the linear least-squares problem. There exist systems of linear equations that have no exact solution; linear least squares is a mathematical optimization technique used to find an approximate solution for such systems. QR decomposition has the advantage that it finds a least-squares solution when no exact solution exists, and when an exact solution does exist, it finds all solutions.
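The least-squares use of QR described above can be sketched in pure Python. This is a minimal illustration (classical Gram-Schmidt, assuming the matrix has full column rank), not an implementation from the paper; Householder reflections would be preferred in practice for numerical stability:

```python
import math

def qr_gram_schmidt(A):
    """Classical Gram-Schmidt QR of an m x n matrix A (m >= n, full column rank).
    Returns Q (m x n, orthonormal columns) and R (n x n, upper triangular)."""
    m, n = len(A), len(A[0])
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(m)]       # start from the j-th column of A
        for k in range(j):
            # R[k][j] is the projection of column j onto the k-th orthonormal column.
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(m))
            for i in range(m):
                v[i] -= R[k][j] * Q[i][k]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        for i in range(m):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def lstsq_qr(A, b):
    """Solve min ||Ax - b|| via QR: solve R x = Q^T b by back substitution."""
    Q, R = qr_gram_schmidt(A)
    n = len(R)
    qtb = [sum(Q[i][k] * b[i] for i in range(len(b))) for k in range(n)]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (qtb[i] - sum(R[i][j] * x[j] for j in range(i + 1, n))) / R[i][i]
    return x

# Overdetermined system: fit y = c0 + c1*t to the points (0,1), (1,2), (2,3).
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 2.0, 3.0]
print(lstsq_qr(A, b))  # ≈ [1.0, 1.0], i.e. the line y = 1 + t
```

Because the three points here happen to be collinear, the least-squares solution is also an exact solution, illustrating the last remark above.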
2.2 SVD DECOMPOSITION
This is another matrix decomposition scheme, applicable to complex rectangular matrices of dimension m x n. The singular value decomposition factors the M-by-N input matrix A such that [4]
A = U x diag(S) x V* (1)
where U is an M-by-P matrix, V is an N-by-P matrix, S is a length-P vector, and P is defined as min(M, N). When M = N, U and V are both M-by-M unitary matrices. When M > N, V is an N-by-N unitary matrix, and U is an M-by-N matrix whose columns are the first N columns of a unitary matrix. When N > M, U is an M-by-M unitary matrix, and V is an N-by-M matrix whose columns are the first M columns of a unitary matrix. In all cases, S is a 1-D vector of positive singular values of length P. Length-N row inputs are treated as length-N columns. Note that the first (maximum) element of the output S is equal to the 2-norm of the matrix A. The output is always sample based [4].
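The relationship behind Eq. (1) can be checked by hand for a 2 x 2 matrix: the singular values are the square roots of the eigenvalues of A^T A, and the largest one is the 2-norm of A. The sketch below (an illustration, not the paper's method) computes them from the quadratic characteristic polynomial:

```python
import math

def singular_values_2x2(A):
    """Singular values of a real 2x2 matrix, computed as the square roots of
    the eigenvalues of the symmetric matrix M = A^T A."""
    a, b = A[0]
    c, d = A[1]
    # Entries of M = A^T A.
    m11 = a * a + c * c
    m12 = a * b + c * d
    m22 = b * b + d * d
    # Eigenvalues of a symmetric 2x2 matrix from trace and determinant.
    tr, det = m11 + m22, m11 * m22 - m12 * m12
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    eig_hi, eig_lo = tr / 2.0 + disc, tr / 2.0 - disc
    return math.sqrt(eig_hi), math.sqrt(max(eig_lo, 0.0))

A = [[3.0, 0.0], [4.0, 5.0]]
s1, s2 = singular_values_2x2(A)
print(s1, s2)  # ≈ 6.708, 2.236 (i.e. sqrt(45), sqrt(5))
```

Here s1 = sqrt(45) is the 2-norm of A, matching the note above that the first (maximum) element of S equals the matrix 2-norm; also s1 * s2 = 15 = |det A|, as expected.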
2.3 LU DECOMPOSITION
LU decomposition is a widely used matrix factorization algorithm. It transforms a square matrix A into a lower triangular matrix L and an upper triangular matrix U with A = LU. LU decomposition uses the Doolittle algorithm, eliminating column by column from left to right. It produces a unit lower triangular matrix that can reuse the storage of the original matrix A [5]. The algorithm requires 2n^3/3 floating point operations. Algorithm 1 shows the steps followed for the LU decomposition of a matrix. In each iteration, the same steps are applied to a matrix that has one less row and column than in the previous iteration. This exposes the parallelism of each step: in each iteration a new submatrix is constructed, allowing efficient use of local storage. It is preferable to use an algorithm that loads a block of data into local storage, performs the computation, and then transfers a new block. The block LU decomposition algorithm partitions the entire matrix into smaller blocks so that work can be done concurrently to improve performance.
Input: A - n x n matrix with elements a[i,p]
Output: L, U - triangular matrices with elements l[i,p], u[i,p]
1. for p = 1 to n
2.   for q = 1 to p-1
3.     for i = q+1 to p-1
4.       A[i,p] = A[i,p] - A[i,q] * A[q,p]
5.   for q = 1 to p-1
6.     for i = p to n
7.       A[i,p] = A[i,p] - A[i,q] * A[q,p]
8.   for q = p+1 to n
9.     A[q,p] = A[q,p] / A[p,p]
Algorithm 1. Basic LU decomposition
The algorithm writes the lower and upper triangular factors onto the matrix A, updating the values of A column by column (steps 4 and 7). The final values are computed by dividing each column entry below the diagonal by the diagonal entry of that column (step 9). LU decomposition has various advantages: it is applicable to any square matrix; the resulting triangular systems can be solved by forward as well as backward substitution, and a triangular set of equations is trivial to solve; it is a direct method that finds all solutions; and it is easy to program. Its drawbacks are that it does not find approximate (least-squares) solutions and, without pivoting, can easily become unstable. Hence, to find approximate solutions, Cholesky decomposition is studied.
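The Doolittle elimination behind Algorithm 1 can be sketched in pure Python. This is an illustrative, unpivoted version (it assumes nonzero pivots), not the paper's FPGA implementation; it shows the in-place overwriting of A mentioned above:

```python
def lu_doolittle(A):
    """Doolittle LU without pivoting: factor a square matrix so that A = L U,
    with L unit lower triangular and U upper triangular. The factors are
    accumulated in a single working copy of A, as in the in-place variant."""
    n = len(A)
    M = [row[:] for row in A]  # work on a copy; the in-place variant overwrites A
    for p in range(n):
        for i in range(p + 1, n):
            M[i][p] /= M[p][p]            # multiplier l_ip (assumes nonzero pivot)
            for j in range(p + 1, n):
                M[i][j] -= M[i][p] * M[p][j]   # update the trailing submatrix
    # Unpack the packed storage into explicit L and U factors.
    L = [[M[i][j] if j < i else (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    U = [[M[i][j] if j >= i else 0.0 for j in range(n)] for i in range(n)]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_doolittle(A)
print(L, U)  # L = [[1, 0], [1.5, 1]], U = [[4, 3], [0, -1.5]]
```

Multiplying the factors back together reproduces A, and the unit diagonal of L need not be stored, which is why the factorization fits in the storage of the original matrix.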
2.4 CHOLESKY DECOMPOSITION
The Cholesky decomposition factors a symmetric positive definite matrix into the product of a triangular matrix and its transpose. Two common forms are:
A = U^T U (2)
A = L D L^T (3)
where A is a positive definite symmetric matrix, U is an upper triangular matrix and U^T its transpose, L is a unit lower triangular matrix and L^T its transpose, and D is a diagonal matrix whose elements are all zero except the diagonal entries. Cholesky decomposition has a complexity of n^3/3 flops with heavy inner data dependency. Introducing the diagonal matrix of Eq. (3) has several advantages, such as avoiding square roots and alleviating the data dependency [5]. The U^T U form of Eq. (2) has been implemented on FPGAs for computation acceleration, using the fact that A is symmetric [6]. Equivalently, one may write
A = L L^T (4)
where L is a lower triangular matrix.
With Cholesky decomposition, the elements of L are evaluated as follows:
l[k,k] = sqrt( a[k,k] - sum_{p=1}^{k-1} l[k,p]^2 ) (5)
where k = 1, 2, ..., n, and
l[i,k] = ( a[i,k] - sum_{p=1}^{k-1} l[i,p] l[k,p] ) / l[k,k] (6)
where i = k+1, ..., n. The first subscript is the row index and the second is the column index. Cholesky decomposition is evaluated column by column, and within each column the elements are evaluated from top to bottom. That is, in each column the diagonal element is evaluated first using equation (5) (the elements above the diagonal are zero), and the remaining elements of that column are then evaluated using equation (6). This is carried out for each column, starting from the first. Note that if the value under the square root in equation (5) is not positive, the Cholesky decomposition will fail. However, this will not happen for positive definite matrices, which are commonly encountered in many engineering systems (e.g., circuits, covariance matrices). Thus, Cholesky decomposition is a good way to test the positive definiteness of a symmetric matrix. A general form of the Cholesky algorithm is given in Algorithm 2, where row-order representation is assumed for the stored matrices. If one changes the order of the three for loops in this algorithm, one obtains different variants with different behavior in terms of memory access patterns and the basic linear algebraic operation performed in the innermost loop. Of the six possible variants, only three are of interest: the row-oriented, column-oriented, and submatrix forms, described as follows.
In the row-oriented form, L is calculated row by row, with the terms of each row computed using terms from preceding rows that have already been evaluated. In the column-oriented form, the inner loop computes a matrix-vector product; each column is calculated using terms from previously computed columns. In the submatrix form, the inner loops apply the current column as a rank-1 update to the partially reduced submatrix. The algorithm decomposes the matrix A directly in its memory locations. The diagonal entries of the lower triangular factor are the square roots of the corresponding updated diagonal entries of the given matrix (step 7 of Algorithm 2). The entries below the diagonal are computed by dividing the corresponding elements by the diagonal element of their column (step 9). The algorithm works column by column: after the first column is computed, the elements in the following columns are updated (step 4).
1.  for p = 1 to n
2.    for q = 1 to p-1
3.      for i = p to n
4.        A[i,p] = A[i,p] - A[i,q] * A[p,q]
5.      end
6.    end
7.    A[p,p] = sqrt( A[p,p] )
8.    for q = p+1 to n
9.      A[q,p] = A[q,p] / A[p,p]
10.   end
11. end
Algorithm 2: General form of the Cholesky algorithm.
The Cholesky factorization method has several advantages over the other decomposition techniques. It executes faster because only one factor needs to be calculated; the other factor is simply the transpose of the first. Thus, the number of operations for dense matrices is approximately n^3/3. Pivoting is not required for stability, which makes the Cholesky algorithm especially suitable for parallel architectures, as it reduces inter-processor communication; a lot of work [8] has been devoted to parallelizing the Cholesky factorization method. It also has significantly lower memory requirements: only the lower triangular part of the matrix is used for factorization, and the intermediate and final results (the Cholesky factor L) overwrite the original matrix. Thus, only the lower triangular matrix must be stored, resulting in significant savings in memory. Finally, it eliminates the divider operation dependency and improves data throughput.
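The column-by-column evaluation described above can be sketched in pure Python. This is an illustrative version (not the paper's FPGA implementation); it also shows the point made earlier that a failed square root doubles as a positive-definiteness test:

```python
import math

def cholesky(A):
    """Lower-triangular Cholesky factor L with A = L L^T, computed column by
    column. Raises ValueError if A is not (numerically) positive definite,
    so the routine doubles as a positive-definiteness test."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # Diagonal entry: square root of what remains after removing the
        # contributions of the previously computed columns.
        s = A[k][k] - sum(L[k][p] ** 2 for p in range(k))
        if s <= 0.0:
            raise ValueError("matrix is not positive definite")
        L[k][k] = math.sqrt(s)
        # Entries below the diagonal in column k.
        for i in range(k + 1, n):
            L[i][k] = (A[i][k] - sum(L[i][p] * L[k][p] for p in range(k))) / L[k][k]
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)
print(L)  # [[2, 0], [1, sqrt(2)]]
```

Only the lower triangle of A is read and only the lower triangle of L is written, which is the storage saving noted above; calling the routine on a symmetric but indefinite matrix such as [[1, 2], [2, 1]] raises the error instead of returning a factor.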
III. CONCLUSIONS
This paper reviewed the matrix decomposition techniques used in signal processing applications. QR decomposition has a complicated and comparatively slow algorithm. LU decomposition does not find approximate solutions and has stability issues. Cholesky decomposition can find an approximate solution to the problem. Among all the matrix decomposition techniques, Cholesky decomposition is the best suited for signal processing applications: it is efficient in terms of memory storage, computational cost, speed, data dependency alleviation, and throughput.
REFERENCES
[1] http://en.wikipedia.org/wiki/Signal_processing
[2] E. W. Weisstein, "Matrix Decomposition," MathWorld - A Wolfram Web Resource. [Online]. Available: http://mathworld.wolfram.com/MatrixDecomposition.html
[3] S. Chapra and R. Canale, Numerical Methods for Engineers, Fourth Edition, McGraw-Hill, 2002.
[4] MathWorks, "User's Guide, Filter Design Toolbox," March 2007.
[5] R. Barrett et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, Second Edition, SIAM, 1994.
[6] Depeng Yang, G. D. Peterson, H. Li, and Junqing Sun, "An FPGA implementation for solving least square problem," IEEE Symposium on Field-Programmable Custom Computing Machines, pp. 306-309, 2009.
[7] X. Wang and S. G. Ziavras, "Parallel LU Factorization of Sparse Matrices on FPGA-Based Configurable Computing Engines," Concurrency and Computation: Practice and Experience, 16(4):319-343, April 2004.
[8] Guiming Wu and Yong Dou, "A High Performance and Memory Efficient LU Decomposer on FPGAs," IEEE Transactions on Computers, Vol. 61, No. 3, pp. 366-378, 2012.
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 

Dernier (20)

08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of Brazil
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 

N41049093

transformation of the given matrix into a given canonical form”.
Matrix factorization is the process of producing a decomposition in which the given matrix is expressed as a product of canonical matrices [2]. Matrix decomposition is a fundamental theme in linear algebra and applied statistics, with both scientific and engineering significance. Computational convenience and analytic simplicity are the two main motivations for matrix decomposition. In the real world it is not feasible to carry out most matrix computations explicitly in an optimal way, such as matrix inversion, computing determinants, solving linear systems and least squares fitting; converting a difficult matrix computation into several easier tasks, such as solving triangular or diagonal systems, greatly facilitates the calculation. Data matrices such as proximity or correlation matrices represent numerical observations, and they are often huge and hard to analyze; decomposing such data matrices into lower-rank canonical forms reveals their inherent characteristics. Since matrix computation algorithms are expensive computational tasks, hardware implementation of these algorithms requires substantial time and effort. There is an increasing demand for domain-specific tools for matrix computation algorithms that provide fast and highly efficient hardware production.

This paper reviews the matrix decomposition techniques used in signal processing. The paper is organized as follows: Section II discusses the matrix decomposition models; Section III concludes the paper.

II. MATRIX DECOMPOSITION MODELS

This paper discusses four matrix decomposition models: QR decomposition, SVD decomposition, LU decomposition and Cholesky decomposition.

2.1 QR decomposition

A decomposition A = QR in linear algebra factors a matrix A into an orthogonal matrix Q and a right (upper) triangular matrix R. QR decompositions are generally used to solve linear least squares problems.
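A minimal sketch of this least-squares use with NumPy (the matrix and vector values below are arbitrary illustrations, not from the paper):

```python
import numpy as np

# Overdetermined system: 4 equations, 2 unknowns (arbitrary data).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

# Reduced QR: Q is 4x2 with orthonormal columns, R is 2x2 upper triangular.
Q, R = np.linalg.qr(A)

# The least-squares solution solves the triangular system R x = Q^T b.
x = np.linalg.solve(R, Q.T @ b)

# Agrees with NumPy's own least-squares routine.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref))  # True
```

Because Q has orthonormal columns, multiplying by Q^T does not amplify errors, which is the "good error propagation" property referred to above.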
If the matrix A is non-singular, then the QR factorization of A is unique if we require the diagonal elements of R to be positive. More generally, an m × n rectangular matrix can be factored into an m × m unitary matrix and an m × n upper triangular matrix. There are two standard methods to compute the QR factorization of A: Gram-Schmidt triangularization and Householder triangularization. Householder triangularization is much more efficient than Gram-Schmidt triangularization because of the operation counts involved. The number of operations at the k-th step is: multiplications = 2(n−k+1)², additions = (n−k+1)² + (n−k+1)(n−k) + 2, divisions = 1 and square roots = 1. Summing these operations over the (n−1) steps gives a complexity of O(n³), but this involves much less computation time than the other triangularization methods [3]. QR factorization is an important tool for solving linear systems of equations because of its good error propagation properties and the invertibility of unitary matrices [4]. The main application of QR decomposition is solving the linear least squares problem. There exist systems of linear equations that have no exact solution; linear least squares is a mathematical optimization technique used to find an approximate solution for such systems. QR decomposition has the advantage that it finds a least squares solution when no exact solution exists, and when an exact solution exists, it finds all solutions.

2.2 SVD DECOMPOSITION

The singular value decomposition (SVD) is another matrix decomposition scheme. It is applicable to complex rectangular matrices of dimension m × n.
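A small NumPy sketch of this factorization (the 3-by-2 matrix is an arbitrary example; `full_matrices=False` requests the reduced factors whose shapes are described below):

```python
import numpy as np

# Arbitrary 3x2 matrix (the M > N case).
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Reduced SVD: U is 3x2, s has length P = min(M, N) = 2, Vt is 2x2.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstruct A = U * diag(s) * V*.
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True

# The first (largest) singular value equals the 2-norm of A.
print(np.isclose(s[0], np.linalg.norm(A, 2)))  # True
```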
The singular value decomposition factors the M-by-N input matrix A such that [4]

A = U × diag(S) × V*        (1)

where U is an M-by-P matrix, V is an N-by-P matrix, S is a length-P vector, and P is defined as min(M, N). When M = N, U and V are both M-by-M unitary matrices. When M > N, V is an N-by-N unitary matrix, and U is an M-by-N matrix whose columns are the first N columns of a unitary matrix. When N > M, U is an M-by-M unitary matrix, and V is an N-by-M matrix whose columns are the first M columns of a unitary matrix. In all cases, S is a one-dimensional vector of positive singular values of length P. Length-N row inputs are treated as length-N columns. Note that the first (maximum) element of S is equal to the 2-norm of the matrix A. The output is always sample based [4].

2.3 LU DECOMPOSITION

LU decomposition is a widely used matrix factorization algorithm. It transforms a square matrix A into a lower triangular matrix L and an upper triangular matrix U with A = LU. LU decomposition uses the Doolittle algorithm for elimination, column by column from left to right. It produces a unit lower triangular matrix that can reuse the storage of the original matrix A [5]. The algorithm requires 2n³/3 floating point operations. Algorithm 1 shows the steps followed for the LU decomposition of a matrix. In each iteration, the same steps are applied to a matrix that has one less row and column than in the previous iteration. The algorithm exposes parallelism at each step, and since a new submatrix is constructed in each iteration, it makes efficient use of local storage capacity. It is preferable to use an algorithm that loads a block of data into local storage, performs the computation, and then transfers a new block; the block LU decomposition algorithm partitions the entire matrix into smaller blocks so that work can be done concurrently to improve performance.

Input: A, an n × n matrix with elements a_ip
Output: L, U, triangular matrices with elements l_ip, u_ip
1. for p = 1 to n
2.   for q = 1 to p−1
3.     for i = q+1 to p−1
4.       A[i,p] = A[i,p] − A[i,q] × A[q,p]
5.   for q = 1 to p−1
6.     for i = p to n
7.       A[i,p] = A[i,p] − A[i,q] × A[q,p]
8.   for q = p+1 to n
9.     A[q,p] = A[q,p] / A[p,p]
Algorithm 1: Basic LU decomposition.

The algorithm writes the lower and upper triangular matrices onto A, updating the values of A column by column (lines 4 and 7). The final values of the lower triangular factor are computed by dividing each column entry below the diagonal by the diagonal entry of that column (line 9). LU decomposition has various advantages: it is applicable to any matrix; the resulting triangular systems are trivial to solve by forward and backward substitution; it is a direct method that finds all solutions; and it is easy to program. Its drawbacks are that it does not find approximate (least squares) solutions and that, without pivoting, it can easily be unstable. Hence, to find approximate solutions, Cholesky decomposition is studied.

2.4 CHOLESKY DECOMPOSITION

The Cholesky decomposition factors a symmetric positive definite matrix into the product of a triangular matrix and its transpose, of which two general forms are given by:

A = U^T U        (2)
A = L D L^T        (3)

where A is a positive definite symmetric matrix, U is an upper triangular matrix and U^T its transpose, L is a unit lower triangular matrix and L^T its transpose, and D is a diagonal matrix in which all elements except the diagonal ones are zero. Cholesky decomposition has a complexity of n³/3 FLOPs, with heavy inner data dependency. Introducing a diagonal matrix as shown in Eq. (3) in Cholesky
decomposition has many advantages, such as avoiding square roots and alleviating data dependency [5]. In this paper, we design and implement the U^T U Cholesky decomposition of Eq. (2) on FPGAs for computation acceleration. Using the fact that A is symmetric [6]:

A = L D L^T        (4)

where L is a lower triangular matrix. With Cholesky decomposition, the elements of L are evaluated as follows:

l_kk = sqrt( a_kk − Σ_{j=1..k−1} l_kj² ),  k = 1, 2, …, n        (5)
l_ik = ( a_ik − Σ_{j=1..k−1} l_ij l_kj ) / l_kk,  i = k+1, …, n        (6)

The first subscript is the row index and the second one is the column index. Cholesky decomposition is evaluated column by column; in each column the diagonal element is evaluated first using Eq. (5) (the elements above the diagonal are zero), and then the elements below it in the same column are evaluated using Eq. (6). This is carried out for each column, starting from the first. Note that if the value under the square root in Eq. (5) is negative, Cholesky decomposition will fail. However, this will not happen for positive semi-definite matrices, which are encountered commonly in many engineering systems (e.g., circuits, covariance matrices). Thus, Cholesky decomposition is a good way to test the positive semi-definiteness of symmetric matrices.

A general form of the Cholesky algorithm is given in Algorithm 2, where row-order representation is assumed for stored matrices. If one changes the order of the three for loops, one obtains different variations, which differ in their memory access patterns and in the basic linear algebraic operation performed in the innermost loop. Of the six possible variations, only three are of interest: the row-oriented, column-oriented and submatrix forms. In the row-oriented form, L is calculated row by row, with the terms of each row calculated using terms from the preceding rows that have already been evaluated. In the column-oriented form, the inner loop computes a matrix-vector product; each column is calculated using terms from previously computed columns. In the submatrix form, the inner loops apply the current column as a rank-1 update to the partially reduced submatrix.

The algorithm transfers the matrix A into memory elements. The diagonal entries of the lower triangular factor are the square roots of the updated diagonal entries of the given matrix (line 7). The entries below the diagonal are computed by dividing the corresponding element by the diagonal element of its column (line 9). The algorithm works column by column; after a column is computed, the remaining columns are updated (line 4).

1.  for p = 1 to n
2.    for q = 1 to p−1
3.      for i = p to n
4.        A[i,p] = A[i,p] − A[i,q] × A[p,q]
5.      end
6.    end
7.    A[p,p] = √A[p,p]
8.    for q = p+1 to n
9.      A[q,p] = A[q,p] / A[p,p]
10.   end
11. end
Algorithm 2: General form of the Cholesky algorithm.

The Cholesky factorization method has several advantages over the other decomposition techniques. It executes faster because only one factor needs to be calculated, the other being simply the transpose of the first; thus the number of operations for dense matrices is approximately n³/3. It is highly adaptive to parallel architectures, since pivoting is not required; this makes the Cholesky algorithm especially suitable for parallel architectures, as it reduces inter-processor communication, and a lot of work [8] has been devoted to parallelizing the Cholesky factorization method. It also has significantly lower memory requirements.
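Algorithm 2 can be sketched in Python as an in-place factorization that overwrites the lower triangle of A (a minimal sketch; the test matrix is an arbitrary symmetric positive definite example):

```python
import math

def cholesky_in_place(A):
    """Column-oriented Cholesky: overwrites the lower triangle of A
    with L such that A = L L^T. Raises ValueError if A is not
    positive definite (negative value under the square root)."""
    n = len(A)
    for p in range(n):
        # Update column p using the previously computed columns (lines 2-4).
        for q in range(p):
            for i in range(p, n):
                A[i][p] -= A[i][q] * A[p][q]
        if A[p][p] <= 0.0:
            raise ValueError("matrix is not positive definite")
        # Diagonal element is the square root of the updated entry (line 7).
        A[p][p] = math.sqrt(A[p][p])
        # Divide the entries below the diagonal by the diagonal (line 9).
        for q in range(p + 1, n):
            A[q][p] /= A[p][p]
    return A

# Arbitrary symmetric positive definite test matrix.
A = [[4.0, 12.0, -16.0],
     [12.0, 37.0, -43.0],
     [-16.0, -43.0, 98.0]]
L = cholesky_in_place([row[:] for row in A])
print([L[i][i] for i in range(3)])  # [2.0, 1.0, 3.0]
```

The sketch also illustrates the storage point made next: only the lower triangle is read and written, and the factor L replaces the original entries in place.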
In the Cholesky algorithm, only the lower triangular part of the matrix is used for factorization, and the intermediate and final results (the Cholesky factor L) are overwritten in the original matrix. Thus, only the lower triangular part must be stored, resulting in significant savings in memory requirements. The method also eliminates the divider operation dependency and improves data throughput.

III. CONCLUSIONS

This paper reviewed the matrix decomposition techniques used in signal processing applications. QR decomposition has a complicated and slower algorithm. LU decomposition does not find approximate solutions and has stability issues. Cholesky decomposition can find an approximate solution of the problem. Among all the matrix decomposition techniques, Cholesky decomposition is the best suited for signal processing applications: it is efficient in terms of memory storage capacity, computational cost, speed, data dependency alleviation and throughput.

REFERENCES
[1] http://en.wikipedia.org/wiki/Signal_processing
[2] E. W. Weisstein, "Matrix Decomposition," MathWorld–A Wolfram Web Resource. [Online]. Available: http://mathworld.wolfram.com/MatrixDecomposition.html
[3] S. Chapra and R. Canale, Numerical Methods for Engineers, Fourth Edition, McGraw-Hill, 2002.
[4] Mathworks, "Users Guide: Filter Design Toolbox," March 2007.
[5] R. Barrett et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, Second Edition, SIAM, 1994.
[6] Depeng Yang, G. D. Peterson, H. Li and Junqing Sun, "An FPGA Implementation for Solving Least Square Problem," IEEE Symposium on Field-Programmable Custom Computing Machines, pp. 306-309, 2009.
[7] X. Wang and S. G. Ziavras, "Parallel LU Factorization of Sparse Matrices on FPGA-Based Configurable Computing Engines," Concurrency and Computation: Practice and Experience, 16(4):319-343, April 2004.
[8] Guiming Wu and Yong Dou, "A High Performance and Memory Efficient LU Decomposer on FPGAs," IEEE Transactions on Computers, Vol. 61, No. 3, pp. 366-378, 2012.