Comparing Least Squares Calculations
Douglas Bates
R Development Core Team
Douglas.Bates@R-project.org
September 3, 2012
Abstract
Many statistical methods require one or more least squares problems
to be solved. There are several ways to perform this calculation, using
objects from the base R system or objects of the classes defined
in the Matrix package.
We compare the speed of some of these methods on a very small
example and on an example for which the model matrix is large and sparse.
1 Linear least squares calculations
Many statistical techniques require least squares solutions
\hat\beta = \arg\min_\beta \| y - X\beta \|^2 \qquad (1)
where X is an n × p model matrix (p ≤ n), y is n-dimensional and β is
p-dimensional. Most statistics texts state that the solution to (1) is
\hat\beta = (X^T X)^{-1} X^T y \qquad (2)
when X has full column rank (i.e. the columns of X are linearly independent)
and all too frequently it is calculated in exactly this way.
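Before turning to the examples, it is worth recalling the numerically preferred alternative: solve (1) through an orthogonal decomposition of X rather than by forming X^T X. This is the approach underlying lm.fit. The sketch below uses base R's qr and qr.coef on simulated data; the matrix X and response y here are illustrative, not taken from this paper.

```r
## Sketch: least squares via the QR decomposition of X, avoiding the
## explicit formation of t(X) %*% X.  Data are simulated for illustration.
set.seed(1)
X <- cbind(1, runif(10))              # 10 x 2 model matrix with intercept
y <- drop(X %*% c(2, 3)) + rnorm(10)  # response with known coefficients

beta.qr <- qr.coef(qr(X), y)          # coefficients from the QR of X

## On this well-conditioned example it agrees with formula (2)
beta.ne <- drop(solve(t(X) %*% X) %*% t(X) %*% y)
stopifnot(all.equal(unname(beta.qr), beta.ne))
```

On ill-conditioned problems the two routes can differ substantially in accuracy, which motivates the comparisons in the rest of the paper.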
1.1 A small example
As an example, let’s create a model matrix, m, and corresponding response
vector, yo, for a simple linear regression model using the Formaldehyde data.
> data(Formaldehyde)
> str(Formaldehyde)
'data.frame': 6 obs. of 2 variables:
$ carb : num 0.1 0.3 0.5 0.6 0.7 0.9
$ optden: num 0.086 0.269 0.446 0.538 0.626 0.782
> (m <- cbind(1, Formaldehyde$carb))
[,1] [,2]
[1,] 1 0.1
[2,] 1 0.3
[3,] 1 0.5
[4,] 1 0.6
[5,] 1 0.7
[6,] 1 0.9
> (yo <- Formaldehyde$optden)
[1] 0.086 0.269 0.446 0.538 0.626 0.782
Using t to evaluate the transpose, solve to take an inverse, and the %*% operator
for matrix multiplication, we can translate (2) into the S language as
> solve(t(m) %*% m) %*% t(m) %*% yo
[,1]
[1,] 0.005085714
[2,] 0.876285714
On modern computers this calculation is performed so quickly that it cannot
be timed accurately in R.[1]
> system.time(solve(t(m) %*% m) %*% t(m) %*% yo)
user system elapsed
0 0 0
and it provides essentially the same results as the standard lm.fit function that
is called by lm.
> dput(c(solve(t(m) %*% m) %*% t(m) %*% yo))
c(0.00508571428571428, 0.876285714285715)
> dput(unname(lm.fit(m, yo)$coefficients))
c(0.00508571428571408, 0.876285714285715)
[1] From R version 2.2.0 on, system.time() has the default argument gcFirst = TRUE, which is assumed and relevant for all subsequent timings.
1.2 A large example
For a large, ill-conditioned least squares problem, such as that described in
Koenker and Ng (2003), the literal translation of (2) does not perform well.
> library(Matrix)
> data(KNex, package = "Matrix")
> y <- KNex$y
> mm <- as(KNex$mm, "matrix") # full traditional matrix
> dim(mm)
[1] 1850 712
> system.time(naive.sol <- solve(t(mm) %*% mm) %*% t(mm) %*% y)
user system elapsed
3.682 0.014 3.718
Because the calculation of a “cross-product” matrix, such as X^T X or X^T y,
is a common operation in statistics, the crossprod function has been provided
to do this efficiently. In the single-argument form crossprod(mm) calculates
X^T X, taking advantage of the symmetry of the product. That is, instead of
calculating the 712^2 = 506944 elements of X^T X separately, it calculates only
the (712 · 713)/2 = 253828 elements in the upper triangle and replicates them
in the lower triangle. Furthermore, there is no need to calculate the inverse of
a matrix explicitly when solving a linear system of equations. When the two-argument
form of the solve function is used, the linear system

X^T X \, \beta = X^T y \qquad (3)
is solved directly.
Combining these optimizations we obtain
> system.time(cpod.sol <- solve(crossprod(mm), crossprod(mm,y)))
user system elapsed
0.989 0.007 1.002
> all.equal(naive.sol, cpod.sol)
[1] TRUE
On this computer (2.0 GHz Pentium-4, 1 GB Memory, Goto’s BLAS, in
Spring 2004) the crossprod form of the calculation is about four times as fast as
the naive calculation. In fact, the entire crossprod solution is faster than simply
calculating X^T X the naive way.
> system.time(t(mm) %*% mm)
user system elapsed
1.840 0.001 1.854
Note that in newer versions of R and the BLAS library (as of summer 2007),
R’s %*% is able to detect the many zeros in mm and shortcut many operations, and
is hence much faster for such a sparse matrix than crossprod which currently
does not make use of such optimizations. This is not the case when R is linked
against an optimized BLAS library such as GOTO or ATLAS. Also, for fully
dense matrices, crossprod() indeed remains faster (by a factor of two, typically)
independently of the BLAS library:
> fm <- mm
> set.seed(11)
> fm[] <- rnorm(length(fm))
> system.time(c1 <- t(fm) %*% fm)
user system elapsed
1.922 0.002 1.937
> system.time(c2 <- crossprod(fm))
user system elapsed
0.890 0.000 0.896
> stopifnot(all.equal(c1, c2, tol = 1e-12))
1.3 Least squares calculations with Matrix classes
The crossprod function applied to a single matrix takes advantage of symmetry
when calculating the product but does not retain the information that the
product is symmetric (and positive semidefinite). As a result the solution of (3)
is performed using a general linear system solver based on an LU decomposition,
when it would be faster, and numerically more stable, to use a Cholesky
decomposition. The Cholesky decomposition could be used, but it is rather awkward:
> system.time(ch <- chol(crossprod(mm)))
user system elapsed
0.965 0.000 0.972
> system.time(chol.sol <-
+ backsolve(ch, forwardsolve(ch, crossprod(mm, y),
+ upper = TRUE, trans = TRUE)))
user system elapsed
0.012 0.000 0.012
> stopifnot(all.equal(chol.sol, naive.sol))
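The forwardsolve/backsolve pair can be hidden behind a small wrapper to make the Cholesky route less awkward to call. cholSolve below is an illustrative helper, not a function from base R or the Matrix package:

```r
## Illustrative helper: solve A b = c given R = chol(A), where A = R'R
## and R is upper triangular, without ever forming the inverse of A.
cholSolve <- function(R, c)
  backsolve(R, forwardsolve(R, c, upper = TRUE, trans = TRUE))

## Small self-contained check against the direct solve (simulated data)
set.seed(2)
X <- matrix(rnorm(30), 10, 3)
y <- rnorm(10)
A <- crossprod(X)
b <- crossprod(X, y)
stopifnot(all.equal(cholSolve(chol(A), b), solve(A, b)))
```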
The Matrix package uses the S4 class system (Chambers, 1998) to retain
information on the structure of matrices from the intermediate calculations.
A general matrix in dense storage, created by the Matrix function, has class
"dgeMatrix" but its cross-product has class "dpoMatrix". The solve methods
for the "dpoMatrix" class use the Cholesky decomposition.
> mm <- as(KNex$mm, "dgeMatrix")
> class(crossprod(mm))
[1] "dpoMatrix"
attr(,"package")
[1] "Matrix"
> system.time(Mat.sol <- solve(crossprod(mm), crossprod(mm, y)))
user system elapsed
0.962 0.000 0.967
> stopifnot(all.equal(naive.sol, unname(as(Mat.sol,"matrix"))))
Furthermore, any method that calculates a decomposition or factorization
stores the resulting factorization with the original object so that it can be reused
without recalculation.
> xpx <- crossprod(mm)
> xpy <- crossprod(mm, y)
> system.time(solve(xpx, xpy))
user system elapsed
0.096 0.000 0.097
> system.time(solve(xpx, xpy)) # reusing factorization
user system elapsed
0.001 0.000 0.001
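The cached decomposition can be inspected through the object's factors slot. This slot is an internal detail of the Matrix package whose exact contents may differ across versions, so the sketch below avoids asserting any particular output:

```r
## The Matrix package caches decompositions in the (internal) 'factors'
## slot; its exact contents are an implementation detail and may change.
library(Matrix)
set.seed(3)
A <- crossprod(Matrix(rnorm(40), 10, 4))  # a dense "dpoMatrix"
length(A@factors)                         # nothing cached yet
sol <- solve(A, rnorm(4))
names(A@factors)                          # a factorization may now be cached
```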
The model matrix mm is sparse; that is, most of the elements of mm are zero.
The Matrix package incorporates special methods for sparse matrices, which
produce the fastest results of all.
> mm <- KNex$mm
> class(mm)
[1] "dgCMatrix"
attr(,"package")
[1] "Matrix"
> system.time(sparse.sol <- solve(crossprod(mm), crossprod(mm, y)))
user system elapsed
0.006 0.000 0.005
> stopifnot(all.equal(naive.sol, unname(as(sparse.sol, "matrix"))))
As with other classes in the Matrix package, the "dsCMatrix" class retains any
factorization that has been calculated, although in this case the decomposition
is so fast that it is difficult to see a difference in the solution times.
> xpx <- crossprod(mm)
> xpy <- crossprod(mm, y)
> system.time(solve(xpx, xpy))
user system elapsed
0.002 0.000 0.002
> system.time(solve(xpx, xpy))
user system elapsed
0.001 0.000 0.000
Session Info
> toLatex(sessionInfo())
• R version 2.15.1 Patched (2012-09-01 r60539),
x86_64-unknown-linux-gnu
• Locale: LC_CTYPE=de_CH.UTF-8, LC_NUMERIC=C, LC_TIME=en_US.UTF-8,
LC_COLLATE=C, LC_MONETARY=en_US.UTF-8, LC_MESSAGES=de_CH.UTF-8,
LC_PAPER=C, LC_NAME=C, LC_ADDRESS=C, LC_TELEPHONE=C,
LC_MEASUREMENT=de_CH.UTF-8, LC_IDENTIFICATION=C
• Base packages: base, datasets, grDevices, graphics, methods, stats, tools,
utils
• Other packages: Matrix 1.0-9, lattice 0.20-10
• Loaded via a namespace (and not attached): grid 2.15.1
> if(identical(1L, grep("linux", R.version[["os"]]))) { ## Linux - only ---
+ Scpu <- sfsmisc::Sys.procinfo("/proc/cpuinfo")
+ Smem <- sfsmisc::Sys.procinfo("/proc/meminfo")
+ print(Scpu[c("model name", "cpu MHz", "cache size", "bogomips")])
+ print(Smem[c("MemTotal", "SwapTotal")])
+ }
_
model name AMD Phenom(tm) II X4 925 Processor
cpu MHz 800.000
cache size 512 KB
bogomips 5599.95
_
MemTotal 7920288 kB
SwapTotal 16777212 kB
References
John M. Chambers. Programming with Data. Springer, New York, 1998. ISBN
0-387-98503-4.
Roger Koenker and Pin Ng. SparseM: A sparse matrix package for R. Journal of
Statistical Software, 8(6), 2003.