Cloud Technologies and Their Applications. The Bioinformatics Open Source Conference (BOSC 2010), Boston, Massachusetts. Judy Qiu (http://salsahpc.indiana.edu), Assistant Director, Pervasive Technology Institute; Assistant Professor, School of Informatics and Computing, Indiana University
Data Explosion and Challenges: Data Deluge (why?), Cloud Technologies (how?), Life Science Applications and Parallel Computing (what?)
Data We're Looking At:
- Patient/GIS records (65,535 records, 54 dimensions each)
- Gene sequences (10 million sequences, at least 300 to 400 base pairs each)
- Chemical compounds (60 million compounds, 166 fingerprints each)
High volume and high dimension require new, efficient computing approaches!
Some Life Sciences Applications:
- EST (Expressed Sequence Tag) sequence assembly using the DNA sequence assembly program CAP3.
- Metagenomics and Alu repeat alignment using Smith-Waterman dissimilarity computations, followed by MPI applications for clustering and MDS (Multi-Dimensional Scaling) dimension reduction before visualization.
- Mapping the 60 million entries in PubChem into two or three dimensions to aid selection of related chemicals, with a convenient Google Earth-like browser. This uses either hierarchical MDS (which cannot be applied directly, as it is O(N²)) or GTM (Generative Topographic Mapping).
- Correlating childhood obesity with environmental factors by combining medical records with geographical information data (over 100 attributes), using correlation computation, MDS, and genetic algorithms to choose optimal environmental factors.
DNA Sequencing Pipeline (diagram): modern commercial gene sequencers (Illumina/Solexa, Roche/454 Life Sciences, Applied Biosystems/SOLiD) produce reads; read alignment yields a FASTA file of N sequences; sequence alignment computes a dissimilarity matrix of N(N-1)/2 values as block pairings under MapReduce; pairwise clustering, blocking, and MDS run as MPI applications; visualization uses PlotViz.
Users submit their jobs to the pipeline over the Internet. The components are services, and so is the whole pipeline.
Cloud Services and MapReduce (section roadmap: Data Deluge, Cloud Technologies, Life Science Applications, Parallel Computing)
Clouds as Cost-Effective Data Centers. Vendors build giant data centers with hundreds of thousands of computers, roughly 200 to 1,000 to a shipping container, with Internet access: "Microsoft will cram between 150 and 220 shipping containers filled with data center gear into a new 500,000 square foot Chicago facility. This move marks the most significant, public use of the shipping container systems popularized by the likes of Sun Microsystems and Rackable Systems to date." (news release)
Clouds Hide Complexity. Cyberinfrastructure is "research as a service":
- SaaS: Software as a Service (e.g., clustering is a service)
- PaaS: Platform as a Service, i.e., IaaS plus core software capabilities on which you build SaaS (e.g., Azure is a PaaS; MapReduce is a platform)
- IaaS (HaaS): Infrastructure as a Service (get computer time with a credit card and a Web interface, as with EC2)
Commercial Cloud Software
MapReduce: Map(Key, Value) and Reduce(Key, List<Value>), a parallel runtime that came from information retrieval. Input data is split into partitions; a hash function maps the results of the map tasks to the r reduce tasks, which write the reduce outputs. Implementations support: splitting of data; passing the output of map functions to reduce functions; sorting the inputs to the reduce function based on the intermediate keys; and quality of service.
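To make the model concrete, here is a minimal word-count sketch against the standard Hadoop mapper/reducer API. This example is illustrative and not from the talk; the class names are ours.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map(Key, Value): emit (word, 1) for every token in an input line.
public class WordCountMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    for (String token : line.toString().split("\\s+")) {
      if (!token.isEmpty()) {
        word.set(token);
        context.write(word, ONE); // the framework hashes keys to the r reducers
      }
    }
  }
}

// Reduce(Key, List<Value>): values arrive grouped and sorted by key.
class WordCountReducer
    extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable c : counts) sum += c.get();
    context.write(word, new IntWritable(sum));
  }
}
```

The splitting, shuffling to reducers, and sorting by intermediate key listed above all happen in the framework; user code supplies only these two functions.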
Hadoop & DryadLINQ.
Apache Hadoop: an Apache implementation of Google's MapReduce. The Hadoop Distributed File System (HDFS) manages data (a name node on the master node, replicated data blocks on the data/compute nodes); the job tracker schedules map/reduce tasks based on data locality in HDFS.
Microsoft DryadLINQ: directed acyclic graph (DAG) based execution flows, where a vertex is an execution task and an edge is a communication path. The DryadLINQ compiler translates standard LINQ operations into a DAG; the Dryad execution engine processes the DAG, executing vertices on compute clusters, and handles job creation, resource management, fault tolerance, and re-execution of failed tasks/vertices. LINQ provides a query interface for structured data, with Hash, Range, and Round-Robin partition patterns.
Applications using Dryad & DryadLINQ: CAP3, Expressed Sequence Tag assembly to reconstruct full-length mRNA. Input FASTA files are processed by independent CAP3 instances, producing output files. Performed using DryadLINQ and Apache Hadoop implementations: a single "Select" operation in DryadLINQ, a "map only" operation in Hadoop. [X. Huang, A. Madan, "CAP3: A DNA Sequence Assembly Program," Genome Research, vol. 9, no. 9, pp. 868-877, 1999.]
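A hedged sketch of that "map only" pattern: a Hadoop mapper whose input records are FASTA file paths and whose body simply shells out to the CAP3 executable. The binary path here is hypothetical, and a real job would also need to stage files between HDFS and local disk and collect CAP3's output files.

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// "Map only" job: each record is the local path of one FASTA file. There is
// no reduce phase, so every map task independently runs the CAP3 executable.
public class Cap3Mapper extends Mapper<LongWritable, Text, Text, Text> {
  private static final String CAP3_BIN = "/opt/cap3/cap3"; // hypothetical install path

  @Override
  protected void map(LongWritable key, Text fastaPath, Context context)
      throws IOException, InterruptedException {
    ProcessBuilder pb = new ProcessBuilder(CAP3_BIN, fastaPath.toString());
    pb.inheritIO(); // CAP3's console output goes to the task logs;
                    // its assembly files appear next to the input file
    int exit = pb.start().waitFor();
    context.write(fastaPath, new Text("exit=" + exit)); // record status only
  }
}
```

This is exactly why CAP3 is "pleasingly parallel": each file is an independent task, so the same program maps onto a DryadLINQ Select, a Hadoop map-only job, or plain cloud instances.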
Classic cloud architecture (Amazon EC2 and Microsoft Azure): an input data set is split into data files; executables run the map() tasks, with an optional reduce phase producing the results. MapReduce architecture (Apache Hadoop and Microsoft DryadLINQ): input and output reside in HDFS; Map() tasks feed Reduce() tasks.
Usability and Performance of Different Cloud Approaches (CAP3 performance and CAP3 efficiency):
- Hadoop, DryadLINQ: 32 nodes (256-core iDataPlex)
- EC2: 16 High-CPU Extra Large instances (128 cores)
- Azure: 128 small instances (128 cores)
- Ease of use: Dryad and Hadoop are easier than EC2 and Azure, as they are higher-level models
- Lines of code, including file copy: Azure ~300, Hadoop ~400, Dryad ~450, EC2 ~700
Table 1: Selected EC2 instance types.
4096 CAP3 data files: 1.06 GB, 1,875,968 reads (458 reads × 4096 files). The cost to process the 4096 CAP3 files, amortized on Tempest (24 cores × 32 nodes, 48 GB per node), is $9.43 (assuming 70% utilization, a three-year write-off, and including support).
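The slide does not show the arithmetic behind the $9.43, and the cluster's purchase price is not given, so the figure cannot be re-derived here; the amortization it describes has this general shape (our notation):

```latex
\text{cost per job} \;=\; \frac{C_{\text{hardware}} + C_{\text{support}}}{3~\text{years} \times 70\%~\text{utilization}} \;\times\; T_{\text{job}}
```

with the three-year write-off and 70% utilization taken from the slide, and T_job the wall-clock time the job occupies the cluster.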
Data-Intensive Applications (section roadmap: Data Deluge, Cloud Technologies, Life Science Applications, Parallel Computing)
Alu and Metagenomics Workflow: an "all pairs" problem. The data is a collection of N sequences, and we need to calculate the N² dissimilarities (distances) between sequences (all pairs). "Multiple sequence alignment" (creating vectors of characters) does not seem to work when N is larger than O(100) and the sequences are hundreds of characters long, so:
Step 1: Calculate the N² dissimilarities (distances) between sequences.
Step 2: Find families by clustering (using much better methods than K-means); as there are no vectors, use vector-free O(N²) methods.
Step 3: Map to 3D for visualization using Multidimensional Scaling (MDS), also O(N²).
Results: the complete pipeline above for N = 50,000 runs in 10 hours on 768 cores.
Discussion: we need to address millions of sequences. Currently we use a mix of MapReduce and MPI; Twister will do all steps, as MDS and clustering just need MPI Broadcast/Reduce.
All-Pairs Using DryadLINQ: calculate pairwise distances (Smith-Waterman Gotoh) for a collection of genes (used for clustering and MDS); 125 million distances computed in 4 hours and 46 minutes on 768 cores (Tempest cluster). Fine-grained tasks in MPI; coarse-grained tasks in DryadLINQ. The all-pairs abstraction follows Moretti, C., Bui, H., Hollingsworth, K., Rich, B., Flynn, P., & Thain, D. (2009). All-Pairs: An Abstraction for Data Intensive Computing on Campus Grids. IEEE Transactions on Parallel and Distributed Systems, 21, 21-36.
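The coarse-grained decomposition behind these numbers is the "block pairings" idea from the pipeline slide: because the Smith-Waterman Gotoh distance is symmetric, only the lower triangle of the N × N matrix needs computing. A small, self-contained sketch of that enumeration (the block size is illustrative):

```java
// Enumerate the lower-triangular block pairs of an N x N dissimilarity
// matrix. Since d(i, j) = d(j, i), only blocks with rowBlock >= colBlock
// are computed: roughly N(N-1)/2 distances in total. Each (rb, cb) pair
// becomes one coarse-grained task (a DryadLINQ vertex or a map task).
public final class BlockPairs {
  public static void main(String[] args) {
    final int n = 35_399;        // e.g. the Alu repeat count from the talk
    final int blockSize = 1_000; // illustrative granularity
    int blocks = (n + blockSize - 1) / blockSize;
    long tasks = 0;
    for (int rb = 0; rb < blocks; rb++) {
      for (int cb = 0; cb <= rb; cb++) {
        // schedule(rb, cb): compute Smith-Waterman Gotoh distances for
        // sequences [rb*blockSize, ...) against [cb*blockSize, ...)
        tasks++;
      }
    }
    System.out.printf("%d sequences -> %d block tasks%n", n, tasks);
  }
}
```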
Biology MDS and Clustering Results.
Alu families: this visualizes Alu repeats from the chimpanzee and human genomes; young families (green, yellow) are seen as tight clusters. It is an MDS projection to 3D of 35,399 repeats, each about 400 base pairs long.
Metagenomics: this visualizes a 3D dimension reduction of 30,000 gene sequences from an environmental sample. The many different genes are classified by a clustering algorithm and visualized by MDS dimension reduction.
Hadoop/Dryad Comparison: Inhomogeneous Data I. Inhomogeneity of the data does not have a significant effect when the sequence lengths are randomly distributed. Dryad with Windows HPCS compared to Hadoop with Linux RHEL on iDataPlex (32 nodes).
Hadoop/Dryad Comparison: Inhomogeneous Data II. This shows the natural load balancing of Hadoop MapReduce's dynamic task assignment using a global pipeline, in contrast to DryadLINQ's static assignment. Dryad with Windows HPCS compared to Hadoop with Linux RHEL on iDataPlex (32 nodes).
Hadoop VM Performance Degradation. Performance degradation = (Tvm - Tbaremetal) / Tbaremetal; 15.3% degradation at the largest data set size.
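Written out (our notation), the metric and what the 15.3% figure means:

```latex
\text{degradation} \;=\; \frac{T_{\mathrm{VM}} - T_{\mathrm{bare}}}{T_{\mathrm{bare}}},
\qquad 0.153 \;\Rightarrow\; T_{\mathrm{VM}} = 1.153\,T_{\mathrm{bare}}
```

That is, at the largest data set the virtualized run takes about 15% longer than bare metal.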
Parallel Computing and Software (section roadmap: Data Deluge, Cloud Technologies, Life Science Applications, Parallel Computing)
Twister (MapReduce++), built on a pub/sub broker network and long-running map workers:
- Intermediate results are transferred directly from the map tasks to the reduce tasks, eliminating local files
- Cacheable map/reduce tasks
- Static data remains in memory
- A combine phase combines the reductions
- The user program is the composer of MapReduce computations
- Extends the MapReduce model to iterative computations (see the architecture description and sketch below)

Twister architecture (diagram): an MR driver and the user program coordinate long-running map/reduce daemons (MRDaemon) on the worker nodes over the pub/sub broker network. Static data is split and loaded via Configure() and cached in memory by the workers; the small dynamic state (the δ flow) is communicated each iteration through Map(Key, Value), Reduce(Key, List<Value>), and Combine(Key, List<Value>) until Close(). Different synchronization and intercommunication mechanisms are used by the different parallel runtimes.
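The pattern those pieces add up to can be sketched as a generic iterative driver. This is a conceptual illustration of the structure (cache the static data once, then iterate map/reduce/combine over the small dynamic state), not the actual Twister API:

```java
import java.util.List;
import java.util.function.BiPredicate;

// Conceptual iterative-MapReduce driver in the style the slides describe.
// "Static" data is distributed and cached once; only the small "dynamic"
// state (e.g. cluster centroids) flows through each iteration.
public final class IterativeDriver<S, D> {
  /** Minimal runtime contract: cache static data once, then run rounds. */
  public interface Runtime<S, D> {
    void cacheStaticData(List<S> partitions); // loaded once, kept in worker memory
    D mapReduce(D dynamicState);              // one map + reduce + combine round
  }

  public D run(Runtime<S, D> rt, List<S> staticData, D initial,
               int maxIters, BiPredicate<D, D> converged) {
    rt.cacheStaticData(staticData);   // analogous to Configure(): distribute once
    D state = initial;
    for (int i = 0; i < maxIters; i++) {
      D next = rt.mapReduce(state);   // workers reuse the cached static data
      boolean done = converged.test(state, next);
      state = next;                   // broadcast as the next round's input
      if (done) break;
    }
    return state;                     // analogous to Close(): tear down daemons
  }
}
```

In plain Hadoop, each loop iteration would be a fresh job that re-reads its input from disk; keeping the daemons and static data alive across iterations is what the slides claim as Twister's advantage.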
Iterative computations: K-means and matrix multiplication (charts: performance of K-means; parallel overhead of matrix multiplication).
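K-means shows why the cached static data matters: the points never change across iterations, only the centroids do. One iteration in map/reduce form (standard K-means, our notation, with points x_n and centroids c_k):

```latex
\text{map:}\quad a_n = \arg\min_{k} \lVert x_n - c_k \rVert^2
\qquad
\text{reduce:}\quad c_k \leftarrow \frac{1}{\lvert\{n : a_n = k\}\rvert} \sum_{n \,:\, a_n = k} x_n
```

Only the updated centroids need to be broadcast for the next round.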
MDS (Multidimensional Scaling) [1]:
- An optimization problem: find a mapping of the given data in the target dimension, based on pairwise proximity information, that minimizes an objective function
- Objective functions: STRESS (1) or SSTRESS (2)
- Needs only the pairwise distances δ_ij between the original points (typically not Euclidean)
GTM (Generative Topographic Mapping) [2]:
- The original algorithm uses the EM method for optimization
- A deterministic annealing algorithm can be used to find a global solution
- The objective function to maximize is the log-likelihood
[1] I. Borg and P. J. Groenen. Modern Multidimensional Scaling: Theory and Applications. Springer, New York, NY, U.S.A., 2005.
[2] C. Bishop, M. Svensén, and C. Williams. GTM: The generative topographic mapping. Neural Computation, 10(1):215-234, 1998.
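The objective-function formulas on these slides were images and did not survive extraction; the standard forms, following the cited references (with weights w_ij, embedded distances d_ij(X), and original dissimilarities δ_ij), are:

```latex
\text{STRESS:}\quad \sigma(X) = \sum_{i<j} w_{ij}\bigl(d_{ij}(X) - \delta_{ij}\bigr)^2
\qquad
\text{SSTRESS:}\quad \sigma^2(X) = \sum_{i<j} w_{ij}\bigl(d_{ij}^2(X) - \delta_{ij}^2\bigr)^2
```

and GTM maximizes the log-likelihood of the data under its latent-grid mixture model:

```latex
\mathcal{L}(W, \beta) = \sum_{n=1}^{N} \ln\Bigl[\frac{1}{K}\sum_{k=1}^{K} p\bigl(x_n \mid z_k, W, \beta\bigr)\Bigr]
```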
Science Cloud (Dynamic Virtual Cluster) architecture, top layer to bottom:
- Applications: Smith-Waterman dissimilarities, CAP3 gene assembly, PhyloD using DryadLINQ, high-energy physics, clustering, multidimensional scaling, generative topographic mapping
- Services and workflow
- Runtimes: Microsoft DryadLINQ / MPI; Apache Hadoop / Twister / MPI
- Infrastructure software: Windows Server 2008 HPC (bare-system), Linux (bare-system), Linux virtual machines and Windows Server 2008 HPC under Xen virtualization; XCAT infrastructure
- Hardware: iDataPlex bare-metal nodes
Dynamic virtual cluster provisioning is done via XCAT and supports both stateful and stateless OS images.
Dynamic Virtual Clusters: a monitoring and control infrastructure (monitoring interface, pub/sub broker network, XCAT infrastructure summarizer and switcher) manages virtual/physical clusters on iDataPlex bare-metal nodes (32 nodes). Switchable clusters run on the same hardware (about 5 minutes to switch between operating systems such as Linux+Xen and Windows+HPCS), with support for virtual clusters. SW-G (Smith-Waterman Gotoh) dissimilarity computation runs as a pleasingly parallel problem suitable for MapReduce-style applications, using Hadoop on bare-system Linux, Hadoop on Linux over Xen, and DryadLINQ on bare-system Windows Server 2008.
At the bottom, this cluster switches between environments (Linux; Linux + Xen; Windows + HPCS); a switch takes about 7 minutes.
Summary of Initial Results:
- Cloud technologies (Dryad, Hadoop, Azure, EC2) are promising for biology computations
- Dynamic virtual clusters allow switching between different modes
- The overhead of VMs on Hadoop (15%) is acceptable
- Inhomogeneous problems currently favor Hadoop over Dryad
- Twister allows iterative problems (classic linear algebra/data mining) to use the MapReduce model efficiently
- A prototype of Twister has been released
References: Twister, open-source iterative MapReduce software (www.iterativemapreduce.org); the SALSA Project (salsahpc.indiana.edu); the FutureGrid Project (futuregrid.org). Sponsors: Microsoft, NIH, NSF, Pervasive Technology Institute.
MapReduce and Clouds for Science (http://salsahpc.indiana.edu). Judy Qiu, SALSA Group, Indiana University Bloomington.

The SALSA project (salsahpc.indiana.edu) investigates new programming models for parallel multicore computing and cloud/grid computing. It aims at developing and applying parallel and distributed cyberinfrastructure to support large-scale data analysis. We illustrate this with a study of the usability and performance of different cloud approaches. We will develop MapReduce technology for Azure that matches what is available on FutureGrid, in three stages: AzureMapReduce (where we already have a prototype), AzureTwister, and TwisterMPIReduce. These offer basic MapReduce, iterative MapReduce, and a library mapping a subset of MPI to Twister. They are matched by a set of applications that test the increasing sophistication of the environment and run on Azure, on FutureGrid, or in a workflow linking them.

Iterative MapReduce using Java Twister (http://www.iterativemapreduce.org/): Twister supports iterative MapReduce computations and allows MapReduce to achieve higher performance, perform faster data transfers, and reduce the time it takes to process vast data sets for data mining and machine learning applications. The open-source code supports streaming communication and long-running processes. MPI is not generally suitable for clouds, but the subclass of MPI-style operations supported by Twister, namely the equivalents of MPI-Reduce, MPI-Broadcast (multicast), and MPI-Barrier, have large messages and offer the possibility of reasonable cloud performance. This hypothesis is supported by our comparison of Java Twister with MPI and Hadoop. Many linear algebra and data mining algorithms need only this MPI subset, and we have used this in our initial choice of evaluation applications. We wish to compare Twister implementations on Azure with MPI implementations (running as a distributed workflow) on FutureGrid. Thus, we introduce a new runtime, TwisterMPIReduce, a software library on top of Twister that maps applications using the broadcast/reduce subset of MPI to Twister.

Architecture of Twister MapReduce on Azure (AzureMapReduce): AzureMapReduce uses Azure Queues for map/reduce task scheduling, Azure Tables for metadata and monitoring data storage, Azure Blob Storage for input/output/intermediate data storage, and Azure Compute worker roles to perform the computations. The map/reduce tasks of the AzureMapReduce runtime are dynamically scheduled using a global queue.

Usability and performance of different cloud and MapReduce models: the cost effectiveness of cloud data centers, combined with the comparable performance reported here, suggests that loosely coupled science applications will increasingly be implemented on clouds, and that using MapReduce will offer convenient user interfaces with little overhead. We present three typical results with two applications (PageRank and SW-G for biological local pairwise sequence alignment) to evaluate the performance and scalability of Twister and AzureMapReduce.

(Figures: architecture of AzureMapReduce; architecture of TwisterMPIReduce; parallel efficiency of the different parallel runtimes for the Smith-Waterman Gotoh algorithm; total running time for 20 iterations of the PageRank algorithm on ClueWeb data with Twister and Hadoop on 256 cores; performance of AzureMapReduce on Smith-Waterman Gotoh distance computation as a function of the number of instances used.)

Speaker notes

  1. These are emerging technologies; we cannot draw strong conclusions yet, but all look promising at the moment. Ease of development: Dryad and Hadoop >> EC2 and Azure. Why is Azure worse than EC2 despite fewer lines of code? Simplest model.
  2. #cores × 1 GHz
  3. 10k data size
  4. 10k data size
  5. Overhead is independent of computation time. As the data size goes up, the overall overhead is reduced.
  6. MDS implemented in C#; GTM in R and C/C++
  7. Support development of new applications and new middleware using cloud, grid, and parallel computing (Nimbus, Eucalyptus, Hadoop, Globus, Unicore, MPI, OpenMP, Linux, Windows, ...), looking at functionality, interoperability, and performance. Put the "science" back in the computer science of grid computing by enabling replicable experiments. Open-source software built around Moab/xCAT supports dynamic provisioning from cloud to HPC environments and from Linux to Windows, with monitoring, benchmarks, and support for important existing middleware. June 2010: initial users; September 2010: all hardware (except the IU shared-memory system) accepted and major use starts; October 2011: FutureGrid allocatable via the TeraGrid process.