A Container-based Sizing Framework
for Apache Hadoop/Spark Clusters
October 27, 2016
Hokkaido University
Akiyoshi SUGIKI, Phyo Thandar Thant
Agenda
– Hokkaido University Academic Cloud
– A Docker-based Sizing Framework for Hadoop
– Multi-objective Optimization of Hadoop
Information Initiative Center, Hokkaido University
Founded in 1962 as a national supercomputing center
A member of
– HPCI (High Performance Computing Infrastructure): 12 institutes
– JHPCN (Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructure): 8 institutes
University R&D center for supercomputing, cloud computing, networking, and cyber security
Operating HPC twins
– Supercomputer (172 TFLOPS) and Academic Cloud System (43 TFLOPS)
Hokkaido University Academic Cloud (2011-)
Japan's largest academic cloud system
– > 43 TFLOPS (> 114 nodes)
– ~2,000 VMs

– Supercomputer: SR16000 M1, 172 TF / 176 nodes, 22 TB memory (128 GB/node); AMS2500 file system with 600 TB (SAS, RAID5) and 300 TB (SATA, RAID6)
– Cloud System: BS2000, 44 TF / 114 nodes, 14 TB memory (128 GB/node); 1.96 PB cloud storage; AMS2300 boot file system with 260 TB (SAS, RAID6); VFP500N + AMS2500 NAS with 500 TB (near-line NAS, RAID6); CloudStack 3.x / 4.x
– Data-science Cloud System (added, 2013-): HA8000/RS210HM, 80 GB × 25 nodes and 32 GB × 2 nodes; Hadoop package for "Big Data" (Hadoop, Hive, Mahout, and R)
Supporting "Big Data"
"Big Data" cluster package
• Hadoop, Hive, Mahout, and R
• MPI, OpenMP, and Torque
– Automatic deployment of VM-based clusters
– Custom scheduling policy
• Spreads I/O across multiple disks
[Diagram: a Hadoop cluster of VMs (#1-#4) whose virtual disks are spread across storage units #1-#4; the Big Data package bundles Hadoop, Hive, Mahout, and R]
Lessons Learned (So Far)
No single Hadoop (silo-like deployments)
– A separate Hadoop instance for each group of users
Version problem
– Upgrades and expansion of the Hadoop ecosystem
Strong demand for a "middle person"
– Someone who gives advice with a deep understanding of research domains, statistical analysis, and Hadoop-based systems
[Diagram: per-group Hadoop clusters (#1-#3) on VMs, one per research group, all working on research data]
Going Next
A new system will be installed in April 2018
– 2x CPU cores, 5x storage space
– Bare-metal, accelerating performance at every layer
– Supports both interclouds and hybrid clouds
Will continue to support Hadoop as well as Spark
– Cluster templates
– Build a user community
[Diagram: the supercomputer system at Hokkaido U. connects to regions (Tokyo, Osaka, Okinawa) and to cloud systems in other universities and public clouds, sharing cluster templates (Hadoop, Spark, …)]
Requirements
Run Hadoop on multiple clouds
– Academic clouds (community clouds)
• Hokkaido University Academic Cloud, ...
– Public clouds
• Amazon AWS, Microsoft Azure, Google Cloud, …
Offer the best choice for researchers (our users)
– Under multiple criteria
• Cost
• Performance (time constraints)
• Energy
• …
Our Solution
A Container-based Sizing Framework for Hadoop Clusters
– Docker-based
• Lightweight; migrates easily to other clouds
– Emulation (rather than simulation)
• Close to actual execution times on multiple clouds
– Output:
• Instance type
• Number of instances
• Hadoop configuration (*-site.xml files)
Architecture
[Diagram: the emulation engine interposes on the Docker runtime beneath applications (HPC, Big Data, …), collects CPU, memory, disk-I/O, and network-I/O metrics, and feeds the resulting run profiles, together with instance profiles (t2, m4, r3, c4) for public clouds, into a cost estimator]
Why Docker?

                   Virtual Machines                     OS Containers
Size               Large                                Small
Machine emulation  Complete                             Partial (shares OS kernel)
Launch time        Long                                 Short
Migration          Sometimes requires image conversion  Easy
Software           Xen, KVM, VMware                     Docker, rkt, …

[Diagram: VMs each stack app/lib/OS on a hypervisor; containers each stack app/lib on a shared OS]
Container Execution
Cluster Management
– Docker Swarm
– Multi-host (VXLAN-based) networking mode
Container
– Resources
• CPUs, memory, disk, and network I/O
– Regulation
• Docker run options, cgroups, and tc
– Monitoring
• Docker remote API and cgroups
Docker Image
"Hadoop all in the box"
– Hadoop
– Spark
– HiBench
The same image is used for master and slaves
Exports
– (Environment variables)
– File mounts
• *-site.xml files (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml)
– (Data volumes)
[Diagram: the "Hadoop all in the box" image (Hadoop, Spark, HiBench) with the four *-site.xml files volume-mounted into each container]
Resources

Resource            How                            Command
CPU: cores          Change CPU set                 docker run / cgroups
CPU: clock rate     Change quota & period          docker run / cgroups
Memory: size        Set memory limit               docker run / cgroups
Memory: OOM         Change out-of-memory handling  docker run / cgroups
Disk: IOPS          Throttle read/write IOPS       docker run / cgroups
Disk: bandwidth     Throttle read/write bytes/sec  docker run / cgroups
Network: IOPS       Throttle TX/RX IOPS            docker run / cgroups
Network: bandwidth  Throttle TX/RX bytes/sec       docker run / cgroups
Network: latency    Insert latency (> 1 ms)        tc
Freezer: freeze     Suspend/resume                 cgroups
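The `docker run` regulation rows in the table above can be scripted directly. A minimal sketch that assembles such a command (the flag names are standard Docker CLI options; the image name, device path, and limit values are illustrative assumptions):

```python
# Sketch: build a "docker run" command that caps a container's resources,
# mirroring the regulation rows in the table above. The image name,
# device path, and numeric limits are illustrative assumptions.

def throttled_run_cmd(image, cpus="0-3", cpu_quota=50000, mem="4g",
                      dev="/dev/sda", read_iops=500,
                      write_bps=50 * 1024 * 1024):
    return [
        "docker", "run", "-d",
        "--cpuset-cpus", cpus,                       # CPU cores (CPU set)
        "--cpu-quota", str(cpu_quota),               # clock-rate cap (quota; period defaults to 100000)
        "--memory", mem,                             # memory limit
        "--device-read-iops", f"{dev}:{read_iops}",  # disk read IOPS throttle
        "--device-write-bps", f"{dev}:{write_bps}",  # disk write bytes/sec throttle
        image,
    ]

cmd = throttled_run_cmd("hadoop-all-in-the-box")
print(" ".join(cmd))
```

Network latency, as the table notes, is injected separately with `tc` (e.g. a `netem` delay qdisc) rather than through `docker run` flags.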
Hadoop Configuration
Must be adjusted according to
– Instance type (CPU, memory, disk, and network)
– Number of instances
Targeting all parameters in *-site.xml
Dependent parameters
– (Instance type)
– YARN container size
– JVM heap size
– Map task size
– Reduce task size
Dependency chain: machine instance size → YARN container size → JVM heap size → map/reduce task size
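The dependency chain above can be read as a small derivation. A hedged sketch of it (the 0.8 heap ratio and the even per-vcore split are assumptions, roughly consistent with the per-instance values shown later in the deck):

```python
# Sketch: derive dependent Hadoop parameters from a machine instance size.
# The ratios (80% of RAM given to YARN, heap = 0.8 x container) are
# illustrative assumptions, not values taken from the original slides.

def derive_sizes(instance_mem_mb, vcores):
    yarn_mem = int(instance_mem_mb * 0.8)    # memory handed to the NodeManager
    container_mb = yarn_mem // vcores        # YARN container size per vcore
    heap_mb = int(container_mb * 0.8)        # JVM heap inside the container
    return {
        "yarn.nodemanager.resource.memory-mb": yarn_mem,
        "mapreduce.map.memory.mb": container_mb,     # map task size
        "mapreduce.reduce.memory.mb": container_mb,  # reduce task size
        "mapreduce.child.java.opts": f"-Xmx{heap_mb}m",
    }

# e.g. a medium-style instance: 12 GB RAM, 4 vcores
print(derive_sizes(instance_mem_mb=12 * 1024, vcores=4))
```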
Optimization
Multi-objective GAs
– Trading off cost and performance (time constraints)
– Other factors: energy, …
– Future: from multi-objective to many-objective (> 3)
Generates a Pareto-optimal front
Technique: non-dominated sorting
[Plot: non-dominated solutions forming a Pareto front in the Objective 1 vs. Objective 2 plane]
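The core of non-dominated sorting is the dominance test itself. A toy Pareto-front extraction (the point data is made up; both objectives are minimized):

```python
# Sketch: extract the Pareto-optimal front from (objective1, objective2)
# points using the dominance test at the heart of non-dominated sorting.
# The sample points are made-up values, not measured results.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(3, 60), (2, 90), (5, 40), (4, 55), (6, 80)]
print(pareto_front(pts))
```

Full NSGA-II ranking repeats this test to peel off successive fronts; the sketch shows only the first (Pareto-optimal) front.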
(Short) Summary
A Sizing Framework for Hadoop/Spark Clusters
– OS-container-based approach
– Combined with genetic algorithms
• Multi-objective optimization (cost & performance)
Future Work
– Docker Container Executor (DCE)
• DCE runs YARN containers inside Docker containers
• Designed to provide a custom environment for each application
• We believe DCE can also be used for slowing down and speeding up Hadoop tasks
Slow Down: Torturing Hadoop
Make stragglers deliberately
No intervention in Hadoop itself is required
[Diagram: a master with map tasks 1-5 and reduce tasks 1-4; one map task and one reduce task are throttled into stragglers]
Speeding Up: Accelerating Hadoop
Balance the resource usage of tasks on the same node
[Diagram: the same map/reduce layout as above, with the straggler tasks rebalanced]
MHCO: Multi-Objective Hadoop Configuration Optimization Using Steady-State NSGA-II
Introduction
◦ The increasing use of connected devices driven by the Internet of Things, together with data growth from scientific research, is leading to an exponential increase in data
◦ A large portion of these data is underutilized or underexploited
◦ Hadoop MapReduce is a very popular programming model for large-scale data analytics
Problem Definition I
◦ Objective 1 → Parameter Tuning for Minimizing Execution Time
– core-site.xml: configuration settings for Hadoop core, such as I/O settings
– hdfs-site.xml: configuration settings for HDFS daemons
– mapred-site.xml: configuration settings for MapReduce daemons
– yarn-site.xml: configuration settings for YARN daemons
◦ Hadoop provides tunable options that have a significant effect on application performance
◦ Practitioners and administrators often lack the expertise to tune them
◦ Appropriate parameter configuration is a key factor in Hadoop performance
Problem Definition II
◦ Objective 2 → Instance Type Selection for Minimizing Hadoop Cluster Deployment Cost
◦ Appropriate machine instance selection for the Hadoop cluster
– Machine instance types: small, medium, large, x-large (pay-per-use)
[Diagram: the user sends an application request to the service provider and receives the result; machine instances are billed pay-per-use]
Proposed Search-based Approach
ssNSGA-II
1. Performance optimization: Hadoop parameter tuning
2. Deployment cost optimization: cluster instance type selection
◦ Chromosome encoding can accommodate the dynamic nature of Hadoop across version changes
◦ The steady-state approach reduces the computational overhead of the generic GA approach
◦ Bi-objective optimization (execution time, cluster deployment cost)
Objective Function

min t(p), min c(p)

where p = [p1, p2, …, pm] is the list of configuration parameters and the instance type, t(p) is the execution time of the MapReduce job, and c(p) is the machine instance usage cost:

t(p) = twc
c(p) = (SP × NS) × t(p)

where twc is the workload execution time, SP is the instance price, and NS is the number of machine instances.

Assumptions
– The two objective functions are black-box functions
– The number of instances in the cluster is static

Instance type  Mem (GB) / CPU cores  Price per second (¥)
X-large        128 / 40              0.0160
Large          30 / 10               0.0039
Medium         12 / 4                0.0016
Small          3 / 1                 0.0004
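With the price table above, c(p) is a one-line computation once t(p) is measured. A sketch (prices copied from the table; the example cluster size and runtime are made up):

```python
# Sketch: evaluate c(p) = (SP * NS) * t(p) with the per-second yen
# prices from the table above. The 6-node / 100-second example is
# an illustrative assumption, not a measured result.

PRICE_PER_SEC = {"small": 0.0004, "medium": 0.0016,
                 "large": 0.0039, "x-large": 0.0160}

def deployment_cost(instance_type, num_instances, exec_time_sec):
    return PRICE_PER_SEC[instance_type] * num_instances * exec_time_sec

# e.g. a 6-node medium cluster running for 100 seconds: ~0.96 yen
print(deployment_cost("medium", 6, 100))
```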
Parameter Grouping
I. HDFS and MapReduce parameters (17)
II. YARN parameters (6)
III. YARN-related MapReduce parameters (7)
30 parameters in total. The YARN-related values follow the machine instance type specification (CPU, memory); the other value ranges are taken from previous research.
Group I Parameter Values

Parameter Name                                 Value Range
dfs.namenode.handler.count                     10, 20
dfs.datanode.handler.count                     10, 20
dfs.blocksize                                  134217728, 268435456
mapreduce.map.output.compress                  true, false
mapreduce.job.jvm.numtasks                     1 (limited), -1 (unlimited)
mapreduce.map.sort.spill.percent               0.8, 0.9
mapreduce.reduce.shuffle.input.buffer.percent  0.7, 0.8
mapreduce.reduce.shuffle.memory.limit.percent  0.25, 0.5
mapreduce.reduce.shuffle.merge.percent         0.66, 0.9
mapreduce.reduce.input.buffer.percent          0.0, 0.5
dfs.datanode.max.transfer.threads              4096, 5120, 6144, 7168
dfs.datanode.balance.bandwidthPerSec           1048576, 2097152, 194304, 8388608
mapreduce.task.io.sort.factor                  10, 20, 30, 40
mapreduce.task.io.sort.mb                      100, 200, 300, 400
mapreduce.tasktracker.http.threads             40, 45, 50, 60
mapreduce.reduce.shuffle.parallelcopies        5, 10, 15, 20
mapreduce.reduce.merge.inmem.threshold         1000, 1500, 2000, 2500
Group II and III Parameter Values

Parameter                                 x-large  large  medium  small
yarn.nodemanager.resource.memory-mb       102400   26624  10240   3072
yarn.nodemanager.resource.cpu-vcores      39       9      3       1
yarn.scheduler.maximum-allocation-mb      102400   26624  10240   3072
yarn.scheduler.minimum-allocation-mb      5120     2048   2048    1024
yarn.scheduler.maximum-allocation-vcores  39       9      3       1
yarn.scheduler.minimum-allocation-vcores  10       3      1       1
mapreduce.map.memory.mb                   5120     2048   2048    1024
mapreduce.reduce.memory.mb                10240    4096   2048    1024
mapreduce.map.cpu.vcores                  10       3      1       1
mapreduce.reduce.cpu.vcores               10       3      1       1
mapreduce.child.java.opts                 8192     3277   1638    819
yarn.app.mapreduce.am.resource.mb         10240    4096   2048    1024
yarn.app.mapreduce.am.command-opts        8192     3277   1638    819
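A chosen parameter assignment is ultimately written back as Hadoop *-site.xml files in the standard `<configuration>`/`<property>` format. A minimal generator sketch (the property subset shown is illustrative):

```python
# Sketch: render a parameter assignment as a Hadoop *-site.xml document
# (<configuration> of <property> name/value pairs). The property subset
# used in the example is illustrative.
import xml.etree.ElementTree as ET

def to_site_xml(params):
    conf = ET.Element("configuration")
    for name, value in params.items():
        prop = ET.SubElement(conf, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = str(value)
    return ET.tostring(conf, encoding="unicode")

print(to_site_xml({
    "yarn.nodemanager.resource.memory-mb": 10240,   # medium instance values
    "yarn.scheduler.minimum-allocation-mb": 2048,
}))
```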
Chromosome Encoding
A binary chromosome encodes the HDFS and MapReduce (Group I) parameters together with the machine instance type; each parameter value is represented by a single bit or two consecutive bits. The YARN and YARN-related MapReduce parameters are dependent parameters, derived from the selected instance type (e.g., small).
Chromosome length = 26 bits (10 two-valued parameters × 1 bit + 7 four-valued parameters × 2 bits + 2 bits for the instance type)
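The 26-bit string can be decoded positionally. A toy sketch (the layout of 10 one-bit parameters, then 7 two-bit parameters, then 2 instance-type bits follows the grouping above, but the concrete bit ordering is an assumption for illustration):

```python
# Sketch: decode a 26-bit chromosome into parameter value indices and an
# instance type. The layout (10 x 1-bit + 7 x 2-bit + 2 instance bits)
# follows the deck; the exact bit ordering is an illustrative assumption.

INSTANCE_TYPES = ["small", "medium", "large", "x-large"]

def decode(bits):
    assert len(bits) == 26
    one_bit = [int(b) for b in bits[:10]]           # 10 two-valued parameters
    two_bit = [int(bits[10 + 2 * i: 12 + 2 * i], 2)
               for i in range(7)]                   # 7 four-valued parameters
    instance = INSTANCE_TYPES[int(bits[24:], 2)]    # last 2 bits
    return one_bit, two_bit, instance

print(decode("1010101010" + "00011011000110" + "10"))
```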
System Architecture
[Diagram: the ssNSGA-II optimizer submits the workload to a Hadoop cluster (one ResourceManager, several NodeManagers), measures execution time and cluster deployment cost, and outputs a list of optimal settings]
ssNSGA-II Based Hadoop Configuration Optimization
1. Generate n sample configuration chromosomes C1, C2, …, Cn
2. Select 2 random parents P1, P2
3. Perform 2-point crossover on P1, P2 (probability Pc = 1)
4. Generate offspring Coffspring
5. Perform mutation on Coffspring (probability Pm = 0.1)
6. Calculate the fitness of Coffspring
7. Update population P by non-dominated sorting
8. Repeat from step 2 until the stopping condition is met, then output the Pareto solution list, Copt
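The steps above can be sketched as a steady-state loop over a bit-string population. In this toy version the fitness function is a cheap stand-in for actually running the workload, and the one-in-one-out survivor rule is a simplification of full NSGA-II ranking; Pc = 1 and Pm = 0.1 as stated:

```python
# Sketch: a steady-state NSGA-II-style loop over 26-bit chromosomes.
# evaluate() is a stand-in for measuring (execution time, cost) with a
# real Hadoop run; the one-in-one-out replacement is a simplification.
import random

random.seed(0)
L, N, EVALS = 26, 10, 50

def evaluate(c):                      # stand-in for a real workload run
    return (c[:13].count("1"), c[13:].count("1"))

def dominates(a, b):                  # minimize both objectives
    return all(x <= y for x, y in zip(a, b)) and a != b

def crossover(p1, p2):                # 2-point crossover, Pc = 1
    i, j = sorted(random.sample(range(L), 2))
    return p1[:i] + p2[i:j] + p1[j:]

def mutate(c, pm=0.1):                # bit-flip mutation, Pm = 0.1
    return "".join(b if random.random() > pm else str(1 - int(b)) for b in c)

pop = ["".join(random.choice("01") for _ in range(L)) for _ in range(N)]
for _ in range(EVALS):
    p1, p2 = random.sample(pop, 2)
    pop.append(mutate(crossover(p1, p2)))
    # remove one dominated individual to keep the population size steady
    fits = {c: evaluate(c) for c in pop}
    dominated = [c for c in pop if any(dominates(fits[q], fits[c]) for q in pop)]
    pop.remove(dominated[0] if dominated else random.choice(pop))

pareto = [c for c in pop
          if not any(dominates(evaluate(q), evaluate(c)) for q in pop)]
print(len(pop), len(pareto))
```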
Experiment Benchmark

Type             Workload                   Input Size                     Purpose
Micro benchmark  Sort, TeraSort, WordCount  2.98023 GB                     Measure cluster performance (intrinsic behavior of the cluster)
Web search       PageRank                   5000 pages with 3 iterations   Measure execution performance for real-world big data applications

Benchmark used: HiBench benchmark suite version 4.0,
https://github.com/intel-hadoop/HiBench/releases
Experiment Environment

Setup Information  Specification
CPU                Intel Xeon E7-8870 (40 cores)
Memory             128 GB RAM
Storage            400 TB
Hadoop version     2.7.1
JDK                1.8.0

6-node cluster: 1 NameNode and 5 DataNodes, accessed over the public network by the ssNSGA-II optimizer
Experimental Results
[Plots: sort and terasort workload results, cost (¥) vs. execution time (sec), one series per instance type (small, medium, large, x-large)]
Population size = 30, number of evaluations = 180, number of objectives = 2, mutation probability = 0.1, crossover probability = 1.0
* These workloads show significant effects from the HDFS and MapReduce parameters
Experimental Results Cont'd
[Plots: pagerank and wordcount workload results, cost (¥) vs. execution time (sec), one series per instance type (small, medium, large, x-large)]
Population size = 30, number of evaluations = 180, number of objectives = 2, mutation probability = 0.1, crossover probability = 1.0
* These workloads depend more on the YARN and YARN-related parameters than on the HDFS and MapReduce parameters
Conclusion & Continuing Work
◦ Offline Hadoop configuration optimization using the ssNSGA-II based search strategy
◦ An x-large instance-type cluster is not a suitable option for the current workloads and input data size
◦ Large or medium instance-type clusters show the best balance for our objective functions
◦ Continuing work: dynamic cluster resizing through containers and online configuration optimization of MapReduce workloads for scientific workflow applications, for effective Big Data processing
Boost PC performance: How more available memory can improve productivity
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of Brazil
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your Business
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 

A Container-based Sizing Framework for Apache Hadoop/Spark Clusters

  • 7. Going Next A new system will be installed in April 2018 – x2 CPU cores, x5 storage space – Bare-metal, accelerating performance at every layer – Supports both interclouds and hybrid clouds Still supports Hadoop as well as Spark – Cluster templates – Build a user community (Figure: the supercomputer system at Hokkaido U., regions in Tokyo, Osaka, and Okinawa, and cloud systems in other universities and public clouds, linked by cluster templates for Hadoop, Spark, …)
  • 8. Requirements Run Hadoop on multiple clouds – Academic clouds (community clouds) • Hokkaido University Academic Cloud, ... – Public clouds • Amazon AWS, Microsoft Azure, Google Cloud, … Offer the best choice for researchers (our users) – Under multiple criteria • Cost • Performance (time constraints) • Energy …
  • 9. Our Solution A Container-based Sizing Framework for Hadoop Clusters – Docker-based • Light-weight, easily migrate to other clouds – Emulation (rather than simulation) • Close to actual execution times on multiple clouds – Output: • Instance type • Number of instances • Hadoop configuration (*-site.xml files) 8
  • 10. Architecture (Figure: the emulation engine interposes on the Docker runtime, collecting CPU, memory, disk I/O, and network I/O metrics from applications (HPC, big data, …); run profiles and instance profiles (t2, m4, r3, c4) feed a cost estimator for public clouds)
  • 11. Why Docker? Virtual machines vs. OS containers – Size: large vs. small; Machine emulation: complete vs. partial (shared OS kernel); Launch time: large vs. small; Migration: sometimes requires image conversion vs. easy; Software: Xen, KVM, VMware vs. Docker, rkt, …
  • 12. Container Execution Cluster Management – Docker Swarm – Multi-host (VXLAN-based) networking mode Container – Resources • CPUs, memory, disk, and network I/O – Regulation • Docker run options, cgroups, and tc – Monitoring • Docker remote API and cgroups 11
  • 13. Docker Image “Hadoop all in the box” – Hadoop – Spark – HiBench The same image for master/slaves Exports – (Environment variables) – File mounts • *-site.xml files (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) as volume mounts – (Data volumes)
  • 14. Resources (resource – how – command): CPU cores – change CPU set – docker run/cgroups; CPU clock rate – change quota & period – docker run/cgroups; Memory size – set memory limit – docker run/cgroups; Out-of-memory (OOM) – change out-of-memory handling – docker run/cgroups; Disk IOPS – throttle read/write IOPS – docker run/cgroups; Disk bandwidth – throttle read/write bytes/sec – docker run/cgroups; Network IOPS – throttle TX/RX IOPS – docker run/cgroups; Network bandwidth – throttle TX/RX bytes/sec – docker run/cgroups; Network latency – insert latency (> 1 ms) – tc; Freezer – suspend/resume – cgroups
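The regulation column of this table can be sketched as a small command builder. This is a minimal illustration of how the docker-run/cgroups knobs listed on the slide map onto real Docker CLI flags; the image name and device path are placeholders, and the network latency row would be handled separately with `tc qdisc ... netem` on the host.

```python
def docker_run_command(image, cpuset, cpu_quota_us, cpu_period_us,
                       mem_bytes, read_iops=None, device="/dev/sda"):
    """Build a `docker run` argv that caps CPU set, CPU clock share,
    memory size, and disk IOPS via the corresponding Docker flags."""
    cmd = ["docker", "run", "-d",
           "--cpuset-cpus", cpuset,             # CPU cores: change CPU set
           "--cpu-quota", str(cpu_quota_us),    # clock rate: quota ...
           "--cpu-period", str(cpu_period_us),  # ... and period (microseconds)
           "--memory", str(mem_bytes)]          # memory size limit
    if read_iops is not None:                   # disk IOPS throttling
        cmd += ["--device-read-iops", f"{device}:{read_iops}"]
    return cmd + [image]

# e.g. emulate a "small" instance: 1 core, full clock, 3 GB of memory
cmd = docker_run_command("hadoop-all-in-the-box", "0", 100000, 100000, 3 * 2**30)
```

The quota/period pair is what lets the framework emulate a slower clock: a quota of 50000 over a period of 100000 gives the container half a core's worth of CPU time.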
  • 15. Hadoop Configuration Must be adjusted according to – Instance type (CPU, memory, disk, and network) – Number of instances Targeting all parameters in *-site.xml Dependent parameters – (Instance type) – YARN container size – JVM heap size – Map task size – Reduce task size (Dependency chain: machine instance size → YARN container size → JVM heap size → map/reduce task size)
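The container-size → JVM-heap link in this dependency chain can be sketched numerically. The 0.8 heap-to-container ratio below is inferred from the Group II/III parameter table later in the deck (a 4096 MB container pairs with a 3277 MB heap, 2048 with 1638, 1024 with 819); the slide itself does not state the ratio, so treat it as an assumption.

```python
def jvm_heap_mb(container_mb, heap_ratio=0.8):
    """JVM heap (mapreduce.child.java.opts / yarn.app.mapreduce.am.command-opts)
    derived from the YARN container size, leaving headroom for non-heap memory."""
    return round(heap_ratio * container_mb)

# reproduces the x-large / large / medium / small heap columns in the deck
heaps = [jvm_heap_mb(mb) for mb in (10240, 4096, 2048, 1024)]
```

The same pattern extends upward: the NodeManager memory is the instance memory minus an OS reserve, and map/reduce task sizes are multiples of the scheduler's minimum allocation.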
  • 16. Optimization Multi-objective GAs – Trading off cost and performance (time constraints) – Other factors: energy, … – Future: from multi-objective to many-objective (> 3) Generate the “Pareto-optimal front” Technique: non-dominated sorting (Figure: Pareto front of candidate solutions plotted against objective 1 and objective 2)
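Non-dominated sorting rests on the dominance test sketched below (both objectives minimized); the first non-dominated front is the Pareto-optimal set plotted on the slide.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization in all objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points that no other point dominates (the first front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (time, cost) pairs: cheap-but-slow and fast-but-dear solutions both survive,
# while solutions beaten on both axes are filtered out
front = pareto_front([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)])
```

Full non-dominated sorting repeats this filter, peeling off front after front, but the optimizer's output is exactly this first front.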
  • 17. (Short) Summary A Sizing Framework for Hadoop/Spark Clusters – OS-container-based approach – Combined with genetic algorithms • Multi-objective optimization (cost & perf.) Future Work – Docker Container Executor (DCE) • DCE runs YARN containers inside Docker containers • Designed to provide a custom environment for each app. • We believe DCE can also be used to slow down and speed up Hadoop tasks
  • 18. Slow Down – Torturing Hadoop Make stragglers No intervention is required (Figure: a master with map tasks 1–5 and reduce tasks 1–4; one map task and one reduce task are marked as stragglers)
  • 19. Speeding Up – Accelerating Hadoop Balance resource usage of tasks on the same node (Figure: the same map/reduce task layout, with the straggler tasks rebalanced)
  • 20. MHCO: Multi-Objective Hadoop Configuration Optimization Using Steady-State NSGA-II
  • 21. Introduction BIG DATA ◦ The increasing use of connected devices in the Internet of Things and data growth from scientific research will lead to an exponential increase in data ◦ A portion of these data is underutilized or underexploited ◦ Hadoop MapReduce is a very popular programming model for large-scale data analytics
  • 22. Problem Definition I ◦ Objective 1 – Parameter tuning for minimizing execution time ◦ Configuration files: core-site.xml (settings for Hadoop core, such as I/O settings), hdfs-site.xml (settings for the HDFS daemons), mapred-site.xml (settings for the MapReduce daemons), yarn-site.xml (settings for the YARN daemons) ◦ Hadoop provides tunable options that have a significant effect on application performance ◦ Practitioners and administrators lack the expertise to tune them ◦ Appropriate parameter configuration is a key factor in Hadoop
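As a concrete illustration of the tuning surface, each *-site.xml file sets one parameter per property element. The fragment below shows two of the Group I knobs explored later in the deck, each at one of its candidate values; it is a sketch of the file format, not a recommended configuration.

```xml
<!-- mapred-site.xml: two Group I parameters, each at one candidate value -->
<configuration>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>200</value>  <!-- buffer (MB) used while sorting map output -->
  </property>
  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value> <!-- compress intermediate map output -->
  </property>
</configuration>
```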
  • 23. Problem Definition II ◦ Appropriate machine instance selection for the Hadoop cluster ◦ Objective 2 – Instance type selection for minimizing Hadoop cluster deployment cost (Figure: a user sends a request to the service provider, the application runs, and the result is returned; machine instance types – small, medium, large, x-large – are billed pay-per-use)
  • 24. Proposed Search-based Approach ssNSGA-II: 1. Performance optimization – Hadoop parameter tuning 2. Deployment cost optimization – cluster instance type selection ◦ The chromosome encoding can handle the dynamic nature of Hadoop across version changes ◦ Uses a steady-state approach to reduce the computation overhead of the generational GA approach ◦ Bi-objective optimization (execution time, cluster deployment cost)
  • 25. Objective Function min t(p), min c(p), where p = [p1, p2, …, pm] is the configuration parameter list and instance type, t(p) is the execution time of the MR job, and c(p) is the machine instance usage cost. t(p) = twc and c(p) = (SP × NS) × t(p), where twc = workload execution time, SP = instance price, and NS = number of machine instances. Assumptions: the two objective functions are black-box functions; the number of instances in the cluster is static. Instance types – X-large: 128 GB / 40 cores, 0.0160 yen per second; Large: 30 GB / 10 cores, 0.0039; Medium: 12 GB / 4 cores, 0.0016; Small: 3 GB / 1 core, 0.0004
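Plugging the slide's price table into c(p) gives a one-line cost estimator; the per-second prices are taken directly from the table above.

```python
PRICE_PER_SEC = {  # yen per second, from the instance table on this slide
    "x-large": 0.0160, "large": 0.0039, "medium": 0.0016, "small": 0.0004,
}

def deployment_cost(instance_type, num_instances, exec_time_sec):
    """c(p) = (SP * NS) * t(p): instance price x cluster size x workload time."""
    return PRICE_PER_SEC[instance_type] * num_instances * exec_time_sec

# e.g. a 6-node large cluster running a 100-second workload costs 2.34 yen
cost = deployment_cost("large", 6, 100)
```

Since NS is fixed per run, the two objectives pull against each other only through the instance type and t(p), which is what makes the trade-off a genuine Pareto front rather than a single optimum.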
  • 26. Parameter Grouping I. HDFS and MapReduce parameters (17, encoded in the chromosome) II. YARN parameters (6) III. YARN-related MapReduce parameters (7) – 30 parameters in total; Groups II and III follow the machine instance type specification (CPU, memory), and the parameter choices reference previous research
  • 27. Group I Parameter Values Two-value parameters: dfs.namenode.handler.count – 10, 20; dfs.datanode.handler.count – 10, 20; dfs.blocksize – 134217728, 268435456; mapreduce.map.output.compress – true, false; mapreduce.job.jvm.numtasks – 1 (limited), -1 (unlimited); mapreduce.map.sort.spill.percent – 0.8, 0.9; mapreduce.reduce.shuffle.input.buffer.percent – 0.7, 0.8; mapreduce.reduce.shuffle.memory.limit.percent – 0.25, 0.5; mapreduce.reduce.shuffle.merge.percent – 0.66, 0.9; mapreduce.reduce.input.buffer.percent – 0.0, 0.5. Four-value parameters: dfs.datanode.max.transfer.threads – 4096, 5120, 6144, 7168; dfs.datanode.balance.bandwidthPerSec – 1048576, 2097152, 4194304, 8388608; mapreduce.task.io.sort.factor – 10, 20, 30, 40; mapreduce.task.io.sort.mb – 100, 200, 300, 400; mapreduce.tasktracker.http.threads – 40, 45, 50, 60; mapreduce.reduce.shuffle.parallelcopies – 5, 10, 15, 20; mapreduce.reduce.merge.inmem.threshold – 1000, 1500, 2000, 2500
  • 28. Group II and III Parameter Values (x-large / large / medium / small): yarn.nodemanager.resource.memory-mb – 102400 / 26624 / 10240 / 3072; yarn.nodemanager.resource.cpu-vcores – 39 / 9 / 3 / 1; yarn.scheduler.maximum-allocation-mb – 102400 / 26624 / 10240 / 3072; yarn.scheduler.minimum-allocation-mb – 5120 / 2048 / 2048 / 1024; yarn.scheduler.maximum-allocation-vcores – 39 / 9 / 3 / 1; yarn.scheduler.minimum-allocation-vcores – 10 / 3 / 1 / 1; mapreduce.map.memory.mb – 5120 / 2048 / 2048 / 1024; mapreduce.reduce.memory.mb – 10240 / 4096 / 2048 / 1024; mapreduce.map.cpu.vcores – 10 / 3 / 1 / 1; mapreduce.reduce.cpu.vcores – 10 / 3 / 1 / 1; mapreduce.child.java.opts – 8192 / 3277 / 1638 / 819; yarn.app.mapreduce.am.resource-mb – 10240 / 4096 / 2048 / 1024; yarn.app.mapreduce.am.command-opts – 8192 / 3277 / 1638 / 819
  • 29. Chromosome Encoding A binary chromosome encodes the HDFS and MapReduce parameters plus the machine instance type: a single bit or two consecutive bits represent a parameter value or the instance type. The YARN parameters and YARN-related MapReduce parameters are dependent parameters, set from the encoded instance type (e.g., small). Chromosome length = 26 bits
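Under this layout, the 26 bits break down as 10 one-bit Group I parameters + 7 two-bit Group I parameters (14 bits) + 2 bits for the instance type, and a decoder might look like the sketch below. The bit ordering and the 2-bit instance-type code assignment are our assumptions; the deck does not spell them out.

```python
SINGLE_BIT = 10  # Group I parameters with two candidate values
TWO_BIT = 7      # Group I parameters with four candidate values
INSTANCE_TYPES = ["small", "medium", "large", "x-large"]  # assumed 2-bit codes

def decode(bits):
    """Turn a 26-bit chromosome into 17 parameter value indices
    plus the machine instance type."""
    assert len(bits) == SINGLE_BIT + 2 * TWO_BIT + 2  # 10 + 14 + 2 = 26
    idx = list(bits[:SINGLE_BIT])                      # value index 0 or 1
    pos = SINGLE_BIT
    for _ in range(TWO_BIT):
        idx.append(2 * bits[pos] + bits[pos + 1])      # value index 0..3
        pos += 2
    instance = INSTANCE_TYPES[2 * bits[pos] + bits[pos + 1]]
    return idx, instance
```

Each value index then selects one entry from the candidate lists on the Group I slide, and the instance type fixes all 13 dependent Group II/III parameters.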
  • 31. ssNSGA-II Based Hadoop Configuration Optimization 1. Generate n sample configuration chromosomes C1, C2, …, Cn 2. Select 2 random parents P1, P2 3. Perform 2-point crossover on P1, P2 (probability Pc = 1) to generate the offspring Coffspring 4. Perform mutation on Coffspring (probability Pm = 0.1) 5. Calculate the fitness of Coffspring 6. Update population P 7. Perform non-dominated sorting and update population P 8. If the repeat condition holds, go back to step 2; otherwise output the Pareto solution list, Copt
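The loop on this slide can be sketched end to end. This is a deliberately simplified steady-state variant: the replacement step here swaps out any member the offspring dominates, whereas full ssNSGA-II re-ranks the combined population by non-dominated sorting and crowding distance. The fitness function `evaluate` is a caller-supplied black box, standing in for an actual benchmark run.

```python
import random

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def two_point_crossover(p1, p2, rng):
    i, j = sorted(rng.sample(range(len(p1) + 1), 2))
    return p1[:i] + p2[i:j] + p1[j:]

def mutate(bits, pm, rng):
    return [b ^ 1 if rng.random() < pm else b for b in bits]

def steady_state_moga(evaluate, n_bits, pop_size, evaluations, pm=0.1, seed=0):
    """Steady state: one offspring per iteration, inserted in place of a
    dominated member; returns the population and its non-dominated fitnesses."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fits = [evaluate(c) for c in pop]
    for _ in range(evaluations):
        p1, p2 = rng.sample(range(pop_size), 2)           # 2 random parents
        child = mutate(two_point_crossover(pop[p1], pop[p2], rng), pm, rng)
        cf = evaluate(child)
        for k in range(pop_size):                         # replace a dominated member
            if dominates(cf, fits[k]):
                pop[k], fits[k] = child, cf
                break
    front = [f for f in fits if not any(dominates(g, f) for g in fits)]
    return pop, front
```

The steady-state design is what keeps the evaluation budget small: each iteration costs exactly one fitness evaluation, which matters when a single evaluation is a full Hadoop workload run.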
  • 32. Experiment Benchmark Micro-benchmarks – Sort, TeraSort, WordCount (input size 2.98023 GB) – measure cluster performance (intrinsic behavior of the cluster); Web search – PageRank (5000 pages, 3 iterations) – measures execution performance for real-world big data applications. Benchmark used: HiBench benchmark suite version 4.0, https://github.com/intel-hadoop/HiBench/releases
  • 33. Experiment Environment CPU: Intel Xeon E7-8870 (40 cores); Memory: 128 GB RAM; Storage: 400 TB; Hadoop version: 2.7.1; JDK: 1.8.0. 6-node cluster: 1 NameNode, 5 DataNodes; the ssNSGA-II optimization drives the cluster, with users on a public network
  • 34. Experimental Results (Plots: cost (¥) vs. execution time (sec) for the sort and TeraSort workloads, with Pareto points for the small, medium, large, and x-large instance types) Population size = 30, number of evaluations = 180, number of objectives = 2, mutation probability = 0.1, crossover probability = 1.0. * Significant effects from the HDFS and MapReduce parameters
  • 35. Experimental Results Cont’d (Plots: cost (¥) vs. execution time (sec) for the PageRank and WordCount workloads, with Pareto points for the small, medium, large, and x-large instance types) * These workloads depend on the YARN and YARN-related parameters more than on the HDFS and MapReduce parameters. Population size = 30, number of evaluations = 180, number of objectives = 2, mutation probability = 0.1, crossover probability = 1.0
  • 36. Conclusion & Continuing Work ◦ Offline Hadoop configuration optimization using an ssNSGA-II based search strategy ◦ An x-large instance type cluster is not a suitable option for the current workloads and input data size ◦ Large or medium instance type clusters show the best balance for our objective functions ◦ Continuing work – dynamic cluster resizing through containers and online configuration optimization of M/R workloads for scientific workflow applications, for effective big data processing

Editor's Notes

  1. Tell about the configuration files a little. The slaves file contains a list of hosts, one per line, that host the DataNode and TaskTracker servers. The masters file contains a list of hosts, one per line, that host the secondary NameNode servers; it informs the Hadoop daemon of the secondary NameNode's location. core-site.xml informs the Hadoop daemon where the NameNode runs in the cluster; it contains the configuration settings for Hadoop Core, such as I/O settings common to HDFS and MapReduce. hdfs-site.xml contains the configuration settings for the HDFS daemons (the NameNode, the secondary NameNode, and the DataNodes); here we can specify the default block replication and permission checking on HDFS. mapred-site.xml contains the configuration settings for the MapReduce daemons (the JobTracker and the TaskTrackers). yarn-site.xml is for the ResourceManager and NodeManager.
  2. The service provider provides services on a pay-per-use basis. Instance prices differ according to the instance type.
  3. In business and health, big data allows us to leverage all types of data to gain insights and add value.
  4. In order to optimize these two objectives, we need to select sensitive parameters; a total of 30 parameters are selected. 17 parameters are for general Hadoop configuration optimization for execution performance, and these 17 parameters are encoded. The other 13 parameters are dependent parameters set according to the encoded machine instance type, for dynamic machine instance type optimization during execution.
  5. Group 2 and Group 3 parameters differ according to the instance type; the table shows the associated parameter values for the various machine instance types.
  6. Why is a steady-state algorithm selected? Nebro [1] states that ssNSGA-II outperforms generational NSGA-II in terms of quality, convergence speed, and computing time. Specify the cloud in this case.
  7. Description of the workloads (what kind of tasks they perform). Other benchmarks only include workloads for measuring cluster performance. HiBench, developed by Intel, is a realistic and comprehensive benchmark suite for Hadoop that properly evaluates and characterizes the Hadoop framework: dynamic input size changes, evaluation of both hardware and software.
  8. Specify that the genetic algorithm runs on the NameNode of the Hadoop cluster.
  9. Why is the large instance type's execution time shorter than the x-large instance type's? How long does it take to run the experiment for each of these workloads? Because of the costly execution of Hadoop MapReduce workloads in the experiments, we could only obtain intermediate optimized solution results; for each workload, 150 evaluations take 1 or 2 days.
  10. Shows overlapping points; there is a big difference only for a single machine type, so further experiments are necessary.
  11. In business and health, big data allows us to leverage all types of data to gain insights and add value.