3. Information Initiative Center, Hokkaido University
Founded in 1962 as a national supercomputing center
A member of
– HPCI (High Performance Computing Infrastructure) - 12 institutes
– JHPCN (Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructure) - 8 institutes
University R&D center for supercomputing, cloud computing, networking, and cyber security
Operates twin HPC systems
– Supercomputer (172 TFLOPS) and Academic Cloud System (43 TFLOPS)
5. Supporting “Big Data”
“Big Data” cluster package
• Hadoop, Hive, Mahout, and R
• MPI, OpenMP, and Torque
– Automatic deployment of VM-based clusters
– Custom scheduling policy
• Spreads I/O across multiple disks
[Figure: a VM-based Hadoop cluster (VMs #1–#4) in which each VM's virtual disk is placed on a different storage node (Storage #1–#4); the Big Data package bundles Hadoop, Hive, Mahout, and R.]
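The custom scheduling policy can be sketched as a simple round-robin placement of each VM's virtual disk onto a different storage node (a minimal illustration under assumed names; this is not the center's actual scheduler):

```python
def place_virtual_disks(vms, storage_nodes):
    """Round-robin each VM's virtual disk onto a storage node so
    that Hadoop I/O is spread across all available disks."""
    placement = {}
    for i, vm in enumerate(vms):
        placement[vm] = storage_nodes[i % len(storage_nodes)]
    return placement

# With four VMs and four storage nodes, every disk lands on its own node.
placement = place_virtual_disks(
    ["vm1", "vm2", "vm3", "vm4"],
    ["storage1", "storage2", "storage3", "storage4"],
)
```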
6. Lessons Learned (So Far)
No single shared Hadoop (a little like silos)
– A Hadoop instance for each group of users
Version problem
– Upgrades and expansion of the Hadoop ecosystem
Strong demand for a “middle person”
– Gives advice with a deep understanding of research domains, statistical analysis, and Hadoop-based systems
[Figure: three separate VM-based Hadoop instances (Hadoop #1–#3), one per research group (Research Group #1–#3), each holding its own research data.]
7. Next Steps
A new system will be installed in April 2018
– 2× CPU cores, 5× storage space
– Bare-metal, accelerating performance at every layer
– Supports both interclouds and hybrid clouds
Will continue to support Hadoop as well as Spark
– Cluster templates
– Building a user community
[Figure: the Hokkaido U. supercomputer system connected to regional sites (Tokyo, Osaka, Okinawa) and to cloud systems in other universities and public clouds, with cluster templates (Hadoop, Spark, …) deployable across all of them.]
8. Requirements
Run Hadoop on multiple clouds
– Academic clouds (community clouds)
• Hokkaido University Academic Cloud, ...
– Public clouds
• Amazon AWS, Microsoft Azure, Google Cloud, …
Offer the best choice for researchers (our users)
– Under multiple criteria
• Cost
• Performance (time constraints)
• Energy
…
9. Our Solution
A container-based sizing framework for Hadoop clusters
– Docker-based
• Light-weight, easy to migrate to other clouds
– Emulation (rather than simulation)
• Close to actual execution times on multiple clouds
– Output:
• Instance type
• Number of instances
• Hadoop configuration (*-site.xml files)
10. Architecture
[Figure: the emulation engine interposes on the Docker runtime while applications (HPC, Big Data, …) run, regulating CPU, memory, disk I/O, and network I/O; it collects metrics, runs profiles built from instance profiles (t2, m4, c4, r3), and feeds a cost estimator for public clouds.]
11. Why Docker?
                    Virtual Machines             OS Containers
Size                Large                        Small
Machine emulation   Complete                     Partial (shares the OS kernel)
Launch time         Long                         Short
Migration           Sometimes requires image     Easy
                    conversion
Software            Xen, KVM, VMware             Docker, rkt, …
[Figure: each VM carries its own OS on top of a hypervisor, while containers hold only the app and its libraries and share a single OS kernel.]
12. Container Execution
Cluster Management
– Docker Swarm
– Multi-host (VXLAN-based) networking mode
Container
– Resources
• CPUs, memory, disk, and network I/O
– Regulation
• Docker run options, cgroups, and tc
– Monitoring
• Docker remote API and cgroups
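The regulation step can be illustrated by assembling a `docker run` command line that caps a container's resources (a minimal sketch; the image name is a placeholder, and only widely documented flags such as `--cpus`, `--memory`, and `--blkio-weight` are used; network shaping via `tc` is omitted):

```python
def build_docker_run(image, cpus, memory_mb, blkio_weight=500):
    """Build a `docker run` argv that regulates CPU, memory, and
    (relative) block I/O through Docker's cgroups-backed options."""
    return [
        "docker", "run", "-d",
        "--cpus", str(cpus),                   # CPU limit (cgroups CPU quota)
        "--memory", f"{memory_mb}m",           # memory limit (cgroups memory)
        "--blkio-weight", str(blkio_weight),   # relative disk I/O weight
        image,
    ]

# Emulate a "medium" instance: 4 CPUs and 12 GB of memory.
cmd = build_docker_run("hadoop-all-in-the-box", cpus=4, memory_mb=12288)
```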
13. Docker Image
“Hadoop all in the box”
– Hadoop
– Spark
– HiBench
The same image for master/slaves
Exports
– (Environment variables)
– File mounts
• *-site.xml files
– (Data volumes)
[Figure: all containers share the “Hadoop all in the box” image (Hadoop, Spark, HiBench); core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml are supplied via volume mounts.]
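Since the framework hands each container its configuration as `*-site.xml` files, generating one can be sketched as follows (an illustration; the property names are standard Hadoop keys, but the values are placeholders):

```python
import xml.etree.ElementTree as ET

def render_site_xml(properties):
    """Render a Hadoop *-site.xml document from a dict of properties."""
    root = ET.Element("configuration")
    for name, value in properties.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = str(value)
    return ET.tostring(root, encoding="unicode")

# A fragment of yarn-site.xml for a 12 GB NodeManager.
xml_text = render_site_xml({
    "yarn.nodemanager.resource.memory-mb": 12288,
    "yarn.scheduler.maximum-allocation-mb": 12288,
})
```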
15. Hadoop Configuration
Must be adjusted according to
– Instance type (CPU, memory, disk, and network)
– Number of instances
Targets all parameters in the *-site.xml files
Dependent parameters
– (Instance type)
– YARN container size
– JVM heap size
– Map task size
– Reduce task size
[Figure: a dependency chain — the machine instance size determines the YARN container size, which determines the JVM heap size, which determines the map/reduce task sizes.]
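The dependency chain can be sketched as a simple derivation (an illustration only; the one-container-per-core split and the 0.8 heap ratio are common rules of thumb, not values from the deck):

```python
def derive_hadoop_params(instance_mem_mb, instance_cores):
    """Walk the dependency chain: instance size -> YARN container
    size -> JVM heap size -> map/reduce task sizes."""
    # One YARN container per core, dividing the instance memory evenly.
    container_mb = instance_mem_mb // instance_cores
    # JVM heap is conventionally ~80% of the container size.
    heap_mb = int(container_mb * 0.8)
    return {
        "yarn.scheduler.maximum-allocation-mb": container_mb,
        "mapreduce.map.memory.mb": container_mb,
        "mapreduce.reduce.memory.mb": container_mb,
        "mapreduce.map.java.opts": f"-Xmx{heap_mb}m",
        "mapreduce.reduce.java.opts": f"-Xmx{heap_mb}m",
    }

# A "medium" instance from the price table: 12 GB memory, 4 cores.
params = derive_hadoop_params(12 * 1024, 4)
```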
16. Optimization
Multi-objective GAs
– Trading off cost and performance (time constraints)
– Other factors: energy, …
– Future: from multi-objective to many-objective (> 3)
Generates a “Pareto-optimal front”
Technique: non-dominated sorting
[Figure: a scatter plot of candidate solutions in the (Objective 1, Objective 2) plane, with the Pareto-optimal front along its lower-left boundary.]
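Extracting the non-dominated set, the core of the technique named above, can be sketched for the bi-objective (cost, time) case as follows (a minimal illustration, not the framework's actual implementation):

```python
def dominates(a, b):
    """True if a dominates b: no worse on every objective (both
    minimized here) and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated set: the Pareto-optimal front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (cost, execution time) pairs for candidate cluster configurations.
candidates = [(8, 50), (5, 70), (6, 60), (9, 40), (7, 65)]
front = pareto_front(candidates)  # (7, 65) is dominated by (6, 60)
```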
17. (Short) Summary
A sizing framework for Hadoop/Spark clusters
– OS container-based approach
– Combined with genetic algorithms
• Multi-objective optimization (cost & perf.)
Future work
– Docker Container Executor (DCE)
• DCE runs YARN containers inside Docker ones
• Designed to provide a custom environment for each app
• We believe DCE can also be used for slowing down and speeding up Hadoop tasks
18. Slowing Down - Torturing Hadoop
Make stragglers
No intervention is required
[Figure: a master coordinating five map tasks (Map 1–5) and four reduce tasks (Red 1–4); two of the tasks are stragglers.]
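Making a straggler without touching Hadoop itself can be illustrated by lowering one container's CPU cap at runtime (a sketch; the container name is a placeholder, and only the documented `docker update --cpus` flag is used):

```python
def throttle_container(name, cpus):
    """Build a `docker update` argv that lowers a running container's
    CPU limit, turning its tasks into stragglers with no change to
    the Hadoop job itself."""
    return ["docker", "update", "--cpus", str(cpus), name]

# Starve one (hypothetical) slave container down to a quarter core.
cmd = throttle_container("hadoop-slave-3", 0.25)
```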
19. Speeding Up - Accelerating Hadoop
Balance the resource usage of tasks on the same node
[Figure: the same map/reduce task layout as the previous slide; the straggler tasks are rebalanced against their neighbors on the same node.]
21. Introduction
◦ The increasing use of connected devices driven by the Internet of Things, together with data growth from scientific research, will lead to an exponential increase in data
◦ A portion of these data is underutilized or underexploited
◦ Hadoop MapReduce is a very popular programming model for large-scale data analytics
22. Problem Definition I
◦ Objective 1: Parameter Tuning for Minimizing Execution Time
[Figure: the four Hadoop configuration files —
core-site.xml: configuration settings for the Hadoop core, such as I/O settings;
hdfs-site.xml: configuration settings for the HDFS daemons;
mapred-site.xml: configuration settings for the MapReduce daemons;
yarn-site.xml: configuration settings for the YARN daemons.]
◦ Hadoop provides tunable options that have a significant effect on application performance
◦ Practitioners and administrators lack the expertise to tune them
◦ Appropriate parameter configuration is a key factor in Hadoop
23. Problem Definition II
◦ Appropriate machine instance selection for the Hadoop cluster
◦ Objective 2: Instance Type Selection for Minimizing Hadoop Cluster Deployment Cost
[Figure: an application sends a request to a service provider and receives a result; the provider offers machine instance types — small, medium, large, x-large — on a pay-per-use basis.]
24. Proposed Search-based Approach
ssNSGA-II
1. Performance optimization - Hadoop parameter tuning
2. Deployment cost optimization - cluster instance type selection
◦ Chromosome encoding can cope with the dynamic nature of Hadoop across version changes
◦ Uses a steady-state approach to reduce the computation overhead of the generic GA approach
◦ Bi-objective optimization (execution time, cluster deployment cost)
25. Objective Function
min t(p), min c(p)
where
p = [p1, p2, …, pm], the configuration parameter list and instance type
t(p) = execution time of the MR job
c(p) = machine instance usage cost

t(p) = twc
c(p) = (SP × NS) × t(p)
where
twc = workload execution time
SP = instance price
NS = number of machine instances

Assumptions
- the two objective functions are black-box functions
- the number of instances in the cluster is static

Instance type   Mem (GB) / CPU cores   Price per second (¥)
X-large         128 / 40               0.0160
Large           30 / 10                0.0039
Medium          12 / 4                 0.0016
Small           3 / 1                  0.0004
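The cost objective follows directly from the price table (a sketch; the execution time is a placeholder input, since t(p) is treated as a black-box measurement of the actual workload):

```python
# Price per second (yen) and capacity per instance type, from the table.
INSTANCES = {
    "small":  {"price": 0.0004, "mem_gb": 3,   "cores": 1},
    "medium": {"price": 0.0016, "mem_gb": 12,  "cores": 4},
    "large":  {"price": 0.0039, "mem_gb": 30,  "cores": 10},
    "xlarge": {"price": 0.0160, "mem_gb": 128, "cores": 40},
}

def deployment_cost(instance_type, num_instances, exec_time_sec):
    """c(p) = (SP * NS) * t(p): price per second times cluster size
    times measured workload execution time."""
    sp = INSTANCES[instance_type]["price"]
    return sp * num_instances * exec_time_sec

# e.g. 5 medium instances running a 60-second MapReduce workload.
cost = deployment_cost("medium", 5, 60.0)
```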
26. Parameter Grouping
I. HDFS and MapReduce parameters (17)
II. YARN parameters (6)
III. YARN-related MapReduce parameters (7)
30 parameters in total, selected by reference to previous research; the Group II and III values follow the machine instance type specification (CPU, memory)
29. Chromosome Encoding
Binary chromosome, length = 26 bits
– HDFS and MapReduce parameters
– Machine instance type (e.g. small)
A single bit or two consecutive bits represent parameter values and the instance type
Dependent parameters (the YARN parameters and the YARN-related MapReduce parameters) follow from the encoded machine instance type
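Decoding such a binary chromosome can be sketched as follows (an illustration only; the bit layout and value tables are invented for the example and are not the paper's actual encoding):

```python
def decode(chromosome):
    """Decode a 26-bit string: the last two bits pick the instance
    type, the remaining bits select parameter values from small
    lookup tables (one- or two-bit fields)."""
    assert len(chromosome) == 26
    instance = ["small", "medium", "large", "xlarge"][int(chromosome[-2:], 2)]
    # Example fields: one single-bit and one two-bit parameter.
    compression = ["false", "true"][int(chromosome[0], 2)]
    sort_mb = [100, 200, 400, 800][int(chromosome[1:3], 2)]
    return {
        "instance_type": instance,
        "mapreduce.map.output.compress": compression,
        "mapreduce.task.io.sort.mb": sort_mb,
    }

cfg = decode("101" + "0" * 21 + "01")
```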
31. ssNSGA-II Based Hadoop Configuration Optimization
1. Generate n sample configuration chromosomes C1, C2, …, Cn
2. Select 2 random parents P1, P2
3. Perform 2-point crossover on P1, P2 (probability Pc = 1)
4. Generate offspring Coffspring
5. Perform mutation on Coffspring (probability Pm = 0.1)
6. Calculate the fitness of Coffspring
7. Update the population P and perform non-dominated sorting
8. Repeat from step 2 until the stop condition holds, then output the Pareto solution list Copt
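The steady-state loop above can be sketched as follows (a minimal illustration; the fitness function is a stand-in for an actual Hadoop run, and survivor selection is simplified to removing one dominated solution per step):

```python
import random

def evaluate(bits):
    """Stand-in (cost, time) fitness derived from the bit string;
    a real evaluation would launch the configured Hadoop workload."""
    ones = bits.count("1")
    return (ones, len(bits) - ones)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def ss_nsga2(n=10, length=26, evaluations=50, pm=0.1, seed=0):
    rng = random.Random(seed)
    pop = ["".join(rng.choice("01") for _ in range(length)) for _ in range(n)]
    for _ in range(evaluations):
        p1, p2 = rng.sample(pop, 2)
        # 2-point crossover (Pc = 1).
        i, j = sorted(rng.sample(range(length), 2))
        child = p1[:i] + p2[i:j] + p1[j:]
        # Bit-flip mutation with probability pm per bit.
        child = "".join(b if rng.random() > pm else "10"[int(b)]
                        for b in child)
        pop.append(child)
        # Steady state: remove one dominated solution (or the child).
        fits = {c: evaluate(c) for c in pop}
        dominated = [c for c in pop
                     if any(dominates(fits[q], fits[c]) for q in pop)]
        pop.remove(dominated[0] if dominated else child)
    return pop

final = ss_nsga2()
```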
32. Experiment Benchmark
Type            Workload                    Input Size         Purpose
MicroBenchmark  Sort, TeraSort, Wordcount   2.98023 GB         Measure cluster performance
                                                               (intrinsic behavior of the cluster)
Web Search      Pagerank                    5000 pages with    Measure the execution performance
                                            3 iterations       for real-world big data applications

Benchmark used: HiBench benchmark suite version 4.0,
https://github.com/intel-hadoop/HiBench/releases
33. Experiment Environment
Setup           Specification
CPU             Intel® Xeon® E7-8870 (40 cores)
Memory          128 GB RAM
Storage         400 TB
Hadoop version  2.7.1
JDK             1.8.0

6-node cluster: 1 NameNode and 5 DataNodes (DataNode1–DataNode5), accessed by users over the public network; the ssNSGA-II optimization runs on the NameNode.
34. Experimental Results
[Figure: Pareto fronts of cost (¥) vs. execution time (sec) for the sort and terasort workloads, with points for the small, medium, large, and x-large instance types.]
Population size = 30, number of evaluations = 180, number of objectives = 2, mutation probability = 0.1, crossover probability = 1.0
* significant effects from the HDFS and MapReduce parameters
35. Experimental Results Cont'd
[Figure: Pareto fronts of cost (¥) vs. execution time (sec) for the pagerank and wordcount workloads, with points for the small, medium, large, and x-large instance types.]
Population size = 30, number of evaluations = 180, number of objectives = 2, mutation probability = 0.1, crossover probability = 1.0
* these workloads depend on the YARN and related parameters more than on the HDFS and MapReduce parameters
36. Conclusion & Continuing Work
◦ Offline Hadoop configuration optimization using an ssNSGA-II-based search strategy
◦ An x-large instance type cluster is not a suitable option for the current workloads and input data size
◦ Large or medium instance type clusters show the best balance for our objective functions
◦ Continuing work: dynamic cluster resizing through containers and online configuration optimization of M/R workloads for scientific workflow applications, for effective Big Data processing
Editor's Notes
Tell a little about the configuration files.
The slaves file contains a list of hosts, one per line, that host the DataNode and TaskTracker servers. The masters file contains a list of hosts, one per line, that host Secondary NameNode servers; it informs the Hadoop daemon of the Secondary NameNode's location.
The core-site.xml file informs the Hadoop daemons where the NameNode runs in the cluster. It contains the configuration settings for the Hadoop core, such as I/O settings common to HDFS and MapReduce.
The hdfs-site.xml file contains the configuration settings for the HDFS daemons: the NameNode, the Secondary NameNode, and the DataNodes. Here we can configure hdfs-site.xml to specify default block replication and permission checking on HDFS.
The mapred-site.xml file contains the configuration settings for the MapReduce daemons: the JobTracker and the TaskTrackers.
yarn-site.xml configures the ResourceManager and the NodeManager.
The service provider offers services on a pay-per-use basis.
Instance prices differ according to the instance type.
In business and health, these allow us to leverage all types of data to gain insights and add value.
In order to optimize these two objectives, we need to select sensitive parameters; a total of 30 parameters are selected.
17 parameters are for general Hadoop configuration optimization for execution performance; these 17 parameters are encoded, and the other 13 parameters are dependent parameters that follow from the encoded machine instance type.
13 parameters are for dynamic machine instance type optimization during execution.
The Group 2 and Group 3 parameters differ according to the instance type; the table shows the associated parameter values for the various machine instance types.
Why was the steady-state algorithm selected? Nebro [1] states that ssNSGA-II outperforms generational NSGA-II in terms of quality, convergence speed, and computing time.
Specify cloud in this case
Description of workloads (what kind of tasks they are conducting)
Other benchmarks just include workload for measuring cluster performance
HiBench: a realistic and comprehensive benchmark suite for Hadoop that properly evaluates and characterizes the Hadoop framework.
Developed by Intel.
Dynamic input size changes
Evaluation on both hardware and software
Specify that the Genetic Algorithm runs on the NameNode of the Hadoop cluster.
Why is the large instance type's execution time shorter than the x-large instance type's?
How long does the experiment take for each of these workloads?
Because Hadoop MapReduce workloads are costly to execute, we could only obtain intermediate optimized solutions; for each workload, 150 evaluations take 1 or 2 days to produce these intermediate results.
Shows overlapping points; a big difference appears only in a single machine type, so further experiments are necessary.