ENERGY-EFFICIENT AND TRAFFIC AWARE VIRTUAL MACHINE
MANAGEMENT IN CLOUD COMPUTING
A Writing Project
Presented to
The Faculty of the Department of Computer Science
San Jose State University
In Partial Fulfillment
Of the Requirements for the Degree
Master of Science
By
Swetha Kogatam
December 2015
The Writing Project Committee Approves the Project Titled
ENERGY-EFFICIENT AND TRAFFIC AWARE VIRTUAL MACHINE
MANAGEMENT IN CLOUD COMPUTING
By
Swetha Kogatam
APPROVED FOR THE DEPARTMENT OF COMPUTER SCIENCE
SAN JOSE STATE UNIVERSITY
DECEMBER 2015
Dr. Melody Moh, Department of Computer Science
Dr. Robert Chun, Department of Computer Science
Mr. Giridhar Jayavelu, VMware Inc.
ABSTRACT
ENERGY-EFFICIENT AND TRAFFIC AWARE VIRTUAL MACHINE
MANAGEMENT IN CLOUD COMPUTING
By Swetha Kogatam
Cloud computing has revolutionized the technology industry by enabling on-demand
provisioning of elastic computing resources on a pay-as-you-go model. In this process,
cloud data centers consume large amounts of power and emit carbon dioxide (CO2),
posing great problems to the environment. There is therefore a need for approaches
that minimize the amount of power consumed and increase the cost efficiency of the
underlying resources. Dynamic virtual machine consolidation is one such solution for
reducing power consumption. But with data centers hosting many applications and
services that communicate with one another, there is also a need to reduce
communication latency. In this paper, we propose a mechanism for dynamic virtual
machine consolidation that considers both power and the traffic among the VMs, so as
to increase energy efficiency and improve communication performance by reducing the
overall traffic cost. We improve existing dynamic VM migration algorithms by
implementing a clustering mechanism that groups VMs based on their CPU utilization
and their communication/traffic factor. The performance of these algorithms is
evaluated, and they significantly reduce energy consumption, the number of VM
migrations, and SLA violations. By implementing these algorithms, data centers can
improve power efficiency while reducing the communication latency between services at
the same time.
TABLE OF CONTENTS
1.0 INTRODUCTION
2.0 RELATED WORK
3.0 PROPOSED WORK – ENERGY EFFICIENT AND TRAFFIC AWARE VM CONSOLIDATION ALGORITHMS
    3.1 Mechanism to Detect Over-utilized and Underutilized Hosts
    3.2 VMClustering Mechanism and VMCluster Selection Policy
        3.2.1 VMClustering Mechanism
    3.3 VMCluster Placement/Allocation Policies
        3.3.1 Best Fit Host Algorithm (Existing)
        3.3.2 Least Increase in Power with Host Sort for VMClusters Algorithm: LIP-VMCL (Newly Improved)
        3.3.3 Best Fit Host Algorithm for VMClusters Algorithm: BFH-VMCL (Newly Improved)
        3.3.4 Best Fit VM Cluster Algorithm: BF-VMCL (Newly Improved)
        3.3.5 Dynamic Mechanism for Creating New Host
        3.3.6 Overall Algorithm for Dynamic Consolidation of Virtual Machines
4.0 PERFORMANCE EVALUATION
    4.1 Experimental Setup
    4.2 Performance Results
        4.2.1 Energy Calculations
5.0 CONCLUSION & FUTURE WORK
LIST OF FIGURES
Figure 1. Clustering Example
Figure 2. Total Energy Consumption of the Four Algorithms
Figure 3. Migration Energy Consumption of the Four Algorithms
Figure 4. Energy Consumption in Creating New Hosts for the Four Algorithms
Figure 5. Total Number of VM Migrations for the Four Algorithms
Figure 6. Total Number of Additional Hosts Created in the Four Algorithms
Figure 7. Energy Consumption for Three Independent Algorithms
LIST OF TABLES
Table 1. VM Clustering Algorithm
Table 2. BFH Placement Algorithm (Existing)
Table 3. Least Increase in Power with Host Sort for VMClusters Algorithm: LIP-VMCL
Table 4. Best Fit Host Algorithm for VMClusters Algorithm: BFH-VMCL
Table 5. Best Fit VM Cluster Algorithm: BF-VMCL (Newly Improved)
Table 6. Dynamic Mechanism for Creating New Host
Table 7. Overall Algorithm for Dynamic Consolidation of Virtual Machines
Table 8. Power Consumption at Different CPU Utilization Levels
Table 9. The Four Evaluated Algorithms
Table 10. SLA Violations
1.0 INTRODUCTION
Cloud computing has revolutionized the information and communication technology
industry by enabling on-demand provisioning of elastic computing resources based on
the pay-as-you-go model [1]. Due to the tremendous shift of the industry toward the
cloud computing model, cloud data centers today consume huge amounts of electricity,
which results in very high operational costs and carbon dioxide (CO2) emissions. It is
estimated that energy consumption by data centers worldwide rose by 56% from 2005 to
2010 and accounted for between 1.1% and 1.5% of global electricity use in 2010 [13].
Furthermore, carbon dioxide emissions of the Information and Communications
Technology industry were estimated at 2% of global emissions, which is equivalent to
the emissions of the aviation industry [23].

There is a pressing need for cloud data centers to reduce power consumption in order
to save operational cost and reduce CO2 emissions. This problem can be addressed by
optimizing the way resources are allocated and utilized to serve the application
workload [1]. One of the solutions to improve resource utilization and reduce energy
consumption is dynamic consolidation of virtual machines [2, 34-39]. In dynamic
consolidation of virtual machines, underutilized hosts are identified and their VMs
are migrated away to minimize the number of active hosts. The underutilized hosts are
then turned off to save power. It should be noted, however, that additional power is
consumed to turn these hosts back on when required. On the other hand, over-utilized
hosts are also identified, and VMs are selected from them to be migrated in order to
maintain QoS and avoid performance degradation of the physical machines.
VM migrations are of two types [4]: hot (live) migration and cold (non-live)
migration. In live migration, the VM does not lose its state and keeps running while
the actual migration is performed; users do not notice any interruption in service. In
non-live migration, the VM's state is lost and users notice the interruption in
service [5]. In this paper, we simulate the live migration mechanism for the obvious
reason that it does not interrupt services. Live migration can be further classified
into two types based on storage [5]: live block migration (non-shared storage) and
live migration (shared storage).

In live block migration, the source and destination hosts do not share storage; this
mechanism is mainly used when there are few migrations to perform. One disadvantage of
this method is that block migration transfers the disk over TCP, making it
comparatively slow [5]. In live migration with shared storage, the storage is shared
between the source and destination hosts, the guest operating system is not affected
by the migration process, and services are not interrupted. Live migration with shared
storage is comparatively faster than live block migration. In this paper, we simulate
live migration with shared storage so as to achieve faster migration without any
interruption to services.
Dynamic consolidation of virtual machines is a complicated task, as the physical host
has to provide all the resources required by a VM, including CPU, memory, storage, and
bandwidth [6]. Among these resources, CPU is the only one provided dynamically
according to the performance requirement, whereas the others are provided in fixed
sizes [6]. For that reason, most current research migrates VMs based on CPU
utilization [7, 8, 9]. However, for most applications in the data center, performance
does not rely on CPU utilization alone. For applications that require heavy
communication among their services, the communication cost can greatly influence
overall performance. As an example, for a three-tier web application, migrating an
application server to a host far from the front-end web server and the database server
will increase communication latency and thus reduce overall throughput. Another
example is non-overlapping MPI applications, which wait for messages before continuing
[6]. Research [10] shows that service fragmentation can affect data center network
performance. In [11], careful placement of the virtual machines that execute the
reduce phase of a MapReduce application reduced total job runtime by a factor of four.

A host in the idle state still consumes about 70% of the energy it consumes at full
CPU speed, due to the energy consumption of the hard disk, memory, and main board
[12]. Hence, to save energy, idle hosts are turned off using dynamic VM consolidation
techniques. In this paper we propose a mechanism for dynamic consolidation of VMs that
considers both the traffic and the power among the VMs, without any interruption to
QoS, by keeping SLA violations as low as possible. This approach consists of
comprehensive methods to determine under-utilized and over-utilized hosts, select VMs,
and apply cost-effective VM placement strategies. This is an extension and
continuation of our efforts in the area of green cloud computing research toward
cost-efficient and environmentally friendly approaches [14, 17].
The main contributions of this paper include:
(i) Formulating dynamic VM consolidation methodologies that cluster the VMs based
on the communication/traffic factor in the VM placement policies, reducing the
number of VM migrations and hence achieving a cost-efficient energy
consumption model.
(ii) Presenting two VM placement heuristics, the Best Fit Host for VMClusters
(BFH-VMCL) and the Best Fit VMClusters (BF-VMCL) placement algorithms, for
live migration and consolidation, to avoid performance degradation of
over-utilized hosts and to save power on underutilized hosts.
(iii) Adding an efficient new dynamic mechanism that creates new hosts when no more
servers are available for VM consolidation and migration, and uses these
hosts until they are effectively utilized.

The rest of the paper is organized as follows. We discuss related work and background
in Section 2.0. We explain our dynamic heuristic VM consolidation mechanisms, which
consider both the power and the communication/traffic factor, in Section 3.0. Section
4.0 presents the performance evaluation, based on simulation experiments using
CloudSim 3.0.3. Finally, we conclude and discuss future work in Section 5.0.
2.0 RELATED WORK
In this section, we review recent research in the areas of dynamic VM consolidation
and migration and power-efficient green cloud computing methods.

Beloglazov et al. [3] proposed dynamic VM consolidation algorithms based on the
analysis of historical data of resource usage by VMs. The overall dynamic
consolidation problem is broken down into four subproblems: host underload detection,
host overload detection, VM selection, and VM placement. They proposed heuristics such
as Median Absolute Deviation, Local Regression, and Robust Local Regression to find
overloaded hosts. The VM selection policies are designed based on minimum migration
time and maximum correlation. VM placement is treated as an NP-hard bin-packing
problem; they implemented a Best Fit Decreasing algorithm in which the VMs are sorted
in decreasing order of current CPU utilization and each VM is allocated to the host on
which the allocation causes the least increase in power. The authors considered only
power as a coefficient toward saving energy, without considering any other factors
that play a role between the VMs.
Huang et al. [17] extended the work of [3] and proposed new dynamic VM consolidation
heuristics. One such placement policy is Least Increase in Power with host sorting, as
opposed to [3], in which the hosts are not sorted. The authors also proposed a Best
Fit Host algorithm, in which the best-fit host is found for the VMs to be migrated
based on the predicted CPU utilization history, and a Best Fit VM allocation policy
that chooses the optimal combination of VMs for the available hosts using dynamic
programming. They achieved better results and fewer SLA violations. Again, however,
the authors concentrated on energy as the only factor in developing the dynamic
heuristics.
Trong Vu et al. [6] proposed a mechanism that considers both energy and the
communication latency between VMs to achieve better energy efficiency. However, the
traffic factor is applied only to the VMs from underutilized hosts and not to the
over-utilized hosts.

Dias et al. [29] reduce the traffic cost by combining VMs that have high communication
dependency into clusters and mapping these clusters onto hosts using a bin-packing
algorithm.
Nguyen et al. [30] proposed dynamic consolidation mechanisms that reduce the energy
spent switching underutilized hosts between sleep mode and the active state. They
proposed an efficient usage-prediction method for VM consolidation that reduces the
number of hosts switched to sleep mode and thus the power consumption.

Chowdhury et al. [31] proposed a different approach based on cluster-based VM
placement strategies. They presented multiple redesigned VM placement algorithms in
which the VMs to be migrated are grouped based on CPU utilization and allocated RAM.
The k-means algorithm is used to cluster the VMs, and the Worst Fit placement
algorithm is modified for clustering.
Chen et al. [32] proposed an approach in which hosts are divided into three
categories: overloaded, full-load, and underloaded. Migrations are performed using a
Best Fit Decreasing mechanism and utilization-based migration optimization methods. A
migration probability is calculated, and the VMs are then queued for migration.

Farahnakian et al. [33] proposed a dynamic VM consolidation approach called
Utilization Prediction-aware Best Fit Decreasing (UP-BFD) that optimizes VM placement
according to current and future resource requirements, reducing the number of VM
migrations and the rate of SLA violations.
3.0 PROPOSED WORK – ENERGY EFFICIENT AND TRAFFIC AWARE VM
CONSOLIDATION ALGORITHMS
In this section we present our proposed work, which consists of VM consolidation
algorithms designed with both energy consumption and the traffic factor among the VMs
in mind. We have extended the work in [3, 17-18] and propose a new, efficient model
that reduces energy consumption and communication latency between the VMs.

The overall problem is subdivided into smaller parts [5]:
1. A mechanism to find over-utilized and underutilized hosts based on CPU
utilization.
2. A selection policy that selects the cluster of VMs to be migrated from an
over-utilized host until it becomes normally utilized, with no interruption in
QoS.
3. Three allocation policies, the Least Increase in Power with Host Sort allocation
policy, the Best Fit Host for VMClusters allocation policy, and the Best Fit VM
Cluster allocation policy, which find the appropriate destination host for the
VMClusters selected in the previous step.
4. A dynamic mechanism in which the allocation policies create a new host when no
destination host is available to accommodate the VMClusters. The new host is
created and the VMClusters are migrated to it.

Each of these parts is discussed in detail in the following subsections.
3.1 Mechanism to Detect Over-utilized and Underutilized Hosts
A host can become over-utilized over time when it has to handle a larger number of
requests, especially during peak periods. This can happen frequently or occasionally
in data centers, and there should be a mechanism to detect over-utilized hosts and
migrate VMs away from them until they are normally utilized.

There are two main methods to detect whether a host is over-utilized. The static
heuristic holds a fixed CPU-utilization threshold and compares the host's CPU
utilization against it. If the host's CPU utilization is above the threshold, the host
is considered over-utilized, and hosts that fall well below the threshold are
considered underutilized. However, a static CPU-utilization threshold cannot serve all
situations, given the dynamic and unpredictable workloads in large data centers [19].
A more efficient, dynamically adapting mechanism is needed to find over-utilized
hosts.

The other method is a heuristic algorithm for auto-adjustment of the utilization
threshold based on statistical analysis of historical data collected during the
lifetime of the VMs [19]. This algorithm uses a more robust method, Loess regression,
which outperforms other classical methods, and the proposed adaptive-threshold
algorithm adjusts the CPU-utilization threshold according to the dynamically changing
requirements [19]. The method is applied to estimate the next observation, and if the
inequalities are satisfied, the host is detected as over-utilized [19].
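The static heuristic above amounts to a simple threshold check. The sketch below uses illustrative upper and lower thresholds of 0.8 and 0.2, which are our assumptions, not values fixed by this paper:

```python
def classify_host(cpu_utilization, upper=0.8, lower=0.2):
    """Static-threshold host classification: a host above the upper
    threshold is over-utilized, one below the lower threshold is
    underutilized, and anything in between is normally utilized."""
    if cpu_utilization > upper:
        return "over-utilized"
    if cpu_utilization < lower:
        return "under-utilized"
    return "normal"
```

The adaptive-threshold method of [19] replaces the fixed upper value with one derived from the utilization history, but the classification step itself has the same shape.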
3.2 VMClustering Mechanism and VMCluster Selection Policy
Once a host is detected as over-utilized, the VMClustering mechanism determines the
VMClusters that have to be migrated from it to make it normally utilized. We use the
Minimum Migration Time (MMT) policy [19], which selects the VMCluster that takes the
minimum time to complete its migration compared to the other VMClusters. We modified
the approach in [19] to fit the enhanced VM clustering mechanism. This selection
policy is repeated iteratively until the overloaded host becomes normally utilized.
3.2.1 VMClustering Mechanism
The VMClustering mechanism forms clusters of VMs by choosing VMs that have highly
correlated services and exchange more traffic. The basic idea is to consolidate
correlated virtual machines into one VMCluster and migrate it to the same host server,
leading to more efficient usage with less utilization of the topology [20]. The
mechanism analyzes the virtual machine traffic matrix: we model the data center
network as a graph in which virtual machines are vertices and network links are edges
weighted according to the traffic exchanged between the VMs [20]. The VMClustering
algorithm then clusters virtual machines that exchange traffic using the concept of
graph communities [20]. According to Porter [21], a community consists of a group of
nodes that are relatively densely connected to each other but sparsely connected to
other dense groups in the network.
A high-level representation of the VM Clustering Algorithm is given below:
Table 1 VM Clustering Algorithm
1. Build a graph with the virtual machines as vertices and edges weighted by the
traffic matrix (the communication/traffic factor)
2. Remove the edges with low weights from the graph
3. The resulting connected components of vertices form the clusters
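The three steps of Table 1 can be sketched as edge pruning followed by a connected-components search. Representing the traffic matrix as a dictionary mapping VM pairs to exchanged-traffic weights is our assumption; the thesis does not fix a data layout:

```python
def cluster_vms(traffic, threshold):
    """Cluster VMs: keep edges whose traffic weight is at or above the
    threshold, then return the connected components of the pruned graph.
    `traffic` maps (vm_a, vm_b) pairs to exchanged-traffic weights."""
    # Build an adjacency map from the edges that survive pruning
    adj = {}
    for (a, b), w in traffic.items():
        if w >= threshold:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    vms = {v for pair in traffic for v in pair}
    seen, clusters = set(), []
    for v in sorted(vms):
        if v in seen:
            continue
        # Depth-first search collects one connected component
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj.get(u, ()))
        seen |= comp
        clusters.append(comp)
    return clusters
```

VMs whose every edge is pruned end up as singleton clusters, matching the observation below that a cluster can sometimes be an individual VM.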
We illustrate the VMClustering algorithm with the example in Fig. 1. For illustration
purposes we keep the numbers of VMs and VMClusters small; the actual numbers differ in
real data center situations. Consider a host that is detected as overloaded and has 12
VMs on it. The VMClustering mechanism analyzes the traffic between the VMs and models
a graph in which the virtual machines are the vertices and the network links are edges
weighted according to the traffic exchanged between the VMs [20]. The VM Clustering
algorithm then breaks the whole graph into multiple clusters, each consisting of a
group of VMs or sometimes an individual VM.
Fig 1: Clustering Example, adapted from [20]

In Fig. 1, each weighted edge represents the communication factor, i.e., the traffic
exchanged between the VMs [20]. VM1 is linked with VM2 and VM5, as it has its
highest-weighted edges with those vertices. VM2 in turn is linked with VM3 and VM6,
its highest-weighted neighbors. VM5, which is already connected to VM1, has its
highest-weighted edge with VM6 and becomes part of VMCluster 1. Hence VM1, VM2, VM3,
VM5, and VM6 form one cluster. The remaining clusters are formed in the same manner.
The VMClustering algorithm thus takes the VM connectivity graph as input and generates
the VMClusters as output.

The VMCluster selection policy selects a VMCluster using the minimum migration time
policy [19] and repeats the process iteratively until the over-utilized host becomes
normally utilized.
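The MMT policy of [19], lifted from single VMs to clusters, can be sketched as below. Representing each VM as a dictionary with a `ram` field and dividing by the host's spare network bandwidth are our assumptions about the estimate's ingredients:

```python
def select_cluster_mmt(clusters, bandwidth):
    """Minimum Migration Time selection for VMClusters: a cluster's
    migration time is estimated as the total RAM that must be copied
    in a live migration divided by the available network bandwidth;
    the cluster with the smallest estimate is chosen."""
    return min(clusters, key=lambda c: sum(vm["ram"] for vm in c) / bandwidth)
```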
3.3 VMCluster Placement/Allocation Policies
Once the VMClusters are selected using the VMClustering and VMCluster selection
algorithms, the VMCluster placement policy determines the destination host(s) to which
the VMClusters are migrated. VM placement can be considered a bin-packing problem,
with the main goal of minimizing the number of bins (hosts) used [17]. The bin-packing
problem is NP-hard, and many algorithms have been developed to solve it within
reasonable time. In our proposed work we extend the work of [3, 6, 17] and improve the
existing solutions to match current trends in data centers by considering traffic as
one of the factors.
3.3.1 Best Fit Host Algorithm (Existing)
In [17], a Best Fit Host algorithm was proposed as a new VM placement policy. It
selects the host that would have the highest predicted utilization value among all the
available hosts if the VM were allocated onto it. The high-level structure of this
algorithm is shown below:
Table 2 BFH Placement Algorithm (Existing)
Sort VMs in MigratingVMList by descending order of current CPU utilization
Sort hosts in AvailableHostList by descending order of current CPU utilization
For each VM in MigratingVMList do
    For each host in AvailableHostList do
        If host is suitable for VM then
            Calculate predicted utilization for VM allocated to that host using
            local regression
    Find the host with the highest predicted utilization (and < 1)
    Add (VM, host) pair to MigrationMap
This algorithm uses the Loess regression method to calculate the predicted utilization
value of a host. The current MIPS of the VM selected for migration is added when
calculating the host's utilization, and the predicted value is then obtained using
local regression [17]. Once the predicted values for all the hosts are calculated, the
host with the maximum predicted value (with the value < 1) is selected as the best-fit
host for placing the VM. With the help of this algorithm, the best-fit host for each
VM is determined.
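Table 2 can be sketched roughly as follows. For brevity, a plain least-squares trend line over the recent utilization history stands in for CloudSim's Loess (local) regression; the `history` and `mips` dictionary fields and the one-step-ahead extrapolation are our assumptions:

```python
def predict_utilization(history, vm_mips, host_mips):
    """Predict a host's next utilization if the migrating VM is added.
    A least-squares line over the utilization history extrapolates the
    trend one step ahead (a stand-in for Loess regression)."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    denom = sum((x - x_mean) ** 2 for x in xs) or 1.0  # guard n == 1
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(xs, history)) / denom
    predicted = y_mean + slope * (n - x_mean)   # one step ahead
    return predicted + vm_mips / host_mips      # add the VM's CPU share

def best_fit_host(hosts, vm):
    """Pick the host with the highest predicted utilization below 1.0."""
    candidates = [(predict_utilization(h["history"], vm["mips"], h["mips"]), h)
                  for h in hosts]
    feasible = [(p, h) for p, h in candidates if p < 1.0]
    return max(feasible, key=lambda t: t[0])[1] if feasible else None
```

Returning `None` when no host stays below full utilization is the case the dynamic new-host mechanism of Section 3.3.5 handles.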
3.3.2 Least Increase in Power with Host Sort for VMClusters Algorithm: LIP-VMCL
(Newly Improved)
The existing Least Increase in Power with Host Sort algorithm sorts all the available
hosts in decreasing order of their current CPU utilization. It takes only CPU
utilization into consideration when sorting the hosts and then selects the VMs to be
migrated. We improve this algorithm by adding a new mechanism that selects VMClusters
considering both CPU utilization and the traffic factor. All hosts that are not
over-utilized are added to the available host list and sorted in decreasing order of
CPU utilization. This way, the first host selected is the one with the highest CPU
utilization in the available host list; by moving a VMCluster to this host, maximum
CPU utilization is achieved without overloading the host, making the new dynamic
algorithm both energy-efficient and traffic-aware. A high-level representation of this
algorithm is shown below:
Table 3 Least Increase in Power with Host Sort for VMClusters Algorithm: LIP-VMCL
Input: MigratingVMList, AvailableHostListPartition
Output: MigrationMap
Sort hosts in AvailableHostListPartition by descending order of current CPU
utilization
Select the VMs to be migrated into MigratingVMList
Loop: Cluster the VMs using the VM clustering algorithm into VMClusterList
For each VMCluster in VMClusterList do
    For each host in AvailableHostListPartition do
        If host is suitable for VMCluster then
            Store the host in SuitableHostsList
            Calculate and store the increase in power for each host in
            SuitableHostsList after allocating the VMCluster
    Find the host with the least increase in power for the VMCluster
    Add (VMCluster, host) pair to MigrationMap
    Delete VMCluster from VMClusterList
If VMClusterList is empty then break
Go to Loop
The algorithm is designed so that once a VMCluster is paired with a host and added to
the MigrationMap, that VMCluster is deleted from the VMClusterList, and the process
continues iteratively until all the VMClusters are migrated and the host becomes
normally utilized.
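The power-delta computation at the heart of LIP-VMCL can be sketched with a table-based power model. The wattage values below are illustrative, in the spirit of the SPECpower-style tables CloudSim ships with, not the exact figures used in this thesis:

```python
# Illustrative power draw (watts) at 0%, 10%, ..., 100% CPU utilization
POWER_TABLE = [93, 97, 101, 105, 110, 116, 121, 125, 129, 133, 135]

def power(util):
    """Piecewise-linear interpolation of the power table."""
    if util >= 1.0:
        return float(POWER_TABLE[-1])
    i = int(util * 10)
    frac = util * 10 - i
    return POWER_TABLE[i] + (POWER_TABLE[i + 1] - POWER_TABLE[i]) * frac

def least_increase_in_power(hosts, cluster_util):
    """Pick the host whose power draw grows least when the VMCluster's
    CPU share is added, skipping hosts that would become over-utilized."""
    best, best_delta = None, float("inf")
    for h in hosts:
        new_util = h["util"] + cluster_util
        if new_util > 1.0:
            continue  # the host would be pushed into over-utilization
        delta = power(new_util) - power(h["util"])
        if delta < best_delta:
            best, best_delta = h, delta
    return best
```

Note that a strictly linear power model would make every feasible host's delta identical; the table's changing slope is what lets the "least increase" criterion discriminate between hosts at different utilization levels.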
3.3.3 Best Fit Host Algorithm for VMClusters Algorithm: BFH-VMCL (Newly Improved)
The existing Best Fit Host algorithm uses the local regression method to select the
host with the highest predicted CPU utilization value. In that method, only the CPU
utilization value is considered, not the communication factor between the VMs. A
better option is to migrate heavily communicating VMs to a single host [20], so that
there is no communication latency between them, which also reduces response time. A
high-level representation of the newly improved algorithm is given below:
Table 4 Best Fit Host Algorithm for VMClusters Algorithm: BFH-VMCL
Input: MigratingVMList, AvailableHostListPartition
Output: MigrationMap
Sort hosts in AvailableHostListPartition by descending order of current CPU
utilization
Select the VMs to be migrated into MigratingVMList
Loop: Cluster the VMs using the VM clustering algorithm into VMClusterList
For each VMCluster in VMClusterList do
    For each host in AvailableHostListPartition do
        If host is suitable for VMCluster then
            Store the host in SuitableHostsList
            Calculate predicted utilization for VMCluster allocated to that host
            using local regression
    Find the host with the highest predicted utilization (and < 1)
    Add (VMCluster, host) pair to MigrationMap
    Delete VMCluster from VMClusterList
If VMClusterList is empty then break
Go to Loop
In this algorithm, once an over-utilized host is identified, the migrating VMs on it
are grouped by the VM Clustering algorithm into VMClusters. The CPU utilization of
each VMCluster is then calculated, along with the predicted CPU utilization of each
candidate host to which the VMCluster could be migrated. This value is calculated for
all hosts in the available host list, and the host with the highest predicted
utilization (and < 1) is selected for the VMCluster. Once the VMCluster is migrated,
it is deleted from the VMClusterList, and this process is repeated iteratively until
the over-utilized host becomes normally utilized.
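The only change from the per-VM prediction in Table 2 is that the prediction is fed the cluster's aggregate demand. A minimal helper, assuming each VM carries a `mips` field:

```python
def cluster_utilization(cluster, host_mips):
    """A VMCluster's CPU share on a destination host: the summed MIPS
    demand of its member VMs over the host's total MIPS capacity."""
    return sum(vm["mips"] for vm in cluster) / host_mips
```

This aggregate share replaces the single VM's `vm_mips / host_mips` term in the predicted-utilization calculation.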
3.3.4 Best Fit VM Cluster Algorithm: BF-VMCL (Newly Improved)
The existing algorithm uses a dynamic programming approach to solve the bin-packing
problem of finding the right host for the VMs, considering only CPU utilization. We
again improve this algorithm by considering the traffic factor between the VMs and
clustering them. Selecting the best-fit VMClusters for the available host using a 0-1
knapsack dynamic programming approach proves to be one of the best approaches. A
high-level representation of the newly improved algorithm is presented below:
Table 5 Best Fit VM Cluster Algorithm: BF-VMCL (Newly Improved)
Input: MigratingVMList, AvailableHostListPartition
Output: MigrationMap
Sort hosts in AvailableHostListPartition by descending order of current CPU
utilization
Select the VMs to be migrated into MigratingVMList
Loop: Cluster the VMs using the VM clustering algorithm into VMClusterList
For each host in AvailableHostListPartition do
    If VMClusterList is not empty then
        Find the optimal combination of VMClusters in VMClusterList for that host
        using dynamic programming
        For each VMCluster in the optimal combination do
            Add (VMCluster, host) pair to MigrationMap
            Delete VMCluster from VMClusterList
If VMClusterList is empty then break
Go to Loop
In this algorithm, the VMClusters are formed based on the communication factor among
their VMs, and care is taken to migrate each VMCluster to a single host, reducing the
communication latency between its VMs while increasing energy efficiency. Once the
optimal combination of VMClusters is found using the dynamic programming approach, the
selected VMClusters are migrated to that particular host. The problem is solved as a
0-1 knapsack, in which we find the best set of items (VMClusters) to pack into the
sack (host). This algorithm achieves the best results among all the
allocation/placement policies discussed in this paper, as it starts with the host
having the minimum available capacity and the highest current CPU utilization level
and finds the best set of VMClusters to migrate from the migrating VMClusterList [17].
The process is repeated iteratively until the over-utilized host becomes normally
utilized.
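The optimal-combination step can be sketched as a 0-1 knapsack in which each VMCluster's CPU share serves as both weight and value, so the chosen subset fills the host's spare capacity as fully as possible without exceeding it. The integer scaling and the `util` field are our assumptions:

```python
def best_fit_clusters(clusters, capacity, scale=100):
    """0-1 knapsack over VMClusters: pick the subset whose total CPU
    demand best fills the host's spare capacity without overflowing it.
    Utilizations are scaled to integers so tabular DP applies."""
    weights = [int(round(c["util"] * scale)) for c in clusters]
    cap = int(round(capacity * scale))
    # best[w]: fullest achievable packing within spare capacity w;
    # choice[w]: the cluster indices realizing that packing
    best = [0] * (cap + 1)
    choice = [[] for _ in range(cap + 1)]
    for i, w in enumerate(weights):
        for rem in range(cap, w - 1, -1):  # descending: each item used once
            if best[rem - w] + w > best[rem]:
                best[rem] = best[rem - w] + w
                choice[rem] = choice[rem - w] + [i]
    return [clusters[i] for i in choice[cap]]
```

With clusters of 0.5, 0.4, and 0.3 CPU share and 0.8 spare capacity, the DP picks {0.5, 0.3}, which fills the host exactly, rather than the greedy first fit of 0.5 alone or 0.5 plus an infeasible 0.4.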
3.3.5 Dynamic Mechanism for Creating New Host
Situations can arise in which there are not enough destination hosts available to
accommodate the ongoing VM migrations. It is then necessary to create a new host so
that the VMClusters can be migrated from the over-utilized host. The creation of new
hosts is a dynamic process that happens only when a new host is needed. Once
VMClusters have been migrated to the newly created host, in the next iteration this
host is added to the available host list, and the same host can be used to accommodate
more VMClusters. CPU utilization is considered one of the important factors, and the
calculations are made dynamically so that additional hosts are created only when
needed. The new-host creation is included as a dynamic mechanism in the VMCluster
placement algorithm to find available hosts before migrating the virtual machines.
During VMCluster migration, the algorithm continuously checks whether enough hosts are
available for the VM migrations to be performed; if not, a new host is created. A
high-level representation of the dynamic new-host creation method is shown below:
Table 6 Dynamic Mechanism for Creating a New Host
If the AvailableHostsList is empty then
    Create a NewHost
    Migrate the VMCluster(s) to the newly created host
    Add the new host to the AvailableHostsList
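The mechanism in Table 6 can be sketched as follows. This is an illustrative stand-in, not CloudSim code: the `Host` record, `NEW_HOST_MIPS` capacity, and the method names are all assumptions, and capacity bookkeeping during migration is omitted.

```java
import java.util.ArrayList;
import java.util.List;

public class HostProvisioner {
    /** Illustrative host record: an id and its remaining MIPS capacity. */
    static class Host {
        final int id;
        int freeMips;
        Host(int id, int freeMips) { this.id = id; this.freeMips = freeMips; }
    }

    /** Assumed capacity of a freshly provisioned host. */
    static final int NEW_HOST_MIPS = 2660;

    /**
     * Returns a host able to take 'demand' MIPS. If none is available, a new
     * host is created and added to the pool, so the next iteration can reuse
     * it -- mirroring the dynamic mechanism of Table 6.
     */
    public static Host findOrCreate(List<Host> available, int demand) {
        for (Host h : available)
            if (h.freeMips >= demand) return h;
        Host fresh = new Host(available.size(), NEW_HOST_MIPS);
        available.add(fresh);
        return fresh;
    }

    public static void main(String[] args) {
        List<Host> pool = new ArrayList<>();
        pool.add(new Host(0, 100));
        Host h = findOrCreate(pool, 500);   // too big for host 0 -> new host
        System.out.println("host " + h.id + ", pool size " + pool.size());
    }
}
```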
3.3.6 Overall Algorithm for Dynamic Consolidation of Virtual Machines
The complete algorithm is divided into the following steps:
1. Detect over-utilized and under-utilized hosts.
2. Cluster the VMs from each over-utilized host into VMClusters.
3. Select the VMCluster(s) to be migrated from the over-utilized host.
4. Place the VMCluster(s) onto available hosts.
5. If under-utilized hosts still exist, cluster their VMs into VMClusters.
6. Place those VMCluster(s) onto available hosts.
Table 7 Overall Algorithm for Dynamic Consolidation of Virtual Machines
For each host do
    Step 1: Find over-utilized and under-utilized hosts using the host
        overload/underload detection algorithm
    If the host is over-utilized then
        Step 2: Select the VMs to be migrated from the over-utilized host using
                the VM clustering algorithm
            Cluster all the VMs based on CPU utilization and the traffic matrix
            Add the VMClusters to the migrating VMClusterList
        Step 3: Select the VMCluster(s) to be migrated from the migrating
                VMClusterList using the VMCluster selection algorithm
            Select appropriate VMCluster(s) from the overloaded host
            Add the selected VMCluster(s) to the MigratingVMList
        Step 4: Place the VMCluster(s) from the over-utilized host onto
                available host(s)
            Calculate the MigrationMap using the energy- and traffic-aware
                VM placement algorithm
            Add the MigrationMap to the TotalMigrationMap
            Clear the allocated VMCluster(s) from the MigratingVMList
            If the host is still overloaded then
                Go to Step 3
            Else
                Break
    Else (the host is under-utilized)
        Step 5: Find the most under-utilized host
        If an under-utilized host exists then
            Select all VMs from that host and generate VMCluster(s) using the
                VM clustering algorithm
            Add all VMCluster(s) to the migrating VMClusterList
            Step 6: Place the VMCluster(s) from the under-utilized host onto
                    other host(s)
                Calculate the MigrationMap for the VMClusterList using the
                    energy- and traffic-aware VM placement algorithm
                Add the MigrationMap to the TotalMigrationMap
                Clear the MigratingVMList
        Else
            Break
The algorithm first determines whether a host is over-utilized or under-utilized using the local regression method. Next, the VMs on the over-utilized host are clustered by the VM clustering algorithm, and VMClusters are selected for migration to the destination host based on minimum migration time. The energy- and traffic-aware VM placement algorithms described in Part C determine the destination host, and the VMCluster(s) are migrated to it. This whole process is repeated iteratively until the overloaded host returns to normal utilization. If, during the VMCluster migrations, no host is available as a destination, a new host is created dynamically as part of the placement algorithms.
VMClusters from over-utilized hosts can be migrated to under-utilized hosts if those are determined to be suitable destinations. Hence, after all over-utilized hosts have returned to normal utilization, the under-utilized hosts are checked. If any under-utilized host exists, its VMs are clustered into VMClusters and migrated to the available destination hosts.
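As a toy illustration of the iterative part of this loop (Steps 3-4), the sketch below migrates clusters off an overloaded host until its utilization drops to an assumed threshold. The threshold value, cluster representation, and method names are all illustrative; the real algorithm uses local-regression overload detection and the placement policies above.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class Consolidator {
    /** Assumed upper utilization threshold; the thesis uses local regression. */
    static final double UPPER = 0.8;

    /**
     * Migrates clusters (given as CPU-utilization shares of the host) away
     * until utilization drops to or below UPPER, or no clusters remain.
     * Returns the number of clusters migrated.
     */
    public static int deflate(double hostUtil, Deque<Double> clusterShares) {
        int migrated = 0;
        while (hostUtil > UPPER && !clusterShares.isEmpty()) {
            hostUtil -= clusterShares.poll();   // one cluster moved elsewhere
            migrated++;
        }
        return migrated;
    }

    public static void main(String[] args) {
        Deque<Double> shares = new ArrayDeque<>(List.of(0.1, 0.1, 0.1));
        System.out.println(deflate(0.95, shares) + " clusters migrated");
    }
}
```

In the full algorithm each `poll` corresponds to one round of VMCluster selection followed by placement onto an available (or newly created) host.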
4. PERFORMANCE EVALUATION
In this section we describe the experimental setup and evaluate the performance results with respect to overall energy, migration energy, new-host-creation energy consumption, and SLA violations.
4.1 Experimental Setup
We evaluated the efficiency of the energy- and traffic-aware virtual machine consolidation algorithms using the CloudSim toolkit. CloudSim is a framework for modelling and simulating cloud computing infrastructure and services [22]. The version used for this experiment is CloudSim 3.0.3 [24], which includes the energy-consumption simulation model of [25]. In this experimental setup the configurations of the hosts, VMs, workload, and other details are kept the same as in [25]. The relationship between power consumption (in watts) and CPU utilization for the two host types used in the simulation, HP ProLiant ML110 G4 and HP ProLiant ML110 G5, is given in Table 8 [25].
Table 8 Power Consumption at Different CPU Utilization Levels

CPU Utilization (%)   HP ProLiant ML110 G4 (W)   HP ProLiant ML110 G5 (W)
0                     86                         93.7
10                    89.4                       97
20                    92.6                       101
30                    96                         105
40                    99.5                       110
50                    102                        116
60                    106                        121
70                    108                        125
80                    112                        129
90                    114                        133
100                   117                        135
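Power at intermediate utilization levels is obtained by linear interpolation between the measured points, the scheme used by CloudSim's table-based (spec-power) power models. The sketch below, an assumption-labelled illustration rather than the CloudSim class itself, encodes the G4 column of Table 8:

```java
public class PowerTableG4 {
    /** Power (W) of the HP ProLiant ML110 G4 at 0%,10%,...,100% CPU (Table 8). */
    static final double[] POWER =
        {86, 89.4, 92.6, 96, 99.5, 102, 106, 108, 112, 114, 117};

    /** Linearly interpolates the power draw for a utilization in [0, 1]. */
    public static double power(double util) {
        if (util <= 0) return POWER[0];
        if (util >= 1) return POWER[10];
        int i = (int) Math.floor(util * 10);   // lower 10%-bucket
        double frac = util * 10 - i;           // position within the bucket
        return POWER[i] + frac * (POWER[i + 1] - POWER[i]);
    }

    public static void main(String[] args) {
        System.out.println(power(0.25) + " W at 25% utilization");
    }
}
```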
We modified the RAM sizes of the VMs: the four VM types are set to 1 GB, 2 GB, 4 GB, and 8 GB [17]. The unit power consumption for migration is varied from 2 kW to 8 kW, following [26].
4.2 Performance Results
The following graphs show the performance results for the experiments conducted in CloudSim. Results are collected for three allocation policies, LIP-HostSort_VMCL, BFH_VMCL, and BFV_VMCL, each using the clustering mechanism that groups VMs by their traffic matrix and migrates VM clusters instead of individual VMs. All three policies are compared against the Best Fit Host (BFH) allocation policy of [17].
4.2.1 Energy Calculations
In this paper the total energy consumed is calculated as the sum of the total computation energy, the total migration energy, and the additional energy required to create new hosts. The computation energy is the power consumed in maintaining and running the hosts and the VMs on them. We extend the work in [17] and use the same equations. Let E_comp be the computation energy of all the servers:

E_comp = ∑_{i=1}^{N} ∫ P_c(u_i(t)) dt        (1)

where N is the number of active servers and P_c(u_i(t)) is the power consumption of the i-th server, a function of u_i(t), the CPU utilization of the i-th server. Although energy consumption depends on both CPU utilization and memory usage, to simplify the model we assume, as in [17], that computation power consumption depends solely on CPU utilization.
The migration energy accounts for the energy required to migrate the VMs, the power consumed by network devices, and the VMs' memory sizes [17]. Let E_migr be the migration energy consumption of all the migrated VMs:

E_migr = 4 × ∑_{j=1}^{M} P_m · C_j / BW_j        (2)

where M is the total number of migrated VMs, P_m is the unit power consumption for migration, C_j is the memory size of the j-th migrated VM, and BW_j is the available bandwidth for migrating the j-th VM [17]. Here P_m is a constant for a switch of a given bandwidth [26]. The migration energy consumption includes at least two transmissions and two receptions, covering the transmission from the sending host to one or more switches and from the last switch to the receiving host.
The energy required to create additional hosts is modelled as follows. Let E_host denote the energy required to create the new hosts. This value is taken to be between 86 and 89 watts for each host created, following Table 8:

E_host = ∑_{k=1}^{N} P_h        (3)

where N is the total number of additional hosts created and P_h is the power consumption for creating an additional host.
With all these values available, the total energy consumption E_total can be formulated as:

E_total = E_comp + E_migr + E_host
        = ∑_{i=1}^{N} ∫ P_c(u_i(t)) dt + 4 × ∑_{j=1}^{M} P_m · C_j / BW_j + ∑_{k=1}^{N} P_h        (4)
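Equations (1)-(4) can be evaluated numerically once per simulation run. The sketch below is a hypothetical helper, not CloudSim code; the sampling scheme for the integral in eq. (1) and the units (P_m in W, C_j in bits, BW_j in bits/s) are assumptions made here for illustration:

```java
public class EnergyModel {
    /**
     * Computation energy of eq. (1), with the integral approximated by
     * sampling: power[i][t] is the i-th server's draw (W) at sample t,
     * and dt is the sample spacing in seconds. Result in joules.
     */
    public static double computationEnergy(double[][] power, double dt) {
        double e = 0;
        for (double[] server : power)
            for (double p : server) e += p * dt;
        return e;
    }

    /** Migration energy of eq. (2): 4 * sum_j Pm * Cj / BWj. */
    public static double migrationEnergy(double pm, double[] memBits, double[] bwBps) {
        double e = 0;
        for (int j = 0; j < memBits.length; j++)
            e += pm * memBits[j] / bwBps[j];
        return 4 * e;
    }

    /** Eq. (4): total = computation + migration + new-host energy. */
    public static double total(double comp, double migr, double host) {
        return comp + migr + host;
    }

    public static void main(String[] args) {
        // One server sampled twice at 100 W with 1 s spacing; one 1 GB (8e9 bit)
        // VM migrated over a 1 Gbps link with Pm = 2 W.
        double comp = computationEnergy(new double[][]{{100, 100}}, 1.0);
        double migr = migrationEnergy(2.0, new double[]{8e9}, new double[]{1e9});
        System.out.println(total(comp, migr, 0) + " J");
    }
}
```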
Table 9 The four evaluated algorithms

No.   Algorithm
1     BFH
2     LIP-HostSort_VMCL
3     BFH_VMCL
4     BFV_VMCL
Fig. 2 below shows the total energy consumed by the four VM allocation policies; the remaining figures depict the other evaluated measures. From the results it can be observed that LIP-HostSort_VMCL consumes the most energy of all the allocation policies. BFH without clustering is the next-highest consumer, and BFV_VMCL consumes the least energy because it packs the optimal VMClusters together and then migrates them all in parallel. Even though the number of VMs migrated by BFH_VMCL and BFV_VMCL is the same or nearly the same in most cases, BFV_VMCL's migration energy is lower: its optimal combination of VMs allows all the VMs in a VMCluster to be migrated in parallel. BFH_VMCL, in contrast, migrates the VMClusters sequentially one after the other and therefore consumes more migration energy even when the number of migrated VMs is the same. Overall, BFV_VMCL achieves the best energy results.
Fig 2. Total Energy Consumption of the four algorithms (KWH): BFH 125.639; LIP-HostSort_VMCL 175.159; BFH_VMCL 96.739; BFV_VMCL 63.279

Fig 3. Migration Energy Consumption of the four algorithms (KWH): BFH 25.38; LIP-HostSort_VMCL 39.95; BFH_VMCL 8.13; BFV_VMCL 3.2
Fig 4. Energy Consumption in creating new hosts for the four algorithms (KWH): BFH 0.88; LIP-HostSort_VMCL 0.6; BFH_VMCL 0.24; BFV_VMCL 0.08

Fig 5. Total number of VM Migrations for the four algorithms
Fig 6. Total number of additional hosts created by the four algorithms

In addition to the results above, we also measured the energy consumed by the individual algorithms: the Best Fit Host algorithm alone, the clustering algorithm alone, and the combined BFH_VMCL algorithm. The results show that the clustering mechanism consumes very little energy when executed independently compared with BFH and BFH_VMCL. By combining clustering with the BFH algorithm, more than 40 KWH of energy can be saved in a single run.
Fig 7. Energy Consumption for the three individual algorithms (KWH): BFH 135.69; Clustering 1.45; BFH_VMCL 96.739

4.2.2 SLA Violations
SLA violation is one of the important performance metrics for any cloud computing model. In this paper, along with reducing the energy consumption, we also try to keep SLA violations as low as possible. Two metrics are considered when calculating the SLA violations.
First, the SLA can be violated because of performance degradation on overloaded hosts. This metric, denoted SLAV_overloaded-CPU, is the ratio of the SLA violation time to the total active time of the servers [17, 25, 27]:

SLAV_overloaded-CPU = ∑_{i=1}^{N} T_v,i / ∑_{i=1}^{N} T_a,i        (5)
where N is the number of active servers, T_v,i is the SLA violation time of the i-th server, and T_a,i is the total active time of the i-th server.
Second, VM migrations also cause performance degradation [28], so we consider the SLA violations during VM migrations. Let SLAV_unmet-MIPS denote the ratio of the total unmet MIPS to the total requested MIPS during VM migrations:

SLAV_unmet-MIPS = ∑_{j=1}^{M} ∫ (U_r,j(t) − U_a,j(t)) dt / ∑_{j=1}^{M} ∫ U_r,j(t) dt        (6)

where M is the total number of migrated VMs, U_r,j is the requested MIPS of the j-th VM, and U_a,j is the allocated MIPS of the j-th VM. The overall SLA violation metric, denoted SLAV_overall, combines the two metrics above [17]:

SLAV_overall = SLAV_overloaded-CPU × SLAV_unmet-MIPS        (7)
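Equations (5)-(7) reduce to simple ratios once the per-server times and per-VM MIPS totals are known. A hedged sketch (the integrals of eq. (6) are approximated here as per-VM totals, an assumption of this illustration):

```java
public class SlaMetrics {
    /** Eq. (5): summed SLA-violation time over summed active time. */
    public static double overloadedCpu(double[] violationTime, double[] activeTime) {
        double v = 0, a = 0;
        for (double t : violationTime) v += t;
        for (double t : activeTime) a += t;
        return v / a;
    }

    /**
     * Eq. (6): unmet MIPS over requested MIPS during migrations, with each
     * VM's time integral pre-aggregated into requested[j] and allocated[j].
     */
    public static double unmetMips(double[] requested, double[] allocated) {
        double unmet = 0, req = 0;
        for (int j = 0; j < requested.length; j++) {
            unmet += requested[j] - allocated[j];
            req += requested[j];
        }
        return unmet / req;
    }

    /** Eq. (7): combined SLA violation metric. */
    public static double overall(double overloadedCpu, double unmetMips) {
        return overloadedCpu * unmetMips;
    }

    public static void main(String[] args) {
        double oc = overloadedCpu(new double[]{1, 1}, new double[]{10, 10});
        double um = unmetMips(new double[]{10, 10}, new double[]{9, 8});
        System.out.println(overall(oc, um));
    }
}
```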
The table below shows the SLA violations for the four allocation policies discussed in this paper.
Table 10 SLA Violations

Algorithm             SLAV_unmet-MIPS   SLAV_overloaded-CPU   SLAV_overall
LIP-HostSort_VMCL     0.00107           0.727                 0.0000077
BFH_VMCL              0.00038           0.917                 0.0000034
BFV_VMCL              0.00019           5.946                 0.0000112
BFH [17]              0.00053           1.262                 0.0000066
From the table, BFV_VMCL achieves the lowest SLAV_unmet-MIPS, a consequence of its low migration energy. However, BFV_VMCL has the highest SLAV_overloaded-CPU, and hence its overall SLA violations are considerably high. BFH_VMCL has the lowest overall SLA violation and, with minimal SLA violations, is considered the best of the four. The table also shows that BFH_VMCL's SLA violations are reduced by almost 50% relative to BFH [17], which demonstrates the efficiency of the energy- and traffic-aware allocation policies.
5.0 CONCLUSION & FUTURE WORK
In this paper we extended the work of [17, 3, 6] by adding traffic awareness to the VM consolidation algorithms. We proposed a clustering mechanism that groups the VMs considering both energy and the traffic matrix, and implemented four different allocation policies. The energy calculations and SLA violations for these algorithms were discussed along with the supporting results. The results show that the clustering mechanism helps solve both problems: improving energy efficiency and reducing the communication latency between VMs. The total energy consumed is considerably reduced with the energy- and traffic-aware VM consolidation algorithms when compared to [17]. The number of VMs migrated is reduced significantly for BFH_VMCL and BFV_VMCL compared with LIP-HostSort_VMCL. The SLA metrics also show that the clustering mechanism achieves almost 50% better results in terms of SLA violations. Considering minimal SLA violations together with energy efficiency, BFH_VMCL performs best overall, while BFV_VMCL is the most energy-efficient model at the price of high SLA violations. Further work can concentrate on improving the BFV_VMCL algorithm by reducing its SLA violations. Research can also be directed at migration analysis and at more efficient models that improve energy efficiency and achieve lower communication latency between VMs. Finally, this dynamic VM consolidation mechanism has not yet been deployed in production data centers; real deployments may demand different approaches, and this is potential future work toward an efficient green cloud computing solution.
REFERENCES
1. A. Beloglazov and R. Buyya. "OpenStack Neat: a framework for dynamic and energy-efficient consolidation of virtual machines in OpenStack clouds," Concurrency and Computation: Practice and Experience, Wiley, 2014. DOI: 10.1002/cpe.3314.
2. A. Beloglazov, R. Buyya, Y. C. Lee, and A. Zomaya. "A taxonomy and survey of energy-efficient data centers and cloud computing systems," in Advances in Computers, Vol. 82, M. Zelkowitz (ed.), Academic Press: Waltham, MA, USA, 2011, pp. 47-111.
3. A. Beloglazov, J. Abawajy, and R. Buyya, “Energy-aware resource allocation
heuristics for efficient management of data centers for cloud computing,” Future
Generation Computer Systems, vol. 28, no. 5, pp. 755-768, 2012.
4. T.Arthi and S.Hamead , SSN College of Engineering, Anna University Chennai - 603110.
Tamil Nadu, India, ”Energy Aware Cloud Service Provisioning Approach for Green
Computing Environment”, at 2013 3rd IEEE International Advance Computing
Conference (IACC).
5. R.Kashyap , S.Chaudhary and P.M.Jat . “Virtual Machine Migration for Back-end
Mashup Application Deployed on OpenStack Environment”. International Conference on
Parallel, Distributed and Grid Computing.2014.
6. H. Trong Vu, S. Hwang. “A Traffic and Power aware Algorithm for Virtual Machine
Placement in Cloud Data Center”. International Journal of Grid and Distributed
Computing Vol.7, No.1 (2014)
7. X. Meng, V. Pappas, and L. Zhang. “Improving the scalability of data center networks
with traffic-aware virtual machine placement.” In INFOCOM, 2010 Proceedings IEEE,
pages 1-9, 2010.
8.V. Shrivastava, P. Zerfos, K.W. Lee, H. Jamjoom, Yew-Huey Liu, and S. Banerjee.
“Application-aware virtual machine migration in data centers.” In INFOCOM, 2011
Proceedings IEEE, pages 66-70,2011.
9. A. Soule, A. Nucci, R. Cruz, E. Leonardi, and N. Taft. “How to identify and estimate
the largest traffic matrix elements in a dynamic environment”. In Proceedings of the joint
international conference on Measurement and modeling of computer systems,
SIGMETRICS '04/Performance '04, pages 73-84, New York, NY, USA, 2004. ACM.
10. T.Wood, P.Shenoy, A.Venkataramani, and M.Yousif. “Black-box and gray-box
strategies for virtual machine migration”. In Proceedings of the 4th USENIX conference
on Networked systems design & implementation, NSDI'07, pages 17-17, Berkeley,
CA, USA, 2007. USENIX Association.
11. A. Nucci, A. Sridharan, and N. Taft. “The problem of synthetically generating IP traffic
matrices”. SIGCOMM Comput. Commun. Rev., 35(3):19-32, July 2005.
12. R. Buyya, A. Beloglazov, and J. H. Abawajy. “Energy-efficient management of data
center resources for cloud computing”: A vision, architectural elements, and open
challenges. CoRR, abs/1006.0308, 2010.
13. J. Koomey . Growth in Data Center Electricity Use 2005 to 2010. Analytics Press:
Oakland, CA, 2011.
14. R. Alvarez-Horine and M. Moh, “Experimental evaluation of Linux TCP for adaptive
video streaming over the cloud,” Proc. of the 1st IEEE Int. workshop on Management and
Security technologies for Cloud Computing, with the IEEE Globecom (Global
Communications Conference), Anaheim, CA, Dec. 2012.
15. J. Huang, K. Wu, L. Leong, S. Ma, and M. Moh, “A tunable workflow scheduling
algorithm based on particle swarm optimization for cloud computing,” International
Journal of Soft Computing and Software Engineering [JSCSE], Vol. 3, No. 3, pp. 351-358,
2013.
16. S. Gaur, M. Moh, and M. Balakrishnan, “Hiding behind the clouds: efficient, privacy-
preserving queries via cloud proxies,” Proc. Of International Workshop on Cloud
Computing Systems, Networks, and Applications, in conjunction with the IEEE Globecom
(Global Communications Conference), held in Atlanta, GA., Dec. 2013.
17. J. Huang, K. Wu, and M. Moh. "Dynamic Virtual Machine Migration Algorithms Using Enhanced Energy Consumption Model for Green Cloud Data Centers," International Conference on High Performance Computing & Simulation (HPCS), pp. 902-910, 2014.
18. H. Trong Vu, S. Hwang. “A Traffic and Power aware Algorithm for Virtual Machine
Placement in Cloud Data Center”. International Journal of Grid and Distributed
Computing Vol.7, No.1 (2014)
19. A. Beloglazov and R.Buyya, “Energy-Efficient Management of Virtual Machines in
Data Centers for Cloud Computing” ,Department of Computing and Information Systems ,
The University of Melbourne ,2012.
20. D.S.Dias and L.H.M.K.Costa , “Online Traffic-aware Virtual Machine Placement in
Data Center Networks”. Global Information Infrastructure Symposium with the IEEE,
2012.
21. M. A. Porter, J.-P. Onnela, and P. J. Mucha, “Communities in networks,” 2009.
22. R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. F. De Rose, and R.Buyya, “CloudSim:
a toolkit for modeling and simulation of cloud computing environments and evaluation of
resource provisioning algorithms,” Software-Practice and Experience, v. 41, pp. 23-50,
2011.
23. Gartner, Inc. Gartner Estimates ICT Industry Accounts for 2 Percent of Global CO2
Emissions. Gartner Press: Stamford, CT, USA, Release (April 2007).
24. CloudSim 3.0.3. Retrieved from http://code.google.com/p/cloudsim/
25. A. Beloglazov, and R. Buyya, “Optimal online deterministic algorithms and adaptive
heuristics for energy and performance efficient dynamic consolidation of virtual machines
in cloud data centers,” Concurrency Computation: Practice and Experience, vol. 24, pp.
1397-1420, 2012.
26. J. Baliga, R. W. A. Ayre, K. Hinton, and R. S. Tucker, "Green cloud computing:
balancing energy in processing, storage, and transport," Proceedings of the IEEE, pp. 149-
167, 2011.
27. A. Beloglazov, and R. Buyya, “Adaptive threshold-based approach for energy-efficient
consolidation of virtual machines in cloud data centers,” Proc. 8th International
Workshop on Middleware for Grids, Clouds and e-Science, 2010.
28. W. Voorsluys, J. Broberg, S. Venugopal, and R. Buyya, "Cost of Virtual Machine Live Migration in Clouds: A Performance Evaluation," 1st International Conference on Cloud Computing, pp. 254-265, 2009.
29. D.S. Dias and L.H.M.K. Costa. “Online traffic-aware virtual machine placement in
data center networks.” In Global Information Infrastructure and Networking Symposium
(GIIS), 2012, pages 1-8, 2012.
30. T. H. Nguyen, M. D. Francesco, and A. Y. Aski, "Virtual Machine Consolidation with Usage Prediction for Energy-Efficient Cloud Data Centers," IEEE 8th International Conference on Cloud Computing, 2015.
31. M. R. Chowdhury, M. R. Mahmud, and R. M. Rahman, "Clustered Based VM Placement Strategies," IEEE/ACIS International Conference on Computer and Information, 2015.
32. Q. Chen, J. Chen, B. Zheng, J. Cui, and Y. Qian, "Utilization-based VM consolidation scheme for power efficiency in cloud data centers," 2015 IEEE International Conference on Communication Workshop (ICCW).
33. F. Farahnakian., T.Pahikkala., P.Liljeberg., J.Plosila., & H.Tenhunen, “Utilization
Prediction Aware VM Consolidation Approach for Green Cloud Computing”, 2015 IEEE
8th International Conference on Cloud Computing
34. E. Feller, L. Rilling, and C. Morin, "A scalable and autonomic virtual machine management framework for private clouds," Proceedings of the 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), Ottawa, Canada, 2012, pp. 482-489.
35. D.Kusic , N.Kandasamy , G.Jiang . “Combined power and performance management
of virtualized computing environments serving session-based workloads” IEEE
Transactions on Network and Service Management (TNSM) 2011; 8(3):245–258.
36. D.Ardagna , B.Panicucci , M.Trubian , L.Zhang . “Energy-aware autonomic resource
allocation in multitier virtualized environments.” IEEE Transactions on Services
Computing (TSC) 2012; 5(1):2–19.
37. J. Yang, K. Zeng, H. Hu, and H. Xi, "Dynamic cluster reconfiguration for energy conservation in computation intensive service," IEEE Transactions on Computers, 2012; 61(10):1401-1416.
38. D.Carrera , M.Steinder , I.Whalley , J.Torres , E.Ayguadé . “Autonomic placement of
mixed batch and transactional workloads.” IEEE Transactions on Parallel and Distributed
Systems (TPDS) 2012; 23(2):219–231.
39. I.Goiri , J.Berral , J.Oriol Fitó , F.Julià , R.Nou , J.Guitart , R.Gavaldà , J.Torres .
“Energy-efficient and multifaceted resource management for profit-driven virtualized data
centers.” Future Generation Computer Systems (FGCS) 2012; 28(5):718–731.