B. Source of deployments
Because containers can be deployed across a wide range of
computing platforms, such as the cloud and the datacenter, it becomes
complex for any scheduling scheme to decide on the best-fit
resources. User requirements also vary with the deployment
model: if a datacenter is chosen for deployment, the user may
provide basic system requirements such as CPU, RAM, and disk
space, whereas for a cloud deployment the user may provide
pricing details. Hence, the source of deployment is also
considered an important factor (a minimal illustration of such
deployment-dependent requests follows).
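The following is a minimal sketch of how such a deployment-dependent request might be represented; the type and its field names are our illustrative assumptions, not part of the paper.

```python
# Hypothetical placement request: the populated fields depend on whether the
# user targets a datacenter (system specs) or a cloud (pricing constraints).
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlacementRequest:
    image: str
    target: str                               # "datacenter" or "cloud"
    cpu_cores: Optional[float] = None         # datacenter-style requirements
    memory_gb: Optional[float] = None
    disk_gb: Optional[float] = None
    max_hourly_price: Optional[float] = None  # cloud-style requirement

# A cloud user may only constrain price; a datacenter user, system resources.
cloud_req = PlacementRequest(image="nginx", target="cloud", max_hourly_price=0.05)
dc_req = PlacementRequest(image="nginx", target="datacenter",
                          cpu_cores=2, memory_gb=4, disk_gb=20)
```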
C. Nature of application
The types of applications deployed as containers also vary.
Microservices [11] have started replacing the traditional multi-tier
application, and the requirements of such applications differ
from those of traditional containerized applications.
Once containers are placed across resources, it becomes
important to monitor their performance at regular intervals.
In this paper, we propose an architecture for
resource-aware placement and SLA-based monitoring for
better management of resources.
The paper is organized as follows: Section II details
various container orchestration platforms, and Section III presents our
focus on container placement and SLA monitoring. Section IV compares
our algorithm with default scheduling, the proposed
architecture is described in Section V, and the advantages
and conclusion follow in Sections VI and VII.
II. CONTAINER ORCHESTRATION AND SCHEDULING
Docker Swarm [12] is one of the popular open-source
orchestration platforms. Its architecture is simple and consists
of Manager and Worker nodes. The Manager node controls the
entire cluster management and task distribution, while Worker
nodes run containers as directed by the Manager nodes. Recent
Docker versions include “Swarm Mode” by default, which
helps manage a cluster of Docker Engines.
Kubernetes [13] is another popular solution for container
orchestration. Its architecture consists of a Master and nodes,
similar to Swarm. The latest versions of Kubernetes provide four
different advanced scheduling policies, viz. node affinity/anti-affinity,
taints and tolerations, pod affinity/anti-affinity, and
custom scheduling (a minimal node-affinity example is sketched below).
Mesos [14] is yet another orchestration solution for
managing diverse cluster computing frameworks, including
Hadoop. Marathon [15] is another orchestration framework, built on Mesos.
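As a concrete illustration of one of these policies, the following is a minimal sketch that pins a pod to SSD-labeled nodes via node affinity, using the official Kubernetes Python client; the label key/values and pod name are our own illustrative choices.

```python
# Minimal node-affinity sketch with the official Kubernetes Python client:
# require the scheduler to place the pod only on nodes labeled disktype=ssd.
from kubernetes import client, config

config.load_kube_config()  # assumes a reachable cluster and local kubeconfig

affinity = client.V1Affinity(
    node_affinity=client.V1NodeAffinity(
        required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
            node_selector_terms=[client.V1NodeSelectorTerm(
                match_expressions=[client.V1NodeSelectorRequirement(
                    key="disktype", operator="In", values=["ssd"])])])))

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="affinity-demo"),
    spec=client.V1PodSpec(
        affinity=affinity,
        containers=[client.V1Container(name="app", image="nginx")]))

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```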
Most orchestration solutions provide a scheduling
scheme that identifies resources based on the
current demand and availability of resources for faster
deployment. The need for resource scheduling and continuous
monitoring is explained in the next section.
III. RESOURCE-AWARE PLACEMENT AND SLA MONITORING
Most traditional orchestration schedulers identify
resources matching the minimum requirements set by the user.
Usually, the resources are rated/graded based on their current
availability and characteristics before the
scheduling/placement of containers. After a container is
deployed, it becomes equally important to monitor its
performance. Since more than one container
can be placed on the same physical server/VM, containers can
affect each other's performance; this calls for resource-aware
placement and SLA monitoring for better
management of the resources. Our main motivations behind
container placement and SLA monitoring are explained as
follows.
A. Container placement
Containers placed on a server or VM consume memory
according to their level of usage. Some users request application
scaling based on threshold values. However, under certain
conditions, containers residing on the same host may
impact each other's performance. Monitoring at the host
level can capture each container's performance metrics, such
as CPU utilization and memory consumption. For proper
placement of containers, the user can specify the
maximum level of CPU, RAM, or other metrics. Most
scheduling schemes analyze resources based on their current
condition and place the container accordingly; they do not forecast the
impact of the running containers or how the current deployment
will affect them. Hence, this can create an
environment with a high chance of overload, which
degrades performance and may even cause containers to
restart. An example of such a condition is illustrated as follows.
Fig. 1 shows the CPU utilization of containers
deployed across four servers at a time interval Tx. If a
new container were to be scheduled across these servers,
server-2 would be chosen based on its low utilization.
Fig. 2 shows the CPU utilization of the containers after a time
interval Tx+y. Had server-2 been chosen for the new
deployment based on Tx, it would have become overloaded
during the Tx+y interval, possibly degrading the
performance of all containers deployed on it.
Fig. 1. CPU utilization of Containers during time interval (Tx)
Fig. 2. CPU utilization of Containers during time interval (Tx+y)
B. SLA monitoring and negotiation
After a container is placed on a resource, it has
to be monitored at regular intervals. The default monitoring
scheme aims at verifying the actual requirement set by the user;
for example, if the user requested a maximum of 4 GB of
memory for container C, it monitors the memory usage of
container C at a periodic interval and raises an alert before
usage reaches 4 GB. Our proposed solution
aims at carrying out a desired action on the affected container
under such conditions. To ensure smooth operation, the user
can choose the default action to be carried out in such
conditions, e.g., scaling, pause, stop, or termination.
Negotiation with the user can be made to handle any
uncertain conditions, e.g., no resources being available for the
request at the moment, abnormal behavior of containers,
etc. Thus, negotiation helps to overcome any issues in the
deployment of containers.
SLA monitoring and negotiation are required to
verify the deployment of a container and to ensure its
smooth management. By SLA monitoring we refer to checking the
requirements specified by the user for the deployment of a
container; the resources are likewise monitored to
ensure that the containers' performance is not affected.
SLA monitoring ensures that a given container utilizes the
resources as requested. SLA negotiation is also performed in
certain cases when required, e.g., if a container
consumes far fewer resources than requested over a long
period of time. This ensures that the resource is not
underutilized and provides optimal usage. Thus,
negotiation helps to manage both resources and containers.
A minimal sketch of the monitoring loop follows.
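The following is a minimal sketch of such an SLA check, using the Docker SDK for Python against a single container; the threshold ratio, polling interval, and action mapping are our illustrative assumptions (on Kubernetes, the same check would read pod metrics instead).

```python
# Hedged sketch of the SLA monitoring loop: sample a container's memory usage
# and carry out the user-chosen default action before the requested limit is hit.
import time
import docker

REQUESTED_BYTES = 4 * 1024**3   # user-requested maximum: 4 GB
ALERT_RATIO = 0.9               # act shortly before the limit is reached

def monitor(name: str, action: str = "pause", interval: int = 30) -> None:
    container = docker.from_env().containers.get(name)
    while True:
        stats = container.stats(stream=False)          # one-shot stats sample
        usage = stats["memory_stats"].get("usage", 0)
        if usage >= ALERT_RATIO * REQUESTED_BYTES:
            print(f"SLA alert: {name} at {usage / 1024**3:.2f} GB")
            if action == "pause":
                container.pause()
            elif action == "stop":
                container.stop()
            break                                      # hand off to negotiation
        time.sleep(interval)
```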
IV. COMPARISON OF ALGORITHMS
This section explains our proposed and the default
scheduling algorithms in detail. The default scheduling algorithm
selects a resource based on the current trend of resource
utilization; Fig. 3 represents it.
In this experiment, we used four servers (Server1,
Server2, Server3, and Server4), with 3 containers
deployed on each server. The total CPU
utilized across the servers is 40, 30, 45, and 60,
respectively. In this scenario, given a new request for the
deployment of a container, the default scheduling scheme chooses
Server2 based on its minimal utilization of resources.
Fig. 4. Proposed scheduler
Fig. 4 explains our proposed scheduling algorithm. In the
same scenario, where the servers' CPUs are utilized at 40, 30,
45, and 60, a new request for the deployment of a
container leads our proposed algorithm to choose Server1.
Although Server1's current CPU utilization is higher than that
of Server2, the algorithm predicts the future load based on the
containers' requirements. Hence, it can identify the
optimal resource, minimizing migration and improving
overall performance.
Our proposed scheduling algorithm differs from the
default scheduling algorithm in that it identifies resources based
on their future utilization. The differences
between the proposed and default scheduling are tabulated as
follows; a runnable sketch of both policies is given after the table.
TABLE I. Comparison of scheduling algorithms

Proposed algorithm:
a. Identify the resource list {Rlist} matching the user requirements (Ur) from the total resources {Ri}, where i = 1..n.
b. If |Rlist| > 1, identify the maximum requirement of each container deployed across Rlist.
c. For each resource in Rlist, identify the total resource usage TMaxcpu = Σ_{j=1}^{m} Max(Cj), where Max(Cj) → maximum requested value of container Cj, Cj → container j running on the resource, n → total number of resources in Rlist, and m → total number of containers deployed on a resource.
d. Identify the optimal resource Rd from Min{TMaxcpu}.
e. Deploy the container on Rd.

Default scheduling algorithm:
a. Identify the resource list {Rlist} matching the user requirements (Ur) from the total resources {Ri}, where i = 1..n.
b. If |Rlist| > 1, identify the currently least utilized resource Rleast.
c. Deploy the container on Rleast.

Note: User requirements can be any performance metric. In our experiment, we have used CPU utilization.
Fig. 3. Default scheduler
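To make the comparison concrete, the following is a minimal sketch of both policies from Table I in Python. The current utilizations (40, 30, 45, 60) are taken from the experiment above, but the per-container maximum requests are illustrative assumptions, since the paper does not list them.

```python
# Sketch of the two placement policies in Table I. "current" values come from
# the experiment; the per-container maximum requests are assumed for illustration.
servers = {
    "Server1": {"current": 40, "max_requests": [15, 10, 20]},  # TMaxcpu = 45
    "Server2": {"current": 30, "max_requests": [30, 25, 25]},  # TMaxcpu = 80
    "Server3": {"current": 45, "max_requests": [20, 15, 15]},  # TMaxcpu = 50
    "Server4": {"current": 60, "max_requests": [25, 20, 20]},  # TMaxcpu = 65
}

def default_schedule(rlist):
    # Default steps a-c: pick the currently least utilized resource (Rleast).
    return min(rlist, key=lambda s: rlist[s]["current"])

def proposed_schedule(rlist):
    # Proposed steps a-e: pick the resource with the smallest summed maximum
    # container requests, Min{TMaxcpu}, i.e. the lowest predicted worst-case load.
    return min(rlist, key=lambda s: sum(rlist[s]["max_requests"]))

print(default_schedule(servers))   # Server2: lowest current utilization
print(proposed_schedule(servers))  # Server1: lowest TMaxcpu
```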
V. PROPOSED ARCHITECTURE
Our proposed architecture manages the advanced
scheduling of containers on the Kubernetes platform; the
overall architecture is shown in Fig. 5.
Fig. 5. Proposed architecture
The Request Manager, Information Collector, Policy Manager,
SLA Manager, and Resource Manager are the major
components. The Request Manager validates the container
request and checks the feasibility of deploying the
requested number of containers. The Information Collector
maintains complete information about all resources
(the Kubernetes cluster and its containers) and updates it in a NoSQL
database (e.g., MongoDB); it also records each container's
historic behavior (a minimal sketch follows this overview). The SLA
Manager verifies container performance against the policy registered by
the Policy Manager, with which users can create and monitor their
policies. The Resource Manager is the core component
managing all other components, and it also ensures secure
communication across the Kubernetes cluster. It aids in
scheduling and identifies the best-matched resources for the
successful deployment of containers over the Kubernetes cluster.
It manages the resources based on the algorithm described in
Section IV.
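The following is a minimal sketch of the Information Collector's recording step, assuming Docker-visible containers and a local MongoDB instance accessed via pymongo; the database, collection, and field names are our illustrative assumptions.

```python
# Hedged sketch: snapshot each running container's metrics into MongoDB so the
# SLA Manager and Resource Manager can query current and historic behavior.
import time
import docker
from pymongo import MongoClient

def collect_once() -> None:
    records = MongoClient("mongodb://localhost:27017")["orchestrator"]["container_stats"]
    for container in docker.from_env().containers.list():
        stats = container.stats(stream=False)
        records.insert_one({
            "container": container.name,
            "timestamp": time.time(),
            "memory_usage": stats["memory_stats"].get("usage", 0),
            "cpu_total": stats["cpu_stats"]["cpu_usage"]["total_usage"],
        })

# Run periodically (e.g., every 30 s) to build the historic record.
```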
VI. ADVANTAGES OF PROPOSED SOLUTION
The proposed solution analyzes the future utilization of the
targeted resource before deployment, which can
minimize the migration of resources and ensure the smooth
functioning of all applications and containers across the resources.
Our solution is based on user-provided specifications; for
new users who have no upfront information on
specifications, SLA negotiation helps in finding the optimal
resources. However, new users are recommended to
provide the minimum requirements for the initial placement of
containers. SLA-based monitoring helps verify the
commitments of both the provider and the user.
VII. CONCLUSION
Our proposed solution can benefit any container
orchestrator in the effective scheduling of containers and
improve application performance. We experimented with the
CPU requirement, but the solution can similarly span
any set of performance metrics. Our solution aims at
minimizing issues both before and after scheduling, thereby
reducing the number of container migrations and restarts.
We also understand that it is difficult for users
to specify performance requirements upfront; hence, in the future we plan
to predict performance based on historic references and
create SLA policies accordingly. We also plan to expand our
experiments with additional performance metrics over a wide
range of microservice applications.
REFERENCES
[1] A. J. Younge, R. Henschel, J. T. Brown, G. von Laszewski, J. Qiu and
G. C. Fox, "Analysis of Virtualization Technologies for High
Performance Computing Environments," 2011 IEEE 4th International
Conference on Cloud Computing, Washington, DC, 2011, pp. 9-16.
doi: 10.1109/CLOUD.2011.29
[2] S. G. Soriga and M. Barbulescu, "A comparison of the performance
and scalability of Xen and KVM hypervisors," 2013 RoEduNet
International Conference 12th Edition: Networking in Education and
Research, Iasi, Romania, 2013, pp. 1-6.
[3] J. P. Walters et al., "GPU Passthrough Performance: A Comparison of
KVM, Xen, VMWare ESXi, and LXC for CUDA and OpenCL
Applications," 2014 IEEE 7th International Conference on Cloud
Computing, Anchorage, AK, 2014, pp. 636-643.
[4] F. Faniyi and R. Bahsoon, “A Systematic Review of Service Level
Management in the Cloud”, ACM Comput. Surv. 48, 3, Article 43
(December 2015), 27 pages.
[5] D. Bernstein, "Containers and Cloud: From LXC to Docker to
Kubernetes," in IEEE Cloud Computing, vol. 1, no. 3, pp. 81-84, 2014.
[6] M. Raho, A. Spyridakis, M. Paolino and D. Raho, "KVM, Xen and
Docker: A performance analysis for ARM based NFV and cloud
computing," 2015 IEEE 3rd Workshop on Advances in Information,
Electronic and Electrical Engineering (AIEEE), Riga, 2015, pp. 1-8.
[7] T. Adufu, J. Choi and Y. Kim, “Is container-based technology a winner
for high performance scientific applications?”, Network Operations
and Management Symposium, 17th Asia-Pacific, Busan, 2015, pp.
507-510.
[8] E. N. Preeth, F. J. P. Mulerickal, B. Paul and Y. Sastri, "Evaluation of
Docker containers based on hardware utilization," 2015 International
Conference on Control Communication & Computing India (ICCC),
Trivandrum, 2015, pp. 697-700.
[9] F. Paraiso, S. Challita, Y. Al-Dhuraibi, P. Merle. “Model-Driven
Management of Docker Containers”, 9th IEEE International
Conference on Cloud Computing (CLOUD), Jun 2016, San Francisco,
United States.
[10] A. Tosatto, P. Ruiu, A. Attanasio, “Container-based orchestration in
cloud: State of the art and challenges”, in: Proc. of 9th Int'l Conf. on
Complex, Intelligent, and Software Intensive Systems, CISIS '15,
2015, pp. 70-75
[11] D. Jaramillo, D. V. Nguyen and R. Smart, "Leveraging microservices
architecture by using Docker technology," SoutheastCon 2016,
Norfolk, VA, 2016, pp. 1-5.
[12] N. Naik, "Building a virtual system of systems using docker swarm in
multiple clouds," 2016 IEEE International Symposium on Systems
Engineering (ISSE), Edinburgh, 2016, pp. 1-3.
doi: 10.1109/SysEng.2016.7753148
[13] L. Abdollahi Vayghan, M. A. Saied, M. Toeroe and F. Khendek,
"Deploying Microservice Based Applications with Kubernetes:
Experiments and Lessons Learned," 2018 IEEE 11th International
Conference on Cloud Computing (CLOUD), San Francisco, CA, USA,
2018, pp. 970-973.
[14] B. Hindman, A. Konwinski, M. Zaharia, A. Ghodsi, A. D. Joseph, R.
H. Katz, S. Shenker, and I. Stoica. “Mesos: A platform for fine-grained
resource sharing in the data center”, In NSDI, volume 11, pages 22–22,
2011.
Marathon. [Online]. Available: https://mesosphere.github.io/marathon.
[Accessed: 09-Jun-2019].