1. Scheduling internet of things
applications in cloud computing
Prepared by: Eng. Eman Ahmed Ismail (version no: 1)
ISO QMS/SMS/ISMS Lead Auditor (issue date: 22 April 2017)
IT Service Expert
Port Said University
Faculty of Engineering
Computer and Control Department
2. IoT
Internet of Things (IoT) is the inter-networking of physical devices, vehicles, buildings, and other items (also referred to as "connected devices" and "smart devices") embedded with electronics, software, sensors, actuators, and network connectivity that enable these objects to collect and exchange data.
The basic idea of IoT is to allow autonomous and secure connection and exchange of data between real-world devices and applications.
Most mobile devices embed different sensors and actuators that can sense, perform computation, take intelligent decisions, and transmit useful collected information over the Internet. A network of such devices with different sensors can give rise to an enormous range of applications and services that bring significant personal, professional, and economic benefits.
3. Workflow of IoT
The IoT workflow can be described as follows:
1) Object: sensing, identification, and communication of object-specific information. The information is the sensed data about temperature, orientation, motion, vibration, acceleration, humidity, chemical changes in the air, etc., depending on the type of sensors.
2) Trigger an action: the received object information is processed by a smart device/system that then determines an automated action to be invoked.
3) The smart device/system provides rich services and includes a mechanism to provide feedback to the administrator about the current system status and the results of invoked actions.
5. IoT Architecture
1. Device layer
2. Data ingestion and transformation layer
3. Data processing layer
4. Application layer
6. Possible IoT Applications:
1) Prediction of natural disasters
2) Industry applications
3) Water Scarcity monitoring
4) Design of smart home
5) Medical applications
6) Agriculture application
7) Intelligent transport system design
8) Design of smart cities
9) Smart metering and monitoring
10) Smart Security
7. Gartner Symposium/ITxpo
Gartner Symposium/ITxpo is the world's most important gathering of CIOs and senior IT executives.
The event delivers independent and objective content with the authority and weight of the world's leading IT research and advisory organization, and provides access to the latest solutions from key technology providers.
Gartner's annual Symposium/ITxpo events are key components of attendees' annual planning
efforts. IT executives rely on Gartner Symposium/ITxpo to gain insight into how their
organizations can use IT to address business challenges and improve operational efficiency.
8. Gartner Symposium/ITxpo estimation of IoT
Gartner estimates that 4 billion connected things will be in use in the consumer sector in 2016, rising to 13.5 billion in 2020.
Gartner considers two classes of connected things. The first class consists of generic or cross-industry devices that are used in multiple industries; the second consists of vertical-specific devices found in particular industries.
Cross-industry devices include connected light bulbs, HVAC, and building management systems that are mainly deployed for cost saving. Vertical-specific devices include specialized equipment used in hospital operating theatres, tracking devices on container ships, and many others.
"Connected things for specialized use are currently the largest category; however, this is quickly changing with the increased use of generic devices. By 2020, cross-industry devices will dominate the number of connected things used in the enterprise."
9. Gartner Symposium/ITxpo estimation of IoT
IoT units installed base by category (millions of units):
Category                        2014    2015    2016     2020
Consumer                       2,277   3,023   4,024   13,509
Business: Cross-Industry         632     815   1,092    4,408
Business: Vertical-Specific      898   1,065   1,276    2,880
Grand Total                    3,807   4,902   6,392   20,797
11. IoT and Cloud Computing
Advantages of cloud computing over traditional computing:
Simplicity of usage
Flexibility of data access
Ease of maintenance
Time and energy efficiency
Pay-as-you-go pricing
12. Cloud computing
Cloud computing is a model for on-demand access to a shared pool of configurable resources (e.g. compute, networks, servers, storage, applications, services, and software).
The service models of cloud computing are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
Cloud deployment types:
1. Public cloud
2. Private cloud
3. Hybrid cloud
13. Resource management
Resource management is an umbrella activity comprising the different stages resources and workloads go through, from workload submission to workload execution.
Resource management in the cloud includes two stages: i) resource provisioning and ii) resource scheduling.
Resource provisioning is the stage that identifies adequate resources for a given workload based on the QoS requirements described by cloud consumers.
Resource scheduling is the mapping and execution of cloud consumer workloads on the resources selected through resource provisioning.
17. Resource scheduling
The goal is to identify suitable resources for scheduling the appropriate workloads on time and to increase the effectiveness of resource utilization.
The amount of resources allocated to a workload should be the minimum needed to maintain the required level of service quality, or to minimize the workload's completion time (or maximize throughput).
Resource scheduling considers the execution time of every distinct workload, but most importantly, overall performance also depends on the type of workload, i.e. workloads with different QoS requirements (heterogeneous workloads) versus workloads with similar QoS requirements (homogeneous workloads).
Effective resource scheduling reduces execution cost, execution time, and energy consumption while considering other QoS requirements such as reliability, security, availability, and scalability.
18. Scheduling algorithms
Traditional cloud computing server schedulers are not ready to provide services to IoT, because IoT consists of a large number of heterogeneous devices and applications that are far from standardization, unlike cloud computing with its standardization approaches (BSI, CSA, DMTF, and NIST).
Therefore, to meet users' expectations, traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests by considering the heterogeneity of servers and priorities (here, requests can also be for software, platform, infrastructure, etc.).
19. Related works: 1. A queueing-based model for
performance management on cloud
Peng Chen H, Chong Li S (2010) "A queueing-based model for performance management on cloud":
In this model, web applications are modeled as queues and virtual machines are modeled as service centers. Queueing theory is applied to decide how to dynamically create and remove virtual machines in order to scale up and down.
1. When a web application is deployed on a cloud, the Cloud Controller, as the portal of the cloud, establishes a queue for it to hold client requests. Meanwhile, a certain number of VMs is created by the Cloud Controller on Cloud Node(s). The number of initially created VMs can be specified by the service level agreement, or by an empirical value if the service level agreement places no constraint on it.
2. When a client sends a request to a web application on the cloud, the request is sent to the Cloud Controller. The dispatcher in the Cloud Controller forwards the request to the queue of the target web application. The instances of the target web application running in VMs act as service centers to process the requests in the queue.
20. Related works: 1. A queueing-based model for
performance management on cloud
Follow-up: Peng Chen H, Chong Li S (2010) "A queueing-based model for performance management on cloud":
M/M indicates Markovian (Poisson) request arrivals and exponentially distributed service times.
S is the number of requests processed concurrently by a web application: each web application on the cloud has one or multiple instances running in VM(s), and each instance can serve a certain number of requests.
K indicates the system capacity: there is an upper limit on the number of clients that can be served, equal to the sum of S and the number of waiting requests the queue can hold.
Thus, the queue model for a web application on the cloud is abstracted as an M/M/S/K one.
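The textbook steady-state formulas for an M/M/S/K queue can be used to estimate drop probability and response time for such a web application. Below is a minimal sketch using those standard formulas (not code from the paper); `lam`, `mu`, `S`, and `K` correspond to the arrival rate, per-server service rate, concurrency, and capacity defined above.

```python
from math import factorial

def mmsk_metrics(lam, mu, S, K):
    """Steady-state metrics of an M/M/S/K queue.
    lam: arrival rate, mu: per-server service rate,
    S: servers (concurrent requests), K: capacity (S + buffer)."""
    a = lam / mu  # offered load
    # unnormalized state probabilities p_n for n = 0..K
    p = [a**n / factorial(n) if n <= S
         else a**n / (factorial(S) * S**(n - S))
         for n in range(K + 1)]
    norm = sum(p)
    p = [x / norm for x in p]
    block = p[K]                              # arriving request is dropped
    L = sum(n * p[n] for n in range(K + 1))   # mean requests in system
    thr = lam * (1 - block)                   # effective throughput
    W = L / thr                               # mean response time (Little's law)
    return {"p_block": block, "L": L, "throughput": thr, "W": W}
```

For example, with one instance and no waiting room (S = K = 1) at `lam = mu`, half the arriving requests are dropped; growing K trades drop rate for queueing delay.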
21. Related works: 2. A priority based job scheduling
algorithm in cloud computing
Ghanbari S, Othman M (2012) "A priority based job scheduling algorithm in cloud computing":
1) Input J = {j1, ..., jm}, a set of jobs.
2) Input C = {c1, ..., cd}, a set of resources.
3) For all jobs, build a consistent pairwise comparison matrix according to the priority of resource accessibility (d matrices, each with m rows and m columns).
4) Compute the priority vector for all d matrices from step 3 based on Eq. (3).
5) Build a matrix from the priority vectors of step 4 based on Eq. (8) and name it.
6) For C, compute a consistent comparison matrix according to the decision maker(s).
7) Compute the priority vector for the matrix in step 6, as in step 4, and name it.
8) Compute PVS, a vector containing the priority values of the jobs, from Eq. (9).
9) Choose the job with the maximum priority value based on PVS and allocate a suitable resource to it.
10) Update the list of jobs.
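Eqs. (3), (8), and (9) are not reproduced on the slide. In AHP-style methods such as this one, a priority vector is typically the normalized principal eigenvector of a pairwise comparison matrix; the sketch below illustrates that step (an assumption about the method's flavor, not the paper's exact Eq. (3)).

```python
def priority_vector(A, iters=100):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix A by power iteration, normalized to sum to 1 (AHP-style)."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v

# Example: 3 jobs compared pairwise; a perfectly consistent matrix
# whose underlying weights are proportional to 4 : 2 : 1
A = [[1.0,  2.0, 4.0],
     [0.5,  1.0, 2.0],
     [0.25, 0.5, 1.0]]
pv = priority_vector(A)  # -> approximately [4/7, 2/7, 1/7]
```

The job with the largest entry in the resulting vector would be scheduled first (step 9 above).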
22. Related works: 3. A new class of priority based
weighted fair scheduling algorithm
Yang L, Pan C, Zhang E, Liu H (2012) "A new class of priority based weighted fair scheduling algorithm". Phys Procedia 33:942–948.
This paper addresses how to meet the quality requirements of voice and other real-time services while fairly delivering other traffic.
Class-based weighted fair queueing (CBWFQ) is more flexible than weighted fair queueing (WFQ). CBWFQ adopts a class-based idea: a class can be a single stream or an aggregation of several streams; traffic streams are sorted into different classes, each class corresponds to a different queue, and each queue is allocated a guaranteed minimum bandwidth.
CBWFQ is a good algorithm for data applications, but it has the same disadvantage as WFQ: it lacks strict low-delay guarantees for real-time applications such as voice and interactive video. Therefore, a strict preemptive ("rob") priority queue can be set up on top of CBWFQ to fulfill real-time applications' low-delay requirements while keeping fairness similar to CBWFQ's to some extent. The paper calls this Strict Rob PQ-CBWFQ; its logical function is shown in Figure 1.
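The idea of placing a strict priority queue on top of class-based weighted fair queueing can be sketched as follows. This is a simplified illustration, with weighted round-robin standing in for CBWFQ's minimum-bandwidth guarantee and illustrative class names; it is not the paper's implementation.

```python
import queue

class StrictPQCBWFQ:
    """Strict-priority queue for real-time traffic, always served first;
    other classes are served by weighted round-robin (a simple stand-in
    for CBWFQ's per-class minimum-bandwidth guarantee)."""
    def __init__(self, class_weights):
        self.pq = queue.Queue()                          # real-time (voice/video)
        self.classes = {c: queue.Queue() for c in class_weights}
        self.weights = dict(class_weights)
        self.credit = dict.fromkeys(class_weights, 0)

    def dequeue(self):
        if not self.pq.empty():       # strict priority: real time jumps ahead
            return self.pq.get()
        backlog = [c for c in self.classes if not self.classes[c].empty()]
        if not backlog:
            return None
        # each backlogged class accumulates credit in proportion to its
        # weight; the class with the most credit is served and reset
        for c in backlog:
            self.credit[c] += self.weights[c]
        chosen = max(backlog, key=lambda c: self.credit[c])
        self.credit[chosen] = 0
        return self.classes[chosen].get()
```

With weights 2:1 and both classes backlogged, class A is served roughly twice as often as class B, while anything placed in the real-time queue bypasses both.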
23. Related works: 3. A new class of priority based
weighted fair scheduling algorithm
24. Related works: 3. A new class of priority based
weighted fair scheduling algorithm
The merit of the new algorithm is that it introduces a seizing ("rob") rule together with a dropping rule: high-weight services have priority to seize the free buffers of low-weight services. With this tactic, the new algorithm not only increases buffer utilization but also decreases the packet drop rate.
The algorithm can handle bursty streams, which helps guarantee network stability. When a high-weight service transmits at a very high rate, it occupies the free buffers of the low-weight services and increases its chance of being scheduled.
The system can adjust free space and bandwidth allocation automatically instead of dropping new packets arbitrarily. Moreover, SRPQ-CBWFQ sets a limit on the occupation interval to prevent high-weight services from seizing the space of low-weight services excessively and causing starvation. Figure 2 shows the algorithm flow for entering the queue. The out-queue algorithm, which is simple, is just the scheduling rules described above.
25. Related works: 3. A new class of priority based
weighted fair scheduling algorithm
26. Related works: 4. DDSS:Dynamic dedicated servers
scheduling for multi priority level classes in cloud servers
Narman HS, Hossain MS, Atiquzzaman M (2014) “DDSS: Dynamic dedicated servers scheduling for multi
priority level classes in cloud servers”. In: IEEE International Conference on Communications (ICC), Sydney,
Australia, June 10-14, pp 3082 – 3087:
Here, some servers are used for C1 customers while other servers are used for C2 customers. The customer arrival rates of C1 and C2 are λ1 and λ2, respectively.
Each class of customers is queued in the corresponding queue (Q1 or Q2). A newly arriving customer is dropped if the buffers are full.
The service rate of each server is µ. Each class of traffic is assigned solely to its dedicated servers, as shown in Fig. 2.
27. Related works: 4. DDSS: Dynamic dedicated servers
scheduling for multi priority level classes in cloud servers
The proposed algorithm considers three crucial parameters that enable dynamic scheduling:
(i) the arrival rates of C1 and C2 customers (λ1, λ2),
(ii) the priority levels of C1 and C2 customers (Ψ1, Ψ2), and
(iii) the total number of servers in the system (l).
These three parameters are used to derive the number of servers (m and k) assigned to each class of customers as follows:
m = ⌈ l·Ψ1λ1 / (Ψ1λ1 + Ψ2λ2) ⌉
k = l − m
Based on the results: (i) the class priority levels do not significantly affect class performance when the system is under low traffic, for both the DSS and DDSS architectures; (ii) under heavy traffic, the class priority levels have a direct impact on class performance in the DDSS architecture; (iii) the system can become more efficient with selected class priority levels in DDSS than in DSS, even assuming DSS can dynamically update the number of servers assigned to each class based on arrival rates (Ψ1 = 1.5 and Ψ2 = 1 in Fig. 13).
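The server-split formula above can be computed directly; this is a minimal sketch, where reading the brackets as a ceiling is an assumption about the original notation, and the example rates are illustrative.

```python
from math import ceil

def ddss_split(l, lam1, lam2, psi1, psi2):
    """DDSS server split: weight each class's arrival rate by its
    priority level and divide the l servers proportionally (m + k = l).
    The ceiling on m is an assumed reading of the paper's notation."""
    m = ceil(l * psi1 * lam1 / (psi1 * lam1 + psi2 * lam2))
    return m, l - m

# Illustrative rates: 10 servers, lambda1 = 3, lambda2 = 2, psi1 = 1.5, psi2 = 1
m, k = ddss_split(10, 3.0, 2.0, 1.5, 1.0)  # -> m = 7 servers for C1, k = 3 for C2
```

Because Ψ1λ1 weights C1's demand by its priority, raising either C1's arrival rate or its priority level shifts servers from k to m.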
28. Related works: 5. hDDSS: Heterogeneous dynamic
dedicated servers scheduling in cloud computing
Narman HS, Hossain MS, Atiquzzaman M (2014) “hDDSS: Heterogeneous dynamic dedicated servers
scheduling in cloud computing”. In: IEEE International Conference on Communications (ICC), Sydney,
Australia, June 10-14, pp 3475 – 3480 :
The objective of this work is to improve and realistically analyze the performance of cloud systems in terms of throughput, drop rate, and utilization by considering heterogeneous servers and different priority classes of customers.
The contributions of this work are: (i) proposing h-DDSS to fulfill the desired expectations for each priority class in the system, (ii) developing an analytical model to evaluate the upper- and lower-bound performance (average occupancy, drop rate, average delay, and throughput for each class) of the proposed scheduling algorithm, (iii) validating the analytical model by extensive simulations, and (iv) comparing the performance of DDSS with the upper- and lower-bound performance of h-DDSS.
Results show that the h-DDSS architecture can provide improved customer throughput and drop rate over DDSS by using appropriate priority levels for each class.
29. Related works: 5. hDDSS: Heterogeneous dynamic
dedicated servers scheduling in cloud computing
This algorithm considers three crucial parameters that enable dynamic scheduling: (i) the arrival rates of C1 and C2 customers (λ1, λ2), (ii) the priority levels of C1 and C2 customers (Ψ1, Ψ2), and (iii) the total service rate in the system (µtotal).
These three parameters can be used to derive the total service rates (µtm and ηtk) assigned to each class of customers as follows:
µtm = ⌈ µtotal·Ψ1λ1 / (Ψ1λ1 + Ψ2λ2) ⌉
ηtk = µtotal − µtm
Based on the results: (i) class priority levels do not significantly affect class performance when the system is under low traffic, for both homogeneous and heterogeneous server systems; (ii) under heavy traffic, class priority levels have a significant impact on class performance in homogeneous server systems; (iii) under heavy traffic, the class performance of h-DDSS for FSF and SSF is the same, while FSF is better for low traffic arrivals; and (iv) heterogeneous systems can become as efficient as, or better than, homogeneous systems for selected class priority levels.
30. Scheduling internet of things applications in
cloud computing
The objective of this work is to improve and analyze the performance of cloud systems in terms of throughput, drop rate, and utilization by considering homogeneous and heterogeneous servers and priority classes of IoT requests, because some IoT requests, such as health applications, are delay-sensitive.
The scheduling algorithm and related analysis will help cloud service providers build efficient server scheduling
algorithms which are adaptable to homogeneous and heterogeneous server systems by considering the system
performance metrics, such as drop rate, throughput, and utilization.
The key contributions of this work can be summarized as follows:
Homogeneous dynamic dedicated server scheduling (DDSS) and heterogeneous dynamic dedicated server scheduling (h-DDSS) are analyzed.
The upper- and lower-bound performance metrics (average occupancy, drop rate, average delay, and throughput for each class of application) of the proposed scheduling algorithm are analytically derived using queueing theory.
The performance of DDSS and h-DDSS is tested and compared with existing methods; for h-DDSS, the upper bound corresponds to fastest server first (FSF) and the lower bound to slowest server first (SSF), both integrated into the h-DDSS method.
31. Scheduling internet of things applications in
cloud computing
Homogeneous dynamic dedicated server scheduling
(DDSS)
Heterogeneous dynamic dedicated server scheduling
(hDDSS)
32. Scheduling internet of things applications in
cloud computing
A two-thread scheduling procedure is illustrated for two classes.
The first thread's duties are as follows:
The number of servers assigned to each class is updated regularly based on Eqs. 1, 2, and 3 ("time to arrange servers" means it is time for the periodic update).
The scheduling algorithm classifies an arriving request (here, we assume each request has a flag stating whether the request is C1 or C2) and enqueues the request to the corresponding queue of servers that are capable of serving the request.
The enqueuing process continues as long as requests arrive.
The second thread's duties:
The scheduling algorithm dequeues a request from the queue, then forwards the dequeued request to one of the available servers that has fewer than K active VMs, according to the allocation policies.
If all active servers already have K active VMs, the scheduling algorithm wakes up a new server that is capable of serving the request (if there is an available sleeping server). It is important to note that waking-up priority is given to servers that are capable of serving both classes.
The waking-up and sleeping process may increase the response time of a request, but it reduces the number of active servers and saves energy.
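The two-thread procedure above can be sketched as follows. The class flags, the per-server VM cap K, and all names here are illustrative assumptions, not code from the work.

```python
import queue
import threading

K = 4  # assumed cap on active VMs per server

class Server:
    def __init__(self, classes):
        self.classes = classes    # request classes it can serve, e.g. {"C1"}
        self.active_vms = 0
        self.asleep = True

def enqueue_thread(arrivals, class_queues):
    """Thread 1: classify each arriving request by its class flag and
    enqueue it to the corresponding class queue, while requests arrive."""
    while True:
        req = arrivals.get()
        if req is None:                       # shutdown sentinel
            break
        class_queues[req["class"]].put(req)

def dispatch(req, servers):
    """Thread 2 core: forward a dequeued request to an available server
    with fewer than K active VMs; if all active servers are full, wake a
    sleeping server, preferring one capable of serving both classes."""
    capable = [s for s in servers if req["class"] in s.classes]
    for s in capable:
        if not s.asleep and s.active_vms < K:
            s.active_vms += 1
            return s
    sleeping = sorted((s for s in capable if s.asleep),
                      key=lambda s: -len(s.classes))  # both-class servers first
    if sleeping:
        s = sleeping[0]
        s.asleep = False                      # wake-up costs response time
        s.active_vms = 1
        return s
    return None  # no capacity: the request stays queued
```

A dequeue loop in the second thread would pull requests from the class queues and call `dispatch`, putting servers back to sleep when their VMs drain, which is where the energy saving comes from.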
Public clouds are owned and operated by companies that offer rapid access over a public network to affordable computing resources. With public cloud services, users don’t need to purchase hardware, software, or supporting infrastructure, which is owned and managed by providers.
*Key aspects of public cloud
Innovative SaaS business apps for applications ranging from customer relationship management (CRM) to transaction management and data analytics
Flexible, scalable IaaS for storage and compute services on a moment’s notice
Powerful PaaS for cloud-based application development and deployment environments.
Private cloud is infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted either internally or externally. Private clouds can take advantage of cloud's efficiencies while providing more control of resources and steering clear of multi-tenancy.
*Key aspects of private cloud
A self-service interface controls services, allowing IT staff to quickly provision, allocate, and deliver on-demand IT resources
Highly automated management of resource pools for everything from compute capability to storage, analytics, and middleware
Sophisticated security and governance designed for a company’s specific requirements
Hybrid cloud uses a private cloud foundation combined with the strategic integration and use of public cloud services. The reality is that a private cloud can't exist in isolation from the rest of a company's IT resources and the public cloud. Most companies with private clouds will evolve to manage workloads across data centers, private clouds, and public clouds, thereby creating hybrid clouds.
*Key aspects of hybrid cloud
Allows companies to keep the critical applications and sensitive data in a traditional data center environment or private cloud
Enables taking advantage of public cloud resources like SaaS, for the latest applications, and IaaS, for elastic virtual resources
Facilitates portability of data, apps and services and more choices for deployment models