Managing cloud web servers to execute workflows while meeting quality-of-service (QoS) prerequisites in a distributed cloud environment remains a challenging task, although a large body of work has been presented for scheduling workflows in heterogeneous cloud environments. Moreover, rapid developments in cloud computing, such as edge-cloud computing, create new ways to schedule workflows in a heterogeneous cloud environment to process tasks from internet of things (IoT), event-driven, and other network applications. Current workflow scheduling methods fail to provide a good trade-off between reliable performance and minimal delay. In this paper, a novel web server resource management framework, namely the reliable and efficient webserver management (REWM) framework, is presented for the edge-cloud environment. Experiments are conducted on complex bioinformatics workflows; the results show a significant reduction in cost and energy by the proposed REWM in comparison with the standard webserver management methodology.
Recently, much interest has been shown by researchers in improving workload scheduling on cloud platforms. However, executing scientific workflows on a cloud platform is time-consuming and expensive. As users are charged by the hour of usage, much research has emphasized minimizing processing time to reduce cost. However, processing cost can also be reduced by minimizing energy consumption, especially when resources are heterogeneous in nature; very limited work has considered optimizing cost with energy and processing-time parameters together while meeting task quality-of-service (QoS) requirements. This paper presents a cost and performance aware workload scheduling (CPA-WS) technique for heterogeneous cloud platforms, built on a cost optimization model that minimizes processing time and energy dissipation for task execution. Experiments are conducted using two widely used workflows, Inspiral and CyberShake. The outcome shows that CPA-WS significantly reduces energy, time, and cost in comparison with standard workload scheduling models.
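As a minimal illustration of the cost trade-off this abstract describes (not the CPA-WS model itself), the combined bill can be viewed as processing time times an hourly compute rate plus energy times an energy price; the rates below are hypothetical:

```python
def execution_cost(proc_time_h, energy_kwh, cpu_rate, energy_rate):
    """Combined cost model: users pay per hour of usage plus the
    provider's energy cost, so minimizing time alone can still leave
    a high bill on energy-hungry heterogeneous resources."""
    return proc_time_h * cpu_rate + energy_kwh * energy_rate

# Hypothetical rates: $0.40/hour of compute, $0.12/kWh of energy.
print(execution_cost(2.0, 1.5, cpu_rate=0.40, energy_rate=0.12))
```

Under this toy model, a schedule that halves energy at the cost of a slightly longer runtime can still be cheaper overall, which is the trade-off CPA-WS targets.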
Scientific workload execution on a distributed computing platform such as a cloud environment is time-consuming and expensive. A scientific workload has task dependencies with different service level agreement (SLA) prerequisites at different levels. Existing workload scheduling (WS) designs are not efficient at assuring SLA at the task level; they also induce higher costs, as the majority of scheduling mechanisms reduce either time or energy. To reduce cost, both energy and makespan must be optimized together when allocating resources. No prior work has considered optimizing energy and processing time together while meeting task-level SLA requirements. This paper presents a task level energy and performance assurance workload scheduling (TLEPA-WS) algorithm for the distributed computing environment. TLEPA-WS guarantees energy minimization while meeting the performance requirements of parallel applications in a distributed computational environment. Experimental results show a significant reduction in energy and makespan, thereby reducing the cost of workload execution in comparison with various standard workload execution models.
An efficient cloudlet scheduling via bin packing in cloud computing (IJECE, IAES)
In this ever-developing technological world, one way to manage and deliver services is through cloud computing, a massive web of heterogeneous autonomous systems that comprise an adaptable computational design. Cloud computing can be improved through task scheduling, although that is the most challenging aspect to improve. Better task scheduling can improve response time, reduce power consumption and processing time, enhance makespan and throughput, and increase profit by reducing operating costs and raising system reliability. This study aims to improve job scheduling by transforming the job scheduling problem into a bin packing problem. Three modified implementations of bin packing algorithms for task scheduling (MBPTS) are proposed, based on the minimisation of makespan. The results, based on the open-source simulator CloudSim, demonstrate that the proposed MBPTS is able to optimise load balance, reduce waiting time and makespan, and improve resource utilisation in comparison with current scheduling algorithms such as particle swarm optimisation (PSO) and first come first serve (FCFS).
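The bin-packing reformulation can be sketched with the classic first-fit-decreasing heuristic; this is a generic illustration with a hypothetical per-VM capacity, not the MBPTS variants themselves:

```python
def first_fit_decreasing(task_lengths, capacity):
    """Pack task lengths into the fewest bins (VMs) of a fixed capacity.

    Tasks are considered longest-first; each goes into the first bin
    with enough remaining room, and a new bin opens when none fits.
    Fewer, fuller bins correspond to fewer active VMs and a tighter
    makespan bound.
    """
    remaining = []    # leftover capacity per bin
    assignment = []   # task lists, parallel to `remaining`
    for length in sorted(task_lengths, reverse=True):
        for i, room in enumerate(remaining):
            if length <= room:
                remaining[i] -= length
                assignment[i].append(length)
                break
        else:
            remaining.append(capacity - length)
            assignment.append([length])
    return assignment

# Hypothetical task lengths and a per-VM capacity of 10 time units.
print(first_fit_decreasing([7, 5, 4, 3, 2, 2], 10))
```

First-fit-decreasing is a standard approximation for bin packing; the paper's MBPTS variants build on such heuristics with makespan-oriented modifications.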
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTING (IJDPS Journal)
Cloud computing has become an ideal computing paradigm for scientific and commercial applications. The increased availability of cloud models and allied developing models creates an easier cloud computing environment. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling (DVFS) based strategy of task allocation can minimize overall energy consumption and meet the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which degrades performance. In this paper, we propose the Real Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes energy and time consumption during computation, reconfiguration, and communication. Our proposed model confirms its effectiveness in implementation, scalability, power consumption, and execution time with respect to other existing approaches.
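A minimal sketch of the DVFS idea underlying such schedulers (not the RTAES algorithm itself), assuming a simple dynamic-power model where energy per task grows roughly with the square of frequency, so the lowest deadline-feasible frequency is also the cheapest:

```python
def pick_frequency(cycles, deadline, frequencies):
    """Pick the lowest available frequency that still meets the deadline.

    Execution time = cycles / f.  Under the common dynamic-power model
    (power ~ f**3, time ~ 1/f, hence energy ~ cycles * f**2), the lowest
    feasible frequency also minimizes energy for the task.
    """
    for f in sorted(frequencies):
        if cycles / f <= deadline:
            return f
    return None  # no feasible frequency: the deadline cannot be met

# Hypothetical task: 2e9 cycles, 1.25 s deadline, frequencies in Hz.
print(pick_frequency(2e9, 1.25, [1.0e9, 1.6e9, 2.0e9, 2.4e9]))
```

Real DVFS schedulers must also account for the switching overheads the abstract mentions, which this sketch ignores.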
Multi-objective tasks scheduling using bee colony algorithm in cloud computing (IJECE, IAES)
This document presents a new approach for scheduling multi-objective tasks in cloud computing using an artificial bee colony algorithm. The proposed algorithm aims to optimize response time, schedule length ratio, and efficiency. It models tasks as bees that are assigned to processing elements in data centers to minimize completion time while balancing resource loads. The results showed the bee colony algorithm achieved better performance than other scheduling methods in cloud computing environments.
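A hedged, simplified sketch of the bee-colony idea (single-objective makespan only, not the paper's multi-objective algorithm): in each iteration a "bee" perturbs one task's placement, and the move is kept only if the makespan improves:

```python
import random

def makespan(assign, lengths, n_vms):
    """Completion time of the most loaded VM under an assignment."""
    loads = [0.0] * n_vms
    for task, vm in enumerate(assign):
        loads[vm] += lengths[task]
    return max(loads)

def abc_style_search(lengths, n_vms, iters=2000, seed=1):
    """ABC-flavored neighborhood search: start from a random
    assignment, then repeatedly move one random task to a random VM,
    keeping the move only when it lowers the makespan."""
    rng = random.Random(seed)
    best = [rng.randrange(n_vms) for _ in lengths]
    best_ms = makespan(best, lengths, n_vms)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(len(lengths))] = rng.randrange(n_vms)
        ms = makespan(cand, lengths, n_vms)
        if ms < best_ms:
            best, best_ms = cand, ms
    return best, best_ms

# Hypothetical task lengths on 3 VMs.
assignment, ms = abc_style_search([4, 8, 3, 6, 2, 5], n_vms=3)
print(ms)
```

The full artificial bee colony method adds employed, onlooker, and scout phases; this sketch keeps only the core improve-or-discard neighborhood move.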
Optimization of energy consumption in cloud computing datacenters (IJECE, IAES)
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure, and services. This has led to the establishment of datacenters with substantial energy demands for their operation. This work investigates the optimization of energy consumption in cloud datacenters through energy-efficient allocation of tasks to resources. It develops formal optimization models that minimize the energy consumption of computational resources and evaluates existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers' CPUs while meeting a set of constraints. We then use these models to carry out a detailed performance comparison between a selected set of generic ILP and 0-1 Boolean satisfiability (SAT) based solvers in solving the ILP formulations. Simulation results indicate that in some cases the developed models save up to 38% in energy consumption compared to common techniques such as round robin. Furthermore, results also show that generic ILP solvers outperform SAT-based ILP solvers, especially as the number of tasks and resources grows.
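The flavor of such an objective can be illustrated with a brute-force stand-in (feasible only for toy instances; a real ILP solver replaces the exhaustive loop). The power constants and capacities below are hypothetical, not taken from the paper:

```python
from itertools import product

def min_power_assignment(task_loads, capacity, active_power, idle_power):
    """Exhaustively assign tasks to cores, minimizing total power.

    Model: a core draws active_power per unit of load it runs, plus a
    fixed idle_power whenever it is switched on at all; per-core load
    must stay within capacity.  Because the idle term rewards keeping
    cores off, consolidation beats round-robin spreading.
    """
    n_cores = len(capacity)
    best, best_cost = None, float("inf")
    for assign in product(range(n_cores), repeat=len(task_loads)):
        loads = [0.0] * n_cores
        for load, core in zip(task_loads, assign):
            loads[core] += load
        if any(l > c for l, c in zip(loads, capacity)):
            continue  # violates the capacity constraint
        cost = sum(active_power * l + (idle_power if l > 0 else 0.0)
                   for l in loads)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

# Four unit-2 tasks, four cores of capacity 4: packing onto two cores
# avoids two idle-power terms that round robin would pay.
assignment, cost = min_power_assignment([2, 2, 2, 2], [4, 4, 4, 4],
                                        active_power=10, idle_power=5)
print(cost)
```

Spreading the same tasks round-robin across all four cores would cost 100 under this model, so the consolidated optimum saves 10%, echoing the kind of savings over round robin the paper reports.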
A combined computing framework for load balancing in multi-tenant cloud eco-... (IJECE, IAES)
Since the world is getting digitalized, cloud computing has become a core part of it. Massive amounts of data are processed, stored, and transferred over the internet daily. Cloud computing has become quite popular because of its superlative quality and enhanced ability to improve data management, offering better computing resources and data to its user bases (UBs). However, there are many issues in existing cloud traffic management approaches and in how data is managed during service execution. The study introduces two distinct research models: a data center virtualization framework under a multi-tenant cloud ecosystem (DCVF-MT) and a collaborative workflow of multi-tenant load balancing (CW-MTLB), with analytical research modeling. The execution flow considers a set of algorithms for both models that address the core problems of load balancing and resource allocation in the cloud computing (CC) ecosystem. The research outcome illustrates that DCVF-MT outperforms the one-to-one approach by approximately 24.778% in traffic scheduling. It also yields a 40.33% performance improvement in managing cloudlet handling time. Moreover, it attains an overall 8.5133% performance improvement in resource cost optimization, which is significant for ensuring the adaptability of the frameworks to future cloud applications where adequate virtualization and resource mapping will be required.
Cloud computing gives on-demand access to computing resources in a metered and dynamically adaptable way; it empowers clients to access fast and flexible resources through virtualization and is widely adaptable for various applications. Further, to assure productive computation, task scheduling is very important in a cloud infrastructure environment. The main aim of task execution is to reduce execution time and conserve infrastructure; for large applications, workflow scheduling has drawn considerable attention in both business and scientific areas. Hence, in this research work, we design and develop an optimized load balancing mechanism for parallel computation, optimal load balancing in parallel computing (OLBP), to distribute the load; first, different workload parameters are computed, and then loads are distributed. The OLBP mechanism treats makespan time and energy as constraints, and task offloading is done considering server speed. This provides workflow balancing; the OLBP mechanism is evaluated using the CyberShake workflow dataset and outperforms existing workflow mechanisms.
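A minimal sketch of speed-aware load distribution in the spirit of the mechanism above (not OLBP itself): each task goes to the server that would finish it earliest given that server's speed and current queue:

```python
def distribute_by_speed(tasks, speeds):
    """Greedy speed-aware load balancing.

    Each task (a length in work units) is placed on the server whose
    current completion time plus length/speed is smallest; considering
    tasks longest-first tends to tighten the resulting makespan.
    """
    finish = [0.0] * len(speeds)        # current completion time per server
    placement = [[] for _ in speeds]
    for length in sorted(tasks, reverse=True):
        i = min(range(len(speeds)),
                key=lambda s: finish[s] + length / speeds[s])
        finish[i] += length / speeds[i]
        placement[i].append(length)
    return placement, max(finish)

# Hypothetical tasks on two servers, one twice as fast as the other.
placement, ms = distribute_by_speed([10, 8, 6, 4, 2], speeds=[2.0, 1.0])
print(ms)
```

The faster server naturally absorbs more work here, which is the offloading-by-server-speed behavior the abstract describes.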
Efficient fault tolerant cost optimized approach for scientific workflow via ... (IAES IJAI)
Cloud computing is a dispersed and effective computing model that offers tremendous opportunity to address scientific problems with large-scale characteristics. Despite being such a dynamic computing paradigm, it faces several difficulties and falls short of meeting the necessary quality of service (QoS) standards. For sustainable cloud computing workflows, QoS is essential and needs to be addressed. Recent studies have looked at quantitative fault-tolerant programming to reduce the number of copies while still achieving the reliability requirement of a process on heterogeneous infrastructure as a service (IaaS) clouds. In this study, we create an optimal replication technique (ORT) with a fault-tolerance and cost-driven mechanism, known as the optimal replication technique with fault tolerance and cost minimization (ORT-FTC). ORT-FTC employs an iterative method that chooses the virtual machine and its copies with the shortest makespan for given tasks. ORT-FTC is tested through test cases that take into account scientific workflows such as CyberShake, the laser interferometer gravitational-wave observatory (LIGO), Montage, and SIPHT. Additionally, ORT-FTC is shown to be slightly improved over the current model in all cases.
A Review on Scheduling in Cloud Computing (IJU Journal)
Cloud computing meets client requirements by providing software, infrastructure, and platform as a service on a pay-per-use basis. The main goal of scheduling is to achieve accuracy and correctness in task completion. Scheduling in the cloud environment enables various cloud services to support framework implementation. This survey covers a far-reaching range of scheduling algorithms in the cloud computing environment, including workflow scheduling and grid scheduling. The survey gives an elaborate idea of grid, cloud, and workflow scheduling for minimizing energy cost and improving the efficiency and throughput of the system.
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT (IJCNC Journal)
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands complex algorithms to solve the task-allotment problem. The overall performance of cloud systems is rooted in the efficiency of task scheduling algorithms. The dynamic nature of cloud systems makes it challenging to find an optimal solution satisfying all the evaluation metrics. The new approach is formulated on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
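The hybrid can be sketched as ordering the ready queue shortest-job-first and then serving it round robin with a fixed quantum; this is a schematic illustration with a hypothetical quantum, not the paper's exact algorithm:

```python
from collections import deque

def sjf_round_robin(burst_times, quantum):
    """Hybrid scheduler: tasks are first ordered shortest-job-first,
    then served round robin with a fixed time quantum, so short jobs
    finish quickly while long jobs cannot starve the queue.

    Returns a dict mapping task index -> completion time."""
    queue = deque(sorted(enumerate(burst_times), key=lambda t: t[1]))
    clock, completion = 0, {}
    while queue:
        task, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((task, remaining - run))  # re-queue the leftover
        else:
            completion[task] = clock
    return completion

# Hypothetical burst times (time units) and a quantum of 4.
print(sjf_round_robin([8, 3, 5], quantum=4))
```

The shortest task (index 1) completes first, while the longest (index 0) is preempted at each quantum boundary rather than waiting behind everything else.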
Differentiating Algorithms of Cloud Task Scheduling Based on Various Parameters (IOSR-JCE)
Cloud computing is a new design structure for large, distributed data centers. Cloud computing systems promise to offer end users a "pay as you go" model. To meet the expected quality requirements of users, cloud computing needs to offer differentiated services. QoS differentiation is very important to satisfy different users with different QoS requirements. In this paper, various QoS-based scheduling algorithms, scheduling parameters, and the future scope of the discussed algorithms are studied. The paper summarizes various cloud scheduling algorithms, their findings, scheduling factors, types of scheduling, and the parameters considered.
This document discusses different algorithms for task scheduling in cloud computing environments based on various quality of service (QoS) parameters. It summarizes several QoS-based scheduling algorithms including QDA, Improved Cost Based, PAPRIKA, ANT Colony, CMultiQoSSchedule, and SHEFT Workflow. It also provides a comparative table of these algorithms and discusses the various metrics considered by QoS-based scheduling algorithms like time, cost, makespan, trust, and resource utilization. The paper concludes that scheduling is an important factor for cloud environments and that existing algorithms can be improved by considering additional parameters like trust values, execution rates, and success rates.
An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Ne... (IJECE, IAES)
Energy consumption in cloud computing occurs due to the unreasonable way in which tasks are scheduled, so energy-aware task scheduling is a major concern in cloud computing: excess consumption results in a significant waste of energy, reduces profit margins, and causes high carbon emissions, which is not environmentally sustainable. Hence, energy-efficient task scheduling solutions are required to attain variable resource management, live migration, minimal virtual machine design, overall system efficiency, reduced operating costs, increased system reliability, and environmental protection, all with minimal performance overhead. This paper provides a comprehensive overview of energy-efficient techniques and approaches and proposes an energy-aware resource utilization framework to control traffic and overloads in cloud networks.
Hybrid Task Scheduling Approach using Gravitational and ACO Search AlgorithmIRJET Journal
The document proposes a hybrid task scheduling approach for cloud computing called ACGSA that combines ant colony optimization and gravitational search algorithms. It describes using the Cloudsim simulator to test the performance of ACGSA and comparing it to ant colony optimization. The results show that ACGSA achieves better performance than the basic ant colony approach on relevant parameters like task scheduling time and resource utilization.
Resource-efficient workload task scheduling for cloud-assisted internet of th...IJECEIAES
Resource allocation for tasks is one of the most challenging problems in an internet of things (IoT)-cloud environment. The cloud provides resources such as virtual machines, computational cores, and networks for executing IoT tasks, but existing resource management methods are not efficient. Hence, this research presents a resource-efficient workload task scheduling (RWTS) model for a cloud-assisted IoT environment that executes IoT tasks using few cloud resources, achieving a good trade-off between performance and resource use, and that computes the resources each task requires, such as bandwidth and computational cores. The model focuses on reducing energy consumption and provides a task scheduling scheme for IoT tasks in an IoT-cloud environment. Experiments with the Montage workflow report execution time, power sum, average power, and energy consumption; compared with the existing model, the RWTS model performs better as the size of the tasks increases.
Score based deadline constrained workflow scheduling algorithm for cloud systemsijccsa
Cloud computing is the latest and emerging trend in the information technology domain. It offers utility-based
IT services to users over the Internet. Workflow scheduling is one of the major problems in cloud systems: a
good scheduling algorithm must minimize the execution time and cost of a workflow application while meeting
the QoS requirements of the user. In this paper we consider the deadline as the major constraint and propose a
score-based deadline-constrained workflow scheduling algorithm that executes the workflow at manageable cost
while meeting the user-defined deadline. The algorithm uses the concept of a score that represents the
capabilities of hardware resources; this score value is used when allocating resources to the tasks of a
workflow application. The algorithm allocates those resources that are reliable, reduce execution cost, and
complete the workflow application within the user-specified deadline. Experimental results show that the
score-based algorithm exhibits lower execution time and also reduces the failure rate of workflow
applications at manageable cost. All simulations were done using the CloudSim toolkit.
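The score-based idea above can be sketched in a few lines. This is a hypothetical reconstruction, not the paper's actual algorithm: each resource gets a score from its hardware capabilities, and each task goes to the highest-scoring resource whose estimated finish time still meets the deadline. The field names (`mips`, `ram_gb`) and score weights are illustrative assumptions.

```python
def score(res):
    # Illustrative capability score: weight CPU speed against memory.
    return 0.7 * res["mips"] + 0.3 * res["ram_gb"] * 100

def schedule(tasks, resources, deadline):
    """tasks: instruction counts (MI); returns task index -> resource name."""
    busy_until = {r["name"]: 0.0 for r in resources}
    plan = {}
    order = sorted(range(len(tasks)), key=lambda i: tasks[i], reverse=True)
    for i in order:                                  # longest tasks first
        feasible = []
        for r in resources:
            finish = busy_until[r["name"]] + tasks[i] / r["mips"]
            if finish <= deadline:
                feasible.append((r, finish))
        if not feasible:
            raise ValueError("deadline cannot be met")
        # Prefer the highest-score resource; break ties on earlier finish.
        best, finish = max(feasible, key=lambda rf: (score(rf[0]), -rf[1]))
        busy_until[best["name"]] = finish
        plan[i] = best["name"]
    return plan

resources = [{"name": "vm1", "mips": 1000, "ram_gb": 4},
             {"name": "vm2", "mips": 500, "ram_gb": 2}]
plan = schedule([4000, 2000, 1000], resources, deadline=8.0)
```

With a generous deadline, every task lands on the high-score machine; tightening the deadline forces the scheduler to spread tasks across the slower, cheaper resources.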
IRJET- A Statistical Approach Towards Energy Saving in Cloud ComputingIRJET Journal
This document proposes a statistical approach to save energy in cloud computing through predictive monitoring and optimization techniques. It discusses using Gaussian process regression to predict infrastructure workload and then applying convex optimization to determine the optimal subset of physical machines needed. Virtual machines would be migrated to this subset and idle physical machines could then be powered off to reduce energy consumption while maintaining system performance. An evaluation using 29 days of Google trace data showed the potential for significant power savings without affecting quality of service.
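The predict-then-consolidate loop described above can be illustrated with stand-ins: the paper uses Gaussian process regression and convex optimization, but a trailing-mean forecast and greedy first-fit-decreasing packing make the same control flow visible in a few lines. All numbers and names here are illustrative, not from the paper.

```python
def forecast(history, window=3):
    """Toy stand-in for GP regression: predict next load as a trailing mean."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def machines_needed(vm_loads, capacity):
    """Toy stand-in for the convex optimization: first-fit-decreasing packing
    gives the number of physical hosts that must stay powered on."""
    bins = []
    for load in sorted(vm_loads, reverse=True):
        for b in range(len(bins)):
            if bins[b] + load <= capacity:
                bins[b] += load
                break
        else:
            bins.append(load)
    return len(bins)

history = [52.0, 48.0, 50.0]      # past total load, arbitrary units
predicted = forecast(history)
vm_loads = [20, 15, 10, 5]        # per-VM share of the predicted load
active = machines_needed(vm_loads, capacity=30)
# Hosts beyond `active` can be powered off until the forecast rises again.
```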
Time and Reliability Optimization Bat Algorithm for Scheduling Workflow in CloudIRJET Journal
This document describes using a meta-heuristic optimization algorithm called the Bat Algorithm (BA) to schedule workflows in cloud computing environments. The BA is applied to optimize a multi-objective function that minimizes workflow execution time and maximizes reliability while keeping costs within a user-specified budget. The BA is compared to a basic randomized evolutionary algorithm (BREA) that uses greedy approaches. Experimental results show the BA performs better by finding schedules that have lower execution times and higher reliability within the given budget constraints. The BA is well-suited for this problem because it can efficiently search large solution spaces and automatically focus on optimal regions like other metaheuristics.
A cloud computing scheduling and its evolutionary approachesnooriasukmaningtyas
Despite the increasing use of cloud computing technology, which offers unique features to serve its
customers, exploiting its full potential is very difficult due to many problems and challenges, and
resource scheduling is one of them. Researchers still find it difficult to determine which scheduling
algorithms are appropriate and effective at increasing the performance of the system. This paper provides
a broad and detailed examination of resource scheduling algorithms in the cloud computing environment and
highlights the advantages and disadvantages of selected algorithms, to help researchers choose the best
algorithm for scheduling a particular workload so as to satisfy quality of service, guarantee good
utilization of cloud resources, and minimize makespan.
Multi-objective load balancing in cloud infrastructure through fuzzy based de...IAESIJAI
Cloud computing has become a popular technology that influences not only product development but also
makes the technology business easier. Services such as infrastructure, platform, and software reduce the
complexity of the technology requirements of any ecosystem. As the number of users of cloud-based
services increases, the complexity of the back-end technologies also increases, and users' heterogeneous
configuration requirements create load-imbalance issues. Effective load balancing in a cloud system with
respect to time and space is therefore crucial, as imbalance adversely affects system performance. Since
user requirements and expected performance are multi-objective, decision-making tools such as fuzzy
logic, which encode human procedural knowledge, yield good results; overall system performance can be
further improved by dynamic resource scheduling with optimization techniques such as genetic algorithms.
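A minimal sketch of the fuzzy decision step described above, assuming triangular membership functions over CPU load and queue length and a single AND-rule for migration; the thresholds and the rule itself are made up for illustration and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises on [a, b], falls on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def migrate_degree(cpu_pct, queue_len):
    """Degree (0..1) to which a node should shed load to another host."""
    high_cpu = tri(cpu_pct, 50, 100, 150)    # saturates toward 100% CPU
    high_queue = tri(queue_len, 5, 20, 35)   # illustrative queue thresholds
    # Rule: IF cpu is high AND queue is high THEN migrate (min models AND).
    return min(high_cpu, high_queue)

d = migrate_degree(90, 18)   # a heavily loaded node
```

In a fuller system this degree would feed a defuzzification step, and a genetic algorithm could tune the membership breakpoints, matching the GA refinement the abstract mentions.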
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Related content
Similar to Reliable and efficient webserver management for task scheduling in edge-cloud platform
Optimization of energy consumption in cloud computing datacenters IJECEIAES
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure and services. This has led to the establishment of datacenters that have substantial energy demands for their operation. This work investigates the optimization of energy consumption in cloud datacenter using energy efficient allocation of tasks to resources. The work seeks to develop formal optimization models that minimize the energy consumption of computational resources and evaluates the use of existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers’ CPUs while meeting a set of constraints. Next, we use these models to carry out a detailed performance comparison between a selected set of Generic ILP and 0-1 Boolean satisfiability based solvers in solving the ILP formulations. Simulation results indicate that in some cases the developed models have saved up to 38% in energy consumption when compared to common techniques such as round robin. Furthermore, results also showed that generic ILP solvers had superior performance when compared to SAT-based ILP solvers especially as the number of tasks and resources grow in size.
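The 0-1 model described above can be made concrete on a toy instance. This sketch brute-forces the same objective the ILP formalizes: assign each task to a core, respect per-core capacity, and minimize active-core plus idle-core power. The demands, capacity, and power figures are illustrative assumptions, and a real solver would replace the exhaustive search.

```python
import itertools

tasks = [3, 2, 2]            # task demands (arbitrary units)
cores = 3                    # available cores
CAP = 5                      # per-core capacity
P_ACTIVE, P_IDLE = 10.0, 2.0 # illustrative power draw per core state

best = None
for assign in itertools.product(range(cores), repeat=len(tasks)):
    load = [0] * cores
    for t, c in zip(tasks, assign):
        load[c] += t
    if any(l > CAP for l in load):
        continue                                   # capacity constraint
    active = sum(1 for l in load if l > 0)
    power = active * P_ACTIVE + (cores - active) * P_IDLE
    if best is None or power < best[0]:
        best = (power, assign)

power, assign = best
# Packing all three tasks onto two cores beats spreading them over three,
# mirroring the paper's finding that consolidation saves energy.
```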
A combined computing framework for load balancing in multi-tenant cloud eco-...IJECEIAES
Since the world is getting digitalized, cloud computing has become a core part of it. Massive data on a daily basis is processed, stored, and transferred over the internet. Cloud computing has become quite popular because of its superlative quality and enhanced capability to improvise data management, offering better computing resources and data to its user bases (UBs). However, there are many issues in the existing cloud traffic management approaches and how to manage data during service execution. The study introduces two distinct research models: data center virtualization framework under multi-tenant cloud-ecosystem (DCVF-MT) and collaborative workflow of multi-tenant load balancing (CW-MTLB) with analytical research modeling. The sequence of execution flow considers a set of algorithms for both models that address the core problem of load balancing and resource allocation in the cloud computing (CC) ecosystem. The research outcome illustrates that DCVF-MT, outperforms the one-toone approach by approximately 24.778% performance improvement in traffic scheduling. It also yields a 40.33% performance improvement in managing cloudlet handling time. Moreover, it attains an overall 8.5133% performance improvement in resource cost optimization, which is significant to ensure the adaptability of the frameworks into futuristic cloud applications where adequate virtualization and resource mapping will be required.
Cloud computing gives on-demand access to computing resources in a metered and dynamically adaptable
way; through virtualization it empowers clients with fast, flexible resources suited to a wide range of
applications. To assure productive computation, task scheduling is very important in a cloud
infrastructure environment: the main aim of task execution is to reduce execution time and conserve
infrastructure, and for large applications workflow scheduling has drawn considerable attention in both
business and scientific areas. Hence, in this research work we design and develop an optimal load
balancing in parallel computing (OLBP) mechanism to distribute the load: different workload parameters
are computed first, and the loads are then distributed. The OLBP mechanism treats makespan time and
energy as constraints and offloads tasks according to server speed, which balances the workflow. The
OLBP mechanism is evaluated using the CyberShake workflow dataset and outperforms the existing workflow
mechanism.
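The speed-aware offloading idea in the description above can be sketched as earliest-finish-time list scheduling over heterogeneous servers: each task goes to the server whose speed-adjusted finish time is earliest, which balances makespan. This is an illustrative reconstruction, not the OLBP algorithm itself, and the server speeds and task sizes are made up.

```python
import heapq

def speed_aware_offload(tasks, speeds):
    """tasks: work units; speeds: units/sec per server. Returns makespan."""
    # Min-heap of (finish_time, server_index): pop = least-loaded server.
    heap = [(0.0, s) for s in range(len(speeds))]
    heapq.heapify(heap)
    for work in sorted(tasks, reverse=True):   # place big tasks first
        finish, s = heapq.heappop(heap)
        heapq.heappush(heap, (finish + work / speeds[s], s))
    return max(f for f, _ in heap)

# A fast server (2 units/s) and a slow one (1 unit/s).
makespan = speed_aware_offload([8, 4, 4, 2], speeds=[2.0, 1.0])
```

Extending this with a per-task energy term in the heap key would turn the single-objective makespan balancer into the kind of makespan-plus-energy trade-off the abstract describes.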
Efficient fault tolerant cost optimized approach for scientific workflow via ...IAESIJAI
Cloud computing is one of the dispersed and effective computing models, which offers tremendous opportunity to address scientific issues with big scale characteristics. Despite having such a dynamic computing paradigm, it faces several difficulties and falls short of meeting the necessary quality of services (QoS) standards. For sustainable cloud computing workflow, QoS is very much required and need to be addressed. Recent studies looked on quantitative fault-tolerant programming to reduce the number of copies while still achieving the reliability necessity of a process on the heterogeneous infrastructure as a service (IaaS) cloud. In this study, we create an optimal replication technique (ORT) about fault tolerance as well as cost-driven mechanism and this is known as optimal replication technique with fault tolerance and cost minimization (ORT-FTC). Here ORT-FTC employs an iterative-based method that chooses the virtual machine and its copies that have the shortest makespan in the situation of specific tasks. By creating test cases, ORT-FTC is tested while taking into account scientific workflows like CyberShake, laser interferometer gravitational-wave observatory (LIGO), montage, and sipht. Additionally, ORT-FTC is shown to be only slightly improved over the current model in all cases.
A Review on Scheduling in Cloud Computingijujournal
Cloud computing provides software, infrastructure, and platform as a service on a pay-per-use basis.
The main goal of scheduling is to achieve accuracy and correctness in task completion, and scheduling
in the cloud environment enables the various cloud services to support framework implementation. This
survey covers a far-reaching range of scheduling algorithms in the cloud computing environment,
including workflow scheduling and grid scheduling, and gives an elaborate idea of grid, cloud, and
workflow scheduling aimed at minimizing energy cost and improving the efficiency and throughput of
the system.
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENTIJCNCJournal
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements which keep on varying. This dynamic cloud environment demands the necessity of complex algorithms to resolve the trouble of task allotment. The overall performance of cloud systems is rooted in the efficiency of task scheduling algorithms. The dynamic property of cloud systems makes it challenging to find an optimal solution satisfying all the evaluation metrics. The new approach is formulated on the Round Robin and the Shortest Job First algorithms. The Round Robin method reduces starvation, and the Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are incorporated to improve the makespan of user tasks.
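The hybrid described above is concrete enough to sketch: order the ready queue by burst time (Shortest Job First) and then serve it round-robin with a fixed quantum, which bounds starvation while keeping average waits short. The quantum value below is an illustrative assumption, not taken from the paper.

```python
from collections import deque

def sjf_rr(bursts, quantum=4):
    """Return completion time for each task, keyed by original index."""
    # SJF step: admit tasks to the ready queue shortest-burst first.
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    queue = deque((i, bursts[i]) for i in order)
    clock, done = 0, {}
    # RR step: each task runs at most one quantum before requeueing.
    while queue:
        i, left = queue.popleft()
        run = min(quantum, left)
        clock += run
        if left > run:
            queue.append((i, left - run))   # back of the queue
        else:
            done[i] = clock
    return done

done = sjf_rr([6, 3, 8])
```

The shortest job (burst 3) finishes first, as SJF intends, while the quantum guarantees the longest job still makes steady progress instead of starving.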
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par...IJECEIAES
Wide application of proportional-integral-differential (PID)-regulator in industry requires constant improvement of methods of its parameters adjustment. The paper deals with the issues of optimization of PID-regulator parameters with the use of neural network technology methods. A methodology for choosing the architecture (structure) of neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, as well as the form and type of activation function. Algorithms of neural network training based on the application of the method of minimizing the mismatch between the regulated value and the target value are developed. The method of back propagation of gradients is proposed to select the optimal training rate of neurons of the neural network. The neural network optimizer, which is a superstructure of the linear PID controller, allows increasing the regulation accuracy from 0.23 to 0.09, thus reducing the power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed
simplified modulation technique paves the way for more straightforward and
efficient control of multilevel inverters, enabling their widespread adoption and
integration into modern power electronic systems. Through the amalgamation of
sinusoidal pulse width modulation (SPWM) with a high-frequency square wave
pulse, this controlling technique attains energy equilibrium across the coupling
capacitor. The modulation scheme incorporates a simplified switching pattern
and a decreased count of voltage references, thereby simplifying the control
algorithm.
A review on features and methods of potential fishing zoneIJECEIAES
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features like sea surface temperature (SST) and sea surface height (SSH), along with classification methods such as classifiers. The features like SST, SSH, and different classifiers used to classify the data, have been figured out in this review study. This study underscores the importance of examining potential fishing zones using advanced analytical techniques. It thoroughly explores the methodologies employed by researchers, covering both past and current approaches. The examination centers on data characteristics and the application of classification algorithms for classification of potential fishing zones. Furthermore, the prediction of potential fishing zones relies significantly on the effectiveness of classification algorithms. Previous research has assessed the performance of models like support vector machines, naïve Bayes, and artificial neural networks (ANN). In the previous result, the results of support vector machine (SVM) were 97.6% more accurate than naive Bayes's 94.2% to classify test data for fisheries classification. By considering the recent works in this area, several recommendations for future works are presented to further improve the performance of the potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f...IJECEIAES
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make little devices that boost productivity. The goal is to optimize device density. Scientists are reducing connection delays to improve circuit performance. This helped them understand three-dimensional integrated circuit (3D IC) concepts, which stack active devices and create vertical connections to diminish latency and lower interconnects. Electrical involvement is a big worry with 3D integrates circuits. Researchers have developed and tested through silicon via (TSV) and substrates to decrease electrical wave involvement. This study illustrates a novel noise coupling reduction method using several electrical involvement models. A 22% drop in electrical involvement from wave-carrying to victim TSVs introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption increased quickly, contributing to climate change
that is evident in unusual flooding and draughts, and global warming. Over
the past ten years, women's involvement in society has grown dramatically,
and they succeeded in playing a noticeable role in reducing climate change.
A bibliometric analysis of data from the last ten years has been carried out to
examine the role of women in addressing the climate change. The analysis's
findings discussed the relevant to the sustainable development goals (SDGs),
particularly SDG 7 and SDG 13. The results considered contributions made
by women in the various sectors while taking geographic dispersion into
account. The bibliometric analysis delves into topics including women's
leadership in environmental groups, their involvement in policymaking, their
contributions to sustainable development projects, and the influence of
gender diversity on attempts to mitigate climate change. This study's results
highlight how women have influenced policies and actions related to climate
change, point out areas of research deficiency and recommendations on how
to increase role of the women in addressing the climate change and
achieving sustainability. To achieve more successful results, this initiative
aims to highlight the significance of gender equality and encourage
inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition state from gridconnected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems that are associated to power systems is islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis shows that the overall
weight of passive methods (24.7%), active methods (7.8%), hybrid methods
(5.6%), remote methods (14.5%), signal processing-based methods (26.6%),
and computational intelligent-based methods (20.8%) based on the
comparison of all criteria together. Thus, it can be seen from the total weight
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m2
, the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed,the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
Reliable and efficient webserver management for task scheduling in edge-cloud platform
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 13, No. 5, October 2023, pp. 5922~5931
ISSN: 2088-8708, DOI: 10.11591/ijece.v13i5.pp5922-5931
Journal homepage: http://ijece.iaescore.com
Reliable and efficient webserver management for task scheduling in edge-cloud platform
Sangeeta Sangani, Rudragoud Patil
Department of Computer Science and Engineering, K. L. S. Gogte Institute of Technology, Belgavi, Karnataka, India
Article Info

Article history: Received Nov 15, 2022; Revised Feb 7, 2023; Accepted Feb 10, 2023

Keywords: Cloud computing; Edge computing; Quality of service; Reliability; Scheduling; Webserver; Workflow

Abstract

Managing cloud web servers to execute workflows while meeting quality-of-service (QoS) requirements in a distributed cloud environment is a challenging task, and a considerable body of work has been presented for scheduling workflows in heterogeneous cloud environments. Moreover, rapid developments in cloud computing, such as edge-cloud computing, open new ways to schedule workflows in a heterogeneous cloud environment to process diverse tasks such as internet of things (IoT), event-driven, and other network applications. Current workflow scheduling methods fail to provide a good trade-off between reliable performance and minimal delay. In this paper, a novel web server resource management framework, the reliable and efficient webserver management (REWM) framework, is presented for the edge-cloud environment. Experiments are conducted on complex bioinformatics workflows; the results show a significant reduction in cost and energy by the proposed REWM in comparison with a standard webserver management methodology.
This is an open access article under the CC BY-SA license.
Corresponding Author:
Sangeeta Sangani
Department of Computer Science and Engineering, K. L. S. Gogte Institute of Technology
Belgavi, Karnataka, India
Email: gitsangeeta@gmail.com
1. INTRODUCTION
Cloud webservice providers (CWPs) receive task execution data in the form of a directed acyclic graph (DAG), known as a workflow. Workflow scheduling in the cloud has been researched for a long time [1]–[3]. However, a major problem in workflow scheduling is providing good performance, reliability, energy efficiency, and fault tolerance in a large computing environment. Some frameworks automatically assign the required resources, energy budgets, and performance targets to the tasks in a workflow for execution. Researchers have proposed various methods and technologies to reduce energy consumption in the cloud environment and provide better performance on cloud web servers. However, these methods require a large amount of communication between the web servers. Furthermore, they provide inadequate results, and energy consumption cannot be decreased because of high memory utilization and routing overhead. The communication cost between web servers is high under the existing methods, as they depend on traditional models that are not well suited to a hybrid cloud environment [3]–[5], for example, an edge-cloud platform as shown in Figure 1. This motivates the proposed work to design a workload scheduling model for the edge-cloud platform that achieves a good trade-off between energy and cost reduction and meets the reliability requirements of internet of things (IoT) workflow applications. To address these research issues, this paper presents reliable and efficient web server resource management for workload execution in the hybrid cloud platform. Reliable and efficient webserver management (REWM) is designed to reduce energy consumption, reduce task failures, minimize delay, and provide high reliability. The research significance of REWM for task scheduling in the edge-cloud platform is as follows: i) the proposed work presents a novel workload management technique adopting an edge-cloud platform to provide efficiency and reliability; ii) the proposed REWM reduces task failures and provides fault-tolerant task offloading with minimal energy consumption; iii) the proposed REWM reduces energy consumption and cost while meeting efficiency and reliability constraints; and iv) the experimental outcome shows a significant reduction in energy and cost in comparison with existing workload management methodologies.
The paper is organized as follows. Section 2 presents a literature survey of existing workload scheduling methods. Section 3 describes the methodology of the model. Section 4 presents the results of the model on the epigenomics and SIPHT data sets, and finally section 5 gives the conclusion and future work.
Figure 1. Heterogeneous web server architecture for workload execution
2. LITERATURE SURVEY
In [3], the authors proposed fuzzy dominance sort based heterogeneous earliest-finish-time (FDHEFT), a variant of the heterogeneous-earliest-finish-time (HEFT) method, to optimize cost and makespan for executing workloads in a heterogeneous cloud. The method uses fuzzy dominance sorting to guide workload execution. Its performance was evaluated on both real-world and synthetic workloads, achieving better makespan and cost trade-offs than traditional workload execution models. In [4], a technique named DCOH was proposed for the hybrid cloud environment to jointly optimize cost and makespan for a given task under deadline constraints. The results show that the technique can efficiently balance the trade-off between makespan and cost. Moreover, by reducing energy consumption, the cost of workload execution can be reduced further. In [5], an energy-aware method was proposed to execute DAG workloads with deadline constraints while reducing energy consumption in hybrid clouds; the model also reduces the computational overhead of energy-aware processor merging. In [6], an energy-aware scheduling technique was proposed to reduce cost and attain a better trade-off between energy and cost. The technique comprises the following phases: slack resource optimization to save energy, idle virtual machine (VM) resource reuse policies, VM selection, and task merging; it attains good performance compared with existing workflow models. In [7], a scheduling model was presented to optimize cost and energy for executing large volumes of scientific data from IoT devices over a cloud network; it improves performance, reduces energy consumption, and decreases the cost of task execution within a given deadline. In [8], a model was proposed to address reliability and reduce energy consumption in workload scheduling while providing better quality-of-service (QoS). Its experimental results showed better efficiency and reliability for workload scheduling compared with other models. Nowadays, algorithms such as genetic algorithms and swarm optimization are used to solve multi-objective workload scheduling (MOWS) problems in the cloud environment and to execute real-time workloads [9], [10], as well as to optimize energy-aware multi-objective functions [11], as in energy minimized scheduling (EMS) [12]. Reinforcement learning (RL) has also been used to solve MOWS problems in the cloud network [13]–[15]. In [16], an improved fuzzy logic rule combined with game theory (GT) was presented to balance and control the load between physical machines.
3. ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 13, No. 5, October 2023: 5922-5931
5924
In [17], a Q-learning model with a weighted objective function was presented to optimize the deadline and balance the load when scheduling a given task. In [18], a technique named RADAR was presented to allocate resources to tasks in the cloud environment. It handles unpredicted failures and dynamic resource management under varying workload conditions, and it decreases execution cost, time, and service level agreement (SLA) violations compared with the traditional techniques used for workload execution. In [19], the authors proposed a model for workflows consisting of composite tasks (cWFS). The model uses a nested-particle swarm optimization (N-PSO) method to evolve the inner and outer populations of tasks; as this method is slow, a faster version of N-PSO is used to execute the tasks within a given deadline. In [20], a task scheduling algorithm named QL-HEFT was given, which combines the Q-learning method with the HEFT method to decrease the makespan during task execution. The algorithm first ranks the given tasks by priority, then sorts them using Q-learning, and then assigns optimal resources to each task so that it can be executed within its deadline. In [21], a scheduling method named endpoint-communication contention-aware list-scheduling-heuristic (ELSH) was proposed, which decreases the makespan of the workflow. In [22], an algorithm named DMWHDBS was presented, which executes tasks within the given deadline in a cost-efficient manner; the model implements a judgment mechanism that provides a success rate for task scheduling in a multi-workflow environment. In [23], a resource allocation technique for multi-cloud scheduling was presented, which executes the workflow with better performance and a reduced execution cost. In [24], a fault-tolerant workflow-scheduling model for the multi-cloud was designed, which reduces execution cost and provides better reliability; a billing method for the resources utilized during workflow execution was also given, together with an algorithm that reduces cost and time while providing fault-tolerant workflow scheduling. A comparative study of the two most recent workload scheduling models is provided in Table 1, together with the limitations of the existing models that the proposed REWM work addresses.
Table 1. Comparative study of EMS [12], 2022, reliability-aware cost-efficient scientific (RACES) [24], 2022, and the proposed REWM

Criterion               | EMS [12], 2022 | RACES [24], 2022     | Proposed REWM
Heterogeneous computing | Yes            | Yes                  | Yes
Edge-cloud              | No             | No                   | Yes
Workload type           | Complex        | Complex              | Complex
Workload size           | Small to large | Small to large       | Small to large
QoS metrics             | Energy         | Cost and time        | Energy, cost, and reliability
Optimization strategy   | Heuristic      | Weibull distribution | Heuristic
Planning                | No             | No                   | Yes
Reliability             | Yes            | Yes                  | Yes
Availability            | Yes            | No                   | Yes
3. METHOD
3.1. System and workflow execution on hybrid cloud environment
In this model, a hybrid cloud environment is considered for the execution of the scientific workflow. Suppose two sensor devices are connected and both run the DAG application; the workflow is illustrated in Figure 1. The sensor devices are linked to an edge server, which performs various computations, and the edge server is linked to the cloud data center. The cloud data center comprises different hosts and $n$ virtual machines. The model presumes a stable connection between the sensor devices and the edge server for the computation and communication process. The server has the capability to compute scientific workflows and to execute them within a given deadline.
3.2. Workload execution model for hybrid cloud environment
The workload is executed either on the edge server or in the cloud network when sub-tasks are offloaded to the cloud. The delay induced to execute sub-task $u_j$ on the edge server's processing element $G_0$ is defined in (1), and the energy consumed to execute a given task of the workload on the edge server is given in (2).

$M_j^m = \alpha_j / G_0$ (1)

$\mu_j^m = \varphi M_j^m$ (2)
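As a minimal illustration of (1) and (2), with function names and sample numbers of our own rather than from the paper, the edge-side delay and energy can be computed as:

```python
def edge_delay(alpha_j: float, g0: float) -> float:
    """Eq. (1): M_j^m = alpha_j / G0, delay of sub-task u_j on edge element G0."""
    return alpha_j / g0


def edge_energy(phi: float, delay: float) -> float:
    """Eq. (2): mu_j^m = phi * M_j^m, energy drawn during that execution."""
    return phi * delay


# Hypothetical numbers: a 200-unit sub-task on a 100-units/s edge element
# whose processing element draws 5 J per second of busy time.
d = edge_delay(200.0, 100.0)   # 2.0 s
e = edge_energy(5.0, d)        # 10.0 J
```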
In (2), $\varphi$ represents the energy consumed per unit time by the processing element. The delay to execute the offloaded workload in the cloud network is calculated in the same way. The set of processing capacities in the cloud is given in (3).

$\mathbb{G} = \{G_1, G_2, G_3, \ldots, G_n\}$ (3)

The delay to complete the execution of the workload comprises execution and communication delays. Hence, the total delay to execute the complete workload in the cloud is given in (4), and the delay to execute the workload locally in the cloud or on the edge server is given in (5).
$M_{jk}^0 = \alpha_j / G_k + M_{jk}^s$ (4)

$M_{jk} = \alpha_j / G_k + M_{jk}^s$ (5)
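A sketch of (4) and (5), using the paper's convention that the offloading delay $M_{jk}^s$ vanishes on the edge server ($k = 0$); the function name and numbers are illustrative:

```python
def total_delay(alpha_j: float, g_k: float, comm_delay: float, k: int) -> float:
    """Eqs. (4)-(5): execution delay alpha_j / G_k plus the communication
    delay M_jk^s, which is zero when the task stays on the edge (k = 0)."""
    return alpha_j / g_k + (0.0 if k == 0 else comm_delay)


local = total_delay(200.0, 100.0, 0.5, k=0)   # 2.0: no offloading cost
remote = total_delay(200.0, 400.0, 0.5, k=3)  # 1.0: faster server plus transfer
```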
In (5), $M_{jk}^s = 0$ when $k = 0$. The failure of workflow sub-task $u_j$ on processing element $t_k$ is modeled using a Poisson distribution, as given in (6). The efficiency of the processing elements for executing the DAG workload is given in (7) [19], where the sub-task assignment indicator $y_{jk}$ is defined in (8).
$S_{jk} = f^{-\omega_k \alpha_j / G_k}$ (6)

$S(H) = \prod_{u_j \in \mathbb{U}} \sum_{t_k \in \mathbb{T}} y_{jk} S_{jk}$ (7)

$y_{jk} = \begin{cases} 1, & \text{if sub-task } u_j \text{ is assigned to processing element } t_k \\ 0, & \text{otherwise} \end{cases}$ (8)
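Assuming the base $f$ in (6) is the natural exponential, a common choice for Poisson failure models that the extracted text does not pin down, (6)-(8) can be sketched as follows (all numbers illustrative):

```python
import math


def s_jk(failure_rate_k: float, alpha_j: float, g_k: float) -> float:
    """Eq. (6): success probability of sub-task u_j on server t_k under a
    Poisson failure model, read here as exp(-omega_k * alpha_j / G_k)."""
    return math.exp(-failure_rate_k * alpha_j / g_k)


def workflow_reliability(assignment, failure_rate, demand, capacity):
    """Eq. (7): product over sub-tasks of the chosen server's S_jk.
    assignment[j] = k encodes y_jk = 1 from eq. (8)."""
    s = 1.0
    for j, k in enumerate(assignment):
        s *= s_jk(failure_rate[k], demand[j], capacity[k])
    return s
```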
In this model, the energy consumed to execute workload task $H$ is reduced while decreasing delay and increasing performance efficiency. The energy consumed to execute the workload on the edge server is calculated in (9). The overall problem can then be stated as (10), whose objective must satisfy the following constraints to achieve an effective workload execution result. Constraint (11) requires the overall execution delay of workload $H$ to be no larger than the delay bound; constraint (12) requires the performance efficiency of the workload to be no smaller than the specified efficiency bound; and constraint (13) specifies that each sub-task is executed either in the cloud network or on the edge server.
$\mu_j = y_{j0} \mu_j^m + \sum_{k=1}^{n} y_{jk} \mu_{jk}^s$ (9)

$\min \sum_{j=1}^{o} \mu_j$ (10)

$M(H) \le M_{preq}$ (11)

$S(H) \ge S_{preq}$ (12)

$\sum_{k=0}^{n} y_{jk} = 1$ (13)

$\tau(u_j) \ge \tau(u_l) + \sum_{k=0}^{n} y_{lk} M_{lk}, \quad \forall u_l \in prec(u_j)$ (14)

$\tau(u_j) + \sum_{k=0}^{n} y_{jk} M_{jk} \le \tau(u_l), \quad \forall u_l \in subseq(u_j)$ (15)

$y_{jk} \in \{0, 1\}$ (16)
Constraints (14) and (15) specify that subsequent sub-tasks must wait until the preceding sub-task has been completely executed, where $\tau(u_j)$ denotes the start time of sub-task $u_j$. The main aim of this model is to reduce the execution delay of the task and to improve performance efficiency subject to the constraints given in (11) to (16). In this model, the
resource that consumes the least energy is used to execute the workload in the hybrid cloud environment. In the first step, the sub-tasks are ordered to generate a proper sequence set $\hat{\mathbb{U}}$. In the next step, the delay bounds and processing efficiency of each sub-task on the different processing elements in the cloud network and the edge server are obtained. Finally, resources are allocated to sub-task $\hat{u}_j$ by finding an appropriate processing element that incurs minimal energy overhead and also satisfies the execution bounds of $\hat{u}_j$.
3.3. Task ordering webserver management
Figure 1 shows the existence of dependencies among preceding and subsequent tasks. As a result, the webserver assigns resources to a subsequent task only after its preceding task completes. On the other hand, if tasks have no dependencies, the webserver allocates their resources in parallel. Taking Figure 1 as an example, task $u_1$ is executed first and $u_2$ waits for a resource from the webserver to start execution, while in the meantime task $u_3$ is executed in parallel to minimize delay. As a result, the web server needs to decide the order in which it executes the workload so that parallel efficiency is maximized and delay is reduced. In this work, the task's ascendant ordering outcome $S_v$ is used to measure task selectivity, as given in (17).
$S_v(u_j) = \bar{M} + \max_{u_l \in sub(u_j),\, t_k \in \mathbb{T},\, t_m \in \mathbb{T}} \{ f_{jl}(t_k, t_m) + S_v(u_l) \}$ (17)
where $\bar{M}$ denotes the mean delay induced to execute a task on the different webservers in graph $H$, computed using (18). This work also accounts for the delay induced in communicating an offloaded task's outcome toward the edge server. The parameter $f_{jl}(t_k, t_m)$ defines the delay induced in communicating the result of $u_j$ on $t_k$ to its successor $u_l$ on $t_m$; when $m = 0$, i.e., the result is consumed on the edge server, it is zero, as given in (19).
$\bar{M} = \dfrac{\sum_{j=1}^{o} \sum_{k=0}^{n} (\omega_j / G_k)}{(n+1)\, o}$ (18)
$f_{jl}(t_k, t_m) = 0$ (19)

If $m \ne 0$, then

$f_{jl}(t_k, t_m) = M_{lm}^s = \mu_l / w_m$ (20)
Before starting workload execution, all tasks are arranged in descending order of $S_v$. If tasks $u_j$ and $u_l$ are to be allocated and the ordering ensures $S_v(u_j) > S_v(u_l)$, then $u_j$ has higher selectivity than $u_l$. Further, it is important to note that when $u_j$ is about to be processed, it must wait until its preceding tasks are completed, meeting the bounds defined in (14) and (15).
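The ordering rule of (17)-(18) can be sketched as a HEFT-style upward rank. The toy DAG, the per-task mean delays (a simplification of the single global mean $\bar{M}$ in (18)), and the communication costs below are our own illustration, not from the paper:

```python
from functools import lru_cache

# Toy DAG: task -> list of successor tasks (sub(u_j) in the paper's notation).
successors = {1: [2, 3], 2: [4], 3: [4], 4: []}
mean_delay = {1: 3.0, 2: 2.0, 3: 4.0, 4: 1.0}   # stand-in for the M-bar term
comm = {(1, 2): 1.0, (1, 3): 0.5, (2, 4): 2.0, (3, 4): 1.0}  # f_jl terms


@lru_cache(maxsize=None)
def rank(j: int) -> float:
    """Eq. (17): S_v(u_j) = mean delay + max over successors of
    (communication delay + successor's rank); exit tasks get just the mean."""
    tail = max((comm[(j, l)] + rank(l) for l in successors[j]), default=0.0)
    return mean_delay[j] + tail


# Descending S_v gives the dispatch order used by the webserver.
order = sorted(successors, key=rank, reverse=True)
```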
3.4. Reliable and efficient workload execution webserver management
The proposed work focuses on establishing the most reliable webserver that reduces delay and assures fault-tolerance of workload $H$. Let $\hat{\mathbb{U}}$ define the assignment order of $\mathbb{U}$, and let the task currently being allocated be $\hat{u}_j$ ($\hat{u}_j \in \hat{\mathbb{U}}$). The set of tasks already processed is $\{\hat{u}_1, \hat{u}_2, \hat{u}_3, \ldots, \hat{u}_{j-1}\}$ and the tasks still to be allocated to a web server are $\{\hat{u}_{j+1}, \hat{u}_{j+2}, \hat{u}_{j+3}, \ldots, \hat{u}_o\}$. When allocating $\hat{u}_j$, the reliability (i.e., fault-tolerance) outcome considered is $S(\hat{u}_j)$. Therefore, the present fault-tolerance of $H$ is obtained through (21).
$S(\hat{u}_j, H) = \prod_{l=1}^{j-1} S_b(\hat{u}_l) \cdot S(\hat{u}_j) \cdot \prod_{l=j+1}^{o} S_{vb}(\hat{u}_l)$ (21)
where $S(\hat{u}_j, H)$ represents the fault-tolerance outcome of $H$ when allocating $\hat{u}_j$, $S_b(\hat{u}_l)$ represents the fault-tolerance outcome that the already-allocated task $\hat{u}_l$ obtains, and $S_{vb}(\hat{u}_l)$ defines the reliability outcome that the unallocated task $\hat{u}_l$ obtains. Since the fault-tolerance outcome of any web server is at most 1, and following the bound defined in (12), task $\hat{u}_j$ should satisfy (22).

$S(\hat{u}_j, H) \ge S_{preq}$ (22)
Substituting (21) into (22) yields (23).

$S(\hat{u}_j) \ge S_{preq} \Big/ \Big( \prod_{l=1}^{j-1} S_b(\hat{u}_l) \cdot \prod_{l=j+1}^{o} S_{vb}(\hat{u}_l) \Big) \ge S_{preq} \Big/ \Big( \prod_{l=1}^{j-1} S_b(\hat{u}_l) \cdot \prod_{l=j+1}^{o} S_{vc}(\hat{u}_l) \Big)$ (23)
where,

$S(\hat{u}_j) = \sum_{t_k \in \mathbb{T}} y_{jk} S_{jk}$ (24)

and $S_{vc}(\hat{u}_l)$ denotes the upper limit of the reliability outcome that task $\hat{u}_l$ can experience, as given in (25).

$S_{vc}(\hat{u}_l) = \max_{t_k \in \mathbb{T}} \{ S_{lk} \}$ (25)
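Reading the aggregations in (21) and (23) as products of per-task reliabilities, consistent with (7), the per-task reliability floor can be sketched as follows; the reliability matrix and threshold are hypothetical:

```python
def required_reliability(s_preq: float, achieved, best_remaining) -> float:
    """Eq. (23): minimum reliability S(u_j) must reach, given the product of
    reliabilities already achieved by allocated tasks and the best attainable
    reliabilities (eq. (25)) of the tasks not yet allocated."""
    prod = 1.0
    for s in list(achieved) + list(best_remaining):
        prod *= s
    return s_preq / prod


# S[l][k]: reliability of task l on server k (illustrative values).
S = [[0.95, 0.99], [0.90, 0.97]]
best = [max(row) for row in S]          # eq. (25): S_vc per unallocated task
floor = required_reliability(0.90, [0.98], best)
```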
In the proposed algorithm, we adopt an effective soft-computing-based search strategy that presumes every unallocated task is assigned to the edge server or cloud webserver with the maximum fault-tolerance outcome, and then establishes the obtainable $y_{jk}$ assuring (23), thereby minimizing the size of the solution space. Further, the work aims to meet the efficiency requirement by finding the required webserver with minimal workload execution time. In this work, the fastest initialization time (FIT) and the recent completion time (RCT) are used to reduce task execution time while sustaining the workload delay prerequisites. First, the FIT of the entry task $u_{inc}$ on any web server is given in (26).

$FIT(u_{inc}) = 0$ (26)

Then, the FIT of any other task $u_j$ on a web server $t_k$ is obtained using (27). The RCT of the exit task $u_{dep}$ is given in (28), and the RCT of any other task $u_j$ on a web server $t_k$ is given in (29). Finally, this work obtains every task $u_j$'s minimal execution delay under the overall delay prerequisite of $H$, as given in (30).
$FIT(u_j, t_k) = \max_{u_l \in prec(u_j),\, t_m \in \mathbb{T}} \{ FIT(u_l, t_m) + \omega_l / G_m + f_{lj}(t_m, t_k) \}$ (27)

$RCT(u_{dep}, t_k) = M_{preq}$ (28)

$RCT(u_j, t_k) = \min_{u_l \in sub(u_j),\, t_m \in \mathbb{T}} \{ RCT(u_l, t_m) - \omega_l / G_m - f_{jl}(t_k, t_m) \}$ (29)

$\sum_{k=0}^{n} y_{jk} \big( RCT(u_j, t_k) - FIT(u_j, t_k) \big) \ge M_{jk}^d = \omega_j / G_k$ (30)
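The forward and backward passes of (26)-(29) and the window test of (30) can be sketched on a single-server toy chain; communication delays are folded into execution times, and all numbers are our own illustration:

```python
def forward_fit(order, preds, exec_time):
    """Eqs. (26)-(27): earliest start of each task given its predecessors.
    `order` must be a topological order of the DAG."""
    fit = {}
    for j in order:
        fit[j] = max((fit[p] + exec_time[p] for p in preds[j]), default=0.0)
    return fit


def backward_rct(order, succs, exec_time, deadline):
    """Eqs. (28)-(29): latest completion of each task under deadline M_preq."""
    rct = {}
    for j in reversed(order):
        rct[j] = min((rct[s] - exec_time[s] for s in succs[j]), default=deadline)
    return rct


# Toy chain u1 -> u2 -> u3 with deadline M_preq = 10.
exec_time = {1: 2.0, 2: 3.0, 3: 1.0}
fit = forward_fit([1, 2, 3], {1: [], 2: [1], 3: [2]}, exec_time)
rct = backward_rct([1, 2, 3], {1: [2], 2: [3], 3: []}, exec_time, 10.0)
# Eq. (30): a placement is feasible only if the [FIT, RCT] window covers
# the task's own execution delay.
feasible = all(rct[j] - fit[j] >= exec_time[j] for j in exec_time)
```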
The working of the proposed REWM methodology is given in algorithm 1. It focuses on reducing delay and providing fault-tolerance assurance with high reliability while meeting the QoS constraints of each task, and it assures that the constraints are satisfied with minimal energy dissipation. The methodology is designed in three phases. First, the task's ascendant ordering outcome $S_v$ is used to generate $\hat{\mathbb{U}}$. Second, (23) and (29) are used to obtain the fault-tolerance and efficiency constraints of every individual task on the different web servers in the edge server or cloud. Lastly, when allocating a task $\hat{u}_j$, a web server with the minimal energy cost is established while assuring the bounds of $\hat{u}_j$. Hence, this model can reduce both energy consumption and cost when compared with the existing web server management for task scheduling, as discussed below in the results section.
Algorithm 1. Reliable and efficient webserver management (REWM)
Step 1. Start
Step 2. Deploy edge-cloud platform with physical machines and virtual machines.
Step 3. The user submits workflow task with deadline requirement to edge-cloud resource provider.
Step 4. The resource provider first arranges the tasks in ascendant order according to their selectivity.
Step 5. The resource provider uses (23) and (29) to obtain the fault-tolerance and efficiency constraints, respectively.
Step 6. The resource provider finds an edge server that reduces energy while meeting the deadline prerequisite, using (10) and (23), respectively.
Step 7. If the resource provider does not find any such edge server, then the task is offloaded to the cloud platform.
Step 8. The resource provider executes the task on the cloud platform, minimizing energy and meeting the efficiency and reliability constraints.
Step 9. Stop
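Algorithm 1 can be condensed into the following greedy loop; the `feasible` and `energy` callbacks stand in for the checks of (10), (23), and (29), and all names and numbers are illustrative rather than the paper's implementation:

```python
def rewm_schedule(tasks, edge_servers, cloud_servers, feasible, energy):
    """Steps 4-8 of Algorithm 1: for each task (pre-sorted by its S_v
    selectivity), pick the feasible edge server with minimal energy;
    offload to the cloud only when no edge server qualifies."""
    placement = {}
    for t in tasks:
        pool = [s for s in edge_servers if feasible(t, s)]
        if not pool:                       # Step 7: offload to cloud
            pool = [s for s in cloud_servers if feasible(t, s)]
        if not pool:
            raise RuntimeError(f"no feasible server for task {t}")
        placement[t] = min(pool, key=lambda s: energy(t, s))
    return placement


# Tiny example: t2 violates its bounds on the edge, so it is offloaded to
# the cheapest feasible cloud server.
cost = {"edge1": 1.0, "cloud1": 3.0, "cloud2": 2.0}
plan = rewm_schedule(
    ["t1", "t2"], ["edge1"], ["cloud1", "cloud2"],
    feasible=lambda t, s: not (t == "t2" and s == "edge1"),
    energy=lambda t, s: cost[s],
)
```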
4. RESULTS AND DISCUSSION
In this section, experiments are conducted to evaluate the performance of the proposed REWM model against the existing reliability-aware cost-efficient scientific (RACES) workflow scheduling strategy on the multi-cloud systems model [24]. Experiments are conducted on the epigenomics and SIPHT scientific workflows [25], [26], which are discussed in the following sub-sections. Energy consumption, computation cost, and reliability are the three factors considered to evaluate the performance of the model. The IoT-edge cloud server environment is modeled using the SENSORIA simulator and the cloud environment is modeled using CloudSim; the two are combined through an object-oriented programming language to build a hybrid cloud environment. The experiments are conducted on an Intel i5 processor with NVIDIA graphics and 8 GB of RAM.
4.1. Energy consumption performance
In this section, experiments are conducted on both the epigenomics and SIPHT workflows by varying the workload size from 30 to 1,000, and the energy consumed for both scientific workflows is evaluated using the proposed REWM and the existing RACES model. From Figure 2 it can be seen that as the size of the epigenomics workload increases, the energy required for execution also increases slightly. However, the REWM model achieves a significant reduction in energy consumption in comparison to the existing RACES model, showing that REWM scales from small to significantly large workloads. On average, REWM improves energy efficiency by 16.63% over RACES. Similarly, Figure 3 shows that as the SIPHT workload grows, the energy required for execution increases slightly, and REWM again consumes significantly less energy than RACES across workload sizes, with an average energy efficiency improvement of 6.009% over the RACES model.
Figure 2. Energy consumption for epigenomics with
different workload sizes
Figure 3. Energy consumption for the SIPHT with
different workload sizes
4.2. Computational cost
In this section, experiments are conducted on both the epigenomics and SIPHT workflows by varying the workload size from 30 to 1,000, and the cost of executing both scientific workflows is evaluated using the proposed REWM and the existing RACES model. The cost is measured from the total time spent on the respective server type on the Azure cloud, as defined in (31),

$total\ cost = X * T$ (31)

where $X$ is the cost (measured in dollars) per second on the respective instance type and $T$ is the total time spent on it; more detail can be obtained from [24], [27]. From Figure 4 it can be seen that as the size of the epigenomics workload increases, the execution cost also increases slightly. However, the REWM model achieves a significant reduction in computational cost in comparison to the existing RACES model, across both small and significantly large workloads. On average, REWM reduces cost by 4.67% in comparison with
the RACES model. Similarly, Figure 5 shows that as the SIPHT workload grows, the computational cost increases slightly, and REWM again incurs a significantly lower cost than RACES across workload sizes, with an average cost reduction of 6.44% in comparison with the RACES model.
4.3. Reliability
In this section, experiments are conducted on both the epigenomics and SIPHT workflows by varying the workload size from 30 to 1,000, and the reliability of the model for both scientific workflows is evaluated using the proposed REWM and the existing RACES model. From Figure 6 it can be seen that as the size of the epigenomics workload increases, the reliability of the REWM model increases in comparison with the existing RACES model, across both small and significantly large workloads, with an average reliability improvement of 0.065%. Similarly, Figure 7 shows that for the SIPHT workload the REWM model remains more reliable than the RACES model as the workload grows, with an average reliability improvement of 0.115% in comparison with the RACES model.
Figure 4. Computation cost for epigenomics with
different workload sizes
Figure 5. Computation cost for the SIPHT with
different workload sizes
Figure 6. Reliability for Epigenomics with different
workload sizes
Figure 7. Reliability for the SIPHT with different
workload sizes
5. CONCLUSION
In this paper, we have surveyed various research on workflow scheduling and the different algorithms used to reduce energy consumption and cost in a cloud environment and
also to increase reliability. These studies show that little work has been done on workflow scheduling problems that reduce both cost and energy consumption in a heterogeneous cloud network. The present workload scheduling models fail to achieve a good trade-off between meeting the energy constraint and the task deadline. In this work, we have presented an efficient method that provides good trade-offs between the energy constraint and the task deadline in an edge-cloud environment for provisioning complex IoT workflows. REWM computes the delay of executing on the edge server and the task failure rate for IoT workflow execution; it then estimates the benefit (i.e., performance efficiency) of minimizing delay and failure rate by offloading execution to the cloud platform; finally, energy is optimized to meet the task delay and processing efficiency requirements. In this way, REWM achieves superior throughput, energy efficiency, and cost reduction for executing workflows on a heterogeneous platform, together with increased reliability, compared with RACES and EMS, as proven through the experimental study on provisioning IoT workflows.
REFERENCES
[1] D. Poola, M. A. Salehi, K. Ramamohanarao, and R. Buyya, “A taxonomy and survey of fault-tolerant workflow management
systems in cloud and distributed computing environments,” in Software Architecture for Big Data and the Cloud, Elsevier, 2017,
pp. 285–320.
[2] R. Khorsand, F. Safi-Esfahani, N. Nematbakhsh, and M. Mohsenzade, “Taxonomy of workflow partitioning problems and
methods in distributed environments,” Journal of Systems and Software, vol. 132, pp. 253–271, Oct. 2017, doi:
10.1016/j.jss.2017.05.017.
[3] X. Zhou, G. Zhang, J. Sun, J. Zhou, T. Wei, and S. Hu, “Minimizing cost and makespan for workflow scheduling in cloud using
fuzzy dominance sort based HEFT,” Future Generation Computer Systems, vol. 93, pp. 278–289, Apr. 2019, doi:
10.1016/j.future.2018.10.046.
[4] J. Zhou, T. Wang, P. Cong, P. Lu, T. Wei, and M. Chen, “Cost and makespan-aware workflow scheduling in hybrid clouds,”
Journal of Systems Architecture, vol. 100, Nov. 2019, doi: 10.1016/j.sysarc.2019.08.004.
[5] G. Xie, G. Zeng, R. Li, and K. Li, “Energy-aware processor merging algorithms for deadline constrained parallel applications in
heterogeneous cloud computing,” IEEE Transactions on Sustainable Computing, vol. 2, no. 2, pp. 62–75, Apr. 2017, doi:
10.1109/TSUSC.2017.2705183.
[6] Z. Li, J. Ge, H. Hu, W. Song, H. Hu, and B. Luo, “Cost and energy aware scheduling algorithm for scientific workflows with
deadline constraint in clouds,” IEEE Transactions on Services Computing, vol. 11, no. 4, pp. 713–726, Jul. 2018, doi:
10.1109/TSC.2015.2466545.
[7] Y. Wen, Z. Wang, Y. Zhang, J. Liu, B. Cao, and J. Chen, “Energy and cost aware scheduling with batch processing for instance-
intensive IoT workflows in clouds,” Future Generation Computer Systems, vol. 101, pp. 39–50, Dec. 2019, doi:
10.1016/j.future.2019.05.046.
[8] R. Garg, M. Mittal, and L. H. Son, “Reliability and energy efficient workflow scheduling in cloud environment,” Cluster
Computing, vol. 22, no. 4, pp. 1283–1297, Dec. 2019, doi: 10.1007/s10586-019-02911-7.
[9] L. Chunlin, T. Jianhang, and L. Youlong, “Hybrid cloud adaptive scheduling strategy for heterogeneous workloads,” Journal of
Grid Computing, vol. 17, no. 3, pp. 419–446, Sep. 2019, doi: 10.1007/s10723-019-09481-3.
[10] C. Li, J. Tang, and Y. Luo, “Cost-aware scheduling for ensuring software performance and reliability under heterogeneous
workloads of hybrid cloud,” Automated Software Engineering, vol. 26, no. 1, pp. 125–159, Mar. 2019, doi: 10.1007/s10515-019-
00252-8.
[11] M. Sardaraz and M. Tahir, “A parallel multi-objective genetic algorithm for scheduling scientific workflows in cloud computing,”
International Journal of Distributed Sensor Networks, vol. 16, no. 8, Aug. 2020, doi: 10.1177/1550147720949142.
[12] B. Hu, Z. Cao, and M. Zhou, “Energy-minimized scheduling of real-time parallel workflows on heterogeneous distributed
computing systems,” IEEE Transactions on Services Computing, vol. 15, no. 5, pp. 2766–2779, Sep. 2022, doi:
10.1109/TSC.2021.3054754.
[13] D. P. Bertsekas, “Feature-based aggregation and deep reinforcement learning: a survey and some new implementations,”
IEEE/CAA Journal of Automatica Sinica, vol. 6, no. 1, pp. 1–31, Jan. 2019, doi: 10.1109/JAS.2018.7511249.
[14] L. Xue, C. Sun, D. Wunsch, Y. Zhou, and F. Yu, “An adaptive strategy via reinforcement learning for the prisoner dilemma
game,” IEEE/CAA Journal of Automatica Sinica, vol. 5, no. 1, pp. 301–310, Jan. 2018, doi: 10.1109/JAS.2017.7510466.
[15] H. Wang, T. Huang, X. Liao, H. Abu-Rub, and G. Chen, “Reinforcement learning for constrained energy trading games with
incomplete information,” IEEE Transactions on Cybernetics, vol. 47, no. 10, pp. 3404–3416, Oct. 2017, doi:
10.1109/TCYB.2016.2539300.
[16] E. Iranpour and S. Sharifian, “A distributed load balancing and admission control algorithm based on fuzzy type-2 and game
theory for large-scale SaaS cloud architectures,” Future Generation Computer Systems, vol. 86, pp. 81–98, Sep. 2018, doi:
10.1016/j.future.2018.03.045.
[17] W. Jiahao, P. Zhiping, C. Delong, L. Qirui, and H. Jieguang, “A multi-object optimization cloud workflow scheduling algorithm
based on reinforcement learning,” in Intelligent Computing Theories and Application, 2018, pp. 550–559.
[18] S. S. Gill, I. Chana, M. Singh, and R. Buyya, “RADAR: Self-configuring and self-healing in resource management for enhancing
quality of cloud services,” Concurrency and Computation: Practice and Experience, vol. 31, no. 1, Jan. 2019, doi:
10.1002/cpe.4834.
[19] A. Song, W.-N. Chen, X. Luo, Z.-H. Zhan, and J. Zhang, “Scheduling workflows with composite tasks: a nested particle swarm
optimization approach,” IEEE Transactions on Services Computing, vol. 15, no. 2, pp. 1074–1088, Mar. 2022, doi:
10.1109/TSC.2020.2975774.
[20] Z. Tong, X. Deng, H. Chen, J. Mei, and H. Liu, “QL-HEFT: a novel machine learning scheduling scheme base on cloud
computing environment,” Neural Computing and Applications, vol. 32, no. 10, pp. 5553–5570, May 2020, doi: 10.1007/s00521-
019-04118-8.
[21] Q. Wu, M. Zhou, and J. Wen, “Endpoint communication contention-aware cloud workflow scheduling,” IEEE Transactions on
Automation Science and Engineering, vol. 19, no. 2, pp. 1137–1150, Apr. 2022, doi: 10.1109/TASE.2020.3046673.
[22] G. Wang, Y. Wang, M. S. Obaidat, C. Lin, and H. Guo, “Dynamic multiworkflow deadline and budget constrained scheduling in
heterogeneous distributed systems,” IEEE Systems Journal, vol. 15, no. 4, pp. 4939–4949, Dec. 2021, doi:
10.1109/JSYST.2021.3087527.
[23] M. Barika, S. Garg, A. Chan, and R. N. Calheiros, “Scheduling algorithms for efficient execution of stream workflow applications
in multicloud environments,” IEEE Transactions on Services Computing, vol. 15, no. 2, pp. 860–875, Mar. 2022, doi:
10.1109/TSC.2019.2963382.
[24] X. Tang, “Reliability-aware cost-efficient scientific workflows scheduling strategy on multi-cloud systems,” IEEE Transactions
on Cloud Computing, vol. 10, no. 4, pp. 2909–2919, Oct. 2022, doi: 10.1109/TCC.2021.3057422.
[25] Pegasus, “Epigenomics,” Workflow gallery. https://pegasus.isi.edu/workflow_gallery/gallery/epigenomics/index.php (accessed
Feb. 06, 2023).
[26] Pegasus, “Sipht,” Workflow gallery. https://pegasus.isi.edu/workflow_gallery/gallery/sipht/index.php (accessed Feb. 06, 2023).
[27] Azure, “Cloud computing services: Microsoft Azure, cloud computing services,” Microsoft Azure. https://azure.microsoft.com/
(accessed Feb. 06, 2023).
BIOGRAPHIES OF AUTHORS
Sangeeta Sangani was born in 1982 in Karnataka. She received her B.E. degree
in Computer Science and Engineering from VTU, Belagavi, in 2005 and her M.Tech. (CSE)
degree from VTU, Belagavi, in 2008, and she is currently pursuing a Ph.D. at VTU, Belagavi.
She has 14 years of teaching experience at technical institutes including R.V. College of
Engineering, Basveshwara Engineering College, Bagalkot, and S. G. Balekundri Institute of
Technology, Belagavi. Since 2010, she has been working as an Assistant Professor at KLS
Gogte Institute of Technology, Belagavi. Over her career, she has guided more than 10 UG
projects and 5 PG projects and published over 7 papers in international journals and
conferences. She can be contacted at email: gitsangeeta@gmail.com.
Rudragoud Patil is currently working as an Associate Professor in the Department
of CSE, KLS Gogte Institute of Technology, Belagavi. He has 12 years of teaching experience
at professional institutes across Karnataka. He has published over 13 papers in international
journals, book chapters, and conferences of high repute. His areas of interest include cloud
computing, distributed computing, machine learning, and network security. He can be
contacted at email: rspatil@git.edu.