IJERT-A Survey of the Impact of Task Scheduling Algorithms on Energy-Efficiency in Cloud Computing
Related papers
International Journal of Engineering Research and Technology (IJERT), 2014
https://www.ijert.org/task-scheduling-techniques-for-minimizing-energy-consumption-and-response-time-in-cloud-computing https://www.ijert.org/research/task-scheduling-techniques-for-minimizing-energy-consumption-and-response-time-in-cloud-computing-IJERTV3IS070485.pdf
Cloud computing, which provides services "on demand", is gaining increasing popularity. However, one of the most challenging problems in cloud computing is minimizing the power consumption of data centers. The subject of Green Cloud Computing has emerged with the objective of reducing energy consumption. While reducing energy consumption is of importance to Cloud Service Providers, performing the computation in the minimum possible time (makespan) is of interest to users. In this paper we propose a technique to achieve the twin objectives of minimizing energy consumption and reducing the makespan of tasks. A method has been proposed and its effectiveness verified by simulation on CloudSim [21]. The results presented in this paper show the advantages of the proposed technique.
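As a rough illustration of the twin-objective idea in the abstract above, the sketch below shows a greedy scheduler that assigns each task to the VM minimizing a weighted mix of completion time and energy. The task lengths, VM speeds and power figures are invented for the example; this is not the algorithm the paper evaluates.

```python
# Illustrative sketch only: a greedy bi-objective scheduler trading off
# makespan against energy. Task lengths, VM speeds (MIPS) and power figures
# are hypothetical; this is not the algorithm evaluated in the paper.

def schedule(tasks, vms, alpha=0.5):
    """Assign each task to the VM minimising a weighted mix of the
    resulting finish time and the energy used to run it there."""
    finish = {vm["id"]: 0.0 for vm in vms}        # current finish time per VM
    plan = []
    for length in sorted(tasks, reverse=True):    # longest tasks first
        best_vm, best_score = None, float("inf")
        for vm in vms:
            runtime = length / vm["mips"]                  # seconds
            new_finish = finish[vm["id"]] + runtime
            energy = vm["power_w"] * runtime               # joules
            score = alpha * new_finish + (1 - alpha) * energy
            if score < best_score:
                best_vm, best_score = vm, score
        finish[best_vm["id"]] += length / best_vm["mips"]
        plan.append((length, best_vm["id"]))
    return plan, max(finish.values())

if __name__ == "__main__":
    tasks = [4000, 12000, 8000, 6000]                      # task lengths in MI
    vms = [{"id": 0, "mips": 1000, "power_w": 90},
           {"id": 1, "mips": 2000, "power_w": 160}]
    plan, makespan = schedule(tasks, vms)
    print(plan, "makespan:", round(makespan, 2), "s")
```

The weight alpha simply steers the trade-off between the two objectives; a real scheduler would also normalise the time and energy terms before mixing them.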
Task Scheduling Techniques for Energy Efficiency in the Cloud
EAI Endorsed Transactions on Energy Web
Energy efficiency is a key goal in cloud datacentres, since it saves money and complies with green computing standards. When energy efficiency is taken into account, task scheduling becomes much more complicated and crucial. Execution overhead and scalability are major concerns in current research on energy-efficient task scheduling. Machine learning has been widely applied to energy-efficient task scheduling; however, it is usually used to predict resource usage rather than to select the schedule. The bulk of machine learning approaches predict resource consumption, and heuristic or metaheuristic algorithms then use these predictions to decide which computing resource should be assigned to a given task. To the best of our knowledge, none of the existing algorithms uses machine learning on its own to make an energy-efficient scheduling decision. Heuristic or meta-heuristic approaches, as well as approximation algorithms, are frequently used ...
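The pattern this abstract describes, a learned model predicting resource usage while a separate heuristic picks the placement, can be sketched as follows. The toy linear "predictor", the feature weights and the 80% utilisation target are assumptions for illustration only.

```python
# Sketch of the prediction-plus-heuristic pattern: a learned model predicts
# resource usage, and a separate heuristic uses the prediction to place the
# task. Names, features and the simple linear "model" are illustrative only.

def predict_cpu_utilisation(host, task, weights=(0.6, 0.4)):
    """Toy stand-in for a trained regressor: predicted utilisation after
    adding the task, from current load and the task's demand share."""
    w_load, w_demand = weights
    return w_load * host["load"] + w_demand * (task["cpu_demand"] / host["capacity"])

def place_task(task, hosts, target_util=0.8):
    """Heuristic placement: choose the host whose predicted utilisation
    stays below the target and is closest to it (to favour consolidation)."""
    feasible = []
    for host in hosts:
        util = predict_cpu_utilisation(host, task)
        if util <= target_util:
            feasible.append((target_util - util, host))
    if not feasible:
        return None                       # would need to power on a new host
    _, best = min(feasible, key=lambda pair: pair[0])
    best["load"] = predict_cpu_utilisation(best, task)
    return best["name"]

hosts = [{"name": "h1", "load": 0.55, "capacity": 4000},
         {"name": "h2", "load": 0.20, "capacity": 4000}]
print(place_task({"cpu_demand": 1000}, hosts))   # -> 'h1' (closest to target)
```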
Energy-Efficient Task Scheduling in Cloud Environment
IRJET, 2022
Cloud data centers consume large amounts of cooling power. Previous research has focused primarily on task schedules that optimize computational power and overall performance. However, as cloud data grows, user requirements are changing and can no longer be met by traditional scheduling algorithms. One of these requirements is to keep the cost of cooling the data center as low as possible; according to literature reviews, cooling power costs run to roughly half a million dollars. Controlling this cost in a cloud environment requires a good task scheduling approach. Temperature affects not only reliability but also the performance and cost of the underlying systems. This paper describes a thermal-aware task assignment and scheduling algorithm for cloud data centers. Task schedules are built so that they reduce not only computational cost but also cooling cost. Rigorous simulations were performed and compared against state-of-the-art scheduling algorithms. The experimental results show that the thermal-aware task scheduling policy outperforms the other strategies. This project aims to provide cloud data centers with a thermal-aware task scheduling approach and to improve data center performance by adopting such task scheduling techniques. The basic motivation is to provide a cost-effective and energy-efficient (green) algorithm for the data center.
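A minimal, hedged sketch of thermal-aware placement (not the scheme evaluated in the paper) is given below: each task is sent to the server with the lowest predicted temperature under a toy linear thermal model, subject to a temperature cap. All constants are invented.

```python
# Hedged sketch of a thermal-aware placement rule, not the paper's scheme:
# each incoming task goes to the server with the lowest predicted
# temperature, using a toy linear thermal model. All constants are made up.

AMBIENT_C = 25.0          # inlet/ambient temperature (assumed)
HEAT_PER_UTIL = 20.0      # degrees added at 100% utilisation (hypothetical)

def predicted_temp(server, extra_util):
    """Predicted temperature if the task's utilisation were added."""
    return AMBIENT_C + HEAT_PER_UTIL * min(1.0, server["util"] + extra_util)

def thermal_aware_place(task_util, servers, temp_cap=45.0):
    """Pick the feasible server with the lowest predicted temperature."""
    candidates = [(predicted_temp(s, task_util), s) for s in servers]
    candidates = [(t, s) for t, s in candidates if t <= temp_cap]
    if not candidates:
        raise RuntimeError("no server stays under the thermal cap")
    temp, coolest = min(candidates, key=lambda pair: pair[0])
    coolest["util"] += task_util
    return coolest["name"], round(temp, 1)

servers = [{"name": "s1", "util": 0.7}, {"name": "s2", "util": 0.3}]
print(thermal_aware_place(0.2, servers))   # -> ('s2', 35.0)
```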
ENERGY-EFFICIENT TASK SCHEDULING ALGORITHMS FOR CLOUD DATA CENTERS
Cloud computing is a modern technology built on a network of systems that form a cloud. Energy conservation is one of the major concerns in cloud computing. A large amount of energy is wasted by computers and other devices, and carbon dioxide is released into the atmosphere, polluting the environment. Green computing is an emerging discipline that focuses on preserving the environment by reducing various kinds of pollution, including excessive greenhouse gas emissions and the disposal of e-waste, which contribute to the greenhouse effect. Pollution therefore needs to be reduced by lowering energy usage, but resource utilization should not suffer as a result: maximum resource utilization should still be achievable with less energy. For this purpose, many green task scheduling algorithms are used to minimize the energy consumption of servers in cloud data centers. In this paper, the ESF-ES algorithm is developed, which focuses on minimizing energy consumption by minimizing the number of servers used. It is compared with hybrid algorithms and the most-efficient-server-first scheme.
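The "fewest servers" idea behind ESF-ES can be illustrated, under stated assumptions, with a first-fit-decreasing packing onto the most power-efficient servers. The capacities and wattages below are hypothetical, and the code is not the ESF-ES algorithm itself.

```python
# Illustrative sketch of the "fewest servers" idea: order servers by an
# efficiency metric (watts per unit of capacity) and first-fit tasks onto
# already-active servers before powering on another. Not the ESF-ES
# algorithm itself; capacities and power figures are hypothetical.

def pack_tasks(tasks, servers):
    """First-fit-decreasing packing onto the most efficient servers."""
    servers = sorted(servers, key=lambda s: s["power_w"] / s["capacity"])
    active = []                                    # servers switched on so far
    for demand in sorted(tasks, reverse=True):
        target = next((s for s in active if s["free"] >= demand), None)
        if target is None:                         # power on the next server
            target = next(s for s in servers if s not in active)
            target["free"] = target["capacity"]
            active.append(target)
        target["free"] -= demand
    return [s["name"] for s in active]

servers = [{"name": "big",   "capacity": 16, "power_w": 200},
           {"name": "small", "capacity": 8,  "power_w": 120}]
tasks = [6, 4, 4, 3, 2]
print(pack_tasks(tasks, servers))   # -> ['big', 'small']
```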
IJERT-Efficient Task Scheduling for Cloud Computing
International Journal of Engineering Research and Technology (IJERT), 2014
https://www.ijert.org/efficient-task-scheduling-for-cloud-computing https://www.ijert.org/research/efficient-task-scheduling-for-cloud-computing-IJERTV3IS061073.pdf
Cloud Computing is a computing platform for the next generation of the Internet. However, data centres hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs and to the environmental carbon footprint. Therefore, we need Green Cloud computing solutions [1] that not only minimize operational costs but also reduce the environmental impact. Scheduling is one of the aspects that facilitate Green computing. Scheduling refers to the appropriate assignment of tasks to the available resources, such as CPU, memory and storage, so that resource utilization is maximized. Among the various issues concerning a cloud, the scheduling of users' jobs plays a very important role in determining the quality of service the cloud provides to its customers. An effective scheduling policy is a necessity for quality of service as well as for providers. We have validated our approach by conducting a performance evaluation study with experiments using the CloudSim toolkit. The results demonstrate that the proposed scheduling for a cloud computing model has immense potential to improve energy efficiency under workload.
Journal of Computer and Knowledge Engineering, 2022
In cloud computing, task scheduling is one of the most important issues that need to be considered for enhancing system performance and user satisfaction. Although there are many task scheduling strategies, available algorithms mainly focus on reducing the execution time while ignoring the profits of service providers. In order to improve provider profitability as well as meet the user requirements, tasks should be executed with minimal cost and without violating Quality of Service (QoS) restrictions. This study presents a Cost and Energy-aware Task Scheduling Algorithm (CETSA) intending to reduce makespan, energy consumption, and cost. The proposed algorithm considers the trade-off between cost, energy consumption, and makespan while considering the load on each virtual machine to prevent virtual machines from overloading. Experimental results with CloudSim show that the CETSA algorithm has better results in terms of energy consumption, waiting time, success rate, cost, improvement ratio, and degree of imbalance compared with MSDE, CPSO, CJS, and FUGE.
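A hedged sketch in the spirit of CETSA's trade-off is shown below: a weighted score over finish time, energy and cost, with a simple per-VM overload guard. The weights, prices and power figures are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a cost/energy/makespan trade-off with an overload guard,
# in the spirit of (but not identical to) the CETSA idea. All prices, power
# values and the weighting are illustrative assumptions.

def pick_vm(task_mi, vms, w=(0.4, 0.3, 0.3), max_queue_s=50.0):
    """Score = w1*finish_time + w2*energy + w3*cost; skip overloaded VMs."""
    w_time, w_energy, w_cost = w
    best, best_score = None, float("inf")
    for vm in vms:
        if vm["queue_s"] > max_queue_s:            # overload guard
            continue
        runtime = task_mi / vm["mips"]
        finish = vm["queue_s"] + runtime
        energy = vm["power_w"] * runtime
        cost = vm["price_per_s"] * runtime
        score = w_time * finish + w_energy * energy + w_cost * cost
        if score < best_score:
            best, best_score = vm, score
    if best is not None:
        best["queue_s"] += task_mi / best["mips"]
    return best["name"] if best else None

vms = [{"name": "vm1", "mips": 1000, "power_w": 100, "price_per_s": 0.02, "queue_s": 10.0},
       {"name": "vm2", "mips": 2500, "power_w": 180, "price_per_s": 0.05, "queue_s": 60.0}]
print(pick_vm(8000, vms))   # vm2 is skipped as overloaded -> 'vm1'
```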
Energy Efficient Task Scheduling in Cloud Data Center
International Journal of Distributed and Cloud Computing, 2018
Cloud computing is emerging as a necessity for the IT industry in order to reduce the setup and operational costs of its infrastructure. There is a huge requirement for computing resources to satisfy customer demands, and a minute delay in a service may result in a measurable loss for an organization. Response time is a major metric for evaluating the performance of cloud applications. Cloud data centers form the backbone of cloud computing and consume enormous amounts of energy. Server racks hold processing units, storage and network interfaces, and energy is dissipated at the server racks and the cooling units. Various task scheduling and virtual machine scheduling algorithms have been proposed to address the loss in performance, but energy loss is kept at the lowest priority. This paper focuses on two techniques that maintain a scheduled routine for tasks arriving at a data center, examined through a simulation scenario. VM-level scheduling assigns tasks to single or multiple virtual machines. The two techniques, time-shared and space-shared scheduling, are also compared to give the reader a clear view of the situations in which each is used. Future work is discussed in the same context.
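The difference between the two policies discussed above can be sketched with a simplified model. This is not CloudSim's implementation; the time-shared variant assumes all tasks share the CPU for their entire run, and the numbers are illustrative.

```python
# Rough sketch of the two policies compared in the paper above.
# Space-shared: tasks run one after another on a core at full speed.
# Time-shared: concurrent tasks split the core, so each runs proportionally
# slower. Simplified model, not CloudSim's implementation.

def space_shared_finish_times(lengths_mi, mips):
    finish, t = [], 0.0
    for mi in lengths_mi:            # sequential execution
        t += mi / mips
        finish.append(round(t, 2))
    return finish

def time_shared_finish_times(lengths_mi, mips):
    # Simplification: assumes all n tasks share the CPU equally for
    # their entire duration (short tasks do not free capacity early).
    n = len(lengths_mi)
    return [round(mi * n / mips, 2) for mi in lengths_mi]

lengths = [1000, 1000, 2000]         # task lengths in MI
print("space-shared:", space_shared_finish_times(lengths, mips=1000))
print("time-shared: ", time_shared_finish_times(lengths, mips=1000))
```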
An Energy-aware Real-time Task Scheduling Approach in a Cloud Computing Environment
Journal of AI and Data Mining, 2021
Interest in cloud computing has grown considerably over recent years, primarily due to scalable virtualized resources. As a result, cloud computing has contributed to the advancement of real-time applications such as signal processing, environment surveillance and weather forecasting, where time and energy considerations for performing the tasks are critical. In real-time applications, missing task deadlines can cause catastrophic consequences; thus, real-time task scheduling in a cloud computing environment is an important and essential issue. Furthermore, energy saving in cloud data centers, with benefits such as reduced operating costs and environmental protection, is an important concern that has received attention in recent years and can be addressed with appropriate task scheduling. In this paper, we present an energy-aware task scheduling approach, namely EaRTs, for real-time applications. We employ virtualization and consolidation techniques subject to minimizing the ...
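As a rough illustration of deadline-aware, energy-conscious placement (not the EaRTs algorithm itself), the sketch below takes tasks in Earliest-Deadline-First order and consolidates them onto already-active VMs whenever the deadline still holds. Task sizes, deadlines and VM speeds are invented.

```python
# Illustrative sketch of deadline-aware, energy-conscious placement (not the
# EaRTs algorithm itself): tasks are taken in Earliest-Deadline-First order
# and consolidated onto already-active VMs whenever the deadline still holds.

def edf_schedule(tasks, vms, now=0.0):
    """tasks: list of (length_mi, deadline_s); vms: dicts with 'name', 'mips'."""
    for vm in vms:
        vm.setdefault("busy_until", now)
        vm.setdefault("active", False)
    plan, rejected = [], []
    for length, deadline in sorted(tasks, key=lambda t: t[1]):   # EDF order
        # Prefer active VMs (consolidation) before powering on others.
        candidates = sorted(vms, key=lambda v: not v["active"])
        chosen = None
        for vm in candidates:
            finish = max(vm["busy_until"], now) + length / vm["mips"]
            if finish <= deadline:
                chosen, chosen_finish = vm, finish
                break
        if chosen is None:
            rejected.append((length, deadline))     # deadline cannot be met
            continue
        chosen["busy_until"] = chosen_finish
        chosen["active"] = True
        plan.append((length, deadline, chosen["name"]))
    return plan, rejected

vms = [{"name": "vmA", "mips": 1000}, {"name": "vmB", "mips": 1000}]
tasks = [(3000, 4.0), (2000, 5.0), (1000, 5.5)]
print(edf_schedule(tasks, vms))
```

In this toy run, the first two tasks are consolidated onto vmA and the third is placed on vmB only because its deadline cannot be met otherwise, which mirrors the consolidation-first intuition.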
Cloud computing is a technology that provides a platform for sharing resources such as software, infrastructure, applications and other information. It brings a revolution to the Information Technology industry by offering resources on demand. Clouds are basically virtualized data centers and applications offered as services. A data center hosts hundreds or thousands of servers comprising software and hardware that respond to client requests, and a large amount of energy is required to perform these operations. Cloud computing faces many challenges, such as data security, energy consumption and server consolidation. This research work focuses on the study of task scheduling management in a cloud environment. The main goal is to improve performance (resource utilization) and reduce energy consumption in data centers. Energy-efficient scheduling of workloads helps to reduce the energy consumed in data centers and thus improves resource usage, which further reduces operational costs and benefits both clients and cloud service providers. In this paper, task scheduling approaches for data centers are compared. CloudSim, a toolkit for modeling and simulating cloud computing environments, has been used to implement and demonstrate the experimental results. The results analyze the energy consumed in data centers and show that cloud productivity can be improved by reducing energy consumption.
EATS: Energy-Aware Tasks Scheduling in Cloud Computing Systems
Procedia Computer Science, 2016
The increasing cost of power consumption in data centers, and the corresponding environmental threats, have raised a growing demand for energy-efficient computing. Despite its importance, little work has been done on models that manage this consumption efficiently. With the growing use of Cloud Computing, the issue becomes crucial. In Cloud Computing, services run in a data center on a set of clusters managed by the Cloud computing environment, and are provided in the form of Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). The amount of energy consumed by underutilized and overloaded computing systems can be substantial. Therefore, scheduling algorithms need to take the Cloud's power consumption into account for energy-efficient resource utilization. On the other hand, Cloud computing is seen as crucial for high performance computing, for instance for Big Data processing, and this should not be compromised much for the sake of reducing energy consumption. In this work, we derive an energy-aware tasks scheduling (EATS) model, which divides and schedules big data workloads in the Cloud. The main goal of EATS is to increase application efficiency and reduce the energy consumption of the underlying resources. The power consumption of a computing server was measured under different working load conditions. Experiments show that the ratio of energy consumption at peak performance compared to an idle state is 1.3, which shows that resources must be utilized correctly without sacrificing performance. The results of the proposed approach are very promising and encouraging; hence, the adoption of such strategies by cloud providers results in energy savings for data centers.
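A small worked sketch of the kind of energy accounting this abstract implies is given below. Only the 1.3 peak-to-idle ratio comes from the text; the idle power value and the linear utilisation model are assumptions for illustration.

```python
# Worked sketch of a simple server energy model. Only the 1.3 peak-to-idle
# ratio comes from the abstract; the idle power value and the linear
# utilisation model are assumptions for illustration.

P_IDLE_W = 100.0                  # assumed idle power of one server (watts)
PEAK_TO_IDLE = 1.3                # ratio reported in the abstract
P_PEAK_W = P_IDLE_W * PEAK_TO_IDLE

def energy_joules(utilisation, seconds):
    """Linear power model: idle power plus a utilisation-proportional share
    of the gap between peak and idle power."""
    power = P_IDLE_W + (P_PEAK_W - P_IDLE_W) * utilisation
    return power * seconds

# An idle hour vs. a fully loaded hour: the loaded hour costs only 30% more
# energy, which is why leaving servers idle or underutilised is wasteful.
print("idle hour:  ", energy_joules(0.0, 3600) / 1000, "kJ")
print("loaded hour:", energy_joules(1.0, 3600) / 1000, "kJ")
```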