Reducing Resource Over-Provisioning Using Workload Shaping for Energy Efficient Cloud Computing

Woonsup and Jaha Mvulla

We present an approach that uses workload shaping to remove hardware over-provisioning, in terms of energy consumption, by implementing task buffers and a scheduler. Task buffers reorder tasks with various priorities and route them to appropriate virtual machines. The scheduler monitors the task-buffering and hardware load status and decides the optimal number of active physical and virtual machines. In addition, we designed a mechanism wherein fast-executing tasks are routed to fast, high-energy-consumption machines and slow tasks to slow, low-energy-consumption machines. As a result, our approach can efficiently shape workloads and manage the optimal number of active virtual machines and physical machines in terms of energy consumption. To evaluate our approach, we generated synthetic workload data and evaluated it in both a simulated and an actual cloud environment. Our experimental results demonstrate that our approach consumes less energy than the same system without workload shaping.
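The buffer-and-route mechanism this abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the priority ordering, the runtime threshold, and the two machine-pool names are all assumptions made for the example.

```python
# Hedged sketch of the fast/slow routing rule described above: tasks are
# buffered, ordered by priority, and dispatched so that short tasks land on
# fast (power-hungry) machines and long tasks on slow (frugal) machines.
# The threshold and pool names are illustrative assumptions.

import heapq

FAST_POOL, SLOW_POOL = "fast-vm-pool", "slow-vm-pool"
RUNTIME_THRESHOLD = 1.0   # seconds; assumed cutoff between "fast" and "slow"

def shape_and_route(tasks):
    """tasks: list of (priority, est_runtime_s, name) tuples.
    Returns the dispatch order with the target pool for each task."""
    buffer = list(tasks)
    heapq.heapify(buffer)            # reorder by priority (lowest value first)
    routed = []
    while buffer:
        _priority, runtime, name = heapq.heappop(buffer)
        pool = FAST_POOL if runtime <= RUNTIME_THRESHOLD else SLOW_POOL
        routed.append((name, pool))
    return routed

tasks = [(2, 0.2, "thumbnail"), (1, 5.0, "batch-report"), (3, 0.5, "ping")]
print(shape_and_route(tasks))
```

The scheduler component of the paper would additionally resize the two pools based on buffer depth and hardware load; that feedback loop is omitted here.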

EATSVM: Energy-Aware Task Scheduling on Cloud Virtual Machines

Procedia Computer Science, 2018

The rapid, pervasive adoption of cloud computing services and applications exacerbates problems in the underlying data centers, such as carbon footprint and operational cost, caused by energy consumption. Various hardware-centric and software-centric approaches have been proposed in the literature to reduce the energy consumption of cloud data centers. Task scheduling algorithms are software-centric approaches to reducing energy consumption in cloud computing systems. The majority of these algorithms focus on server consolidation, which leaves idle servers that limit the achievable energy savings. In this paper, we propose an Energy-Aware Task Scheduling algorithm on cloud Virtual Machines (EATSVM) that assigns a task to the VM where the increase in energy consumption is the least, considering both active and idle VMs. While assigning a new task to a VM, the algorithm also takes into account the increase in energy consumption of the tasks already running on that VM due to the increase in their execution time. We analyze the performance of our algorithm in a heterogeneous cloud environment with an increasing number of tasks and compare its energy savings with those of the Energy Conscious Task Consolidation (ECTC) algorithm. Our experimental results demonstrate that EATSVM achieves energy savings in a heterogeneous cloud-computing environment.
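The core selection rule of EATSVM, picking the VM whose total energy increases the least when the task is added, can be sketched as below. The linear power model, the VM parameters, and the queue-draining energy estimate are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of an EATSVM-style assignment rule: choose the VM with the
# smallest energy increase, which implicitly accounts for lengthening the
# execution of already-queued work. All figures are invented for the example.

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    power_active: float      # watts while busy (assumed)
    power_idle: float        # watts while idle (assumed)
    speed: float             # work units per second
    load: float = 0.0        # outstanding work units

    def energy(self, extra_work: float = 0.0) -> float:
        # Energy to drain the queue: active power over the busy period.
        return self.power_active * (self.load + extra_work) / self.speed

def eatsvm_assign(vms: list[VM], task_work: float) -> VM:
    """Assign the task to the VM whose total energy increases the least,
    considering both active (loaded) and idle (empty) VMs."""
    best = min(vms, key=lambda vm: vm.energy(task_work) - vm.energy())
    best.load += task_work
    return best

vms = [VM("fast", power_active=200, power_idle=60, speed=10),
       VM("slow", power_active=90, power_idle=30, speed=4)]
print(eatsvm_assign(vms, task_work=20).name)
```

Note that with these numbers the fast VM wins despite its higher wattage, because it drains the work quickly enough that its energy delta (400 J) is below the slow VM's (450 J); this is exactly the kind of trade-off a least-energy-increase rule captures.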

EATS: Energy-Aware Tasks Scheduling in Cloud Computing Systems

Procedia Computer Science, 2016

The increasing cost of power consumption in data centers, and the corresponding environmental threats, have raised a growing demand for energy-efficient computing. Despite its importance, little work has been done on models to manage consumption efficiently. With the growing use of cloud computing, this issue becomes crucial. In cloud computing, services run in a data center on a set of clusters managed by the cloud environment, and are provided in the form of Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). The amount of energy consumed by underutilized and overloaded computing systems can be substantial. Therefore, scheduling algorithms need to take the power consumption of the cloud into account for energy-efficient resource utilization. On the other hand, cloud computing is seen as crucial for high-performance computing, for instance for big data processing, and performance should not be much compromised for the sake of reducing energy consumption. In this work, we derive an energy-aware tasks scheduling (EATS) model, which divides and schedules big data workloads in the cloud. The main goal of EATS is to increase application efficiency and reduce the energy consumption of the underlying resources. The power consumption of a computing server was measured under different workload conditions. Experiments show that the ratio of energy consumption at peak performance compared to an idle state is 1.3. This shows that resources must be utilized well without sacrificing performance. The results of the proposed approach are very promising and encouraging; the adoption of such strategies by cloud providers would result in energy savings for data centers.
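The significance of the 1.3 peak-to-idle ratio can be made concrete with a little arithmetic. The linear power model below is a common assumption in the energy-aware scheduling literature, not necessarily the paper's measured curve, and the idle wattage is invented for the example.

```python
# Illustrative arithmetic for the peak-to-idle energy ratio of 1.3
# reported above. The linear interpolation between idle and peak power
# and the 100 W idle figure are assumptions for this sketch.

def power(p_idle: float, p_peak: float, utilization: float) -> float:
    """Linear model: idle floor plus a utilization-proportional share."""
    return p_idle + (p_peak - p_idle) * utilization

P_IDLE = 100.0            # watts, assumed
P_PEAK = 1.3 * P_IDLE     # the 1.3 peak-to-idle ratio from the abstract

# An idle server already draws ~77% of peak power, which is why
# consolidating tasks onto fewer busy servers saves energy overall.
print(round(P_IDLE / P_PEAK, 2))
print(round(power(P_IDLE, P_PEAK, 0.5), 1))
```

With such a high idle floor, leaving a server on but underutilized wastes most of its peak power budget, which is the abstract's point about utilizing resources correctly without sacrificing performance.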

Towards Energy Efficient Orchestration of Cloud Computing Infrastructure

Advances in Intelligent Systems and Computing, 2018

The emergence of new cloud services and applications demanding ever more performance (on one hand, the rapid growth of applications using deep learning (DL); on the other, HPC-oriented workflows executed in the cloud) is continuously putting pressure on cloud providers to increase the capabilities of their large data centers by embracing more advanced and heterogeneous devices [2, 3, 11]. Hardware heterogeneity also helps cloud providers improve the energy efficiency of their infrastructures by using architectures dedicated to specific workloads. However, heterogeneity represents a challenge from the infrastructure-management perspective. In this highly dynamic context, workload orchestration requires advanced algorithms so as not to defeat the efficiency provided by the hardware layer. Although past works have partially addressed the problem, a comprehensive solution is still missing. This paper presents the solution studied within the European H2020 project OPERA [1]. Our approach manages the workload in large infrastructures running heterogeneous systems using a two-step approach. Whenever new jobs are submitted, an energy-aware allocation policy selects the most efficient nodes on which to execute the incoming jobs. In a second step, the whole workload is consolidated by optimizing a cost model. This paper focuses on an allocation algorithm aimed at reducing overall energy consumption; it also presents the results of simulations on a state-of-the-art framework. When compared with well-known and broadly adopted allocation strategies, the proposed approach yields tangible energy savings (up to 30% compared to the First Fit allocation policy, and up to 45.2% compared to Best Fit), demonstrating its superior energy efficiency.
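The contrast with First Fit that this abstract draws can be illustrated on a toy heterogeneous cluster. The node parameters and marginal-power figures below are invented; the OPERA cost model is far richer than this sketch.

```python
# Minimal sketch contrasting First Fit with an energy-aware allocation
# policy on heterogeneous nodes. Node capacities and watts-per-job are
# illustrative assumptions, not values from the paper.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: int            # job slots
    used: int
    watts_per_job: float     # marginal power of one job on this node

def make_nodes():
    # First Fit sees "cpu" first; "accel" is the more efficient device.
    return [Node("cpu", 4, 0, 50.0), Node("accel", 4, 0, 20.0)]

def allocate(nodes, n_jobs, policy):
    """Place jobs one at a time with the given node-selection policy and
    return the total marginal power drawn by the placed jobs."""
    total = 0.0
    for _ in range(n_jobs):
        free = [n for n in nodes if n.used < n.capacity]
        node = policy(free)
        node.used += 1
        total += node.watts_per_job
    return total

first_fit = lambda free: free[0]
energy_aware = lambda free: min(free, key=lambda n: n.watts_per_job)

print(allocate(make_nodes(), 6, first_fit))     # 240.0
print(allocate(make_nodes(), 6, energy_aware))  # 180.0
```

Even in this toy setting the energy-aware policy saves 25% simply by preferring the efficient accelerator node while slots remain, which is the intuition behind the larger savings the paper reports.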

Real-Time Tasks Oriented Energy-Aware Scheduling in Virtualized Clouds

Energy conservation is a major concern in cloud computing systems because it brings several important benefits, such as reducing operating costs, increasing system reliability, and promoting environmental protection. Power-aware scheduling is a promising way to achieve that goal. At the same time, many real-time applications, e.g., signal processing and scientific computing, have been deployed in clouds. Unfortunately, existing energy-aware scheduling algorithms developed for clouds are not real-time task oriented and thus cannot guarantee system schedulability. To address this issue, we first propose a novel rolling-horizon scheduling architecture for real-time task scheduling in virtualized clouds. A task-oriented energy consumption model is then given and analyzed. Based on our scheduling architecture, we develop a novel energy-aware scheduling algorithm named EARH for real-time, aperiodic, independent tasks. EARH employs a rolling-horizon optimization policy and can also be extended to integrate other energy-aware scheduling algorithms. Furthermore, we propose two strategies, resource scaling up and scaling down, to strike a good trade-off between tasks' schedulability and energy conservation. Extensive simulation experiments, injecting random synthetic tasks as well as tasks following the latest version of the Google cloud tracelogs, are conducted to validate the superiority of EARH over several baselines. The experimental results show that EARH significantly improves on the scheduling quality of the baselines and is suitable for real-time task scheduling in virtualized clouds.

Fast and Energy-Aware Resource Provisioning and Task Scheduling for Cloud Systems

International Symposium on Quality Electronic Design (ISQED), 2017

Cloud computing has become an attractive computing paradigm in recent years, offering on-demand computing resources to users worldwide. Through Virtual Machine (VM) technologies, cloud service providers (CSPs) can provide users with infrastructure, platform, and software at quite low cost. With the drastically growing number of data centers, energy efficiency has drawn global attention as CSPs face high data-center energy costs. Many previous works have contributed to improving energy efficiency in data centers; however, their computational complexity may lead to unacceptable run time. In this paper, we propose a fast and energy-aware resource provisioning and task scheduling algorithm that achieves low energy cost with reduced computational complexity for CSPs. Our iterative algorithm divides provisioning and scheduling into multiple steps, which effectively reduces complexity and minimizes run time while achieving a reasonable energy cost. Experimental results demonstrate that, compared to the baseline algorithm, the proposed algorithm achieves up to 79.94% runtime improvement with an acceptable increase in energy cost.

EPOBF: Energy Efficient Allocation of Virtual Machines in High Performance Computing Cloud

Lecture Notes in Computer Science, 2014

Cloud computing has become popular for provisioning computing resources under the virtual machine (VM) abstraction for high performance computing (HPC) users to run their applications; an HPC cloud is such a cloud computing environment. One of the challenges of energy-efficient resource allocation for VMs in an HPC cloud is the trade-off between minimizing the total energy consumption of physical machines (PMs) and satisfying Quality of Service (e.g., performance). On one hand, cloud providers want to maximize their profit by reducing power cost (e.g., by using the smallest number of running PMs). On the other hand, cloud customers (users) want the highest performance for their applications. In this paper, we focus on the scenario in which the scheduler has no global information about future user jobs or user applications. Users request short-term resources at fixed start times and with non-interrupted durations. We then propose a new allocation heuristic, Energy-aware and Performance-per-watt oriented Best-fit (EPOBF), that uses a performance-per-watt metric (e.g., maximum MIPS/Watt) to choose the most energy-efficient PM for mapping each VM. Using information from Feitelson's Parallel Workload Archive to model HPC jobs, we compare the proposed EPOBF to state-of-the-art heuristics on heterogeneous PMs (each PM has a multicore CPU). Simulations show that EPOBF can significantly reduce total energy consumption in comparison with state-of-the-art allocation heuristics.

A Survey on Energetic Service Allocation using Virtual Machines for Cloud Computing Domain

International Journal of Engineering Research and Technology (IJERT), 2014

https://www.ijert.org/a-survey-on-energetic-service-allocation-using-virtual-machines-for-cloud-computing-domain
https://www.ijert.org/research/a-survey-on-energetic-service-allocation-using-virtual-machines-for-cloud-computing-domain-IJERTV3IS070171.pdf

Cloud users may scale their resource needs up or down with the cloud provider depending on their requirements. In a cloud environment, preventing the overloading of virtual machines on physical machines while making efficient use of resources becomes a major problem. In this paper we use virtualization to allocate resources to different business users depending on their needs, i.e., to map virtual machines to physical machines according to their resource requirements. To avoid overloading, we use a load prediction algorithm and a green computing technique that minimizes the number of physical machines used, as long as they can satisfy resource needs, making efficient use of PMs.

Energy-Aware Scheduling Framework for resource allocation in a virtualized cloud data centre

International Journal of Engineering and Technology, 2017

The cloud paradigm is an emerging computing model that stresses proficient utilization of computing resources. Data centers that host and serve cloud applications ingest enormous amounts of energy, leading to massive emission of carbon into the atmosphere and high operational expenditure. Consequently, there is a need to establish synergy between data centre resources for optimum utilization, and strategies need to be devised that can considerably reduce energy consumption in cloud data centers. This paper elucidates an architectural framework for computing the energy spent in scheduling resources on hosts. The framework has been implemented for bin-packing techniques and explicates details about the broker components involved in the scheduling process.

Keywords: Cloud Computing, Energy Consumption, VM Migration, Resource Scheduling.

I. INTRODUCTION

Contemporary resource-intensive enterprises [1,2] have engendered demand for high-performance computing infrastructures. The proliferation of IT services used by a diverse range of cloud users has led to the construction of large-scale, energy-hungry data centers that provide computing services. Despite improvements in energy consumption models, service providers are confronted with the challenges of reducing energy consumption and CO2 emission. The rationale [3] behind the explosion of energy emission is the increase in computer usage driven by the growing number of IT practitioners; as an upshot, the size of data centres has increased. Moreover, exploiting energy-aware resource provisioning to its fullest extent can provide a solution to these issues. Service virtualization and consolidation [4] are inherent practices that can lead to energy-efficient datacenter architectures, and they have effectually led to efficient resource utilization.

VM provisioning can be viewed as a multidimensional bin-packing problem comprising variable bin configurations and cost parameters. Virtualization embraces server consolidation and VM live migration, techniques that have been validated as effective in drastically reducing energy consumption in high-performance cloud datacenters. However, I/O virtualization introduces performance degradation from the overheads encountered in VM migrations, which needs to be addressed urgently. The paper is organized as a literature review followed by the research focus; Section IV elucidates the architectural framework, followed by a working prototype; the last section presents conclusions and future work.

II. LITERATURE REVIEW

The research work presented in [5] explores a method to manage data-intensive distributed programming paradigms (like MapReduce and Dryad) that help practitioners effortlessly parallelize the processing of huge data sets. Deployment of such data-intensive computing infrastructures is a significant concern due to rising cost. The work dynamically adjusts the size of resource allocations to precisely suit the parallelized tasks, with the objective of matching the hardware configuration to task requirements. This allows the system to amortize the cost of idle server power across large workloads, maximizing energy efficiency and throughput. In the research work of [6], the technique of task activity vectors is explored and deployed to characterize applications by resource utilization. On top of it, migration and co-scheduling policies are applied to improve performance and energy efficiency by combining applications that use complementary resources, and frequency scaling is performed in situations of contention caused by unfavorable workloads. The experimentation was performed on a KVM virtualization environment and the Linux operating system kernel. The results provide evidence that resource-conscious scheduling can significantly reduce the energy-delay product.

Task Scheduling and Server Provisioning for Energy-Efficient Cloud-Computing Data Centers

2013 IEEE 33rd International Conference on Distributed Computing Systems Workshops, 2013

In this paper, we present an optimization model for task scheduling that minimizes energy consumption in cloud-computing data centers. The proposed approach is formulated as an integer programming problem that minimizes data center energy consumption by scheduling tasks onto a minimum number of servers while keeping task response-time constraints. We prove that the average task response time and the number of active servers needed to meet the time constraints are bounded under a greedy task-scheduling scheme. In addition, we propose the most-efficient-server-first task-scheduling scheme as a practical scheme to minimize energy expenditure. We model and simulate the proposed scheduling scheme for a data center with heterogeneous tasks. The simulation results show that the proposed task-scheduling scheme reduces server energy consumption on average by over 70 times compared to the energy consumed under a (non-optimized) random task-scheduling scheme. We show that energy savings are achieved by minimizing the allocated number of servers.
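A most-efficient-server-first policy of the kind named above can be sketched greedily. This is an illustrative simplification under assumed per-server figures; the paper's integer-programming formulation additionally enforces response-time constraints that this sketch omits.

```python
# Hedged sketch of a most-efficient-server-first policy: rank servers by
# energy cost per task and fill the cheapest server to capacity before
# activating the next. Capacities and joules-per-task are invented.

def schedule(servers, n_tasks):
    """servers: list of (name, capacity, joules_per_task) tuples.
    Returns (assignment, total_energy) for a greedy efficiency-first fill."""
    ranked = sorted(servers, key=lambda s: s[2])   # most efficient first
    assignment, energy = {}, 0.0
    for name, capacity, joules_per_task in ranked:
        take = min(capacity, n_tasks)
        if take:
            assignment[name] = take
            energy += take * joules_per_task
            n_tasks -= take
        if n_tasks == 0:
            break                                  # remaining servers stay off
    return assignment, energy

servers = [("s1", 10, 5.0), ("s2", 10, 2.0), ("s3", 10, 9.0)]
plan, joules = schedule(servers, 15)
print(plan, joules)   # s2 filled first; remainder spills onto s1; s3 never wakes
```

Leaving the least efficient server unallocated is the mechanism behind the abstract's claim that savings come from minimizing the number of active servers.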