Intelligent Decision-Making of Load Balancing Using Deep Reinforcement Learning and Parallel PSO in Cloud Environment

Reinforcement Learning Approach for Optimizing Cloud Resource Utilization With Load Balancing

IEEE Access

Cloud computing is a technology that enables the delivery of various computing services over the Internet. Resource Scheduling (RS) and Load Balancing (LB) mechanisms are essential for the cloud to provide consistent results. Tasks submitted by users are computed on the cloud platform using its Virtual Machines (VMs). An ideal LB mechanism ensures that no VM is overloaded or idle. This research paper focuses on the LB mechanism by experimenting in the WorkflowSim environment and computing tasks from the Sipht task dataset. The RS algorithms First Come First Serve (FCFS), Maximum-Minimum (Max-Min), Minimum Completion Time (MCT), Minimum-Minimum (Min-Min), and Round-Robin (RR) are used to balance the computational load of the VMs. The experiment was conducted in four phases, with the task length of the Sipht dataset varying in each phase. Each phase included sixteen scenarios, each differing from the others in the number of VMs used. The final results show that the load balanced by FCFS, Max-Min, MCT, Min-Min, and RR was 51.98%, 41.71%, 51.98%, 59.43%, and 52.17%, respectively, across all four phases. Lastly, a Reinforcement Learning (RL) model is suggested to add intelligence to the LB mechanism and to optimize cloud resource utilization using these RS algorithms, providing the best Quality of Service (QoS).
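As a concrete illustration of the heuristics compared above, the sketch below implements the Min-Min rule in Python: at every step, the unscheduled task with the smallest minimum completion time across all VMs is assigned to the VM that achieves it. This is an illustrative reconstruction, not the paper's WorkflowSim (Java) setup; the task lengths and VM speeds are made-up values.

```python
def min_min_schedule(task_lengths, vm_mips):
    """Min-Min heuristic: returns (assignments, makespan).

    task_lengths: task sizes in million instructions (MI)
    vm_mips: VM speeds in million instructions per second (MIPS)
    """
    ready = [0.0] * len(vm_mips)          # time at which each VM becomes free
    unscheduled = set(range(len(task_lengths)))
    schedule = []
    while unscheduled:
        best = None                        # (completion_time, task, vm)
        for t in unscheduled:
            for v, mips in enumerate(vm_mips):
                ct = ready[v] + task_lengths[t] / mips
                if best is None or ct < best[0]:
                    best = (ct, t, v)
        ct, t, v = best
        ready[v] = ct                      # VM v is busy until ct
        unscheduled.remove(t)
        schedule.append((t, v))
    return schedule, max(ready)

assignments, makespan = min_min_schedule([400, 1200, 250, 900], [500, 1000])
print(assignments, round(makespan, 3))
```

Max-Min differs only in the outer selection (pick the task with the *largest* minimum completion time first), which is why the two often bracket each other in the results reported above.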

Load Balancing Optimization Based on Deep Learning Approach in Cloud Environment

International Journal of Information Technology and Computer Science

Load balancing is a significant aspect of cloud computing, essential for even load sharing among resources such as servers, network interfaces, hard drives (storage), and virtual machines (VMs) hosted on physical servers. In cloud computing, Deep Learning (DL) techniques can be used to achieve QoS goals such as improved resource utilization and throughput, while reducing latency, response time, and cost, balancing load across machines and thus increasing system reliability. DL enables effective and accurate decision-making for intelligent resource allocation, choosing the most suitable resource to complete each incoming request. However, previous research on load balancing has made limited use of DL approaches. In this paper, the significance of DL approaches in the area of cloud computing is analysed. A framework for workflow execution in the cloud environment, namely Deep Learning-based Deadline-constrained, Dynamic VM Provisioning and Load Balancing (DLD-PLB), is proposed and implemented. An optimal schedule for VMs is generated using a deep learning based technique. The Genome workflow tasks are taken as input to the suggested framework. Makespan and cost are computed for the proposed framework and compared with our earlier framework for load balancing optimization, the Hybrid approach based Deadline-constrained, Dynamic VM Provisioning and Load Balancing (HDD-PLB) framework for workflow execution. The earlier approaches to load balancing were based on hybrid Predict-Earliest-Finish Time (PEFT) with ACO for underutilized-VM optimization and a hybrid PEFT-Bat approach to optimize the utilization of overloaded VMs.
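The core idea of deadline-constrained, learning-driven provisioning might be sketched as follows: a learned model predicts a task's runtime on each candidate VM type, and the scheduler provisions the cheapest VM whose predicted finish time still meets the deadline. This is a hypothetical, much-simplified reconstruction: a closed-form least-squares regressor stands in for the paper's deep network, and the synthetic data, the `vm_catalog` format, and the cost model are assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic training data: features = (task_length_MI, vm_mips)
X = rng.uniform([100, 250], [2000, 2000], size=(500, 2))
y = X[:, 0] / X[:, 1]                      # "true" runtime in seconds

# Fit the runtime predictor (least squares on simple features; the
# paper would train a deep network here instead).
feats = np.column_stack([X[:, 0], 1.0 / X[:, 1], X[:, 0] / X[:, 1]])
coef, *_ = np.linalg.lstsq(feats, y, rcond=None)

def predicted_runtime(length_mi, vm_mips):
    f = np.array([length_mi, 1.0 / vm_mips, length_mi / vm_mips])
    return float(f @ coef)

def provision_vm(length_mi, deadline_s, vm_catalog):
    """vm_catalog: list of (mips, cost_per_second) -- hypothetical format.
    Pick the cheapest VM whose predicted runtime meets the deadline;
    fall back to the fastest VM if none is feasible."""
    feasible = [(cost * predicted_runtime(length_mi, mips), mips)
                for mips, cost in vm_catalog
                if predicted_runtime(length_mi, mips) <= deadline_s]
    return min(feasible)[1] if feasible else max(vm_catalog)[0]

# A 1500-MI task with a 2 s deadline over three VM types:
print(provision_vm(1500, 2.0, [(500, 0.02), (1000, 0.05), (2000, 0.12)]))
```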

A load balancing and optimization strategy (LBOS) using reinforcement learning in fog computing environment

Fog computing (FC) can be considered a computing paradigm that performs Internet of Things (IoT) applications at the edge of the network. Recently, the great growth in data requests and FC has enhanced data accessibility and adaptability. However, FC is exposed to many challenges, such as load balancing (LB) and adaptation to failure. Many LB strategies have been proposed in cloud computing, but they have not yet been applied effectively in fog. LB is an important issue for achieving high resource utilization, avoiding bottlenecks, avoiding overload and underload, and reducing response time. In this paper, an LB and optimization strategy (LBOS) using a dynamic resource allocation method based on reinforcement learning and a genetic algorithm is proposed. LBOS continuously monitors the traffic in the network, collects information about each server's load, handles the incoming requests, and distributes them equally among the available servers using the dynamic resource allocation method. Hence, it sustains performance even at peak times. Accordingly, LBOS is simple and efficient for real-time systems in fog computing, as in the case of a healthcare system. LBOS is concerned with designing an IoT-Fog based healthcare system. The proposed IoT-Fog system consists of three layers, namely: (1) the IoT layer, (2) the fog layer, and (3) the cloud layer. Finally, experiments are carried out, and the results show that the proposed solution improves quality-of-service in the cloud/fog computing environment in terms of allocation cost and response time. Compared with the state-of-the-art algorithms, LBOS achieved the best load balancing level (85.71%). Hence, LBOS is an efficient way to improve resource utilization and ensure continuous service.
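To make the RL component concrete, here is a minimal tabular Q-learning sketch of the routing idea (the genetic-algorithm part of LBOS is omitted): the state is the currently least-loaded server, actions route the next request to a server, and the reward penalizes load imbalance. The server count, request sizes, drain rate, and hyperparameters are illustrative assumptions, not values from the paper.

```python
import random

N, ALPHA, GAMMA, EPS = 4, 0.1, 0.9, 0.1    # servers and hyperparameters
Q = [[0.0] * N for _ in range(N)]           # Q[state][action]
load = [0.0] * N
random.seed(1)

def state():
    # State abstraction: index of the currently least-loaded server.
    return min(range(N), key=lambda i: load[i])

s = state()
for _ in range(20_000):
    # epsilon-greedy routing decision for the next incoming request
    a = (random.randrange(N) if random.random() < EPS
         else max(range(N), key=lambda i: Q[s][i]))
    load[a] += random.uniform(0.5, 1.5)     # serve one request on server a
    r = -(max(load) - min(load))            # reward penalizes imbalance
    load[:] = [l * 0.99 for l in load]      # servers drain work over time
    s2 = state()
    Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
    s = s2

print("greedy action per state:",
      [max(range(N), key=lambda i: Q[st][i]) for st in range(N)])
```

After training, the greedy policy should route requests toward the least-loaded server, which is the behaviour the imbalance-penalizing reward encodes.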

Action-Based Load Balancing Technique in Cloud Network Using Actor-Critic-Swarm Optimization

Wireless Communications and Mobile Computing

The increasing scale of tasks in cloud networks leads to problems in load balancing and in improving its parameters. In this paper, we propose a hybrid scheduling policy that combines the Particle Swarm Optimization (PSO) algorithm and the actor-critic algorithm, named Hybrid Particle Swarm Optimization Actor-Critic (HPSOAC), to solve this issue. This hybrid scheduling policy helps each agent improve its individual learning as well as learn by exchanging information with other agents. An experiment is carried out with the help of a Python simulator using TensorFlow. The outcome shows that our proposed scheduling policy reduces energy consumption by 5.16% and 10.86%, reduces makespan time by 7.13% and 10.04%, and achieves marginally better resource utilization than the Deep Q-network (DQN) and Q-learning based on Modified Particle Swarm Optimization (QMPSO) algorithms, respectively.
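A loose sketch of how PSO and an actor-critic-style signal can be hybridized for task-to-VM scheduling is given below. This is not the paper's HPSOAC implementation: a running-average critic supplies an advantage signal that scales the social pull toward the swarm's best particle, while the paper's actual actor and critic networks are omitted; all task lengths, VM speeds, and coefficients are invented for illustration.

```python
import random

TASKS = [400, 1200, 250, 900, 700, 300]    # task lengths in MI (invented)
VMS = [500, 1000, 750]                     # VM speeds in MIPS (invented)
P, ITERS, W, C1, C2 = 10, 200, 0.7, 1.4, 1.4
random.seed(0)

def makespan(pos):
    # Decode a continuous position into task-to-VM assignments.
    busy = [0.0] * len(VMS)
    for t, x in enumerate(pos):
        v = int(x) % len(VMS)
        busy[v] += TASKS[t] / VMS[v]
    return max(busy)

pos = [[random.uniform(0, len(VMS)) for _ in TASKS] for _ in range(P)]
vel = [[0.0] * len(TASKS) for _ in range(P)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=makespan)[:]
baseline = makespan(gbest)                 # critic: running-average value

for _ in range(ITERS):
    for i in range(P):
        reward = -makespan(pos[i])
        advantage = reward + baseline      # > 0 when better than average
        baseline = 0.95 * baseline + 0.05 * (-reward)
        for d in range(len(TASKS)):
            # Advantage scales the "social" pull toward the global best.
            social = C2 * random.random() * max(advantage, 0.1)
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + social * (gbest[d] - pos[i][d]))
            pos[i][d] = min(max(pos[i][d] + vel[i][d], 0), len(VMS) - 1e-9)
        if makespan(pos[i]) < makespan(pbest[i]):
            pbest[i] = pos[i][:]
        if makespan(pos[i]) < makespan(gbest):
            gbest = pos[i][:]

print("best makespan:", round(makespan(gbest), 3))
```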

Deep Learning Based Load Balancing Using Multidimensional Queuing Load Optimization Algorithm for Cloud Environment

Cloud computing is becoming one of the most advanced and promising technologies of the present information technology era. It has also helped small and medium enterprises reduce costs by building on cloud provider services. Resource scheduling with load balancing is one of the primary and most important goals of the cloud computing scheduling process. Resource scheduling in the cloud is a non-deterministic problem of assigning tasks to virtual machines (VMs), by servers or service providers, in a way that increases resource utilization and performance, reduces response time, and keeps the whole system balanced. In this paper, we present a deep learning based resource scheduling and load balancing model using the multidimensional queuing load optimization (MQLO) algorithm for the cloud environment. A Multidimensional Resource Scheduling and Queuing Network (MRSQN) is used to detect overloaded servers and migrate their load to other VMs. Here, an Artificial Neural Network (ANN) is used as the deep learning classifier that identifies overloaded or underloaded servers or VMs and balances them based on parameters such as CPU, memory, and bandwidth. In particular, the proposed ANN-based MQLO algorithm improves the response time as well as the success rate. The simulation results show that the proposed ANN-based MQLO algorithm outperforms existing algorithms in terms of Average Success Rate, Resource Scheduling Efficiency, Energy Consumption, and Response Time.
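The overload-classification step can be illustrated with a minimal sketch in which a single logistic unit stands in for the paper's ANN: it is trained on synthetic (CPU, memory, bandwidth) utilization vectors and then flags servers or VMs whose predicted overload probability is high. The data, the mean-utilization > 0.6 labeling rule, and the `needs_migration` helper are assumptions for illustration only, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(2000, 3))      # CPU, memory, bandwidth utilization
y = (X.mean(axis=1) > 0.6).astype(float)   # synthetic "overloaded" label

# Train a single logistic unit (stand-in for the ANN) by gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(X @ w + b)))     # sigmoid prediction
    grad = p - y
    w -= 0.5 * (X.T @ grad) / len(X)
    b -= 0.5 * grad.mean()

def needs_migration(cpu, mem, bw, threshold=0.5):
    """Hypothetical helper: flag a server/VM whose predicted overload
    probability exceeds the threshold, marking it for load migration."""
    z = np.array([cpu, mem, bw]) @ w + b
    return 1 / (1 + np.exp(-z)) > threshold

print(needs_migration(0.95, 0.9, 0.85))    # heavily loaded -> True
print(needs_migration(0.30, 0.4, 0.20))    # lightly loaded -> False
```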