Comprehensive Analysis of Resource Allocation and Service Placement in Fog and Cloud Computing
Related papers
IET Communications
Fog computing is a decentralised model which can help cloud computing provide high quality-of-service (QoS) for Internet of Things (IoT) application services. The service placement problem (SPP) is the mapping of services onto fog and cloud resources. It plays a vital role in determining response time and energy consumption in fog-cloud environments. However, providing an efficient solution to this problem is a challenging task due to difficulties such as the differing requirements of services, limited computing resources, and the varied delay and power-consumption profiles of devices in the fog domain. Motivated by this, in this study, we propose an efficient policy, called MinRE, for SPP in fog-cloud systems. To provide both QoS for IoT services and energy efficiency for fog service providers, we classify services into two categories: critical services and normal ones. For critical services, we propose MinRes, which aims to minimise response time, and for normal ones, we propose MinEng, whose goal is reducing the energy consumption of the fog environment. Our extensive simulation experiments show that our policy improves energy consumption by up to 18%, the percentage of deadline-satisfied services by up to 14%, and the average response time by up to 10% in comparison with the second-best results.
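The abstract splits placement into a response-time objective for critical services (MinRes) and an energy objective for normal ones (MinEng). A minimal sketch of that split is shown below; the node and service attributes, the deadline threshold, and the scoring functions are illustrative assumptions, not the paper's actual model.

```python
# Sketch only (not the authors' implementation): critical services are placed
# to minimise response time, normal ones to minimise fog energy use.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    deadline_ms: float        # services with tight deadlines count as "critical"
    cpu_demand: float

@dataclass
class Node:
    name: str
    latency_ms: float         # network delay to the client (assumed metric)
    power_per_unit: float     # energy cost per unit of CPU demand (assumed)
    free_cpu: float

def place(services, nodes, critical_deadline_ms=100.0):
    plan = {}
    for s in services:
        candidates = [n for n in nodes if n.free_cpu >= s.cpu_demand]
        if not candidates:
            continue  # no feasible fog node; a real policy would fall back to cloud
        if s.deadline_ms <= critical_deadline_ms:
            # MinRes-like choice: lowest response time first
            best = min(candidates, key=lambda n: n.latency_ms)
        else:
            # MinEng-like choice: lowest energy cost first
            best = min(candidates, key=lambda n: n.power_per_unit * s.cpu_demand)
        best.free_cpu -= s.cpu_demand
        plan[s.name] = best.name
    return plan
```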
Reinforcement Learning for value-based Placement of Fog Services
2021
Optimal service and resource management in Fog Computing is an active research area in academia. In fact, to fulfill the promise to enable a new generation of immersive, adaptive, and context-aware services, Fog Computing requires novel solutions capable of better exploiting the available computational and network resources at the edge. Resource management in Fog Computing could particularly benefit from self-* approaches capable of learning the best resource allocation strategies to adapt to the ever changing conditions. In this context, Reinforcement Learning (RL), a technique that allows to train software agents to learn which actions maximize a reward, represents a compelling solution to investigate. In this paper, we explore RL as an optimization method for the value-based management of Fog services over a pool of Fog nodes. More specifically, we propose FogReinForce, a solution based on Deep Q-Network (DQN) algorithm that learns to select the allocation for service components ...
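For readers unfamiliar with the DQN machinery the abstract refers to, the skeleton below shows the general pattern of Q-value estimation with a small network and an epsilon-greedy placement agent. It is a generic sketch, not FogReinForce itself: the state encoding, network size, reward, and the omission of a separate target network are all simplifying assumptions.

```python
# Generic DQN skeleton for picking the fog node that hosts the next component.
import random
from collections import deque

import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim, n_nodes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_nodes),          # one Q-value per candidate fog node
        )

    def forward(self, x):
        return self.net(x)

class DQNPlacer:
    def __init__(self, state_dim, n_nodes, gamma=0.99, eps=0.1):
        self.q = QNet(state_dim, n_nodes)
        self.opt = torch.optim.Adam(self.q.parameters(), lr=1e-3)
        self.buffer = deque(maxlen=10_000)   # replay buffer of (s, a, r, s') tuples
        self.gamma, self.eps, self.n_nodes = gamma, eps, n_nodes

    def act(self, state):
        if random.random() < self.eps:       # epsilon-greedy exploration
            return random.randrange(self.n_nodes)
        with torch.no_grad():
            return int(self.q(torch.tensor(state)).argmax())

    def remember(self, s, a, r, s2):
        self.buffer.append((s, a, r, s2))

    def learn(self, batch_size=32):
        # Standard one-step TD update; target network and epsilon decay omitted.
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2 = (torch.tensor(x, dtype=torch.float32) for x in zip(*batch))
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        target = r + self.gamma * self.q(s2).max(dim=1).values.detach()
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad(); loss.backward(); self.opt.step()
```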
Demand-Driven Deep Reinforcement Learning for Scalable Fog and Service Placement
IEEE Transactions on Services Computing, 2021
The increasing number of Internet of Things (IoT) devices necessitates a more substantial fog computing infrastructure to support users' demand for services. In this context, the placement problem consists of selecting fog resources and mapping services to these resources. This problem is particularly challenging due to the dynamic changes in both users' demand and available fog resources. Existing solutions utilize on-demand fog formation and periodic container placement using heuristics due to the NP-hardness of the problem. Unfortunately, constant updates of services are time consuming in terms of environment setup, especially when required services and available fog nodes are changing. Therefore, due to the need for fast and proactive service updates to meet users' demand, and the complexity of the container placement problem, we propose in this paper a Deep Reinforcement Learning (DRL) solution, named Intelligent Fog and Service Placement (IFSP), to perform instantaneous placement decisions proactively. By proactively, we mean making placement decisions before demands occur. The DRL-based IFSP is developed through a scalable Markov Decision Process (MDP) design. To address the long learning time for DRL to converge, and the high volume of errors needed to explore, we also propose a novel end-to-end architecture utilizing a service scheduler and a bootstrapper on the cloud. Our scheduler and bootstrapper perform offline learning on users' demand recorded in server logs. Through experiments and simulations performed on the NASA server logs and Google Cluster Trace datasets, we explore the ability of IFSP to perform efficient placement and overcome the above-mentioned DRL limitations. We also show the ability of IFSP to adapt to changes in the environment and improve the Quality of Service (QoS) compared to state-of-the-art heuristic and DRL solutions.
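The paper's scalable MDP design is not reproduced here, but the sketch below illustrates the kind of encoding such a design implies: a state built from node capacities and forecast demand (e.g. learned offline from server logs), an action that places containers on a node, and a reward tied to QoS. All field names and the reward shape are assumptions for illustration.

```python
# Rough, assumption-laden sketch of a container-placement MDP.
from dataclasses import dataclass

@dataclass
class PlacementMDP:
    node_capacity: list     # remaining capacity per fog node
    pending_demand: list    # forecast demand per service type (e.g. from logs)
    latency: list           # per-node latency estimate

    def state(self):
        # Flat feature vector; a scalable design would keep its size fixed
        # (e.g. via aggregated statistics) so the agent copes with node churn.
        return self.node_capacity + self.pending_demand + self.latency

    def step(self, node_idx, demand):
        # Action: place the next batch of containers on one fog node.
        if self.node_capacity[node_idx] < demand:
            return self.state(), -1.0        # infeasible placement is penalised
        self.node_capacity[node_idx] -= demand
        # Reward rises as latency falls, i.e. better QoS for the served demand.
        return self.state(), 1.0 / (1.0 + self.latency[node_idx])
```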
Fog Resource Allocation Through Machine Learning Algorithm
Advances in computer and electrical engineering book series, 2020
The Internet of Things (IoT) prevails in almost all equipment of our daily lives, including healthcare units, industrial production, vehicles, banking, and insurance. Previously unconnected objects have started communicating with each other, generating a voluminous amount of data at great velocity that is handled by the cloud. The requirements of IoT applications, such as heterogeneity, mobility support, and low latency, pose a big challenge to the cloud ecosystem. Hence, a decentralized, low-latency computing paradigm like fog computing, along with the cloud, provides a better solution. The service quality of any computing model depends on resource management. The resources need to be agile by nature, which clearly marks virtual containers as the best choice. This chapter presents the federation of Fog-Cloud and the way it relates to IoT requirements. Further, the chapter deals with autonomic resource management with reinforcement learning (RL), which will carry the fog computing paradigm toward future-generation expectations.
Learning Based Task Placement Algorithm in the IoT Fog-Cloud Environment
International Journal of Computer Networks and Applications (IJCNA), 2021
Task scheduling means allocating resources to tasks in such a way that processing is accomplished as optimally as possible. Here, an optimal strategy means processing all tasks so that they incur the least delay and hence achieve the lowest response time. This becomes a major concern in the fog computing environment: fog has limited storage capacity and processing power, so not all real-time applications can be scheduled at the fog. These resources must also be allocated as optimally as possible, so it is best to schedule latency-critical applications on the fog and other applications on the cloud. This paper proposes a learning-based task placement algorithm (LBTP) which uses a supervised feed-forward neural network to recognize latency-critical applications. The algorithm executes in two phases. In the first phase, the features of the tasks serve as input to the machine learning framework, which decides whether to schedule a task in the fog environment or forward it to the cloud for execution. In the second phase, tasks scheduled at the fog are rearranged in the fog queue based on priority to achieve the best resource utilization. The simulations were evaluated using the Matlab 8.0 and Aneka 5.0 platforms. The results revealed that the proposed method LBTP recorded the best response time, waiting time, and resource utilization when compared with task scheduling at the fog only and at the cloud only. LBTP also recorded better results under horizontal scaling, achieved by raising the number of virtual machines in the fog environment.
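The two-phase structure described in the abstract (classify, then prioritise) can be sketched with standard components, as below. This is not the authors' code: the task features, training labels, and priority ordering are assumptions, and scikit-learn's MLPClassifier stands in for the feed-forward network.

```python
# Two-phase sketch in the spirit of LBTP: phase 1 classifies tasks as
# latency-critical with a feed-forward network, phase 2 orders fog-bound
# tasks by priority before dispatch.
import heapq
from sklearn.neural_network import MLPClassifier

# Phase 1: train on assumed task features [input_size_kb, cpu_cycles, deadline_ms],
# label 1 = latency-critical (belongs in the fog), 0 = tolerant (cloud).
X_train = [[20, 1e6, 50], [500, 9e7, 2000], [10, 5e5, 30], [800, 1.2e8, 5000]]
y_train = [1, 0, 1, 0]
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

def schedule(tasks):
    """tasks: list of (task_id, features, priority); lower priority runs first."""
    fog_queue, cloud_tasks = [], []
    for task_id, features, priority in tasks:
        if clf.predict([features])[0] == 1:
            # Phase 2: latency-critical tasks go to the fog, ordered by priority.
            heapq.heappush(fog_queue, (priority, task_id))
        else:
            cloud_tasks.append(task_id)
    fog_order = [heapq.heappop(fog_queue)[1] for _ in range(len(fog_queue))]
    return fog_order, cloud_tasks
```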
2021
Fog/Edge computing is a novel computing paradigm, harnessing resources in the proximity of users, which supports resource-constrained Internet of Things (IoT) devices through the placement of their tasks on heterogeneous edge and/or cloud servers. Recently, many Deep Reinforcement Learning (DRL)-based approaches have been proposed in edge and fog computing environments to learn application placement policies. However, they lack generalizability and quick adaptability, thus failing to efficiently tackle application placement problems. This is mainly because training well-performing DRL agents requires a huge amount of highly diverse training data, while obtaining this training data is costly. Moreover, many IoT applications are modeled as Directed Acyclic Graphs (DAGs) with different topologies. Satisfying dependencies among constituent parts of DAG-based IoT applications incurs additional constraints, and hence the application placement problem becomes more complex. To...
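To make the DAG constraint concrete: a module of a DAG-shaped application can only be placed and started once all of its predecessors are handled, so candidate placement orders come from a topological sort of the application graph. The example application below is purely hypothetical.

```python
# Sketch of ordering DAG-based application modules before placement.
from graphlib import TopologicalSorter

# Hypothetical application: data-flow dependencies between modules.
app_dag = {
    "preprocess": [],
    "detect":     ["preprocess"],
    "track":      ["detect"],
    "alert":      ["detect", "track"],
}

def placement_order(dag):
    # TopologicalSorter takes {node: predecessors} and raises on cycles.
    return list(TopologicalSorter(dag).static_order())

print(placement_order(app_dag))  # e.g. ['preprocess', 'detect', 'track', 'alert']
```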
Resource Allocation in Fog RAN for Heterogeneous IoT Environments Based on Reinforcement Learning
ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019
Fog radio access network (F-RAN) has been recently proposed to satisfy the low-latency communication requirements of Internet of Things (IoT) applications. We consider the problem of sequentially allocating the limited resources of a fog node to a heterogeneous population of IoT applications with varying latency requirements. Specifically, for each service request, the fog node needs to decide whether to serve that user locally to provide it with a low-latency communication service, or to refer it to the cloud control center to keep the limited fog resources available for future users. We formulate the problem as a Markov Decision Process (MDP), for which we present the optimal decision policy through Reinforcement Learning (RL). The proposed resource allocation method learns from the IoT environment how to strike the right balance between two conflicting objectives: maximizing the total served utility and minimizing the idle time of the fog node. Extensive simulation results for various IoT environments corroborate the theoretical underpinnings of the proposed RL-based resource allocation method.
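The serve-locally-or-refer decision described here has a compact tabular form. The toy below uses plain Q-learning over an assumed state (remaining fog capacity plus request class) and an assumed utility-based reward; it is a sketch of the general technique, not the paper's policy.

```python
# Tabular Q-learning toy for the admit-or-refer decision:
# action 0 = serve locally on the fog node, action 1 = refer to the cloud.
import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(state, action)] -> estimated value
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def choose(state):
    if random.random() < EPS:                       # occasional exploration
        return random.choice((0, 1))
    return max((0, 1), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in (0, 1))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Illustrative reward: serving a request locally earns its utility if capacity
# remains; referring it earns nothing now but preserves capacity for later.
def reward(action, utility, capacity_left):
    if action == 0 and capacity_left > 0:
        return utility
    return 0.0
```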
A priority-based service placement policy for Fog-Cloud computing systems
Computational Methods for Differential Equations, 2019
Recent advances in the context of the Internet of Things (IoT) have led to the emergence of many useful IoT applications with different Quality of Service (QoS) requirements. Fog-cloud computing systems offer a promising environment to provision resources for IoT application services. However, providing an efficient solution to the service placement problem in such systems is a critical challenge. To address this challenge, in this paper, we propose a QoS-aware service placement policy for fog-cloud computing systems that places the most delay-sensitive application services as close to the clients as possible. We validate our proposed algorithm in the iFogSim simulator. Results demonstrate that our algorithm achieves significant improvement in terms of service latency and execution cost compared to the simulator's built-in policies.
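A priority-based, delay-aware placement of this kind is typically a greedy pass: consider the most delay-sensitive services first and put each on the closest fog node that still has capacity, spilling the rest to the cloud. The sketch below follows that pattern; the sensitivity metric, node model, and field names are assumptions rather than the paper's algorithm (which is implemented in iFogSim).

```python
# Greedy sketch of QoS-aware, priority-ordered service placement.
def place_by_priority(services, fog_nodes, cloud="cloud"):
    """services: dicts with 'name', 'deadline_ms', 'mips';
    fog_nodes: dicts with 'name', 'hops_to_client', 'free_mips'."""
    placement = {}
    # Most delay-sensitive (tightest deadline) services are considered first.
    for svc in sorted(services, key=lambda s: s["deadline_ms"]):
        # Prefer the fog node closest to the client that still has capacity.
        for node in sorted(fog_nodes, key=lambda n: n["hops_to_client"]):
            if node["free_mips"] >= svc["mips"]:
                node["free_mips"] -= svc["mips"]
                placement[svc["name"]] = node["name"]
                break
        else:
            placement[svc["name"]] = cloud   # no fog capacity left; fall back
    return placement
```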
Resource Management and Allocation in Fog Computing
International Journal of Advanced Research in Computer Science
Smart objects are increasingly playing a crucial role in the daily operations of both industries and individuals. These devices collect data through various apps and sensors, leading to a significant accumulation of information across various sectors. The use of smart objects has grown exponentially with the advent of the Internet of Things (IoT). This has led to a significant increase in the amount of data being generated, including both structured and unstructured data. However, there are currently no effective ways to manage this data. Despite the significant advancements made in the field of IoT, incorporating cloud computing still faces challenges such as latency, performance, network, and security concerns. Fog computing can address the challenges faced by cloud computing in the context of the Internet of Things (IoT) by bringing the cloud closer to the edge. The primary objective of fog computing is to process and store data collected by IoT devices locally on a fog node, ra...
QoS-aware service provisioning in fog computing
Journal of Network and Computer Applications, 2020
Fog computing has emerged as a complementary solution to address the issues faced in cloud computing. While fog computing allows us to better handle time/delay-sensitive Internet of Everything (IoE) applications (e.g., smart grids and adversarial environments), there are a number of operational challenges. For example, the resource-constrained nature of fog nodes and the heterogeneity of IoE jobs complicate efforts to schedule tasks efficiently. Thus, to better streamline varied time/delay-sensitive IoE requests, the authors contribute by introducing a smart layer between IoE devices and fog nodes that incorporates an intelligent and adaptive learning-based task scheduling technique. Specifically, our approach analyzes the service types of IoE requests and presents an optimal strategy to allocate the most suitable available fog resource accordingly. We rigorously evaluate the performance of the proposed approach using simulation, as well as its correctness using formal verification. The evaluation findings are promising, both in terms of energy consumption and Quality of Service (QoS).
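The "smart layer" idea (inspect the service type of a request, then route it to a suitable fog resource or the cloud) can be outlined as a simple matching step. The service types, latency/CPU thresholds, and node attributes below are invented for illustration and do not reproduce the paper's scheduling technique or its formal model.

```python
# Sketch of a type-aware routing layer between IoE devices and fog nodes.
REQUIREMENTS = {
    # service type -> (max tolerable latency in ms, minimum free CPU share)
    "realtime_control": (10, 0.5),
    "video_analytics":  (100, 0.8),
    "telemetry_batch":  (5000, 0.1),
}

def route(request_type, fog_nodes):
    """fog_nodes: list of dicts with 'name', 'latency_ms', 'free_cpu'."""
    max_latency, min_cpu = REQUIREMENTS.get(request_type, (5000, 0.1))
    suitable = [n for n in fog_nodes
                if n["latency_ms"] <= max_latency and n["free_cpu"] >= min_cpu]
    if not suitable:
        return "cloud"                       # no fog node fits; defer to cloud
    # Pick the least-loaded suitable node to keep load (and energy) balanced.
    return max(suitable, key=lambda n: n["free_cpu"])["name"]
```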