The Greatest Teacher, Failure is: Using Reinforcement Learning for SFC Placement Based on Availability and Energy Consumption
Related papers
A Reinforcement Learning Approach for Placement of Stateful Virtualized Network Functions
2021 IFIP/IEEE International Symposium on Integrated Network Management (IM), 2021
Network softwarization increases network flexibility by supporting the implementation of network functions such as firewalls as software modules. However, it also raises new concerns about service reliability due to failures at both the software and hardware levels. The survivability of critical applications is commonly assured by deploying stand-by Virtual Network Functions (VNFs) to which the service is migrated upon failure of the primary VNFs. However, it is challenging to identify the optimal Data Centers (DCs) for hosting the active and stand-by VNF instances so as to minimize not only their placement cost but also the cost of continuous state transfer between active and stand-by instances, since a number of VNFs are stateful. This paper proposes a reinforcement learning (RL) approach for the placement of stateful VNFs that considers a joint reservation of primary and backup resources with the objective of minimizing the overall placement cost. Simulation results show that the proposed al...
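The joint active/stand-by placement idea can be illustrated with a toy tabular Q-learning loop that picks an (active, stand-by) DC pair for one stateful VNF, rewarding low placement-plus-state-transfer cost. The DC names and all costs below are invented for illustration; this is a minimal sketch of the RL framing, not the paper's cost model or algorithm.

```python
import random

# Hypothetical DCs with invented placement costs.
DCS = ["dc0", "dc1", "dc2"]
PLACEMENT_COST = {"dc0": 4.0, "dc1": 2.0, "dc2": 3.0}
# Assumed state-transfer cost: grows with the "distance" between DCs.
TRANSFER_COST = {(a, b): abs(DCS.index(a) - DCS.index(b)) * 1.5
                 for a in DCS for b in DCS}

# Action = choice of (active, stand-by) DC pair.
ACTIONS = [(a, b) for a in DCS for b in DCS if a != b]
q = {act: 0.0 for act in ACTIONS}            # single-state Q-table
alpha, epsilon = 0.1, 0.2

def reward(action):
    active, standby = action
    # Negative of total cost: placing both instances plus syncing state.
    return -(PLACEMENT_COST[active] + PLACEMENT_COST[standby]
             + TRANSFER_COST[(active, standby)])

random.seed(0)
for _ in range(2000):
    if random.random() < epsilon:            # explore
        act = random.choice(ACTIONS)
    else:                                    # exploit current estimate
        act = max(q, key=q.get)
    q[act] += alpha * (reward(act) - q[act]) # bandit-style Q update

best = max(q, key=q.get)                     # cheapest (active, stand-by) pair
```

With these invented costs the agent settles on the two cheapest DCs (dc1 and dc2) as the active/stand-by pair.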
Heuristic and Reinforcement Learning Algorithms for Dynamic Service Placement on Mobile Edge Cloud
ArXiv, 2021
Edge computing hosts applications close to the end users and enables low-latency real-time applications. Modern applications have in turn adopted the microservices architecture, which composes an application from loosely coupled smaller components, or services. This complements edge computing infrastructures, which are often resource-constrained and may not handle monolithic applications. Instead, edge servers can independently deploy application service components, although at the cost of communication overheads. Dynamic system load in mobile networks causes metrics such as latency, jitter, and packet loss to fluctuate frequently. Consistently meeting application service-level objectives while also optimizing application deployment cost (placement and migration of services) and communication overheads in mobile edge cloud environments is non-trivial. In this paper we propose and evaluate three dynamic placement strategies, two heuristic (greedy approximation based on set cover, and integer programming ba...
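The greedy set-cover approximation mentioned in the abstract can be sketched as follows: repeatedly pick the edge server that covers the most still-unplaced services per unit cost. The server names, costs, and service sets here are hypothetical, and this only illustrates the cost-effectiveness heuristic, not the paper's implementation.

```python
def greedy_place(services, servers):
    """Greedy weighted set cover.

    services: set of service names to place.
    servers:  dict name -> (cost, set of services the server can host).
    Returns (chosen server names, services left uncovered).
    """
    uncovered = set(services)
    chosen = []
    while uncovered:
        # Cost-effectiveness = newly covered services per unit cost.
        name, (cost, hosted) = max(
            servers.items(),
            key=lambda kv: len(kv[1][1] & uncovered) / kv[1][0])
        gain = hosted & uncovered
        if not gain:          # no remaining server helps; stop
            break
        chosen.append(name)
        uncovered -= gain
    return chosen, uncovered

# Hypothetical edge servers: (cost, hostable services).
servers = {
    "edge-a": (2.0, {"auth", "cache"}),
    "edge-b": (3.0, {"auth", "cache", "video"}),
    "edge-c": (1.0, {"video"}),
}
picked, left = greedy_place({"auth", "cache", "video"}, servers)
```

This greedy rule gives the classic ln(n)-factor approximation for set cover, which is why it is a natural baseline for placement problems.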
Reinforcement Learning for value-based Placement of Fog Services
2021
Optimal service and resource management in Fog Computing is an active research area in academia. In fact, to fulfill the promise of enabling a new generation of immersive, adaptive, and context-aware services, Fog Computing requires novel solutions capable of better exploiting the available computational and network resources at the edge. Resource management in Fog Computing could particularly benefit from self-* approaches capable of learning the best resource allocation strategies and adapting to ever-changing conditions. In this context, Reinforcement Learning (RL), a technique that trains software agents to learn which actions maximize a reward, represents a compelling solution to investigate. In this paper, we explore RL as an optimization method for the value-based management of Fog services over a pool of Fog nodes. More specifically, we propose FogReinForce, a solution based on the Deep Q-Network (DQN) algorithm that learns to select the allocation for service components ...
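FogReinForce is built on a Deep Q-Network; as a dependency-light stand-in, the sketch below keeps the same Q-learning idea but approximates Q(state, node) with a linear model plus a bias feature instead of a neural network. The node utilizations and the reward shape are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
N_NODES = 4
W = np.zeros((N_NODES, N_NODES + 1))     # one weight row per node (action)

def features(util):
    return np.append(util, 1.0)          # node utilizations + bias term

def q_values(util):
    return W @ features(util)            # linear stand-in for the Q-network

# Training: explore with random allocations (off-policy); the assumed
# reward favors placing a component on less-utilized nodes.
alpha = 0.1
for _ in range(8000):
    util = rng.random(N_NODES)           # current utilizations in [0, 1]
    a = rng.integers(N_NODES)            # behavior policy: uniform random
    r = 1.0 - util[a]                    # cheaper to allocate on idle nodes
    td_err = r - q_values(util)[a]
    W[a] += alpha * td_err * features(util)   # gradient-style update

# Greedy policy after training: allocate onto the least-loaded node.
util = np.array([0.9, 0.1, 0.5, 0.7])
chosen = int(np.argmax(q_values(util)))  # → node 1
```

A real DQN would replace `q_values` with a neural network trained from a replay buffer, but the select-node-by-argmax decision loop is the same.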