MUVINE: Multi-Stage Virtual Network Embedding in Cloud Data Centers Using Reinforcement Learning-Based Predictions
Related papers
A Cluster-Oriented Policy for Virtual Network Embedding in SDN-Enabled Distributed Cloud
International Journal of Computing and Digital Systems, 2022
Resource allocation is a crucial challenge for network virtualization (NV) in a cloud environment. Virtual network embedding (VNE) approaches exemplify NV technologies' critical utility, which must efficiently deal with potential network issues. To promote cloud infrastructure flexibility, software-defined networking (SDN) has been adopted as a network practice to centralize the manageability of the data centre network (DCN) resources. This paper introduces a classification approach that ensures an accurate starting point for solving the VNE problem in a distributed system. The solution implementation is based on measuring the importance of each DCN using the Spearman rank correlation coefficient. Afterward, we devise a constructive algorithm that classifies DCNs into clusters via unsupervised learning. This DCN management allows us to direct the VNE process to a small number of DCNs, which reduces the dimensionality of the search operation in a distributed environment. Ultimately, we adopt various metaheuristics as a VNE optimizer for the selected DCN. Numerical results verify that the Jenks natural breaks classification outperforms similar methods in terms of resource utilization and acceptance ratio.
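The two building blocks named in this abstract, the Spearman rank correlation coefficient and Jenks natural breaks, can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: the DCN importance scores are invented, the Spearman function assumes distinct values, and only the one-break (two-class) Jenks variant is shown.

```python
import numpy as np

def spearman_rho(x, y):
    # Spearman rank correlation = Pearson correlation of the ranks
    # (ties ignored; inputs assumed distinct).
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

def jenks_two_class(scores):
    # One-break Jenks natural breaks: choose the split of the sorted
    # scores that minimizes the total within-class variance.
    s = np.sort(np.asarray(scores, dtype=float))
    best_k, best_cost = 1, float("inf")
    for k in range(1, len(s)):            # break between s[k-1] and s[k]
        cost = s[:k].var() * k + s[k:].var() * (len(s) - k)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return s[best_k - 1]                  # upper edge of the lower class

# Hypothetical per-DCN importance scores (not from the paper)
scores = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]
threshold = jenks_two_class(scores)
high_tier = [s for s in scores if s > threshold]
```

Directing the VNE search only at the `high_tier` DCNs is what shrinks the search space in the distributed setting.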
Deep reinforcement learning for multi-objective placement of virtual machines in cloud datacenters
Soft Computing, 2020
The ubiquitous diffusion of cloud computing requires suitable management policies to face the workload while guaranteeing quality constraints and mitigating costs. The typical trade-off is between power consumption and adherence to the service-level metrics subscribed by customers. To this aim, one approach is to use an optimization-based placement mechanism to select the servers on which to deploy virtual machines. Unfortunately, high packing factors can lead to performance and security issues, e.g., virtual machines can compete for hardware resources or collude to leak data. Therefore, we introduce a multi-objective approach to compute optimal placement strategies considering different goals, such as the impact of hardware outages, the power required by the datacenter, and the performance perceived by users. Placement strategies are found by using a deep reinforcement learning framework to select the best placement heuristic for each virtual machine composing the workload. Results indicate that our method outperforms bin packing heuristics widely used in the literature when considering either synthetic or real workloads.
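The core idea, learning which placement heuristic to apply to each virtual machine, can be illustrated with a bandit-style epsilon-greedy selector. This is a deliberately simplified stand-in for the paper's deep reinforcement learning framework: the heuristic set, server capacities, demands, and the packing-oriented reward below are all illustrative assumptions.

```python
import random

# Candidate placement heuristics: each returns a server index for a
# VM of size `demand`, or None if no server fits.
def first_fit(free, demand):
    return next((i for i, f in enumerate(free) if f >= demand), None)

def best_fit(free, demand):
    fits = [(f - demand, i) for i, f in enumerate(free) if f >= demand]
    return min(fits)[1] if fits else None

def worst_fit(free, demand):
    fits = [(f - demand, i) for i, f in enumerate(free) if f >= demand]
    return max(fits)[1] if fits else None

HEURISTICS = [first_fit, best_fit, worst_fit]

def place_workload(demands, capacity=10.0, servers=4, eps=0.1, seed=0):
    # Epsilon-greedy choice of a heuristic per VM; reward favours
    # placements that keep the number of active servers low.
    rng = random.Random(seed)
    free = [capacity] * servers
    value = [0.0] * len(HEURISTICS)   # running reward estimate per heuristic
    count = [0] * len(HEURISTICS)
    for d in demands:
        a = (rng.randrange(len(HEURISTICS)) if rng.random() < eps
             else max(range(len(HEURISTICS)), key=lambda i: value[i]))
        srv = HEURISTICS[a](free, d)
        if srv is None:
            continue                   # VM rejected: no reward update
        free[srv] -= d
        active = sum(1 for f in free if f < capacity)
        count[a] += 1
        value[a] += (-active - value[a]) / count[a]
    return free

free = place_workload([3, 4, 2, 5, 1])
```

A deep RL agent replaces the per-heuristic running averages with a learned state-dependent policy, but the action space (one heuristic per VM) is the same.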
IEEE Transactions on Cloud Computing, 2016
Cloud computing built on virtualization technologies promises provisioning elastic computing and bandwidth resource services for enterprises that outsource their IT services as virtual networks. To share the cloud resources efficiently among different enterprise IT services, embedding their virtual networks into a distributed cloud that consists of multiple data centers poses great challenges. Motivated by the fact that most virtual networks operate on a long-term basis and have the characteristics of periodic resource demands, in this paper we study the virtual network embedding problem of embedding as many virtual networks as possible into a distributed cloud such that the revenue collected by the cloud service provider is maximized, while the service level agreements (SLAs) between enterprises and the cloud service provider are met. We first propose an efficient embedding algorithm for the problem, by incorporating a novel embedding metric that accurately models the dynamic workloads on both data centers and inter-data center links, provided that the periodic resource demands of each virtual network are given and all virtual networks have identical resource demand periods. We then show how to extend this algorithm for the problem when different virtual networks may have different resource demand periods. Furthermore, we also develop a prediction mechanism to predict the periodic resource demands of each virtual network if its resource demands are not given in advance. We finally evaluate the performance of the proposed algorithms through experimental simulation based on both synthetic and real network topologies. Experimental results demonstrate that the proposed algorithms outperform existing algorithms by 10 to 31 percent in terms of performance improvement.
A Reinforcement Learning Approach for Placement of Stateful Virtualized Network Functions
2021 IFIP/IEEE International Symposium on Integrated Network Management (IM), 2021
Network softwarization increases network flexibility by supporting the implementation of network functions such as firewalls as software modules. However, this creates new concerns on service reliability due to failures at both software and hardware level. The survivability of critical applications is commonly assured by deploying stand-by Virtual Network Functions (VNFs) to which the service is migrated upon failure of the primary VNFs. However, it is challenging to identify the optimal Data Centers (DCs) for hosting the active and stand-by VNF instances, not only to minimize their placement cost, but also the cost of a continuous state transfer between active and stand-by instances, since a number of VNFs are stateful. This paper proposes a reinforcement learning (RL) approach for the placement of stateful VNFs that considers a joint reservation of primary and backup resources with the objective of minimizing the overall placement cost. Simulation results show that the proposed al...
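The cost structure described here, placement cost for both the active and stand-by instances plus the continuous state-transfer cost between their hosting DCs, can be sketched with an exhaustive search over DC pairs. In the paper this search is driven by reinforcement learning; the brute force below, and all DC names and costs, are purely illustrative.

```python
from itertools import permutations

def cheapest_pair(dcs, place_cost, sync_cost):
    # Best (active, standby) DC pair for one stateful VNF: placement
    # cost of both instances plus the state-transfer (sync) cost
    # between the two DCs. Exhaustive for illustration only.
    return min(
        ((a, s) for a, s in permutations(dcs, 2)),
        key=lambda p: (place_cost[p[0]] + place_cost[p[1]]
                       + sync_cost[p[0]][p[1]]),
    )

# Hypothetical per-DC costs (not from the paper)
place_cost = {"DC1": 3, "DC2": 2, "DC3": 4}
sync_cost = {"DC1": {"DC2": 1, "DC3": 5},
             "DC2": {"DC1": 1, "DC3": 2},
             "DC3": {"DC1": 5, "DC2": 2}}
pair = cheapest_pair(list(place_cost), place_cost, sync_cost)
```

The exhaustive search is quadratic in the number of DCs per VNF; an RL policy amortizes this choice across many placement requests instead.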
Performance evaluation of artificial intelligence algorithms for virtual network embedding
Engineering Applications of Artificial Intelligence, 2013
Network virtualization is not only regarded as a promising technology to create an ecosystem for cloud computing applications, but also considered a promising technology for the future Internet. One of the most important issues in network virtualization is the virtual network embedding (VNE) problem, which deals with the embedding of virtual network (VN) requests in an underlying physical (substrate network) infrastructure. When both the node and link constraints are considered, the VN embedding problem is NP-hard, even in an offline situation. Some Artificial Intelligence (AI) techniques have been applied to the VNE algorithm design and displayed their abilities. This paper aims to compare the computational effectiveness and efficiency of different AI techniques for handling the cost-aware VNE problem. We first propose two kinds of VNE algorithms, based on Ant Colony Optimization and genetic algorithm. Then we carry out extensive simulations to compare the proposed VNE algorithms with the existing AI-based VNE algorithms in terms of the VN Acceptance Ratio, the long-term revenue of the service provider, and the VN embedding cost.
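A minimal genetic-algorithm sketch of cost-aware node mapping, in the spirit of the GA variant compared in this paper but not its actual algorithm, might look as follows. The cost model, penalty weight, and GA parameters are illustrative assumptions, and link mapping is omitted entirely.

```python
import random

def ga_embed(demands, capacity, cost, pop=30, gens=60, seed=1):
    # Chromosome: one substrate node index per virtual node.
    # Fitness: total embedding cost plus a penalty for CPU overload.
    rng = random.Random(seed)
    n_sub = len(capacity)

    def fitness(chrom):
        load = [0.0] * n_sub
        total = 0.0
        for vn, sn in enumerate(chrom):
            load[sn] += demands[vn]
            total += cost[sn] * demands[vn]
        penalty = sum(max(0.0, l - c) for l, c in zip(load, capacity))
        return total + 100.0 * penalty      # infeasibility penalty

    popn = [[rng.randrange(n_sub) for _ in demands] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        elite = popn[: pop // 2]            # elitist survivor selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(demands))
            child = a[:cut] + b[cut:]       # one-point crossover
            if rng.random() < 0.2:          # point mutation
                child[rng.randrange(len(child))] = rng.randrange(n_sub)
            children.append(child)
        popn = elite + children
    return min(popn, key=fitness)

best = ga_embed(demands=[2, 3, 1], capacity=[4, 4, 4], cost=[1.0, 2.0, 3.0])
```

The ACO variant explores the same search space but builds mappings incrementally, guided by pheromone trails instead of crossover and mutation.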
Efficient virtual network embedding via exploring periodic resource demands
39th Annual IEEE Conference on Local Computer Networks, 2014
Cloud computing built on virtualization technologies promises provisioning elastic computing and communication resources to enterprise users. To share cloud resources efficiently, embedding virtual networks of different users into a distributed cloud consisting of multiple data centers (a substrate network) poses great challenges. Motivated by the fact that most enterprise virtual networks usually operate on a long-term basis and have the characteristics of periodic resource demands, in this paper we study the virtual network embedding problem of embedding as many virtual networks as possible into a substrate network such that the revenue of the service provider of the substrate network is maximized, while meeting various Service Level Agreements (SLAs) between enterprise users and the cloud service provider. For this problem, we propose an efficient embedding algorithm by exploring periodic resource demands of virtual networks, and employing a novel embedding metric that models the workloads on both substrate nodes and communication links if the periodic resource demands of virtual networks are given; otherwise, we propose a prediction model to predict the periodic resource demands of these virtual networks based on their historic resource demands. We also evaluate the performance of the proposed algorithms by experimental simulation. Experimental results demonstrate that the proposed algorithms outperform existing algorithms, improving the revenue by 10% to 31%.
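If the demand period is known, the prediction step described in these two abstracts can be approximated with a seasonal average: forecast each phase of the next cycle as the mean of that phase across all observed cycles. This is a much simpler stand-in for the paper's prediction model, and the trace and period below are invented.

```python
import numpy as np

def predict_periodic_demand(history, period):
    # Average each phase of the cycle over all complete observed
    # periods; assumes the period length is known in advance.
    h = np.asarray(history, dtype=float)
    n = (len(h) // period) * period       # drop any trailing partial period
    cycles = h[:n].reshape(-1, period)
    return cycles.mean(axis=0)            # forecast for one full period

# Hypothetical demand trace with period 4 (e.g. quarter-day samples)
trace = [2, 8, 5, 1,  3, 9, 4, 2,  2, 10, 6, 1]
forecast = predict_periodic_demand(trace, period=4)
```

The embedding metric then uses the per-phase forecast, rather than a single peak value, to model the time-varying load on nodes and inter-data-center links.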
Joint Policy for Virtual Network Embedding in Distributed SDN-Enabled Cloud
Network Virtualization (NV) has been devised as one of the key bases of operative cloud systems. Commonly, Cloud Providers (CPs) seek to design their network policy, especially in a distributed environment. Virtual Network Embedding (VNE) is a functional tool granted by NV technologies that allows CPs to manage their physical resources based on the received Virtual Networks (VNs). This paper focuses on the context where a given Virtual Network Request (VNR) needs to be shared among multiple Data Center Networks (DCNs). The proposed VNE solution executes a two-stage policy: in the first stage, the DCNs and VNRs are managed through a greedy method to solve the assignment problem; then, a greedy load-balancing algorithm accomplishes the VNR mapping stage. The simulation results show that the two proposed methods outperform comparable techniques.
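The second-stage greedy load balancing might be sketched as below, assuming CPU-only node demands: each virtual node, largest demand first, goes to the feasible DCN with the most residual capacity. The data and the tie-breaking rule are illustrative, not taken from the paper.

```python
def greedy_load_balance(vnr_nodes, dcn_capacity):
    # Map each virtual node (name -> CPU demand) to the DCN with the
    # largest residual capacity that can still host it; reject the
    # whole VNR if any node cannot be placed.
    residual = dict(dcn_capacity)
    mapping = {}
    for node, demand in sorted(vnr_nodes.items(), key=lambda kv: -kv[1]):
        feasible = {d: r for d, r in residual.items() if r >= demand}
        if not feasible:
            return None                   # VNR rejected
        target = max(feasible, key=feasible.get)
        mapping[node] = target
        residual[target] -= demand
    return mapping

# Hypothetical VNR and DCN capacities
mapping = greedy_load_balance({"v1": 4, "v2": 3, "v3": 2},
                              {"DCN-A": 5, "DCN-B": 6})
```

Placing the largest demands first is the usual greedy safeguard against fragmenting residual capacity across DCNs.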
VCE-PSO: Virtual Cloud Embedding through a Meta-heuristic Approach
Resource allocation, an integral and continuously evolving part of cloud computing, has attracted many researchers in recent years. However, most current cloud systems treat resource allocation only as the placement of independent virtual machines, ignoring that the performance of a virtual machine also depends on the other virtual machines it cooperates with and on network link utilization, which results in inefficient resource use. In this paper, we propose a novel model, Virtual Cloud Embedding (VCE), to formulate the cloud resource allocation problem. VCE regards each resource request as an integral unit, including its link constraints, rather than as independent virtual machines. To address the VCE problem, we develop a metaheuristic algorithm, VCE-PSO, based on particle swarm optimization, to allocate multiple resources as a unit while considering the heterogeneity of cloud infrastructure and the variety of resource requirements. We exploit problem-specific knowledge, such as the locations of virtual machines and inter-link distance, to measure the fitness of different resource assignments, and use it to define the assignment update operation corresponding to the steps of the particle swarm optimization algorithm. Experiment results demonstrate that VCE-PSO finds resource assignments with a 12% reduction in average link-mapped path length compared with existing greedy algorithms.
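The fitness idea, rewarding assignments that keep communicating virtual machines on short substrate paths, can be sketched as follows. The distance model is a toy assumption and the PSO position-update step itself is omitted; this only illustrates what the swarm would be steered by.

```python
def fitness(assignment, links, dist):
    # Total substrate distance of the paths carrying the virtual
    # links; lower is better. dist[a][b] is the hop count between
    # servers a and b.
    return sum(dist[assignment[u]][assignment[v]] for u, v in links)

# Toy substrate: 3 servers on a line, hop distance = index gap
servers = range(3)
dist = {a: {b: abs(a - b) for b in servers} for a in servers}

links = [("u", "v"), ("v", "w")]      # virtual links of one request
packed = {"u": 0, "v": 0, "w": 1}     # co-located placement
spread = {"u": 0, "v": 2, "w": 1}     # scattered placement
```

Here `fitness(packed, ...)` beats `fitness(spread, ...)`, which is exactly the pressure that shortens the average link-mapped path length.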
2020
The high time needed to reconfigure cloud resources in Network Function Virtualization environments has led to the proposal of solutions in which prediction-based resource allocation is performed. All of them predict traffic or required resources by minimizing symmetric loss functions such as Mean Squared Error. When inevitable prediction errors are made, such methodologies cannot weigh positive and negative prediction errors differently, even though they impact the total network cost differently. If the predicted traffic is higher than the real one, the network operator pays an over-allocation cost, referred to as over-provisioning cost; conversely, in the opposite case, a Quality of Service degradation cost, referred to as under-provisioning cost, is due to compensate the users for the resource under-allocation. In this paper we propose and investigate a resource allocation strategy based on a Long Short Term Memory ...
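The asymmetric weighting of over- and under-provisioning errors described here can be illustrated with a piecewise-linear cost; the coefficient values below are arbitrary assumptions, not the paper's.

```python
def asymmetric_cost(predicted, actual, c_over=1.0, c_under=5.0):
    # Over-prediction pays an over-provisioning price per unused unit;
    # under-prediction pays a (typically higher) QoS-degradation price
    # per missing unit. Unlike MSE, +e and -e errors cost differently.
    err = predicted - actual
    return c_over * err if err >= 0 else -c_under * err

over = asymmetric_cost(12, 10)    # allocated 2 units too many
under = asymmetric_cost(8, 10)    # allocated 2 units too few
```

Training the predictor against such a cost, instead of MSE, biases it toward slight over-provisioning whenever under-provisioning is the more expensive mistake.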