Cost-Efficient NFV-Enabled Mobile Edge-Cloud for Low Latency Mobile Applications

Seamless Support of Low Latency Mobile Applications with NFV-Enabled Mobile Edge-Cloud

5th IEEE International Conference on Cloud Networking (CloudNet), 2016

Emerging mobile multimedia applications, such as augmented reality, have stringent latency requirements and high computational cost. To address this, the mobile edge-cloud (MEC) has been proposed as an approach to bring resources closer to users. In contrast to conventional fixed cloud locations, the advent of network function virtualization (NFV) has, at some added cost due to the necessary decentralization, given MEC new flexibility to place services on any node capable of virtualizing its resources. In this work, we address the questions of how to optimally place resources among NFV-enabled nodes to support mobile multimedia applications with low-latency requirements, and when to adapt the current resource placements to address workload changes. We first show that the placement optimization problem is NP-hard and propose an online dynamic resource allocation scheme that consists of an adaptive greedy heuristic algorithm and a detection mechanism that identifies when the system will no longer be able to satisfy the applications' delay requirement. Our scheme takes into account the effect of existing techniques such as autoscaling and load balancing. We design and implement a realistic simulated NFV-enabled MEC framework and show through extensive simulations that our proposal always manages to allocate sufficient resources in time to guarantee continuous satisfaction of the application latency requirements under changing workload, while incurring up to 40% less cost than existing overprovisioning approaches.
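The adaptive greedy heuristic itself is not detailed in the abstract; the sketch below is only one plausible shape for such a loop, assuming a hypothetical latency estimator `estimate_latency` and per-node activation costs, with all names and fields illustrative rather than taken from the paper.

```python
# Hypothetical greedy placement sketch: repeatedly activate the NFV-enabled
# node with the best latency improvement per unit cost, until the estimated
# delay meets the application's requirement or no candidate nodes remain.

def greedy_placement(nodes, demand, delay_req, estimate_latency):
    """nodes: dict node_id -> {'cost': float, 'capacity': float}
    demand: total workload to be served
    estimate_latency: callable(placement, demand) -> estimated worst-case delay (assumed given)
    """
    placement = {}                      # node_id -> allocated capacity
    remaining = set(nodes)
    while estimate_latency(placement, demand) > delay_req and remaining:
        current = estimate_latency(placement, demand)
        # pick the node with the largest latency reduction per unit cost
        best = max(
            remaining,
            key=lambda n: (current
                           - estimate_latency({**placement, n: nodes[n]['capacity']}, demand))
                          / nodes[n]['cost'],
        )
        placement[best] = nodes[best]['capacity']
        remaining.remove(best)
    return placement
```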

An Energy and Latency Trade-off for Resources Allocation in a MEC System

International Journal of Interactive Mobile Technologies (iJIM)

This paper addresses the issue of efficient resource allocation in a Mobile Edge Computing (MEC) system, taking into account the trade-off between energy consumption and operation latency. The increasing deployment of connected devices and data-intensive services in the Internet of Things (IoT) poses significant challenges in terms of managing computational resources. In this study, we propose a MEC system model that considers energy constraints and the need to minimize latency to ensure optimal performance. We formulate the resource allocation problem in terms of a trade-off between energy consumption and latency, and explore solutions based on heuristic task offloading techniques. Our experiments demonstrate that our approach achieves improved latency performance while reducing energy consumption. We also evaluate the impact of various parameters, such as workload and resource availability, on the energy-latency trade-off.
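The abstract does not give the exact objective; a common way to express such an energy-latency trade-off, shown here only as an illustrative sketch with hypothetical energy and latency models, is a weighted sum that a heuristic offloader minimizes per task.

```python
# Illustrative energy-latency trade-off: for each task, choose the execution
# option (local device or a MEC server) minimizing
#   cost = alpha * energy + (1 - alpha) * latency
# where alpha in [0, 1] weights the trade-off (a hypothetical parameter).

def choose_target(task_cycles, task_bits, options, alpha=0.5):
    """options: list of dicts with 'name', 'cpu_hz', 'tx_rate_bps',
    'energy_per_cycle', 'tx_energy_per_bit' (assumed models, not from the paper).
    Local execution can use tx_rate_bps = float('inf') and tx_energy_per_bit = 0."""
    best = None
    for opt in options:
        latency = task_bits / opt['tx_rate_bps'] + task_cycles / opt['cpu_hz']
        energy = task_bits * opt['tx_energy_per_bit'] + task_cycles * opt['energy_per_cycle']
        cost = alpha * energy + (1 - alpha) * latency
        if best is None or cost < best[0]:
            best = (cost, opt['name'])
    return best[1]
```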

Let's Share the Resource When We're Co-Located: Colocation Edge Computing

Multi-access Edge Computing (MEC) has recently been acknowledged as one of the key pillars of the next revolution in mobile communications, where the convergence of IT and the telecommunications network provides low latency and computation capability at cellular base stations (BSs). This creates a great opportunity for mobile network operators to deploy new services and applications at BSs. Nevertheless, huge capital and operational costs can challenge mobile operators in deploying new BSs and MEC micro-datacenters. Colocation Edge Computing (ColoMEC) is a new concept in which multiple operators share not only the same BS tower but also their radio and computation resources colocated at the edge sites. To reduce the operational cost of a ColoMEC system, the limited bandwidth at over-utilized colocation BSs can be extended by sharing bandwidth among BSs, while shared MEC micro-datacenters can be scaled based on the arriving traffic load. Thus, sharing the BS infrastructure, bandwidth, and MEC micro-datacenters among co-located mobile operators can be an economical way to provide high-performance services at low expense by exploiting temporal and spatial differences in traffic load. To turn this vision into reality, we study a joint bandwidth allocation sharing and MEC micro-datacenter scaling problem for ColoMEC management (ColoMEC-MP). To solve the ColoMEC-MP problem, we propose an algorithm based on the proximal block coordinate descent technique that iteratively solves the decoupled convex subproblems (i.e., user association, bandwidth allocation, and MEC micro-datacenter scaling) with additional proximal terms. To improve the convergence of the proposed algorithm, we propose a greedy initialization of the user association based on each user's link capacity. Our simulations demonstrate the superiority of the algorithm in terms of operational cost compared with strategies that use a fixed service rate for the shared MEC micro-datacenters.
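To make the proximal block coordinate descent idea concrete, the toy sketch below alternates closed-form updates of a "bandwidth" block and a "scaling" block on a stand-in quadratic cost, plus the greedy capacity-based user association used for initialization. The cost function, variable names, and dimensions are all assumptions for illustration, not the paper's model.

```python
import numpy as np

# Toy proximal block coordinate descent (BCD) sketch: a bandwidth block b and a
# datacenter-scaling block s are coupled through a quadratic cost
#   ||A_b b + A_s s - d||^2   (a stand-in objective, not the ColoMEC-MP model).
# Each iteration minimizes the cost over one block plus a proximal term
# (rho/2)||x - x_prev||^2 while the other block is held fixed.

def greedy_user_association(link_capacity):
    """link_capacity[u, b]: capacity of user u on BS b; associate each user greedily."""
    return np.argmax(link_capacity, axis=1)

def proximal_bcd(A_b, A_s, d, rho=1.0, iters=50):
    nb, ns = A_b.shape[1], A_s.shape[1]
    b, s = np.zeros(nb), np.zeros(ns)
    for _ in range(iters):
        # b-update: closed-form minimizer of ||A_b b + A_s s - d||^2 + (rho/2)||b - b_prev||^2
        b = np.linalg.solve(2 * A_b.T @ A_b + rho * np.eye(nb),
                            2 * A_b.T @ (d - A_s @ s) + rho * b)
        # s-update: symmetric closed-form step for the scaling block
        s = np.linalg.solve(2 * A_s.T @ A_s + rho * np.eye(ns),
                            2 * A_s.T @ (d - A_b @ b) + rho * s)
    return b, s
```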

ShareOn: Shared Resource Dynamic Container Migration Framework for Real-Time Support in Mobile Edge Clouds

IEEE Access

Mobile Edge Cloud (MEC) technology is envisioned to play a key role in next generation mobile networks by supporting low-latency applications using geographically distributed local cloud clusters. However, MEC faces challenges of resource assignment and load balancing to support user mobility and latency-sensitive applications. Virtualized resource reallocation techniques including dynamic service migration are evolving to achieve load balance, fault tolerance and system maintenance objectives for resource constrained edge nodes. In this work, a compute and network-aware lightweight resource sharing framework with dynamic container migration, ShareOn, is proposed. The migration framework is validated using a set of heterogeneous edge cloud nodes distributed in San Francisco city, serving mobile taxicab users across that region. The end-to-end system is implemented using a container hypervisor called LXD (Linux Container Hypervisor) executing a real-time application to detect license number plates in automobiles. The system is evaluated based on key metrics associated with application quality-of-service (QoS) and network efficiency such as the average system response time and the migration cost for different combinations of load, compute resources, inter-edge cloud bandwidth, network and user latency. A detailed migration cost analysis enables evaluation of migration strategies to improve ShareOn's performance in comparison to alternative migration techniques, achieving a gain of 15-22% in system response time for highly loaded edge cloud nodes. Index Terms: Quality of experience (QoE), container migration, mobile edge computing, virtualization, real-time applications, edge cloud, migration cost, inter-edge cloud bandwidth.
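ShareOn's actual migration policy is defined in the paper itself; the sketch below only illustrates the kind of compute- and network-aware decision rule such a framework might apply, trading a one-off migration cost against the expected response-time gain. All parameters and the example values are hypothetical.

```python
# Hypothetical migration decision sketch: migrate a container to a candidate
# edge node only if the expected response-time gain over a planning horizon
# outweighs the one-off migration cost (state transfer plus downtime).

def should_migrate(resp_time_current, resp_time_candidate,
                   container_size_bytes, inter_edge_bw_bps,
                   downtime_s, requests_per_s, horizon_s):
    transfer_s = container_size_bytes * 8 / inter_edge_bw_bps
    migration_cost = transfer_s + downtime_s                 # seconds of disruption
    per_request_gain = resp_time_current - resp_time_candidate
    horizon_gain = per_request_gain * requests_per_s * horizon_s
    return horizon_gain > migration_cost

# Example: a 200 MB container over a 100 Mb/s inter-edge link, 2 s downtime,
# a 5 ms per-request gain at 50 req/s over a 5-minute horizon.
print(should_migrate(0.045, 0.040, 200e6, 100e6, 2.0, 50, 300))
```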

Mobile edge cloud architecture for future low-latency applications

2020

Abstract of the dissertation "Mobile Edge Cloud Architecture for Future Low-latency Applications" by Sumit Maheshwari (Dissertation Director: Dipankar Raychaudhuri). This thesis presents the architecture, design, and evaluation of the mobile edge cloud (MEC) system aimed at supporting future low-latency applications. Mobile edge clouds have emerged as a solution for providing low-latency services in future generations (5G and beyond) of mobile networks, which are expected to support a variety of real-time applications such as AR/VR (Augmented/Virtual Reality), autonomous vehicles, and robotics. Conventional cloud computing implemented at distant large-scale data centers incurs irreducible propagation delays on the order of 50-100 ms or more, which may be acceptable for current applications but may not be able to support emerging real-time needs. The edge clouds considered here promise to meet the stringent latency requirements of emerging classes of real-time applications by bringing compute, storage, and...

Orchestration of MEC Computation Jobs and Energy Consumption Challenges in 5G and Beyond

IEEE Access, 2022

The Mobile Edge Computing (MEC) philosophy inspires next-generation mobile networks to provide cloud computing capabilities, in addition to a diverse range of Information Technology (IT) services, with ultra-low latency and higher bandwidth at the edge. One of the most common challenges of 5G-MEC is management and orchestration across all network and infrastructure resources as well as end-to-end quality of experience. The decentralized architecture of MEC, with independent and non-collaborative servers, leads to underutilized servers and wasted energy. Moreover, highly utilized servers with high energy consumption are not only unable to accommodate the full load of computing jobs and cause a dramatic increase in total OPEX cost, but also create environmental problems. Orchestrating server workloads and controlling the offloading of computation jobs is one of the technical advantages of MEC, since it satisfies the increasing requirements of modern mobile applications while optimizing energy consumption and cost. In this work, we consider a cluster-based, energy-aware offloading framework. The proposed design consists of a dual-tier domain divided into clusters of edge servers (ESs). We present simulation results as a proof of concept that the formulated adaptive strategy of solving the optimization problem per cluster reduces energy consumption and enhances quality of experience while conserving the cost of the associated computing and storage resources. Index Terms: MEC, IT, 5G, OPEX (operating expense), edge servers.
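The abstract does not spell out the per-cluster offloading rule; as a loose illustration of energy-aware assignment within a cluster, the sketch below packs jobs onto the busiest edge servers that can still meet each job's deadline, so lightly loaded servers can remain idle. The server model and deadline check are assumptions, not the paper's formulation.

```python
# Hypothetical cluster-level, energy-aware assignment sketch: consolidate jobs
# onto already-loaded edge servers (ESs) that can still meet each deadline,
# keeping the remaining servers idle to save energy.

def assign_jobs(servers, jobs):
    """servers: list of dicts {'name': str, 'free_hz': float} (spare CPU rate);
    jobs: list of dicts {'cycles': float, 'deadline_s': float};
    returns a mapping job index -> server name (infeasible jobs are omitted)."""
    assignment = {}
    for j, job in enumerate(jobs):
        required_hz = job['cycles'] / job['deadline_s']   # minimum rate to meet the deadline
        # consolidation: prefer the feasible server with the least spare capacity
        feasible = [s for s in servers if s['free_hz'] >= required_hz]
        if feasible:
            target = min(feasible, key=lambda s: s['free_hz'])
            target['free_hz'] -= required_hz
            assignment[j] = target['name']
    return assignment
```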

Optimizing NFV placement for distributing micro-data centers in cellular networks

The Journal of Supercomputing, 2021

With the popularity of mobile devices, the next generation of mobile networks faces several challenges. Different applications have emerged, each with different requirements, and offering an infrastructure that meets these diverse requirements is one of these challenges. In addition, due to user mobility, the traffic generated by mobile devices in a given location is not constant, making it difficult to reach an optimal resource allocation. In this context, network function virtualization (NFV) can be used to deploy telecommunication stacks as virtual functions running on commodity hardware to meet user requirements such as performance and availability. However, the deployment of virtual functions can be complex: selecting the best placement strategy that reduces resource usage while preserving the performance and availability of network functions has already been proven to be an NP-hard problem. Therefore, in this paper, we formulate NFV placement as a multi-objective problem in which the risk associated with the placement and the energy consumption are both taken into consideration. We propose the use of two optimization algorithms, NSGA-II and GDE3, to solve this problem; both were chosen because they handle multi-objective problems and show good performance. We consider a triathlon circuit scenario based on real data from the Ironman route as a use case to evaluate and compare the algorithms. The results show that GDE3 is able to address both objectives (minimizing failure and minimizing energy consumption), while NSGA-II prioritizes energy consumption.
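Neither NSGA-II nor GDE3 is reproduced here; as a small illustration of the bi-objective view (placement risk versus energy), the sketch below filters candidate placements down to their Pareto front, which is the dominance comparison both algorithms ultimately rely on. The candidate names and scores are hypothetical.

```python
# Minimal Pareto-front filter for bi-objective NFV placement candidates, each
# scored by (failure_risk, energy); lower is better in both objectives. This
# only illustrates the dominance test used by multi-objective optimizers such
# as NSGA-II and GDE3; it is not either algorithm.

def dominates(a, b):
    """True if candidate a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """candidates: dict placement_name -> (failure_risk, energy)."""
    return {
        name: score
        for name, score in candidates.items()
        if not any(dominates(other, score)
                   for other_name, other in candidates.items() if other_name != name)
    }

# Example with hypothetical scores: "core-heavy" is dominated by "balanced".
print(pareto_front({"edge-heavy": (0.02, 9.1), "balanced": (0.03, 7.5), "core-heavy": (0.05, 8.0)}))
```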

Joint Resource Dimensioning and Placement for Dependable Virtualized Services in Mobile Edge Clouds

IEEE Transactions on Mobile Computing, 2021

Mobile edge computing (MEC) is an emerging architecture for accommodating latency-sensitive virtualized services (VSs). Many of these VSs are expected to be safety critical and will have some form of reliability requirement. In order to support provisioning reliability to such VSs in MEC in an efficient and confidentiality-preserving manner, in this paper we consider the joint resource dimensioning and placement problem for VSs with diverse reliability requirements, with the objective of minimizing energy consumption. We formulate the problem as an integer programming problem and prove that it is NP-hard. We propose a two-step approximation algorithm with a bounded approximation ratio based on Lagrangian relaxation. We benchmark our algorithm against two greedy algorithms in realistic scenarios. The results show that the proposed solution is computationally efficient and scalable, and can provide up to a 30% reduction in energy consumption compared to the greedy algorithms.
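The paper's two-step approximation algorithm and its approximation-ratio analysis are not reproduced here; the sketch below only shows the generic Lagrangian-relaxation pattern it builds on: relax the node capacity constraints into the objective with multipliers, solve the decomposed placement subproblems, and update the multipliers by a subgradient step. The cost model and parameters are assumptions for illustration.

```python
# Toy Lagrangian relaxation sketch for energy-minimizing placement: each VS is
# placed on the node minimizing energy + lambda_n * demand (the relaxed
# capacity price), and the multipliers are updated by a subgradient step on
# the capacity violations. A generic illustration, not the paper's algorithm.

def lagrangian_placement(energy, demand, capacity, iters=100, step=0.05):
    """energy[v][n]: energy of serving VS v on node n; demand[v]: VS demand;
    capacity[n]: node capacity."""
    n_nodes = len(capacity)
    lam = [0.0] * n_nodes                       # capacity multipliers (prices)
    placement = {}
    for _ in range(iters):
        # with prices fixed, the relaxed problem decomposes per VS
        placement = {
            v: min(range(n_nodes), key=lambda n: energy[v][n] + lam[n] * demand[v])
            for v in range(len(demand))
        }
        # subgradient update of the prices based on capacity violations
        load = [0.0] * n_nodes
        for v, n in placement.items():
            load[n] += demand[v]
        lam = [max(0.0, lam[n] + step * (load[n] - capacity[n])) for n in range(n_nodes)]
    return placement, lam
```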

Multi-Objective Mobile Edge Provisioning in Small Cell Clouds

2019

In recent years, Mobile Cloud Computing (MCC) has been proposed as a solution to enhance the capabilities of user equipment (UE) such as smartphones, tablets, and laptops. However, offloading to a conventional cloud introduces significant execution delays that are unacceptable for near real-time applications. Mobile Edge Computing (MEC) has been proposed as a solution to this problem: MEC brings computational and storage resources closer to the UE, enabling near real-time applications to be offloaded from the UE while meeting strict latency requirements. However, it is very difficult for edge providers to determine how many edge nodes are required to provide MEC services in order to guarantee a high QoS and to maximize their profit. In this paper, we investigate the static provisioning of edge nodes in an area representing a cellular network in order to guarantee the required QoS to the user without affecting the provider's profit. First, we design a model for MEC offloading consid...
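The paper's provisioning model is only partially visible in this truncated abstract; the sketch below illustrates the basic tension it describes by searching for the smallest number of edge nodes that keeps a mean offloading delay under a QoS target and reporting the resulting provider profit, using an M/M/1 response-time estimate with evenly split load. The queueing model, revenue, and cost parameters are simplifying assumptions, not the paper's formulation.

```python
# Provisioning sketch: find the smallest edge-node count that meets the QoS
# delay target, modeling each node as an M/M/1 queue with the offered load
# split evenly (a simplifying assumption), then report the provider profit.

def provision_edge_nodes(total_rate, service_rate, delay_target,
                         revenue_per_req, node_cost_per_s, max_nodes=64):
    for n in range(1, max_nodes + 1):
        per_node_rate = total_rate / n
        if per_node_rate >= service_rate:
            continue                                         # node would be unstable
        mean_delay = 1.0 / (service_rate - per_node_rate)    # M/M/1 mean response time
        if mean_delay <= delay_target:
            profit_per_s = total_rate * revenue_per_req - n * node_cost_per_s
            return n, mean_delay, profit_per_s
    return None

# Example: 400 req/s in total, 60 req/s service rate per node, 50 ms target.
print(provision_edge_nodes(400, 60, 0.050, 0.001, 0.02))
```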

Location-aware Resource Allocation in Mobile Edge Clouds

2021

Over the last decade, cloud computing has realized the long-held dream of computing as a utility, in which computational and storage services are made available via the Internet to anyone at any time and from anywhere. This has transformed Information Technology (IT) and given rise to new ways of designing and purchasing hardware and software. However, the rapid development of the Internet of Things (IoT) and mobile technology has brought a new wave of disruptive applications and services whose performance requirements are stretching the limits of current cloud computing systems and platforms. In particular, novel large-scale mission-critical IoT systems and latency-intolerant applications strictly require very low latency and strong guarantees of privacy, and can generate massive amounts of data that are only of local interest. These requirements are not readily satisfied using modern application deployment strategies that rely on resources from distant large cloud datacenters bec...