RELIABLE: Resource Allocation Mechanism for 5G Network using Mobile Edge Computing

Dynamic Allocation of Processing Resources in Cloud-RAN for a Virtualised 5G Mobile Network

2018 26th European Signal Processing Conference (EUSIPCO), 2018

One of the main research directions for 5G mobile networks is resource virtualisation and slicing. Towards this goal, the Cloud Radio Access Network (C-RAN) architecture offers mobile operators a flexible and dynamic framework for managing resources and processing data. This paper proposes a dynamic allocation approach for processing resources in a C-RAN supported by the concept of Network Function Virtualisation (NFV). To achieve this objective, we virtualised the Baseband Unit (BBU) resources of a Long Term Evolution (LTE) mobile network into a BBU pool supported by Linux Container (LXC) technology. We report on experiments conducted in the Iris testbed with high-definition video streaming by implementing Software-Defined Radio (SDR)-based LTE functionality with the virtualised BBU pool. Our results show a significant improvement in the quality of the video transmission with this dynamic allocation approach.
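
The dynamic-allocation idea behind this paper can be sketched as a load-proportional split of CPU shares across baseband containers in the BBU pool. This is an illustrative sketch only, not the paper's Iris-testbed implementation; the function name, share totals, and floor value are assumptions.

```python
# Hypothetical threshold-free dynamic allocator for a virtualised BBU
# pool: each scheduling interval, CPU shares shift toward the busiest
# baseband container, while every container keeps a minimum floor so
# idle cells stay responsive. Names and numbers are illustrative.

def reallocate(loads, total_shares=100, floor=10):
    """Split total_shares across BBU containers in proportion to load,
    guaranteeing each container at least `floor` shares."""
    n = len(loads)
    spare = total_shares - floor * n
    total_load = sum(loads) or 1  # avoid division by zero when all idle
    return [floor + spare * load / total_load for load in loads]

# One heavily loaded cell, one moderate, one idle:
shares = reallocate([0.9, 0.3, 0.0])
```

A real LXC-based pool would then apply these shares via cgroup CPU weights; the proportional rule here simply captures the shape of the policy.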

Edge Computing in 5G: A Review

Special Section on Mobile Edge Computing and Mobile Cloud Computing: Addressing Heterogeneity and Energy Issues of Compute and Network Resources

5G is the next generation cellular network that aspires to achieve substantial improvement on quality of service, such as higher throughput and lower latency. Edge computing is an emerging technology that enables the evolution to 5G by bringing cloud capabilities near to the end users (or user equipment, UEs) in order to overcome the intrinsic problems of the traditional cloud, such as high latency and the lack of security. In this paper, we establish a taxonomy of edge computing in 5G, which gives an overview of existing state-of-the-art solutions of edge computing in 5G on the basis of objectives, computational platforms, attributes, 5G functions, performance measures, and roles. We also present other important aspects, including the key requirements for its successful deployment in 5G and the applications of edge computing in 5G. Then, we explore, highlight, and categorize recent advancements in edge computing for 5G. By doing so, we reveal the salient features of different edge computing paradigms for 5G. Finally, open research issues are outlined.

A Hybrid Resource Allocation Approach for 5G IoT Applications

Journal of Engineering Research, 2021

The 5G cellular network is expected to sustain various QoS (Quality of Service) requirements and provide customers with multiple services based on their needs. Implementing 5G networks in an IoT (Internet of Things) infrastructure can help serve the requirements of IoT devices up to 100 times faster and more efficiently. This objective can be accomplished by applying the network slicing approach, which partitions a single physical infrastructure into multiple virtual resources that can be distributed among different devices independently. This paper merges the benefits of both static allocation and network slicing to propose a mechanism that can allocate resources efficiently among multiple customers. The allocation mechanism is based on a pre-defined policy between the slice provider and the customer that specifies the attributes to be computed before any allocation process.
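
The hybrid of static allocation and slicing described above can be sketched as a two-step policy: each slice first receives its statically guaranteed quota, then leftover capacity is shared dynamically in proportion to residual demand. The quota values, slice names, and capping rule are assumptions for illustration, not the paper's exact mechanism.

```python
# Hedged sketch of a hybrid static + dynamic slice allocator.
# Step 1: grant each slice its pre-agreed (static) guarantee, capped
#         at its actual demand.
# Step 2: share leftover capacity among slices still short of demand,
#         in proportion to their remaining (residual) demand.

def hybrid_allocate(capacity, guaranteed, demand):
    """guaranteed/demand: dicts keyed by slice id; returns allocation."""
    alloc = {s: min(guaranteed[s], demand[s]) for s in demand}
    leftover = capacity - sum(alloc.values())
    residual = {s: demand[s] - alloc[s] for s in demand}
    total_residual = sum(residual.values())
    if leftover > 0 and total_residual > 0:
        for s in demand:
            # Never allocate past a slice's demand.
            alloc[s] += min(residual[s], leftover * residual[s] / total_residual)
    return alloc

plan = hybrid_allocate(100, {"embb": 30, "iot": 20}, {"embb": 50, "iot": 40})
```

This keeps the pre-defined policy (the guarantees) authoritative while still using spare capacity dynamically, which is the core of the hybrid approach.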

Cost-Effective Resource Allocation for Multitier Mobile Edge Computing in 5G Mobile Networks

IEEE Access, 2021

Mobile edge computing (MEC) is currently one of the key technologies that can facilitate the evolution of the future digitized economy. MEC can provide ubiquitous computational capabilities through the multitier deployment of servers to ensure lower latencies and tighter integration with 5G, the Internet of Things, blockchains and artificial intelligence. In this paper, we propose a new approach to optimizing hardware resource allocation for edge nodes in a multitier MEC hierarchy. In addition to a centralized unit, we consider active antenna units and distributed units equipped with edge nodes of different computational capacities. A parametric Bayesian optimizer is implemented for hardware resource allocation to increase the overall computational capacity of a 5G-based MEC system. Simulation results show that for given budget constraints, the proposed solution outperforms pseudorandom resource allocation in terms of the proportion of computational tasks completed. The achievable gains are in the range of 20–40%, depending on the task complexity and selected budget threshold.
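
A full parametric Bayesian optimizer is beyond a short sketch, but the decision space it searches can be illustrated with plain random search: choose per-tier edge-node capacities under a budget so as to maximise the fraction of tasks completed. The costs, capacity ranges, and task load below are invented for illustration and the search method is a deliberate stand-in, not the paper's optimizer.

```python
import random

# Random-search stand-in for a Bayesian optimizer over a three-tier
# MEC capacity allocation. Each tier (e.g. antenna unit, distributed
# unit, centralized unit) has a per-unit capacity cost; the objective
# is the number of offered tasks the chosen capacities can complete.

def tasks_completed(caps, task_load=(40, 30, 30)):
    # Each tier completes at most its capacity of the tasks offered to it.
    return sum(min(c, t) for c, t in zip(caps, task_load))

def random_search(budget, unit_cost=(1.0, 2.0, 4.0), iters=2000, seed=0):
    rng = random.Random(seed)
    best_done, best_caps = -1, None
    for _ in range(iters):
        caps = [rng.randint(0, 50) for _ in unit_cost]
        cost = sum(c * u for c, u in zip(caps, unit_cost))
        if cost <= budget and tasks_completed(caps) > best_done:
            best_done, best_caps = tasks_completed(caps), caps
    return best_done, best_caps

done, caps = random_search(budget=150)
```

A Bayesian optimizer would explore the same feasible region far more sample-efficiently by modelling the objective; the sketch only fixes what is being optimised and under which constraint.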

Exploiting Virtual Machine Commonality for Improved Resource Allocation in Edge Networks

Journal of Sensor and Actuator Networks, 2020

5G systems are putting increasing pressure on Telecom operators to enhance users’ experience, leading to the development of more techniques with the aim of improving service quality. However, it is essential to take into consideration not only users’ demands but also service providers’ interests. In this work, we explore policies that satisfy both views. We first formulate a mathematical model to compute the End-to-End (E2E) delay experienced by mobile users in Multi-access Edge Computing (MEC) environments. Then, dynamic Virtual Machine (VM) allocation policies are presented, with the objective of satisfying mobile users’ Quality of Service (QoS) requirements while optimally using the cloud resources by exploiting VM resource reuse. Thus, maximizing the service providers’ profit should be ensured while providing the service required by users. We further demonstrate the benefits of these policies in comparison with previous works.
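
The VM-reuse idea at the heart of these policies can be sketched as: when a new service request arrives, reuse a compatible already-running VM instead of booting a new one, which avoids the startup delay counted in the E2E model. The matching rule used here (same image plus spare capacity) and all names are assumptions for illustration.

```python
# Hedged sketch of VM commonality exploitation at an edge site: requests
# for the same VM image share a running instance while it has spare
# slots; only when no compatible instance fits is a new VM booted.

class EdgePool:
    def __init__(self):
        self.vms = []  # each VM: {"image": str, "used": int, "size": int}

    def place(self, image, slots=1, vm_size=4):
        for vm in self.vms:
            if vm["image"] == image and vm["used"] + slots <= vm["size"]:
                vm["used"] += slots       # reuse: no boot delay incurred
                return "reused"
        self.vms.append({"image": image, "used": slots, "size": vm_size})
        return "booted"                   # cold start: boot delay incurred

pool = EdgePool()
first = pool.place("ar-app")   # no instance yet -> boots a VM
second = pool.place("ar-app")  # same image, spare slots -> reused
```

A real policy would additionally weigh placement against the E2E delay model (user-to-site distance, load), but the reuse test above is the mechanism that lets providers serve more users per VM.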

NPRA: Novel Policy Framework for Resource Allocation in 5G Software Defined Networks

ICST Transactions on Mobile Communications and Applications

In cellular networks, physical resources are always limited, especially when shared among different contributors such as mobile network operators (MNOs) or mobile virtual network operators (MVNOs). Software Defined Networking (SDN) and Network Function Virtualization (NFV) are current research areas: SDN-based cellular networks provide high Quality of Service (QoS) to the end user, while NFV provides isolation. The sharing of resources is often provided by leveraging virtualization, and SDN can generate new forwarding rules and policies for dynamic routing decisions based on traffic classification. However, virtualization in cellular networks is still in its infancy, and many issues and challenges remain unaddressed. In particular, the queue-length problem for providing QoS in cellular networks requires attention, since queue management needs separate protocols for the fair allocation of resources. In this research paper, we propose a novel framework for resource allocation and bandwidth management in the 5G cellular network. We use two levels of virtualization: dynamic resource optimization is implemented at the network slice manager, and the optimized policies are executed at the wireless virtual manager.
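
The fair-allocation step such a framework needs can be illustrated with classic max-min fairness; this is an assumed policy for the sketch, not necessarily the paper's exact one. Max-min fairness repeatedly satisfies the smallest unmet demand in full and splits the remaining bandwidth evenly among the rest.

```python
# Max-min fair bandwidth allocation across queues/slices: demands
# smaller than the equal share are fully satisfied; the freed-up
# capacity is redistributed among the remaining, larger demands.

def max_min_fair(capacity, demands):
    alloc = [0.0] * len(demands)
    pending = sorted(range(len(demands)), key=lambda i: demands[i])
    remaining = float(capacity)
    while pending:
        share = remaining / len(pending)
        i = pending[0]
        if demands[i] <= share:
            alloc[i] = demands[i]     # small demand: satisfy fully
            remaining -= demands[i]
            pending.pop(0)
        else:
            for j in pending:         # everyone left gets an equal share
                alloc[j] = share
            break
    return alloc

# Capacity 10 shared by demands 2, 8, 8 -> [2, 4, 4]
allocation = max_min_fair(10, [2, 8, 8])
```

No slice can gain without a smaller-demand slice losing, which is the usual notion of fairness a slice manager would enforce before handing policies down to the wireless virtual manager.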

Three-Tier Capacity and Traffic Allocation for Core, Edges, and Devices for Mobile Edge Computing

IEEE Transactions on Network and Service Management, 2018

In order to satisfy the 5G requirement of ultra-low latency, a mobile edge computing (MEC)-based architecture, composed of three tiers of nodes (core, edges, and devices), is proposed. In MEC-based architectures, previous studies focused on the control-plane issue, i.e., how to allocate traffic to be processed at different nodes to meet this ultra-low latency requirement. Also important is how to allocate capacity to different nodes in the management plane so as to establish a minimal-capacity network. The objectives of this paper are to solve two problems: 1) to allocate the capacity of all nodes in the MEC-based architecture so as to provide a minimal-capacity network and 2) to allocate the traffic to satisfy the latency percentage constraint, i.e., at least a given percentage of traffic satisfies the latency constraint. To achieve these objectives, a two-phase iterative optimization (TPIO) method is proposed to optimize capacity and traffic allocation in the MEC-based architecture. TPIO iteratively uses two phases to adjust capacity and traffic allocation respectively, because they are tightly coupled. In the first phase, queuing theory is used to calculate the optimal traffic allocation under a fixed allocated capacity, while in the second phase, the allocated capacity is further reduced under fixed traffic allocation while still satisfying the latency percentage constraint. Simulation results show that the MEC-based architecture can save about 20.7% of the capacity of a two-tier architecture. Further, an extra 12.2% capacity must be forfeited when the percentage of traffic satisfying the latency constraint is raised from 50% to 90%.
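
The two-phase loop can be sketched with the standard M/M/1 sojourn-time formula W = 1/(mu - lambda). The traffic-split rule (proportional to capacity) and the multiplicative capacity shrink below are simplifications chosen for the sketch, not the paper's exact TPIO formulation.

```python
# Simplified TPIO-style loop: alternate between (1) splitting traffic
# across nodes given their capacities, and (2) shrinking capacity while
# the M/M/1 delay at every node still meets the latency bound.

def mm1_delay(lam, mu):
    """Mean M/M/1 sojourn time; infinite if the node is overloaded."""
    return float("inf") if lam >= mu else 1.0 / (mu - lam)

def tpio(total_traffic, caps, latency_bound, steps=200, shrink=0.99):
    caps = list(caps)
    best = None
    for _ in range(steps):
        # Phase 1: split traffic in proportion to capacity (heuristic).
        lams = [total_traffic * c / sum(caps) for c in caps]
        if all(mm1_delay(l, c) <= latency_bound for l, c in zip(lams, caps)):
            best = list(caps)          # feasible: remember it...
            caps = [c * shrink for c in caps]  # Phase 2: try less capacity
        else:
            break                      # infeasible: keep last feasible point
    return best

minimal_caps = tpio(total_traffic=10, caps=[8, 8], latency_bound=1.0)
```

The loop terminates at (approximately) the smallest capacities for which every node's queueing delay still meets the bound, which is the "minimal-capacity network" objective.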

Joint User Association and VNF Placement for Latency Sensitive Applications in 5G Networks

2019 IEEE 8th International Conference on Cloud Networking (CloudNet), 2019

With the advent of 5G systems, telecommunication service providers (TSPs) have been facing a tremendous transition driven by the raised expectations of supporting billions of IoT devices and an unprecedented amount of generated data. This revolutionary transformation necessitates innovative approaches such as multi-access edge computing (MEC) to meet the requirements of many novel applications in terms of their high data rate and low latency. The idea behind MEC is to move data, virtualization, and processing capabilities from central data centers to the edge of the network. However, resources at the network edge are very scarce and costly to provision. Therefore, TSPs have to make smart decisions on how to utilize the network resources so as to ensure that user service requirements (e.g., data rate, latency) are satisfied while the network resources are used most efficiently. In this paper, we study the problem of joint user association, VNF placement, and resource allocation using mixed-integer linear programming (MILP). The objectives of the formulations are to minimize (i) the service provisioning cost, (ii) the number of VNF instances, and (iii) the transport network utilization, with the overarching goal of drawing a comparison between these different approaches.
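
The structure of the user-association subproblem can be shown without an MILP solver by brute-forcing a tiny instance: assign each user to one edge site, minimising total provisioning cost subject to per-site capacity. The costs and capacities are invented, and exhaustive search is a deliberate stand-in for the paper's MILP formulation.

```python
from itertools import product

# Brute-force stand-in for a user-association MILP: enumerate every
# assignment of users to sites, discard capacity-violating ones, and
# keep the cheapest feasible assignment.

def assign_users(n_users, site_cost, site_cap):
    """site_cost[s]: cost of serving one user at site s;
    site_cap[s]: max users site s can host."""
    best_cost, best_assign = float("inf"), None
    for assign in product(range(len(site_cost)), repeat=n_users):
        load = [assign.count(s) for s in range(len(site_cost))]
        if any(l > c for l, c in zip(load, site_cap)):
            continue  # violates a site's capacity constraint
        cost = sum(site_cost[s] for s in assign)
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_cost, best_assign

# 3 users, a cheap site with capacity 2 and a pricier site with capacity 3:
cost, assignment = assign_users(3, [1.0, 2.0], [2, 3])
```

An MILP solver explores the same feasible set with branch-and-bound instead of enumeration, and the real formulation would add VNF-instance and transport-utilization terms to the objective.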

Optimization of Resource Management for NFV-Enabled IoT Systems in Edge Cloud Computing

IEEE ACCESS, 2020

The Internet of Things (IoT) has been envisioned as an enabler of the digital transformation that can enhance different features of people's daily lives, such as healthcare, home automation, and smart transportation. The vast amount of data generated by a massive number of devices in an IoT system could lead to severe performance problems. Edge cloud computing and network function virtualization (NFV) technologies are potential approaches to improve the efficiency of resource use and the flexibility of responsive services in an IoT system. In this paper, we consider the joint optimization problem of gateway placement and multihop routing in the IoT layer, and the problem of service placement in the edge and cloud layers, of an NFV-enabled IoT system in edge cloud computing (NIoT). We propose three optimization models (i.e., GMO, SP1O, SP2O) that allow an IoT service provider to find the optimal deployment of gateways, the optimal resource allocation for service functions, and the optimal routing according to a cost function with a performance constraint in a NIoT system. We then develop three approximation algorithms (i.e., GMA, SP1A, SP2A) for tackling the problems in a large-scale NIoT system. The evaluation results under a set of scenarios with various topologies and parameters show that the approximation algorithms can obtain results close to the optimal solution with a significant reduction in computation time. We also derive new insights into the strategy for an IoT provider to optimize its objectives. Specifically, the results suggest that an IoT provider should select an appropriate service placement strategy with regard to a charging agreement with an NFV infrastructure provider, and deploy only service functions with a strict delay requirement at the edge of the network in order to optimize its cost.
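
The closing insight, placing only delay-strict functions at the (more expensive) edge, can be sketched as a simple feasibility-then-cost rule. The delay and placement values below are assumptions for illustration, not figures from the paper's models.

```python
# Hedged sketch of cost-optimal service placement under delay bounds:
# a function goes to the edge only when the cloud cannot meet its
# delay requirement; otherwise the cheaper cloud is preferred.

def place_services(services, edge_delay=5, cloud_delay=50):
    """services: list of (name, max_delay_ms) tuples.
    Assumes edge_delay < cloud_delay and edge hosting costs more."""
    plan = {}
    for name, max_delay in services:
        if max_delay < cloud_delay:
            plan[name] = "edge"    # cloud would violate the delay bound
        else:
            plan[name] = "cloud"   # feasible in the cloud, and cheaper
    return plan

plan = place_services([("intrusion-alarm", 10), ("batch-analytics", 200)])
```

The paper's SP1O/SP2O models generalise this rule by optimising placement jointly with routing and a charging agreement, but the edge-only-when-necessary structure of the optimum is the same.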