Smart Application Division and Time Allocation Policy for Computational Offloading in Wireless Powered Mobile Edge Computing
Related papers
Sensors, 2021
In mobile edge computing (MEC), partial computational offloading can be intelligently investigated to reduce the energy consumption and service delay of user equipment (UE) by dividing a single task into different components. Some of the components execute locally on the UE while the remaining ones are offloaded to a mobile edge server (MES). In this paper, we investigate the partial offloading technique in MEC using a supervised deep learning approach. The proposed technique, the comprehensive and energy-efficient deep learning-based offloading technique (CEDOT), intelligently selects the partial offloading policy as well as the size of each component of a task to reduce the service delay and energy consumption of UEs. We use deep learning to find, simultaneously, the best partitioning of a single task and the best offloading policy. The deep neural network (DNN) is trained on a comprehensive dataset, generated from our mathematical model, which reduces the time delay and energy consump...
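The partial-offloading trade-off described above can be illustrated with a toy model: a task of C CPU cycles is split into a local part and an offloaded part, and the split minimizing a weighted sum of delay and energy is chosen by brute force. This is a minimal sketch under assumed parameter values, not the paper's actual model; all names (`f_local`, `f_mes`, `bits_per_cycle`, etc.) and numbers are illustrative.

```python
# Hypothetical partial-offloading cost: split `cycles` between the UE
# and the MES, then pick the split minimizing w_time*delay + w_energy*energy.
# All parameters are illustrative placeholders, not from the paper.

def cost(local_fraction, cycles=1e9, f_local=1e9, f_mes=10e9,
         rate=1e6, bits_per_cycle=1e-3, kappa=1e-27,
         p_tx=0.5, w_time=0.5, w_energy=0.5):
    local_cycles = local_fraction * cycles
    off_cycles = cycles - local_cycles
    t_local = local_cycles / f_local
    e_local = kappa * f_local**2 * local_cycles   # common dynamic CPU energy model
    off_bits = off_cycles * bits_per_cycle
    t_tx = off_bits / rate                        # uplink transmission time
    t_mes = off_cycles / f_mes                    # remote execution time
    e_tx = p_tx * t_tx                            # transmission energy
    # local and remote branches run in parallel
    delay = max(t_local, t_tx + t_mes)
    energy = e_local + e_tx
    return w_time * delay + w_energy * energy

# brute-force search over split fractions in steps of 0.01
best = min(range(0, 101), key=lambda p: cost(p / 100))
best_fraction = best / 100
```

In this toy setting the optimum lies strictly between full local execution and full offloading, which is exactly the regime where learning the split (as CEDOT does) pays off.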
A Deep Learning Approach for Energy Efficient Computational Offloading in Mobile Edge Computing
IEEE Access, 2019
Mobile edge computing (MEC) has shown tremendous potential for supporting computationally intensive mobile applications by partially or entirely offloading computations to a nearby server to minimize the energy consumption of user equipment (UE). However, selecting an optimal set of components to offload, considering the amount of data transfer as well as the latency in communication, is a complex problem. In this paper, we propose a novel energy-efficient deep learning-based offloading scheme (EEDOS) to train a deep learning-based smart decision-making algorithm that selects an optimal set of application components based on the remaining energy of UEs, energy consumption by application components, network conditions, computational load, amount of data transfer, and delays in communication. We formulate a cost function involving all the aforementioned factors, obtain the cost for all possible combinations of component offloading policies, select the optimal policies over an exhaustive dataset, and train a deep learning network as an alternative to the extensive computations involved. Simulation results show that our proposed model is promising in terms of accuracy and the energy consumption of UEs. INDEX TERMS Computational offloading, deep learning, energy efficient offloading, mobile edge computing, user equipment.
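The exhaustive step described above — costing every binary offloading policy over N components — can be sketched as follows. This is an assumed toy cost model, not EEDOS itself; in the paper, the labels such a search produces would then train the DNN that replaces the search.

```python
from itertools import product

# Illustrative sketch (not the paper's exact model): enumerate every
# binary offloading policy over N application components and keep the
# cheapest under a delay+energy cost. All values are invented.

def policy_cost(policy, cycles, data_bits,
                f_local=1e9, f_server=5e9, rate=2e6,
                kappa=1e-27, p_tx=0.4, w=(0.5, 0.5)):
    t = e = 0.0
    for off, c, b in zip(policy, cycles, data_bits):
        if off:                       # component offloaded
            t_tx = b / rate
            t += t_tx + c / f_server
            e += p_tx * t_tx
        else:                         # component executed locally
            t += c / f_local
            e += kappa * f_local**2 * c
    return w[0] * t + w[1] * e

cycles = [2e8, 5e8, 1e8, 3e8]         # per-component CPU cycles
bits = [4e5, 1e5, 8e5, 2e5]           # per-component data to transfer
best_policy = min(product((0, 1), repeat=4),
                  key=lambda p: policy_cost(p, cycles, bits))
```

The search is O(2^N), which is precisely why the paper trains a network to approximate it.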
Wireless Powered Mobile Edge Computing Systems: Simultaneous Time Allocation and Offloading Policies
2021
To compensate for the limited computational power and battery capacity of mobile devices (MDs), wireless powered mobile edge computing (MEC) systems are gaining much importance. In this paper, we consider a wireless powered MEC system composed of one MD and a hybrid access point (HAP) attached to an MEC server. Our objective is to obtain a joint time allocation and offloading policy simultaneously. We propose a cost function that considers both the energy consumption and the time delay of an MD. The proposed algorithm, joint time allocation and offloading policy (JTAOP), is used to train a neural network to reduce the complexity of the algorithm, which depends on the time resolution and the number of components in a task. The numerical results are compared with three benchmark schemes, namely total local computation, total offloading, and partial offloading. Simulations show that the proposed algorithm performs better in producing the minimum cost and energy consumption as compared to the consider...
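The coupling between the two decisions above (how long to harvest energy from the HAP, and how much of the task to offload) can be shown with a small grid search over both variables. This is a hedged stand-in for JTAOP under an assumed linear energy-harvesting model; every parameter (`eta`, `p_harvest`, rates, weights) is an invented placeholder.

```python
# Toy joint search: tau = fraction of the frame spent harvesting,
# alpha = fraction of the task offloaded. A split is infeasible if it
# needs more energy than was harvested. All numbers are illustrative.

def jtaop_cost(tau, alpha, T=1.0, p_harvest=1.0, eta=0.6,
               cycles=5e8, f_local=1e9, kappa=1e-27,
               rate=1e6, bits_per_cycle=1e-3, p_tx=0.3,
               w_time=0.5, w_energy=0.5):
    harvested = eta * p_harvest * tau * T          # linear harvesting model
    local_cycles = (1 - alpha) * cycles
    t_local = local_cycles / f_local
    e_local = kappa * f_local**2 * local_cycles
    t_tx = alpha * cycles * bits_per_cycle / rate
    e_tx = p_tx * t_tx
    if e_local + e_tx > harvested:                 # not enough harvested energy
        return float("inf")
    delay = tau * T + max(t_local, t_tx)           # harvest, then compute/transmit
    return w_time * delay + w_energy * (e_local + e_tx)

grid = [(t / 20, a / 20) for t in range(1, 20) for a in range(21)]
tau_best, alpha_best = min(grid, key=lambda x: jtaop_cost(*x))
```

The grid's resolution is exactly the "resolution of time" the abstract cites as a complexity driver, motivating the neural-network approximation.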
A Hybrid Artificial Neural Network for Task Offloading in Mobile Edge Computing
2022 IEEE 65th International Midwest Symposium on Circuits and Systems (MWSCAS)
Edge Computing (EC) remodels the way data is handled, processed, and delivered within a vast heterogeneous network. One of the fundamental concepts of EC is to push data processing toward the edge by exploiting front-end devices with powerful computation capabilities, thus limiting the use of centralized architectures, such as cloud computing, to only when necessary. This paper proposes a novel edge-computer offloading technique that assigns computational tasks generated by devices to potential edge computers with sufficient computational resources. The proposed approach clusters the edge computers based on their hardware specifications. Afterwards, the tasks generated by devices are fed to a hybrid Artificial Neural Network (ANN) model that predicts, based on these tasks, the profiles, i.e., features, of the edge computers with enough computational resources to execute them. The predicted edge computers are then assigned to the cluster they belong to, so that each task is assigned to a cluster of edge computers. Finally, for each task we choose the edge computer that is expected to provide the fastest response time. The experimental results show that our proposed approach outperforms other state-of-the-art machine learning approaches on a real-world IoT dataset. Index Terms-Internet of Things (IoT), machine learning, edge computing, resource allocation, task offloading.
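The cluster-then-select pipeline above can be sketched with the standard library, with a nearest-centroid lookup standing in for the hybrid ANN. Everything here (edge names, specs, clusters, the load-discounted response-time estimate) is an invented illustration of the flow, not the paper's model.

```python
# Sketch of the pipeline: cluster edge computers by hardware specs,
# map a task to the cluster whose spec profile fits it, then pick the
# member with the lowest estimated response time. All numbers invented.

edges = {              # name -> (cpu GHz, ram GB, current load 0..1)
    "edge-a": (1.0, 2, 0.2), "edge-b": (1.2, 2, 0.7),
    "edge-c": (3.0, 8, 0.5), "edge-d": (3.2, 8, 0.1),
}
clusters = {"small": ["edge-a", "edge-b"], "large": ["edge-c", "edge-d"]}
centroids = {"small": (1.1, 2), "large": (3.1, 8)}

def required_profile(task_cycles, task_mem):
    # stand-in for the ANN: predict the spec profile the task needs
    return (task_cycles / 1e9, task_mem)

def assign(task_cycles, task_mem):
    need = required_profile(task_cycles, task_mem)
    cluster = min(centroids, key=lambda c: sum(
        (a - b) ** 2 for a, b in zip(centroids[c], need)))
    # within the cluster, pick the fastest expected responder,
    # discounting each node's speed by its current load
    return min(clusters[cluster],
               key=lambda n: task_cycles / (edges[n][0] * 1e9 * (1 - edges[n][2])))

chosen = assign(3e9, 6)   # a heavy task lands on a lightly loaded big node
```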
A Deep Learning Approach for Task Offloading in Multi-UAV Aided Mobile Edge Computing
IEEE Access
Computation offloading has proven to be an effective method for facilitating resource-intensive tasks on IoT mobile edge nodes with limited processing capabilities. Additionally, in the context of Mobile Edge Computing (MEC) systems, edge nodes can offload their computation-intensive tasks to a suitable edge server, thereby reducing energy cost and speeding up processing. Despite the numerous efforts devoted to task offloading on the Internet of Things (IoT), the problem remains a research gap, mainly because of its NP-hardness and the unrealistic assumptions in many proposed solutions. Deep Learning (DL) is a promising method for accurately extracting information from raw sensor data collected from IoT devices deployed in complicated contexts. Therefore, in this paper, an approach based on Deep Reinforcement Learning (DRL) is presented to optimize the offloading process for IoT in MEC environments. This approach can achieve the optimal offloading decision. A Markov Decision Process (MDP) is used to formulate the offloading problem. Delay time and consumed energy are the main optimization targets in this work. The proposed approach has been verified using extensive simulations. Simulation results demonstrate that the proposed model can effectively improve the MEC system's latency and energy consumption, and significantly outperforms the Deep Q Network (DQN) and Actor Critic (AC) approaches. INDEX TERMS Deep learning, deep reinforcement learning, Internet of Things, mobile edge computing, task offloading. I. INTRODUCTION The 5G-era networks have been realized based on networking technologies, innovations, and new computing and communication paradigms [1]. Mobile Edge Computing (MEC) is one of the key technologies for computation distribution that boosts the performance of 5G cellular networks [2]. The main role of MEC is the minimization of communication latency between the user and the server. This behavior is of great importance for Internet of Things (IoT) environments. IoT has become an important area of research due to its rapid adoption in our daily lives and in industry. It therefore faces numerous challenges, including latency reduction, storage management, energy consumption, task offloading, etc. [3].
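The MDP framing above can be illustrated with tabular Q-learning on a tiny two-state, two-action offloading problem. This is a deliberately small stand-in for the paper's deep RL agent; the states, rewards, and random channel dynamics are all invented.

```python
import random

# Toy MDP: the channel is either good or bad; offloading pays off only
# on a good channel. Rewards are negative costs (delay + energy), all invented.
random.seed(0)
STATES = ("good_channel", "bad_channel")
ACTIONS = ("local", "offload")

def reward(s, a):
    table = {("good_channel", "offload"): -1.0, ("good_channel", "local"): -3.0,
             ("bad_channel", "offload"): -5.0, ("bad_channel", "local"): -3.0}
    return table[(s, a)]

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, exploration
s = random.choice(STATES)
for _ in range(5000):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(s, x)])
    s2 = random.choice(STATES)        # channel evolves randomly
    # standard Q-learning update
    Q[(s, a)] += alpha * (reward(s, a)
                          + gamma * max(Q[(s2, x)] for x in ACTIONS)
                          - Q[(s, a)])
    s = s2

policy = {st: max(ACTIONS, key=lambda a: Q[(st, a)]) for st in STATES}
```

The learned policy offloads on a good channel and computes locally on a bad one; a DQN replaces the Q-table with a network when the state space grows.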
International Journal of Electrical and Computer Engineering (IJECE), 2019
With fifth-generation (5G) networks, mobile edge computing (MEC) is a promising paradigm to provide nearby computing and storage capabilities to smart mobile devices. In addition, mobile devices are most of the time battery-dependent and energy-constrained, while being characterized by their limited processing and storage capacities. Accordingly, these devices must offload a part of their heavy tasks that require a lot of computation and are energy-consuming. This choice remains the only option in some circumstances, especially when the battery drains. Besides, the local CPU frequency allocated to processing has a huge impact on a device's energy consumption. Additionally, when mobile devices handle many tasks, the decision of which part to offload becomes critical. We must actually consider the wireless network state, the available processing resources at both sides, and particularly the locally available battery power. In this paper, we consider a single mobile device that is en...
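The abstract's point about CPU frequency can be made concrete with the commonly used dynamic power model, under which energy per task is E = kappa * f^2 * C while delay is C / f: quadrupling the frequency multiplies energy by sixteen but only divides delay by four. A quick check (kappa and C are illustrative values, not from the paper):

```python
# Dynamic CPU energy model: E = kappa * f^2 * C, delay = C / f.
kappa, C = 1e-27, 1e9          # effective capacitance, CPU cycles (illustrative)

def energy(f):
    return kappa * f**2 * C

def delay(f):
    return C / f

low, high = 0.5e9, 2.0e9       # 0.5 GHz vs 2.0 GHz
ratio_e = energy(high) / energy(low)   # (4x frequency)^2 = 16x energy
ratio_t = delay(low) / delay(high)     # but only 4x slower at the low frequency
```

This super-linear energy growth is why frequency allocation appears alongside the offloading decision in the formulation.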
Energy-Aware Multi-Server Mobile Edge Computing: A Deep Reinforcement Learning Approach
2019 53rd Asilomar Conference on Signals, Systems, and Computers, 2019
We investigate the problem of computation offloading in a mobile edge computing architecture, where multiple energy-constrained users compete to offload their computational tasks to multiple servers through a shared wireless medium. We propose a multi-agent deep reinforcement learning algorithm, where each server is equipped with an agent, observing the status of its associated users and selecting the best user for offloading at each step. We consider computation time (i.e., task completion time) and system lifetime as two key performance indicators, and we numerically demonstrate that our approach outperforms baseline algorithms in terms of the trade-off between computation time and system lifetime.
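The per-server selection step described above can be sketched with a greedy heuristic standing in for each server's learned agent: from its associated users, a server picks the one whose offload best balances task completion time against preserving system lifetime. User data and weights here are invented for illustration.

```python
# Each server agent scores its associated users and selects one for
# offloading. Lower score = better. A greedy stand-in for the paper's
# multi-agent DRL policy; all numbers are invented.

users = {   # name -> (task cycles, remaining battery in joules)
    "u1": (2e9, 5.0), "u2": (8e9, 1.0), "u3": (4e9, 3.0),
}
servers = {"s1": {"f": 4e9, "users": ["u1", "u2"]},
           "s2": {"f": 6e9, "users": ["u3"]}}

def score(server, u, w_time=0.5, w_life=0.5):
    cycles, battery = users[u]
    t_done = cycles / servers[server]["f"]   # completion time on this server
    # prefer fast completion AND low-battery users, since offloading
    # spares their local energy and extends system lifetime
    return w_time * t_done + w_life * battery

choice = {s: min(servers[s]["users"], key=lambda u: score(s, u))
          for s in servers}
```

With these weights, server s1 selects the nearly drained user u2 even though its task is larger, reflecting the computation-time vs lifetime trade-off the paper evaluates.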
A Deep Learning Approach for Mobility-Aware and Energy-Efficient Resource Allocation in MEC
IEEE Access, 2020
Mobile Edge Computing (MEC) has emerged as an alternative to cloud computing to meet the latency and Quality-of-Service (QoS) requirements of mobile devices. In this paper, we address the problem of server resource allocation in MEC. Due to the dynamic load conditions on MEC servers, their resources need to be used intelligently to meet the QoS requirements of the users and to minimize server energy consumption. We present a novel resource allocation algorithm, called Power Migration Expand (PowMigExpand). Our algorithm assigns user requests to the optimal server and allocates the optimal amount of resources to User Equipment (UE) based on our comprehensive utility function. PowMigExpand also migrates UE requests to new servers when needed due to the mobility of users. We also present a low-cost Energy Efficient Smart Allocator (EESA) algorithm that uses deep learning for energy-efficient allocation of requests to optimal servers. The proposed algorithms consider the varying load of incoming requests and their heterogeneous nature, energy-efficient activation of servers, and Virtual Machine (VM) migration for smart resource allocation and are, thus, the first comprehensive approach to address the complex and multidimensional resource allocation problem using deep learning. We compare our proposed algorithms with other resource allocation approaches and show that our approach handles dynamic load conditions better. The proposed algorithms improve the service rate and the overall utility with minimum energy consumption. On average, they reduce the energy consumption of MESs by 26% and improve the service rate by 23%, compared with other algorithms. EESA also achieves more than 70% accuracy in allocating the resources of multiple servers to multiple users. INDEX TERMS Mobile edge computing, resource allocation, computational offloading, deep learning, energy efficient.
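The utility-driven assignment with energy-aware server activation described above can be sketched roughly as follows: each request goes to the server with the highest utility, and a sleeping server is only worth waking when no active one can absorb the load. The utility function, capacities, and wake-up penalty are invented placeholders, not PowMigExpand's actual formulation.

```python
# Rough sketch of utility-based request placement with a penalty for
# activating a sleeping server. All utilities/costs are invented.

servers = [
    {"name": "A", "cap": 10, "used": 9, "active": True,  "wake": 5.0},
    {"name": "B", "cap": 10, "used": 2, "active": True,  "wake": 5.0},
    {"name": "C", "cap": 10, "used": 0, "active": False, "wake": 5.0},
]

def utility(srv, demand):
    if srv["used"] + demand > srv["cap"]:
        return float("-inf")                         # cannot serve this request
    u = 10.0 - (srv["used"] + demand) / srv["cap"]   # prefer lightly loaded servers
    if not srv["active"]:
        u -= srv["wake"]                             # penalize waking a server
    return u

def assign(demand):
    best = max(servers, key=lambda s: utility(s, demand))
    best["used"] += demand
    best["active"] = True
    return best["name"]

first = assign(3)   # fits on active server B; waking C is not worth the penalty
```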
Near-optimal and learning-driven task offloading in a 5G multi-cell mobile edge cloud
Computer Networks, 2020
With development well underway, 5G is envisioned as an enabler of lightning-fast mobile services such as virtual reality, augmented reality, and live video analytics. In particular, multi-cell Mobile Edge Clouds (MEC) with 5G base stations endowed with computing capability are able to improve the Quality of Service (QoS) of mobile users by executing tasks in the edge cloud. Due to the varying 5G network conditions and the limited computation capacity of each base station in the multi-cell MEC, as well as the stringent QoS requirements, a fundamental and challenging problem is how to offload user tasks to the edge cloud such that the energy consumption of mobile devices is minimized. In this paper, we first formulate the offline and online location-aware mobile task offloading problems in a multi-cell MEC. For the offline location-aware mobile task offloading problem, we then develop an exact solution and an approximation algorithm with an approximation ratio. For the online problem, we propose a novel deep reinforcement learning-based offloading algorithm for mobile users to obtain the optimal offloading policy. Finally, we conduct extensive simulation experiments to evaluate the proposed algorithms against existing benchmarks. The experimental results show that the proposed algorithms are promising and outperform the benchmark algorithms by significantly reducing the energy cost of mobile devices and the delays experienced by mobile users.
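The offline location-aware flavor of the problem above can be illustrated with a simple greedy: each task is placed on the reachable base station that adds the least mobile-device energy, subject to per-station capacity. This is a hedged stand-in to show the constraint structure, not the paper's exact or approximation algorithm; the stations, capacities, and per-task energy costs are invented.

```python
# Greedy capacity-constrained placement: tasks sorted by their cheapest
# option are each assigned to the cheapest station with spare capacity.
# All data is invented for illustration.

stations = {"bs1": 2, "bs2": 2}          # capacity in tasks per station
load = {"bs1": 0, "bs2": 0}
costs = {                                # energy per (task, reachable station)
    "t1": {"bs1": 1.0, "bs2": 2.0},
    "t2": {"bs1": 1.2, "bs2": 1.5},
    "t3": {"bs1": 0.8, "bs2": 1.1},
}

placement, total = {}, 0.0
for task in sorted(costs, key=lambda t: min(costs[t].values())):
    feasible = [s for s in stations if load[s] < stations[s]]
    best = min(feasible, key=lambda s: costs[task][s])
    placement[task] = best
    load[best] += 1
    total += costs[task][best]
```

Note how capacity forces t2 onto its second-choice station: this interaction between station load and per-device energy is what makes the offline problem hard and motivates both the approximation algorithm and the DRL policy for the online case.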