Wireless Powered Mobile Edge Computing Systems: Simultaneous Time Allocation and Offloading Policies

Smart Application Division and Time Allocation Policy for Computational Offloading in Wireless Powered Mobile Edge Computing

2021

Limited battery life and poor computational resources of mobile terminals are challenging problems for present and future computation-intensive mobile applications. Wireless powered mobile edge computing is one solution, in which wireless energy transfer technology and cloud-server capabilities are brought to the edge of cellular networks. In wireless powered mobile edge computing systems, mobile terminals charge their batteries through radio frequency signals and offload their applications to a nearby hybrid access point in the same time slot, minimizing their energy consumption and ensuring uninterrupted connectivity with the hybrid access point. However, the smart division of an application into k subtasks, as well as the intelligent partitioning of the time slot between harvesting energy and offloading data, is a complex problem. In this paper, we propose a novel deep-learning-based offloading and time allocation policy (DOTP) for training a deep neural network that divides the com...
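The harvest-then-offload time-slot split described above can be sketched with a small grid search. This is a hedged, illustrative model only: the function name, parameters (HAP transmit power, channel gain, harvesting efficiency), and constants are assumptions for demonstration, not values or notation from the paper.

```python
import math

def best_split(T=1.0, eta=0.7, p_hap=3.0, h=0.02, p_tx=0.1, bw=1e6, n0=1e-9, steps=100):
    """Grid-search the fraction a of the slot spent harvesting vs. (1-a) offloading.
    A split is feasible only if the energy harvested covers the transmit energy."""
    best = None
    rate = bw * math.log2(1 + p_tx * h / n0)    # achievable uplink rate (bits/s)
    for i in range(1, steps):
        a = i / steps
        harvested = eta * p_hap * h * a * T      # RF energy harvested in phase 1
        spent = p_tx * (1 - a) * T               # energy used offloading in phase 2
        if harvested < spent:
            continue                             # infeasible: battery would run dry
        bits = rate * (1 - a) * T                # data offloaded in phase 2
        if best is None or bits > best[1]:
            best = (a, bits)
    return best
```

With these illustrative numbers, the search settles on the smallest feasible harvesting fraction, since any extra harvesting time only shrinks the offloading phase.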

Computational Offloading in Mobile Edge with Comprehensive and Energy Efficient Cost Function: A Deep Learning Approach

Sensors, 2021

In mobile edge computing (MEC), partial computational offloading can be intelligently investigated to reduce the energy consumption and service delay of user equipment (UE) by dividing a single task into different components. Some of the components execute locally on the UE while the rest are offloaded to a mobile edge server (MES). In this paper, we investigate the partial offloading technique in MEC using a supervised deep learning approach. The proposed technique, the comprehensive and energy-efficient deep-learning-based offloading technique (CEDOT), intelligently selects both the partial offloading policy and the size of each component of a task to reduce the service delay and energy consumption of UEs. We use deep learning to find, simultaneously, the best partitioning of a single task and the best offloading policy. The deep neural network (DNN) is trained on a comprehensive dataset, generated from our mathematical model, which reduces the time delay and energy consump...
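A minimal sketch of the partial-offloading trade-off described above: sweep the fraction of a task's data that is offloaded and score each split with a weighted delay-energy cost. All parameter values (CPU frequency, cycles per bit, effective capacitance `kappa`, uplink rate, transmit power) are illustrative assumptions, not the paper's cost model.

```python
def partial_offload_cost(data_bits, w=0.5, f_local=1e9, cycles_per_bit=1000,
                         kappa=1e-27, rate=5e6, p_tx=0.2, steps=50):
    """Sweep the offloaded fraction x; the local and remote parts run in
    parallel, so delay is the max of the two finish times (MES execution
    time is neglected in this toy model)."""
    best_x, best_cost = 0.0, float("inf")
    for i in range(steps + 1):
        x = i / steps                                    # fraction offloaded
        local_bits = (1 - x) * data_bits
        t_local = local_bits * cycles_per_bit / f_local  # local execution time
        e_local = kappa * f_local**2 * local_bits * cycles_per_bit
        t_tx = x * data_bits / rate                      # uplink transfer time
        e_tx = p_tx * t_tx                               # radio energy spent
        cost = w * max(t_local, t_tx) + (1 - w) * (e_local + e_tx)
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x, best_cost
```

Depending on the rate and energy parameters, the optimum can be fully local, fully offloaded, or an interior split; a DNN trained on such (parameters → best split) pairs approximates this search at inference time.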

A Deep Learning Approach for Energy Efficient Computational Offloading in Mobile Edge Computing

IEEE Access, 2019

Mobile edge computing (MEC) has shown tremendous potential as a means of supporting computationally intensive mobile applications by partially or entirely offloading computations to a nearby server, minimizing the energy consumption of user equipment (UE). However, selecting an optimal set of components to offload, considering the amount of data transfer as well as the communication latency, is a complex problem. In this paper, we propose a novel energy-efficient deep-learning-based offloading scheme (EEDOS) to train a deep-learning-based smart decision-making algorithm that selects an optimal set of application components based on the remaining energy of UEs, energy consumption by application components, network conditions, computational load, amount of data transfer, and communication delays. We formulate a cost function involving all the aforementioned factors, obtain the cost for all possible combinations of component offloading policies, select the optimal policies over an exhaustive dataset, and train a deep learning network as an alternative to the extensive computations involved. Simulation results show that our proposed model is promising in terms of accuracy and energy consumption of UEs. INDEX TERMS Computational offloading, deep learning, energy efficient offloading, mobile edge computing, user equipment.
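The exhaustive labeling step described above (cost for all combinations of component offloading policies) can be sketched as a brute-force search over binary decisions. The function name, the per-component cost arrays, and the weight `w` are hypothetical simplifications of the paper's multi-factor cost function.

```python
from itertools import product

def label_best_policy(comp_energy_local, comp_energy_tx, comp_delay_tx, w=0.5):
    """Enumerate all 2^k offload/local choices for k components and return the
    minimum-cost policy; in an EEDOS-style pipeline this exhaustive search
    labels the dataset a DNN is then trained to approximate."""
    k = len(comp_energy_local)
    best_policy, best_cost = None, float("inf")
    for policy in product((0, 1), repeat=k):         # 1 = offload the component
        energy = sum(e_tx if off else e_loc
                     for off, e_loc, e_tx in zip(policy, comp_energy_local, comp_energy_tx))
        delay = sum(d if off else 0.0
                    for off, d in zip(policy, comp_delay_tx))
        cost = w * energy + (1 - w) * delay
        if cost < best_cost:
            best_policy, best_cost = policy, cost
    return best_policy, best_cost
```

The 2^k enumeration is exactly why a trained network is attractive as a cheap stand-in at decision time.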

Enhanced Wireless Communication Optimization with Neural Networks, Proximal Policy Optimization and Edge Computing for Latency and Energy Efficiency

FOREX Publication, 2024

This research proposes a novel approach for efficient resource allocation in wireless communication systems. It combines dynamic neural networks, Proximal Policy Optimization (PPO), and Edge Computing Orchestrator (ECO) for latency-aware and energy-efficient resource allocation. The proposed system integrates multiple components, including a dynamic neural network, PPO, ECO, and a Mobile Edge Computing (MEC) server. The experimental methodology involves utilizing the NS-3 simulation platform to assess latency and energy efficiency in resource allocation within a wireless communication network, incorporating an ECO, MEC server, and dynamic task scheduling algorithms. It demonstrates a holistic and adaptable approach to resource allocation in dynamic environments, showcasing a notable reduction in latency for devices and tasks. Latency values range from 5 to 20 milliseconds, with corresponding resource utilization percentages varying between 80% and 95%. Additionally, energy-efficient resource allocation demonstrates a commendable reduction in energy consumption, with measured values ranging from 10 to 30 watts, coupled with efficient resource usage percentages ranging from 70% to 85%. These outcomes validate the efficacy of achieving both latency-aware and energy-efficient resource allocation for enhanced wireless communication systems. The proposed system has broad applications in healthcare, smart cities, IoT, real-time analytics, autonomous vehicles, and augmented reality, offering a valuable solution to optimize energy consumption, reduce latency, and enhance system efficiency in these industries.

Advanced Energy-Efficient Computation Offloading Using Deep Reinforcement Learning in MTC Edge Computing

IEEE Access, 2020

Mobile edge computing (MEC) supports the internet of things (IoT) by leveraging computation offloading. It minimizes the delay and consequently reduces the energy consumption of the IoT devices. However, most recent work assumes a static communication mode despite varying network dynamics and resource diversity, which is its main limitation. An energy-efficient computation offloading method using deep reinforcement learning (DRL) is proposed. Both delay-tolerant and non-delay-tolerant scenarios are considered using capillary machine type communication (MTC). Depending on the type of service, an intelligent MTC edge server using DRL decides either to process the incoming request at the MTC edge server or to send it to the cloud server. To control communication, we formulate a Markov decision process (MDP), which minimizes the long-term power consumption of the system. The optimization problem is formulated under constraints on computing power resources and delays. Simulation results delineate a significant performance gain of 12% in computation offloading through the proposed DRL approach. The effectiveness and superiority of the proposed model are compared with other baselines and demonstrated numerically. INDEX TERMS Machine type communication, mobile edge computing, computation offloading, deep reinforcement learning, energy efficiency.
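The edge-vs-cloud decision under an MDP can be sketched with toy tabular Q-learning. Everything here is an illustrative assumption: the two-state service model, the reward table (negative power costs), and the hyperparameters are not from the paper, which uses DRL rather than a lookup table.

```python
import random

def train_offload_agent(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Toy sketch: the MTC edge server chooses to serve a request locally
    (action 0) or forward it to the cloud (action 1), learning from
    illustrative negative power-cost rewards."""
    random.seed(seed)
    states = ("delay_sensitive", "delay_tolerant")
    q = {s: [0.0, 0.0] for s in states}
    # hypothetical cost model: edge is cheap for delay-sensitive traffic,
    # cloud is cheap for delay-tolerant traffic
    reward = {("delay_sensitive", 0): -1.0, ("delay_sensitive", 1): -5.0,
              ("delay_tolerant", 0): -3.0, ("delay_tolerant", 1): -1.5}
    for _ in range(episodes):
        s = random.choice(states)
        a = random.randrange(2) if random.random() < eps else q[s].index(max(q[s]))
        r = reward[(s, a)]
        s2 = random.choice(states)                 # next request arrives
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
    return q
```

After training, the greedy policy keeps delay-sensitive requests at the edge and forwards delay-tolerant ones to the cloud, mirroring the service-type-dependent decision the abstract describes.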

A Hybrid Artificial Neural Network for Task Offloading in Mobile Edge Computing

2022 IEEE 65th International Midwest Symposium on Circuits and Systems (MWSCAS)

Edge Computing (EC) is about remodeling the way data is handled, processed, and delivered within a vast heterogeneous network. One of the fundamental concepts of EC is to push data processing toward the edge by exploiting front-end devices with powerful computation capabilities, thus limiting the use of centralized architectures, such as cloud computing, to only when necessary. This paper proposes a novel edge-computer offloading technique that assigns computational tasks generated by devices to potential edge computers with enough computational resources. The proposed approach clusters the edge computers based on their hardware specifications. Afterwards, the tasks generated by devices are fed to a hybrid Artificial Neural Network (ANN) model that predicts, based on these tasks, the profiles, i.e., features, of the edge computers with enough computational resources to execute them. The predicted edge computers are then mapped to the cluster they belong to, so that each task is assigned to a cluster of edge computers. Finally, we choose for each task the edge computer that is expected to provide the fastest response time. The experimental results show that our proposed approach outperforms other state-of-the-art machine learning approaches on a real-world IoT dataset. Index Terms—Internet of Things (IoT), machine learning, edge computing, resource allocation, task offloading.
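The cluster-then-select pipeline above can be sketched in two small functions: one buckets edge computers by hardware capability (a coarse stand-in for the paper's spec-based clustering, with made-up frequency thresholds), and one picks the fastest expected responder inside the cluster an ANN would predict. Function names, node tuples, and thresholds are all illustrative assumptions.

```python
def cluster_nodes(edge_nodes):
    """Bucket edge computers by CPU frequency; each node is a tuple
    (name, freq_hz, queue_s). Thresholds are illustrative, not the paper's."""
    clusters = {"high": [], "mid": [], "low": []}
    for name, freq_hz, queue_s in edge_nodes:
        tier = "high" if freq_hz >= 3e9 else "mid" if freq_hz >= 1.5e9 else "low"
        clusters[tier].append((name, freq_hz, queue_s))
    return clusters

def fastest_node(cluster, task_cycles):
    """Within the predicted cluster, choose the node with the smallest
    expected response time (queueing delay + execution time)."""
    return min(cluster, key=lambda n: n[2] + task_cycles / n[1])[0]
```

Note that the nominally fastest CPU is not always chosen: a mid-tier node with an empty queue can beat a high-tier node with a backlog, which is why response time, not raw frequency, drives the final selection.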

A Deep Learning Approach for Task Offloading in Multi-UAV Aided Mobile Edge Computing

IEEE Access

Computation offloading has proven to be an effective method for facilitating resource-intensive tasks on IoT mobile edge nodes with limited processing capabilities. Additionally, in the context of Mobile Edge Computing (MEC) systems, edge nodes can offload their computation-intensive tasks to a suitable edge server. Hence, they can reduce energy costs and speed up processing. Despite the numerous efforts on task offloading problems in the Internet of Things (IoT), this problem remains a research gap, mainly because of its NP-hardness and the unrealistic assumptions in many proposed solutions. Deep Learning (DL) is a promising method for accurately extracting information from raw sensor data produced by IoT devices deployed in complicated contexts. Therefore, in this paper, an approach based on Deep Reinforcement Learning (DRL) is presented to optimize the offloading process for IoT in MEC environments. This approach can achieve the optimal offloading decision. A Markov Decision Process (MDP) is used to formulate the offloading problem. Delay time and consumed energy are the main optimization targets in this work. The proposed approach has been verified using extensive simulations. Simulation results demonstrate that the proposed model can effectively improve the MEC system's latency and energy consumption, and significantly outperforms the Deep Q Network (DQN) and Actor-Critic (AC) approaches. INDEX TERMS Deep learning, deep reinforcement learning, Internet of Things, mobile edge computing, task offloading. I. INTRODUCTION 5G-era networks have been realized through networking technologies, innovations, and new computing and communication paradigms [1]. Mobile Edge Computing (MEC) is one of the key technologies for computation distribution that boosts the performance of 5G cellular networks [2]. The main role of MEC is minimizing the communication latency between the user and the server. This behavior is of great importance for Internet of Things (IoT) environments. IoT has become an important area of research due to its rapid adoption in daily life and industry. Therefore, it faces numerous challenges, including latency reduction, storage management, energy consumption, task offloading, etc. [3].

Energy and Processing Time Efficiency for an Optimal Offloading in a Mobile Edge Computing Node

International Journal of Communication Networks and Information Security (IJCNIS)

This article describes the optimization of processing time, energy, and computing resources in Mobile Edge Computing (MEC). We consider a mobile-user MEC system, where a smart mobile device (SMD) demands computation offloading to a MEC server. The SMD contains a set of heavy tasks that can be offloaded. The formulated optimization problem takes into account both the dedicated energy capacity and the processing times. We propose a heuristic solution scheme. To evaluate our solution, we ran a range of simulation experiments. The results obtained in terms of processing time and energy consumption are encouraging.

A Latency and Energy Trade-Off for Computation Offloading Within a Mobile Edge Computing Server

Lecture Notes in Mechanical Engineering, 2020

Mobile Edge Computing (MEC) provides leading-edge services to multiple smart mobile devices (SMDs). Computation offloading is a promising service in 5G networks: it reduces battery drain and applications' execution time. These SMDs generally possess limited battery power and processing capacity. In addition, the local CPU frequency allocated to processing has a huge impact on SMD energy consumption. In this paper, we consider a multiuser MEC system, where multiple SMDs demand computation offloading to a MEC server. The weighted sum of the overall energy consumptions and latencies represents the optimization problem's objective. In this problem, we jointly optimize offloading decisions, radio resource allocation, and local computational resource allocation. The results obtained using our heuristic scheme show that it achieves good performance in terms of energy and latency. Accordingly, its performance is encouraging compared to both the local-execution-only and full-offloading-only cases.
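The weighted energy-latency objective described above can be sketched for a single SMD by comparing local execution at a chosen CPU frequency against full offloading. The dynamic CPU energy model (kappa * f^2 per cycle) is a standard textbook assumption; all constants and the function name are illustrative, not the paper's values.

```python
def weighted_objective(cycles, data_bits, w, f_local, kappa=1e-27,
                       rate=5e6, p_tx=0.2, f_mec=5e9):
    """Weighted energy-latency objective for one SMD: compare local execution
    at CPU frequency f_local against full offloading to the MEC server.
    Returns ("local" | "offload", cost). Download time is neglected."""
    # local execution: time and dynamic CPU energy
    t_loc = cycles / f_local
    e_loc = kappa * f_local**2 * cycles
    cost_loc = w * e_loc + (1 - w) * t_loc
    # full offload: uplink transfer + remote execution; SMD pays only radio energy
    t_off = data_bits / rate + cycles / f_mec
    e_off = p_tx * data_bits / rate
    cost_off = w * e_off + (1 - w) * t_off
    return ("local", cost_loc) if cost_loc <= cost_off else ("offload", cost_off)
```

The same comparison illustrates why the joint optimization matters: lowering f_local cuts energy quadratically but stretches latency linearly, so the best decision flips as the weight w and the input size change.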

A Deep Learning Approach for Mobility-Aware and Energy-Efficient Resource Allocation in MEC

IEEE Access, 2020

Mobile Edge Computing (MEC) has emerged as an alternative to cloud computing to meet the latency and Quality-of-Service (QoS) requirements of mobile devices. In this paper, we address the problem of server resource allocation in MEC. Due to the dynamic load conditions on MEC servers, their resources need to be used intelligently to meet the QoS requirements of the users and to minimize server energy consumption. We present a novel resource allocation algorithm, called Power Migration Expand (PowMigExpand). Our algorithm assigns user requests to the optimal server and allocates the optimal amount of resources to User Equipment (UE) based on our comprehensive utility function. PowMigExpand also migrates UE requests to new servers when needed due to the mobility of users. We also present a low-cost Energy Efficient Smart Allocator (EESA) algorithm that uses deep learning for energy-efficient allocation of requests to optimal servers. The proposed algorithms consider the varying load of incoming requests and their heterogeneous nature, energy-efficient activation of servers, and Virtual Machine (VM) migration for smart resource allocation and are, thus, the first comprehensive approach to address the complex and multidimensional resource allocation problem using deep learning. We compare our proposed algorithms with other resource allocation approaches and show that our approach handles dynamic load conditions better. The proposed algorithms improve the service rate and the overall utility with minimum energy consumption. On average, they reduce the energy consumption of MESs by 26% and improve the service rate by 23%, compared with other algorithms. We also achieve more than 70% accuracy for EESA in allocating the resources of multiple servers to multiple users. INDEX TERMS Mobile edge computing, resource allocation, computational offloading, deep learning, energy efficient.
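The "expand when loaded" behavior the abstract attributes to PowMigExpand can be sketched as a simple placement rule: serve a request on an active server with spare capacity, and only activate a sleeping server (paying an activation energy cost) when none fits. This is a loose, hypothetical reading of the algorithm; the data layout, the most-free-capacity heuristic, and the activation cost are all illustrative assumptions, not the paper's utility function.

```python
def assign_request(servers, req_cpu, activation_cost=2.0):
    """Place a request on the active server with the most free capacity;
    if none fits, 'expand' by waking a sleeping server. Each server is a
    dict: {"name": str, "on": bool, "free": float}. Returns (name, extra_cost);
    name is None if the request cannot be placed anywhere."""
    active = [s for s in servers if s["on"] and s["free"] >= req_cpu]
    if active:
        best = max(active, key=lambda s: s["free"])   # least-loaded active server
        best["free"] -= req_cpu
        return best["name"], 0.0
    sleeping = [s for s in servers if not s["on"] and s["free"] >= req_cpu]
    if not sleeping:
        return None, 0.0                              # request rejected
    s = sleeping[0]
    s["on"] = True                                    # pay to activate a new server
    s["free"] -= req_cpu
    return s["name"], activation_cost
```

Keeping servers asleep until demand forces an expansion is what trades a one-off activation cost against the steady energy drain of running extra machines.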