Sumudu Samarakoon | University of Oulu
Papers by Sumudu Samarakoon
arXiv (Cornell University), Aug 6, 2020
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond. By imbuing intelligence into the network edge, edge nodes can proactively carry out decision-making, and thereby react to local environmental changes and disturbances while experiencing zero communication latency. To achieve this goal, it is essential to cater for high ML inference accuracy at scale under time-varying channel and network dynamics, by continuously exchanging fresh data and ML model updates in a distributed way. Taming this new kind of data traffic boils down to improving the communication efficiency of distributed learning by optimizing communication payload types, transmission techniques, and scheduling, as well as ML architectures, algorithms, and data processing methods. To this end, this article aims to provide a holistic overview of relevant communication and ML principles, and thereby presents communication-efficient and distributed learning frameworks with selected use cases.

1 SIGNIFICANCE AND MOTIVATION

The pursuit of extremely stringent latency and reliability guarantees is essential in the fifth generation (5G) communication system and beyond [1], [2]. In a wirelessly automated factory, the remote control of assembly robots should provision the same level of target latency and reliability offered by existing wired factory systems. To this end, for instance, control packets should be delivered within 1 ms with 99.99999% reliability [3]-[5]. In the emerging non-terrestrial communication enabled by a massive constellation of low-orbit satellites [6]-[10], the orbiting speed is over 8 km per second, under which a single emergency control packet loss may incur collisions with other satellites and space debris.
Unfortunately, traditional methods postulate known channel and network topology models while focusing primarily on maximizing data rates. Such model-based, best-effort solutions fall far short of meeting these challenging latency and reliability requirements in practice, given limited radio resources and the randomness of wireless channels and network topologies.
arXiv (Cornell University), Apr 29, 2016
In this paper, we develop novel two-tier interference management strategies that enable macrocell users (MUEs) to improve their performance with the help of open-access femtocells. To this end, we propose a rate-splitting technique using which the MUEs optimize their uplink transmissions by dividing their signals into two types: a coarse message that is intended for direct transmission to the macrocell base station, and a fine message that is decoded by a neighboring femtocell and subsequently relayed over a heterogeneous (wireless/wired) backhaul. For deploying the proposed technique, we formulate a non-cooperative game between the MUEs in which each MUE decides on its relaying femtocell while maximizing a utility function that captures both the achieved throughput and the expected backhaul delay. Simulation results show that, relative to classical interference management approaches with no cross-tier cooperation, the proposed approach yields up to a 125% rate improvement and a twofold delay reduction with a wired backhaul, and up to a 150% rate improvement and a tenfold delay reduction with a wireless backhaul.
arXiv (Cornell University), Feb 3, 2020
In this work, we propose a novel joint client scheduling and resource block (RB) allocation policy to minimize the loss of accuracy of federated learning (FL) over wireless networks, relative to a centralized training-based solution, under imperfect channel state information (CSI). First, the problem is cast as a stochastic optimization problem over a predefined training duration and solved using the Lyapunov optimization framework. To learn and track the wireless channel, a Gaussian process regression (GPR)-based channel prediction method is leveraged and incorporated into the scheduling decision. The proposed scheduling policies are evaluated via numerical simulations under both perfect and imperfect CSI. Results show that the proposed method reduces the loss of accuracy by up to 25.8% compared to state-of-the-art client scheduling and RB allocation methods.
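The GPR-based channel tracking step can be illustrated with a minimal sketch: given past channel-gain samples, a Gaussian process with a squared-exponential kernel yields a posterior mean and variance for the next slot's gain. The kernel choice, hyperparameters, and synthetic fading trace below are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=2.0, variance=1.0):
    """Squared-exponential kernel between vectors of time indices."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gpr_predict(t_train, y_train, t_test, noise=1e-2, length_scale=2.0):
    """Posterior mean and variance of the channel gain at t_test,
    given past (noisy) CSI samples y_train observed at times t_train."""
    K = rbf_kernel(t_train, t_train, length_scale) + noise * np.eye(len(t_train))
    Ks = rbf_kernel(t_test, t_train, length_scale)
    Kss = rbf_kernel(t_test, t_test, length_scale)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha                                 # posterior mean
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)         # posterior covariance
    return mean, np.diag(cov)

# Track a slowly fading (synthetic) channel gain and predict the next slot.
t = np.arange(10.0)
h = 1.0 + 0.3 * np.sin(0.4 * t)
mean, var = gpr_predict(t, h, np.array([10.0]))
```

A scheduler could then rank clients by the predicted gain `mean` while using `var` as a confidence measure, which is the spirit of incorporating the prediction into the scheduling decision.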
IEEE Transactions on Communications, Sep 1, 2021
The performance of federated learning (FL) over wireless networks depends on the reliability of the client-server connectivity and the clients' local computation capabilities. In this article, we investigate the problem of client scheduling and resource block (RB) allocation to enhance the performance of model training using FL over a pre-defined training duration, under imperfect channel state information (CSI) and limited local computing resources. First, we analytically derive the gap between the training losses of FL with client scheduling and of a centralized training method for a given training duration. Then, we formulate the minimization of this training-loss gap over client scheduling and RB allocation as a stochastic optimization problem and solve it using Lyapunov optimization. A Gaussian process regression-based channel prediction method is leveraged to learn and track the wireless channel, and the clients' CSI predictions and computing power are incorporated into the scheduling decision. Using an extensive set of simulations, we validate the robustness of the proposed method under both perfect and imperfect CSI over an array of diverse data distributions. Results show that the proposed method reduces the training accuracy loss gap by up to 40.7% compared to state-of-the-art client scheduling and RB allocation methods.
Vehicle-to-Everything (V2X) communication holds the promise of improving road safety and reducing road accidents by enabling reliable and low-latency services for vehicles. Vehicles are among the fastest-growing types of connected devices; therefore, there is a need for V2X communication, i.e., passing information from Vehicle-to-Vehicle (V2V) or Vehicle-to-Infrastructure (V2I) and vice versa. In this paper, we focus on both V2I and V2V communication in a multi-lane freeway scenario where coverage is provided by a Long Term Evolution Advanced (LTE-A) road side unit (RSU) network. We propose a mechanism to offload vehicles with low signal-to-interference-plus-noise ratio (SINR) to be served by other vehicles that have a much higher-quality link to the RSU. Furthermore, we analyze the improvements in the probabilities of achieving target throughputs, and the performance is assessed through extensive system-level simulations. Results show that the proposed solution offloads low-quality V2I links to stronger V2V links, increasing the successful transmission probability from 93% to 99.4%.
Proceedings of the IEEE, Aug 1, 2019
Edge computing is an emerging concept that brings distributed computing, storage, and control services closer to end network nodes. Edge computing lies at the heart of the fifth-generation (5G) wireless systems and beyond. While current state-of-the-art networks communicate, compute, and process data in a centralized manner (at the cloud), for latency- and compute-centric applications both radio access and computational resources must be brought closer to the edge, harnessing the availability of computing- and storage-enabled small cell base stations in proximity to the end devices. Furthermore, the network infrastructure must enable a distributed edge decision-making service that learns to adapt to the network dynamics with minimal latency and optimizes network deployment and operation accordingly. This paper provides a fresh look at the concept of edge computing by first discussing the applications that the network...
arXiv (Cornell University), Jun 20, 2023
Cooperative multi-agent reinforcement learning (MARL) for navigation enables agents to cooperate to achieve their navigation goals. Using emergent communication, agents learn a communication protocol to coordinate and share the information needed to achieve their navigation tasks. In emergent communication, symbols with no pre-specified usage rules are exchanged, and their meaning and syntax emerge through training. Learning a navigation policy along with a communication protocol in a MARL environment is highly complex due to the huge state space to be explored. To cope with this complexity, this work proposes a novel neural network architecture for jointly learning an adaptive state space abstraction and a communication protocol among agents participating in navigation tasks. The goal is to obtain an adaptive abstractor that significantly reduces the size of the state space to be explored without degrading policy performance. Simulation results show that the proposed method reaches a better policy, in terms of achievable rewards, in fewer training iterations than when raw states or a fixed state abstraction are used. Moreover, it is shown that a communication protocol emerges during training which enables the agents to learn better policies within fewer training iterations.
arXiv (Cornell University), Apr 29, 2016
This paper presents the derivation of the per-tier outage probability of a randomly deployed femtocell network overlaid on an existing macrocell network. The channel characteristics of macro users and femto users are addressed by considering different propagation models for outdoor and indoor links. Location-based outage probability analysis and the capacity of the system under outage constraints are used to analyze system performance. To obtain simplified expressions, approximations of ratios of Rayleigh random variables (RVs), of Rayleigh to log-normal RVs, and of their weighted summations are derived and verified using simulations.

Index Terms: femtocell; outage probability; capacity; Rayleigh-to-Rayleigh probability density function (PDF); Rayleigh-to-log-normal PDF.
arXiv (Cornell University), Nov 9, 2020
This article deals with the problem of distributed machine learning, in which agents update their models based on their local datasets and aggregate the updated models collaboratively in a fully decentralized manner. In this paper, we tackle the problem of information heterogeneity arising in multi-agent networks, where the placement of informative agents plays a crucial role in the learning dynamics. Specifically, we propose BayGo, a novel fully decentralized joint Bayesian learning and graph optimization framework with proven fast convergence over a sparse graph. Under our framework, agents are able to learn from and communicate with the agent most informative for their own learning. Unlike prior works, our framework assumes no prior knowledge of the data distribution across agents, nor any knowledge of the true parameter of the system. The proposed alternating-minimization-based framework ensures global connectivity in a fully decentralized way while minimizing the number of communication links. We theoretically show that by optimizing the proposed objective function, the estimation error of the posterior probability distribution decreases exponentially at each iteration. Via extensive simulations, we show that our framework achieves faster convergence and higher accuracy compared to fully-connected and star topology graphs.

1 Eigenvector centrality of node i is a measure of social influence, and it is proportional to agent i's number of neighbors.
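As a minimal illustration of the eigenvector-centrality notion mentioned in the footnote, the sketch below computes centralities for a hypothetical 5-agent star topology from the dominant eigenvector of the adjacency matrix; the graph and the unit 1-norm normalization are assumptions for illustration, not part of the BayGo framework itself.

```python
import numpy as np

def eigenvector_centrality(adj):
    """Centrality of each node as the (nonnegative) dominant eigenvector
    of the adjacency matrix, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(adj)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)  # Perron eigenvector
    return v / v.sum()

# Hypothetical star topology: agent 0 connected to agents 1..4.
A = np.zeros((5, 5))
A[0, 1:] = A[1:, 0] = 1.0
c = eigenvector_centrality(A)   # central agent gets the largest score
```

Under this measure the hub agent dominates, which matches the intuition that the most socially influential (most informative) agent should attract communication links.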
Software-defined networking (SDN) is the concept of decoupling the control and data planes to create a flexible and agile network assisted by a central controller. However, the performance of SDN highly depends on limitations in the fronthaul, which are inadequately discussed in the existing literature. In this paper, a fronthaul-aware software-defined resource allocation mechanism is proposed for 5G wireless networks with in-band wireless fronthaul constraints. Considering the fronthaul capacity, the controller maximizes the time-averaged network throughput by enforcing a coarse correlated equilibrium (CCE) and incentivizing base stations (BSs) to locally optimize their decisions to ensure mobile users' (MUs) quality-of-service (QoS) requirements. By marrying tools from Lyapunov stochastic optimization and game theory, we propose a two-timescale approach in which the controller gives recommendations, i.e., subcarriers with low interference, on a long timescale, whereas BSs schedule their own MUs and allocate the available resources in every time slot. Numerical results show considerable throughput enhancements and delay reductions over a non-SDN network baseline.
IEEE Transactions on Wireless Communications, Nov 1, 2013
The design of distributed mechanisms for interference management is one of the key challenges in emerging wireless small cell networks, whose backhaul is capacity-limited and heterogeneous (wired, wireless, or a mix thereof). In this paper, a novel backhaul-aware approach to interference management in wireless small cell networks is proposed. The proposed approach enables macrocell user equipments (MUEs) to optimize their uplink performance by exploiting the presence of neighboring small cell base stations. The problem is formulated as a noncooperative game among the MUEs that seek to optimize their delay-rate tradeoff, given the conditions of both the radio access network and the (possibly heterogeneous) backhaul. To solve this game, a novel distributed learning algorithm is proposed using which the MUEs autonomously choose their optimal uplink transmission strategies, given a limited amount of available information. The convergence of the proposed algorithm is shown and its properties are studied. Simulation results show that, under various types of backhauls, the proposed approach yields significant performance gains in terms of both average throughput and delay for the MUEs, compared to existing benchmark algorithms.

Index Terms: heterogeneous networks; capacity-limited backhaul; wired and wireless backhaul; reinforcement learning; game theory.

Sumudu Samarakoon received his B.Sc. degree in Electronic and Telecommunication Engineering from the University of Moratuwa, Sri Lanka, in 2009 and the M.Eng. degree from the Asian Institute of Technology, Thailand, in 2011. He is currently working toward the Dr. Tech (Hons.) degree in Communications Engineering at the University of Oulu, Finland. Sumudu is also a member of the research staff of the Centre for Wireless Communications (CWC), Oulu, Finland. His main research interests are in heterogeneous networks, radio resource management, and game theory.
Multiple-input multiple-output (MIMO) is key for the fifth generation (5G) and beyond wireless communication systems owing to its higher spectral efficiency, spatial gains, and energy efficiency. The benefits of MIMO transmission can be fully harnessed if channel state information (CSI) is available at the transmitter side; however, the acquisition of transmitter-side CSI entails many challenges. In this paper, we propose a deep learning-assisted CSI estimation technique for highly mobile vehicular networks, based on the fact that the propagation environment (scatterers, reflectors) is almost identical, thereby allowing a data-driven deep neural network (DNN) to learn the non-linear CSI relations with negligible overhead. Moreover, we formulate and solve a dynamic network slicing-based resource allocation problem for vehicular user equipments (VUEs) requesting enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) traffic slices. The formulation minimizes a threshold rate violation probability for the eMBB slice while satisfying a probabilistic threshold rate criterion for the URLLC slice. Simulation results show that a 50% overhead reduction can be achieved with a 12% increase in threshold violations compared to an ideal case with perfect CSI knowledge.
IEEE Transactions on Wireless Communications, Mar 1, 2016
In this paper, a novel cluster-based approach for maximizing the energy efficiency of wireless small cell networks is proposed. A dynamic mechanism is proposed to group locally coupled small cell base stations (SBSs) into clusters based on location and traffic load. Within each formed cluster, SBSs coordinate their transmission parameters to minimize a cost function that captures the tradeoffs between energy efficiency and flow-level performance while satisfying their users' quality-of-service requirements. Due to the lack of inter-cluster communications, clusters compete with one another to improve the overall network's energy efficiency. This inter-cluster competition is formulated as a noncooperative game between clusters that seek to minimize their respective cost functions. To solve this game, a distributed learning algorithm is proposed using which clusters autonomously choose their optimal transmission strategies based on local information. It is shown that the proposed algorithm converges to a stationary mixed-strategy distribution which constitutes an epsilon-coarse correlated equilibrium for the studied game. Simulation results show that the proposed approach yields significant performance gains, reaching up to 36% reduced energy expenditure and up to 41% reduced fractional transfer time compared to conventional approaches.
arXiv (Cornell University), Jun 2, 2023
In this paper, we investigate the problem of robust Reconfigurable Intelligent Surface (RIS) phase-shift configuration over heterogeneous communication environments. The problem is formulated as a distributed learning problem over different environments in a Federated Learning (FL) setting. Equivalently, this corresponds to a game played between multiple RISs, as learning agents, in heterogeneous environments. Using Invariant Risk Minimization (IRM) and its FL equivalent, dubbed FL Games, we solve the RIS configuration problem by learning invariant causal representations across multiple environments and then predicting the phases. The solution corresponds to playing according to Best Response Dynamics (BRD), which yields the Nash Equilibrium of the FL game. The representation learner and the phase predictor are modeled by two neural networks, and their performance is validated via simulations against other benchmarks from the literature. Our results show that causality-based learning yields a predictor that is 15% more accurate in unseen Out-of-Distribution (OoD) environments.
arXiv (Cornell University), Dec 12, 2020
This work studies a real-time environment monitoring scenario in the industrial Internet of Things, where wireless sensors proactively collect environmental data and transmit it to the controller. We adopt the notion of risk-sensitivity from financial mathematics as the objective, jointly minimizing the mean, variance, and higher-order statistics of the network energy consumption subject to constraints on the age of information (AoI) threshold violation probability and on the AoI exceedances over a pre-defined threshold. We characterize extreme AoI staleness using results from extreme value theory and propose a distributed power allocation approach by weaving together principles of Lyapunov optimization and federated learning (FL). Simulation results demonstrate that the proposed FL-based distributed solution is on par with the centralized baseline while consuming 28.50% less system energy, and that it outperforms the other baselines.

Index Terms: 5G and beyond; industrial IoT; smart factory; federated learning (FL); age of information (AoI); extreme value theory (EVT).
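The EVT characterization of extreme AoI can be sketched as follows: by the Pickands-Balkema-de Haan theorem, exceedances of AoI over a high threshold are approximately generalized Pareto (GPD) distributed. The method-of-moments fit and the synthetic exponential AoI trace below are illustrative assumptions, not the paper's actual estimator or traffic model.

```python
import numpy as np

def gpd_fit_mom(exceedances):
    """Method-of-moments fit of a generalized Pareto distribution (GPD)
    to threshold exceedances: xi = (1 - m^2/v)/2, sigma = m*(1 - xi)."""
    m, v = exceedances.mean(), exceedances.var()
    shape = 0.5 * (1.0 - m * m / v)   # xi
    scale = m * (1.0 - shape)         # sigma
    return shape, scale

def gpd_tail_prob(x, shape, scale):
    """P(exceedance > x) under the fitted GPD."""
    if abs(shape) < 1e-12:
        return np.exp(-x / scale)
    return (1.0 + shape * x / scale) ** (-1.0 / shape)

rng = np.random.default_rng(0)
aoi = rng.exponential(scale=1.0, size=100000)   # synthetic AoI samples
thr = 2.0                                       # pre-defined AoI threshold
exc = aoi[aoi > thr] - thr                      # exceedances over the threshold
shape, scale = gpd_fit_mom(exc)
p_viol = gpd_tail_prob(1.0, shape, scale)       # P(AoI > thr + 1 | AoI > thr)
```

The fitted tail probability is exactly the kind of quantity that can then enter a Lyapunov-style constraint on AoI threshold violations.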
arXiv (Cornell University), Aug 20, 2021
In this article, we study the problem of robust reconfigurable intelligent surface (RIS)-aided downlink communication over heterogeneous RIS types in the supervised learning setting. By modeling downlink communication over heterogeneous RIS designs as different workers that learn how to optimize phase configurations in a distributed manner, we solve this distributed learning problem using a distributionally robust formulation in a communication-efficient manner, while establishing its rate of convergence. By doing so, we ensure that the global model performance of the worst-case worker is close to the performance of the other workers. Simulation results show that our proposed algorithm requires about 50% fewer communication rounds to achieve the same worst-case distribution test accuracy as competitive baselines.

Index Terms: reconfigurable intelligent surface (RIS); federated learning; communication efficiency; distributionally robust optimization (DRO).
IEEE Communications Letters, Apr 1, 2021
Ultra-reliable communication (URC) is a key enabler for supporting immersive and mission-critical 5G applications. Meeting the strict reliability requirements of these applications is challenging due to the absence of accurate statistical models tailored to URC systems. In this letter, wireless connectivity over dynamic channels is characterized via statistical learning methods. In particular, model-based and data-driven learning approaches are proposed to estimate the non-blocking connectivity statistics over a set of training samples with no knowledge of the dynamic channel statistics. Using principles of survival analysis, the reliability of wireless connectivity is measured in terms of the probability of channel blocking events. Moreover, the maximum transmission duration for a given reliable non-blocking connectivity is predicted, along with the confidence of the inferred transmission duration. Results show that the accuracy of detecting channel blocking events is higher with the model-based method for low-to-moderate reliability targets, requiring low sample complexity. In contrast, the data-driven method yields higher detection accuracy for higher reliability targets at the cost of 100x higher sample complexity.
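A minimal sketch of the survival-analysis view: treating time-to-blocking as a survival time, a Kaplan-Meier-style estimator gives the probability that connectivity remains non-blocked beyond a duration t, from both observed blocking events and censored samples (transmissions that ended before any blocking). The data below and the simplified one-event-at-a-time handling are assumptions for illustration, not the letter's estimator.

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier estimate of S(t) = P(no blocking up to t).
    durations: time until blocking (or censoring), per sample.
    observed:  1 if blocking was seen, 0 if the sample was censored."""
    order = np.argsort(durations)
    t, d = durations[order], observed[order]
    n_at_risk = len(t)
    times, surv = [], []
    s = 1.0
    for ti, di in zip(t, d):
        if di:                        # blocking event at time ti
            s *= 1.0 - 1.0 / n_at_risk
        n_at_risk -= 1                # sample leaves the risk set either way
        times.append(ti)
        surv.append(s)
    return np.array(times), np.array(surv)

# Hypothetical blocking times (seconds); two samples censored at 5 s.
dur = np.array([1.2, 2.5, 3.1, 4.0, 5.0, 5.0])
obs = np.array([1, 1, 1, 1, 0, 0])
times, surv = kaplan_meier(dur, obs)
```

Reading off the largest t with S(t) above a reliability target is one simple way to obtain a maximum transmission duration for a given non-blocking reliability.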
2015 IEEE Global Communications Conference (GLOBECOM), Dec 1, 2015
In this paper, a novel approach for joint power control and user scheduling is proposed for optimizing energy efficiency (EE), in terms of bits per unit power, in ultra-dense small cell networks (UDNs). To address this problem, a dynamic stochastic game (DSG) is formulated between small cell base stations (SBSs). This game captures the dynamics of both the queues and the channel states of the system. To solve this game, assuming a large homogeneous UDN deployment, the problem is cast as a mean-field game (MFG) in which the MFG equilibrium is analyzed with the aid of two low-complexity, tractable partial differential equations. User scheduling is formulated as a stochastic optimization problem and solved using the drift-plus-penalty (DPP) approach in the framework of Lyapunov optimization. Remarkably, it is shown that by weaving together notions from Lyapunov optimization and mean-field theory, the proposed solution yields an equilibrium control policy per SBS that maximizes the network utility while ensuring users' quality-of-service. Simulation results show that the proposed approach achieves up to 18.1% gains in EE and 98.2% reductions in the network's outage probability compared to a baseline model.
IEEE Journal on Selected Areas in Communications, May 1, 2016
In this paper, a novel approach for joint power control and user scheduling is proposed for optimizing energy efficiency (EE), in terms of bits per unit energy, in ultra-dense small cell networks (UDNs). Due to the severe coupling in interference, this problem is formulated as a dynamic stochastic game (DSG) between small cell base stations (SBSs). This game captures the dynamics of both the queues and the channel states of the system. To solve this game, assuming a large homogeneous UDN deployment, the problem is cast as a mean-field game (MFG) in which the MFG equilibrium is analyzed with the aid of low-complexity, tractable partial differential equations. Exploiting the stochastic nature of the problem, user scheduling is formulated as a stochastic optimization problem and solved using the drift-plus-penalty (DPP) approach in the framework of Lyapunov optimization. Remarkably, it is shown that by weaving together notions from Lyapunov optimization and mean-field theory, the proposed solution yields an equilibrium control policy per SBS that maximizes the network utility while ensuring users' quality-of-service. Simulation results show that the proposed approach achieves up to 70.7% gains in EE and 99.5% reductions in the network's outage probabilities compared to a baseline model that focuses on improving EE while attempting to satisfy the users' instantaneous quality-of-service requirements.
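The drift-plus-penalty (DPP) step used in these scheduling formulations can be sketched generically: in each slot, serve the user maximizing the queue-weighted rate minus V times a penalty (here, transmit power), then update the queues. The arrival, rate, and power models and the value of V below are hypothetical, not the papers' system model.

```python
import numpy as np

def dpp_schedule(Q, rates, power, V):
    """One drift-plus-penalty decision: pick the user maximizing the
    queue-weighted rate minus V times the power penalty."""
    score = Q * rates - V * power
    return int(np.argmax(score))

def simulate(T=1000, n=4, V=10.0, seed=0):
    """Run a toy single-server scheduler for T slots and return the
    final queue backlogs (larger V trades backlog for lower power)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros(n)                        # queue backlogs (bits)
    for _ in range(T):
        arrivals = rng.poisson(1.5, n)     # hypothetical arrivals per slot
        rates = rng.uniform(2.0, 10.0, n)  # achievable rates this slot
        power = rng.uniform(0.5, 1.0, n)   # power cost of serving each user
        k = dpp_schedule(Q, rates, power, V)
        served = np.zeros(n)
        served[k] = rates[k]
        Q = np.maximum(Q + arrivals - served, 0.0)
    return Q

Q_final = simulate()
```

The queue-times-rate term is the Lyapunov drift part (stabilizing the queues), while the V-weighted term steers the policy toward the energy objective, mirroring the utility-versus-quality-of-service tradeoff described above.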
arXiv (Cornell University), Aug 6, 2020
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems ... more Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond. By imbuing intelligence into the network edge, edge nodes can proactively carry out decision-making, and thereby react to local environmental changes and disturbances while experiencing zero communication latency. To achieve this goal, it is essential to cater for high ML inference accuracy at scale under time-varying channel and network dynamics, by continuously exchanging fresh data and ML model updates in a distributed way. Taming this new kind of data traffic boils down to improving the communication efficiency of distributed learning by optimizing communication payload types, transmission techniques, and scheduling, as well as ML architectures, algorithms, and data processing methods. To this end, this article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases. 1 SIGNIFICANCE AND MOTIVATION The pursuit of extremely stringent latency and reliability guarantees is essential in the fifth generation (5G) communication system and beyond [1], [2]. In a wirelessly automated factory, the remote control of assembly robots should provision the same level of target latency and reliability offered by existing wired factory systems. To this end, for instance, control packets should be delivered within 1 ms with 99.99999% reliability [3]-[5]. In the emerging non-terrestrial communication enabled by a massive constellation of loworbit satellites [6]-[10], the orbiting speed is over 8 km per second, under which a single emergency control packet loss may incur collisions with other satellites and space debris. 
Unfortunately, traditional methods postulate known channel and network topological models while focusing primarily on maximizing data rates. Such model-based and best-effort solutions are far from enough to meet the challenging latency and reliability requirements under limited radio resources and randomness on wireless channels and network topologies in practice.
arXiv (Cornell University), Apr 29, 2016
In this paper, we develop novel two-tier interference management strategies that enable macrocell... more In this paper, we develop novel two-tier interference management strategies that enable macrocell users (MUEs) to improve their performance, with the help of open-access femtocells. To this end, we propose a rate-splitting technique using which the MUEs optimize their uplink transmissions by dividing their signals into two types: a coarse message that is intended for direct transmission to the macrocell base station and a fine message that is decoded by a neighboring femtocell and subsequently relayed over a heterogeneous (wireless/wired) backhaul. For deploying the proposed technique, we formulate a non-cooperative game between the MUEs in which each MUE can decide on its relaying femtocell while maximizing a utility function that captures both the achieved throughput and the expected backhaul delay. Simulation results show that the proposed approach yields up to 125% rate improvement and up to 2 times delay reduction with wired backhaul and, 150% rate improvement and up to 10 times delay reduction with wireless backhaul, relative to classical interference management approaches, with no cross-tier cooperation.
arXiv (Cornell University), Feb 3, 2020
In this work, we propose a novel joint client scheduling and resource block (RB) allocation polic... more In this work, we propose a novel joint client scheduling and resource block (RB) allocation policy to minimize the loss of accuracy in federated learning (FL) over wireless compared to a centralized training-based solution, under imperfect channel state information (CSI). First, the problem is cast as a stochastic optimization problem over a predefined training duration and solved using the Lyapunov optimization framework. In order to learn and track the wireless channel, a Gaussian process regression (GPR)-based channel prediction method is leveraged and incorporated into the scheduling decision. The proposed scheduling policies are evaluated via numerical simulations, under both perfect and imperfect CSI. Results show that the proposed method reduces the loss of accuracy up to 25.8 % compared to state-of-the-art client scheduling and RB allocation methods.
IEEE Transactions on Communications, Sep 1, 2021
The performance of federated learning (FL) over wireless networks depend on the reliability of th... more The performance of federated learning (FL) over wireless networks depend on the reliability of the client-server connectivity and clients' local computation capabilities. In this article we investigate the problem of client scheduling and resource block (RB) allocation to enhance the performance of model training using FL, over a pre-defined training duration under imperfect channel state information (CSI) and limited local computing resources. First, we analytically derive the gap between the training losses of FL with clients scheduling and a centralized training method for a given training duration. Then, we formulate the gap of the training loss minimization over client scheduling and RB allocation as a stochastic optimization problem and solve it using Lyapunov optimization. A Gaussian process regression-based channel prediction method is leveraged to learn and track the wireless channel, in which, the clients' CSI predictions and computing power are incorporated into the scheduling decision. Using an extensive set of simulations, we validate the robustness of the proposed method under both perfect and imperfect CSI over an array of diverse data distributions. Results show that the proposed method reduces the gap of the training accuracy loss by up to 40.7 % compared to state-of-theart client scheduling and RB allocation methods.
Vehicle-to-Everything (V2X) communication holds the promise of improving road safety and reducing road accidents by enabling reliable and low-latency services for vehicles. Vehicles are among the fastest-growing types of connected devices, hence the need for V2X communication, i.e., passing information from Vehicle-to-Vehicle (V2V) or Vehicle-to-Infrastructure (V2I) and vice versa. In this paper, we focus on both V2I and V2V communication in a multi-lane freeway scenario, where coverage is provided by a Long Term Evolution-Advanced (LTE-A) road side unit (RSU) network. We propose a mechanism to offload vehicles with low signal-to-interference-plus-noise ratio (SINR) to be served by other vehicles that have much higher-quality links to the RSU. Furthermore, we analyze the improvements in the probabilities of achieving target throughputs, and the performance is assessed through extensive system-level simulations. Results show that the proposed solution offloads low-quality V2I links to stronger V2V links and increases the successful transmission probability from 93% to 99.4%.
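The offloading rule can be pictured as a per-vehicle comparison: keep the direct V2I link if its SINR clears a target, otherwise relay through the neighbour with the strongest V2V link, with the two-hop path bottlenecked by its weaker hop. The sketch below is hypothetical: the function name, the threshold value, and the min-of-hops rule are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def offload_decision(sinr_v2i_db, sinr_v2v_db, threshold_db=5.0):
    """Per-vehicle link selection. sinr_v2i_db: array of direct V2I SINRs (dB);
    sinr_v2v_db[i, j]: V2V SINR from vehicle i to candidate relay j (dB),
    with the diagonal set to a very low value to exclude self-relaying.
    Returns one (serving link, effective SINR in dB) pair per vehicle."""
    plan = []
    for i, direct in enumerate(sinr_v2i_db):
        if direct >= threshold_db:
            plan.append(("V2I", direct))
            continue
        j = int(np.argmax(sinr_v2v_db[i]))          # strongest V2V neighbour
        # a two-hop V2V+V2I path is bottlenecked by its weaker hop
        eff = min(sinr_v2v_db[i, j], sinr_v2i_db[j])
        plan.append((f"V2V via {j}", eff) if eff > direct else ("V2I", direct))
    return plan
```

The point of the final comparison is that relaying only helps if the bottleneck hop still beats the poor direct link; otherwise the vehicle stays on V2I.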
Proceedings of the IEEE, Aug 1, 2019
Edge computing is an emerging concept based on distributed computing, storage, and control services deployed closer to end network nodes. Edge computing lies at the heart of the fifth-generation (5G) wireless systems and beyond. While current state-of-the-art networks communicate, compute, and process data in a centralized manner (at the cloud), for latency- and compute-centric applications both radio access and computational resources must be brought closer to the edge, harnessing the availability of computing- and storage-enabled small cell base stations in proximity to the end devices. Furthermore, the network infrastructure must enable a distributed edge decision-making service that learns to adapt to the network dynamics with minimal latency and optimizes network deployment and operation accordingly. This paper provides a fresh look at the concept of edge computing by first discussing the applications that the network …
arXiv (Cornell University), Jun 20, 2023
Cooperative multi-agent reinforcement learning (MARL) for navigation enables agents to cooperate to achieve their navigation goals. Using emergent communication, agents learn a communication protocol to coordinate and share the information needed to achieve their navigation tasks. In emergent communication, symbols with no pre-specified usage rules are exchanged, and their meaning and syntax emerge through training. Learning a navigation policy along with a communication protocol in a MARL environment is highly complex due to the huge state space to be explored. To cope with this complexity, this work proposes a novel neural network architecture for jointly learning an adaptive state-space abstraction and a communication protocol among agents participating in navigation tasks. The goal is to obtain an adaptive abstractor that significantly reduces the size of the state space to be explored without degrading policy performance. Simulation results show that the proposed method reaches a better policy, in terms of achievable rewards, in fewer training iterations than when raw states or a fixed state abstraction are used. Moreover, a communication protocol is shown to emerge during training, enabling the agents to learn better policies within fewer training iterations.
arXiv (Cornell University), Apr 29, 2016
This paper presents the derivation of the per-tier outage probability of a randomly deployed femtocell network overlaid on an existing macrocell network. The channel characteristics of macro users and femto users are addressed by considering different propagation models for outdoor and indoor links. Location-based outage probability analysis and the capacity of the system under outage constraints are used to analyze system performance. To obtain simplified expressions, approximations of ratios of Rayleigh random variables (RVs), Rayleigh to log-normal RVs, and their weighted summations are derived and verified via simulations. Index Terms: femtocell; outage probability; capacity; Rayleigh-to-Rayleigh probability density function (PDF); Rayleigh-to-log-normal PDF.
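For the Rayleigh-fading building block used in such derivations, the received power is exponentially distributed, so the single-link outage probability has the textbook closed form P_out(theta) = 1 - exp(-theta / mean_snr). The sketch below checks that closed form against Monte Carlo; it is illustrative only, since the paper's per-tier expressions with log-normal indoor links are considerably more involved.

```python
import numpy as np

def outage_closed_form(theta, mean_snr):
    """P(SNR < theta) for Rayleigh fading: |h|^2 ~ Exp(1), SNR = mean_snr * |h|^2."""
    return 1.0 - np.exp(-theta / mean_snr)

def outage_monte_carlo(theta, mean_snr, n=200_000, seed=0):
    """Empirical estimate of the same outage probability."""
    rng = np.random.default_rng(seed)
    h2 = rng.exponential(1.0, n)   # Rayleigh amplitude -> exponential power gain
    return float(np.mean(mean_snr * h2 < theta))
```

For example, at a target threshold equal to one fifth of the mean SNR, the outage probability is 1 - exp(-0.2), roughly 18%.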
arXiv (Cornell University), Nov 9, 2020
This article deals with the problem of distributed machine learning, in which agents update their models based on their local datasets and aggregate the updated models collaboratively in a fully decentralized manner. We tackle the problem of information heterogeneity arising in multi-agent networks, where the placement of informative agents plays a crucial role in the learning dynamics. Specifically, we propose BayGo, a novel fully decentralized joint Bayesian learning and graph optimization framework with proven fast convergence over a sparse graph. Under our framework, agents are able to learn from and communicate with the agents most informative for their own learning. Unlike prior works, our framework assumes no prior knowledge of the data distribution across agents, nor any knowledge of the true parameter of the system. The proposed alternating-minimization-based framework ensures global connectivity in a fully decentralized way while minimizing the number of communication links. We theoretically show that by optimizing the proposed objective function, the estimation error of the posterior probability distribution decreases exponentially at each iteration. Via extensive simulations, we show that our framework achieves faster convergence and higher accuracy compared to fully connected and star-topology graphs.
Software-defined networking (SDN) is the concept of decoupling the control and data planes to create a flexible and agile network, assisted by a central controller. However, the performance of SDN highly depends on limitations in the fronthaul, which are inadequately discussed in the existing literature. In this paper, a fronthaul-aware software-defined resource allocation mechanism is proposed for 5G wireless networks with in-band wireless fronthaul constraints. Considering the fronthaul capacity, the controller maximizes the time-averaged network throughput by enforcing a coarse correlated equilibrium (CCE) and incentivizing base stations (BSs) to locally optimize their decisions so as to ensure mobile users' (MUs) quality-of-service (QoS) requirements. By marrying tools from Lyapunov stochastic optimization and game theory, we propose a two-timescale approach in which the controller issues recommendations, i.e., subcarriers with low interference, on a long timescale, whereas BSs schedule their own MUs and allocate the available resources in every time slot. Numerical results show considerable throughput enhancements and delay reductions over a non-SDN network baseline.
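The Lyapunov drift-plus-penalty machinery invoked here (and in several of the other abstracts on this page) reduces, in its simplest single-queue form, to a per-slot rule: pick the action minimizing V*penalty(a) - Q(t)*service(a), where Q is the queue backlog and V trades average penalty against backlog. The sketch below is a minimal illustration under assumed ingredients: one queue, a small discrete power set, and a hypothetical logarithmic rate model; it is not the paper's multi-BS mechanism.

```python
import numpy as np

def drift_plus_penalty(arrivals, powers, rate_fn, V=10.0):
    """Single-queue drift-plus-penalty: each slot, pick the transmit power p
    minimizing V*p - Q(t)*rate(p). Larger V favours low average power at the
    cost of a larger queue backlog. `rate_fn` is a hypothetical rate model."""
    Q, used, backlog = 0.0, [], []
    p_arr = np.asarray(powers, float)
    for a in arrivals:
        rates = np.array([rate_fn(p) for p in p_arr])
        k = int(np.argmin(V * p_arr - Q * rates))   # per-slot greedy rule
        Q = max(Q - rates[k], 0.0) + a              # queue update
        used.append(float(p_arr[k]))
        backlog.append(Q)
    return used, backlog
```

With constant arrivals below the maximum service rate, the rule keeps the queue bounded: it idles (p = 0) while the backlog is small and transmits once Q*rate(p) outweighs V*p.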
IEEE Transactions on Wireless Communications, Nov 1, 2013
The design of distributed mechanisms for interference management is one of the key challenges in emerging wireless small cell networks whose backhaul is capacity-limited and heterogeneous (wired, wireless, and a mix thereof). In this paper, a novel, backhaul-aware approach to interference management in wireless small cell networks is proposed. The proposed approach enables macrocell user equipments (MUEs) to optimize their uplink performance by exploiting the presence of neighboring small cell base stations. The problem is formulated as a noncooperative game among the MUEs that seek to optimize their delay-rate tradeoff, given the conditions of both the radio access network and the possibly heterogeneous backhaul. To solve this game, a novel, distributed learning algorithm is proposed with which the MUEs autonomously choose their optimal uplink transmission strategies, given a limited amount of available information. The convergence of the proposed algorithm is shown and its properties are studied. Simulation results show that, under various types of backhauls, the proposed approach yields significant performance gains, in terms of both average throughput and delay for the MUEs, compared to existing benchmark algorithms. Index Terms: heterogeneous networks; capacity-limited backhaul; wired and wireless backhaul; reinforcement learning; game theory. Sumudu Samarakoon received his B.Sc. degree in Electronic and Telecommunication Engineering from the University of Moratuwa, Sri Lanka, in 2009 and the M.Eng. degree from the Asian Institute of Technology, Thailand, in 2011. He is currently working toward the Dr. Tech. (Hons.) degree in Communications Engineering at the University of Oulu, Finland. Sumudu is also a member of the research staff of the Centre for Wireless Communications (CWC), Oulu, Finland.
His main research interests are in heterogeneous networks, radio resource management and game theory.
Multiple-input multiple-output (MIMO) is a key technology for the fifth generation (5G) and beyond wireless communication systems owing to its higher spectral efficiency, spatial gains, and energy efficiency. The benefits of MIMO transmission can be fully harnessed if channel state information (CSI) is available at the transmitter side. However, acquiring transmitter-side CSI entails many challenges. In this paper, we propose a deep learning-assisted CSI estimation technique for highly mobile vehicular networks, based on the fact that the propagation environment (scatterers, reflectors) remains almost identical, thereby allowing a data-driven deep neural network (DNN) to learn the non-linear CSI relations with negligible overhead. Moreover, we formulate and solve a dynamic network slicing-based resource allocation problem for vehicular user equipments (VUEs) requesting enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) traffic slices. The formulation minimizes the threshold-rate violation probability for the eMBB slice while satisfying a probabilistic threshold-rate criterion for the URLLC slice. Simulation results show that a 50% overhead reduction can be achieved at the cost of a 12% increase in threshold violations compared to an ideal case with perfect CSI knowledge.
IEEE Transactions on Wireless Communications, Mar 1, 2016
In this paper, a novel cluster-based approach for maximizing the energy efficiency of wireless small cell networks is proposed. A dynamic mechanism is proposed to group locally coupled small cell base stations (SBSs) into clusters based on location and traffic load. Within each formed cluster, SBSs coordinate their transmission parameters to minimize a cost function that captures the tradeoffs between energy efficiency and flow-level performance, while satisfying their users' quality-of-service requirements. Due to the lack of inter-cluster communications, clusters compete with one another to improve the overall network's energy efficiency. This inter-cluster competition is formulated as a noncooperative game between clusters that seek to minimize their respective cost functions. To solve this game, a distributed learning algorithm is proposed with which clusters autonomously choose their optimal transmission strategies based on local information. It is shown that the proposed algorithm converges to a stationary mixed-strategy distribution that constitutes an epsilon-coarse correlated equilibrium for the studied game. Simulation results show that the proposed approach yields significant performance gains, reaching up to 36% reduced energy expenditure and up to 41% reduced fractional transfer time compared to conventional approaches.
arXiv (Cornell University), Jun 2, 2023
In this paper, we investigate the problem of robust Reconfigurable Intelligent Surface (RIS) phase-shift configuration over heterogeneous communication environments. The problem is formulated as a distributed learning problem over different environments in a Federated Learning (FL) setting. Equivalently, this corresponds to a game played between multiple RISs, as learning agents, in heterogeneous environments. Using Invariant Risk Minimization (IRM) and its FL equivalent, dubbed FL Games, we solve the RIS configuration problem by learning invariant causal representations across multiple environments and then predicting the phases. The solution corresponds to playing according to Best Response Dynamics (BRD), which yields the Nash Equilibrium of the FL game. The representation learner and the phase predictor are modeled by two neural networks, and their performance is validated via simulations against other benchmarks from the literature. Our results show that causality-based learning yields a predictor that is 15% more accurate in unseen Out-of-Distribution (OoD) environments.
arXiv (Cornell University), Dec 12, 2020
This work studies a real-time environment monitoring scenario in the industrial Internet of things, where wireless sensors proactively collect environmental data and transmit it to the controller. We adopt the notion of risk-sensitivity from financial mathematics as the objective, jointly minimizing the mean, variance, and other higher-order statistics of the network energy consumption subject to constraints on the age of information (AoI) threshold-violation probability and on the AoI exceedances over a pre-defined threshold. We characterize the extreme AoI staleness using results from extreme value theory and propose a distributed power allocation approach by weaving together principles of Lyapunov optimization and federated learning (FL). Simulation results demonstrate that the proposed FL-based distributed solution is on par with the centralized baseline while consuming 28.50% less system energy, and outperforms the other baselines. Index Terms: 5G and beyond, industrial IoT, smart factory, federated learning (FL), age of information (AoI), extreme value theory (EVT).
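The extreme value theory characterization rests on peaks-over-threshold statistics: the probability that the AoI exceeds the pre-defined threshold, and the conditional expected overshoot given an exceedance. A minimal empirical sketch of these two tail quantities is given below; it is illustrative only and omits the generalized-Pareto tail fitting that a full EVT treatment would perform.

```python
import numpy as np

def tail_statistics(aoi_samples, threshold):
    """Peaks-over-threshold view of AoI staleness: returns the empirical
    exceedance probability P(AoI > threshold) and the mean excess
    E[AoI - threshold | AoI > threshold]."""
    x = np.asarray(aoi_samples, float)
    excess = x[x > threshold] - threshold          # overshoots past the threshold
    p_exceed = excess.size / x.size
    mean_excess = float(excess.mean()) if excess.size else 0.0
    return p_exceed, mean_excess
```

For exponentially distributed AoI the memoryless property makes the check easy: the mean excess equals the distribution mean regardless of the threshold, and the exceedance probability is exp(-threshold/mean).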
arXiv (Cornell University), Aug 20, 2021
In this article, we study the problem of robust reconfigurable intelligent surface (RIS)-aided downlink communication over heterogeneous RIS types in a supervised learning setting. By modeling downlink communication over heterogeneous RIS designs as different workers that learn how to optimize phase configurations in a distributed manner, we solve this distributed learning problem using a distributionally robust formulation in a communication-efficient manner, while establishing its rate of convergence. By doing so, we ensure that the global model performance of the worst-case worker is close to that of the other workers. Simulation results show that our proposed algorithm requires about 50% fewer communication rounds to achieve the same worst-case distribution test accuracy compared to competitive baselines. Index Terms: Reconfigurable intelligent surface (RIS), federated learning, communication-efficiency, distributionally robust optimization (DRO).
IEEE Communications Letters, Apr 1, 2021
Ultra-reliable communication (URC) is a key enabler for supporting immersive and mission-critical 5G applications. Meeting the strict reliability requirements of these applications is challenging due to the absence of accurate statistical models tailored to URC systems. In this letter, wireless connectivity over dynamic channels is characterized via statistical learning methods. In particular, model-based and data-driven learning approaches are proposed to estimate the non-blocking connectivity statistics over a set of training samples with no knowledge of the dynamic channel statistics. Using principles of survival analysis, the reliability of wireless connectivity is measured in terms of the probability of channel-blocking events. Moreover, the maximum transmission duration for a given reliable non-blocking connectivity target is predicted, along with the confidence of the inferred transmission duration. Results show that the accuracy of detecting channel-blocking events is higher using the model-based method for low to moderate reliability targets, requiring low sample complexity. In contrast, the data-driven method yields higher detection accuracy for higher reliability targets at the cost of 100x higher sample complexity.
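In the survival-analysis framing, a blocking event plays the role of a "death" and an observation window that ends without blocking is a censored sample, so the probability of non-blocking connectivity lasting beyond duration t is a survival function S(t). A minimal sketch using the standard Kaplan-Meier estimator is shown below; the letter's own model-based and data-driven estimators are not reproduced here.

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier estimate of S(t) = P(non-blocking duration > t).
    durations: time until a blocking event or until censoring;
    observed[i]: True if a blocking event was seen, False if censored.
    Returns a list of (event time, survival estimate) pairs."""
    durations = np.asarray(durations, float)
    observed = np.asarray(observed, bool)
    s, curve = 1.0, []
    for t in np.unique(durations[observed]):        # distinct event times
        at_risk = np.sum(durations >= t)            # still connected just before t
        events = np.sum((durations == t) & observed)
        s *= 1.0 - events / at_risk                 # product-limit update
        curve.append((float(t), float(s)))
    return curve
```

Censoring is what makes this estimator appropriate: sessions that were still unblocked when measurement stopped contribute to the at-risk counts without being treated as blocking events.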
2015 IEEE Global Communications Conference (GLOBECOM), Dec 1, 2015
In this paper, a novel approach for joint power control and user scheduling is proposed for optimizing energy efficiency (EE), in terms of bits per unit power, in ultra-dense small cell networks (UDNs). To address this problem, a dynamic stochastic game (DSG) is formulated between small cell base stations (SBSs). This game captures the dynamics of both the queues and the channel states of the system. To solve this game, assuming a large homogeneous UDN deployment, the problem is cast as a mean-field game (MFG) whose equilibrium is analyzed with the aid of two low-complexity, tractable partial differential equations. User scheduling is formulated as a stochastic optimization problem and solved using the drift-plus-penalty (DPP) approach in the framework of Lyapunov optimization. Remarkably, it is shown that by weaving together notions from Lyapunov optimization and mean-field theory, the proposed solution yields an equilibrium control policy per SBS that maximizes the network utility while ensuring users' quality of service. Simulation results show that the proposed approach achieves up to 18.1% gains in EE and 98.2% reductions in the network's outage probability compared to a baseline model.
IEEE Journal on Selected Areas in Communications, May 1, 2016
In this paper, a novel approach for joint power control and user scheduling is proposed for optimizing energy efficiency (EE), in terms of bits per unit energy, in ultra-dense small cell networks (UDNs). Due to the severe coupling in interference, this problem is formulated as a dynamic stochastic game (DSG) between small cell base stations (SBSs). This game captures the dynamics of both the queues and the channel states of the system. To solve this game, assuming a large homogeneous UDN deployment, the problem is cast as a mean-field game (MFG) whose equilibrium is analyzed with the aid of low-complexity, tractable partial differential equations. Exploiting the stochastic nature of the problem, user scheduling is formulated as a stochastic optimization problem and solved using the drift-plus-penalty (DPP) approach in the framework of Lyapunov optimization. Remarkably, it is shown that by weaving together notions from Lyapunov optimization and mean-field theory, the proposed solution yields an equilibrium control policy per SBS that maximizes the network utility while ensuring users' quality of service. Simulation results show that the proposed approach achieves up to 70.7% gains in EE and 99.5% reductions in the network's outage probabilities compared to a baseline model that focuses on improving EE while attempting to satisfy the users' instantaneous quality-of-service requirements.