Domenico Ciuonzo | Università degli Studi di Napoli "Federico II"

Journal papers by Domenico Ciuonzo

Suppressing Interrupted Sampling Repeater Jamming by Transceiver Design of Fully-polarimetric Wideband Radars

IEEE Sensors Journal, 2024

The Interrupted Sampling Repeater Jamming (ISRJ) is adept at generating multiple false targets with high fidelity at radar receivers through sub-sampling, leading to significant challenges in detecting actual targets. This paper presents a novel approach to mitigate such jamming by jointly designing the transmit waveform and receive filter of a fully-polarimetric wideband radar system. In this study, we aim to minimize the sum of the target's Integral SideLobe (ISL) energy and the jamming's total energy at the filter output. To ensure effective control over the mainlobe energy levels, we impose equality constraints on the peak values of both the target and jamming signals. Additionally, a constant-modulus constraint is applied to the transmit signal to prevent distortion at the transmitter. We incorporate the modulation of the Target Impulse Response Matrix (TIRM) to align with wideband illumination scenarios, utilizing the average TIRM over a specific Target-Aspect-Angle (TAA) interval to mitigate the sensitivity of the Signal-to-Interference plus Noise Ratio (SINR) to TAA variations. To address this nonconvex optimization problem, we propose an efficient algorithm based on an alternating optimization framework. Within this framework, the alternating direction method of multipliers (ADMM) is employed to tackle the inner subproblems, yielding closed-form solutions at each iteration. Experimental results demonstrate the effectiveness of the proposed algorithm, highlighting the benefits of wideband radar illumination, the resilience of the output SINR to TAA uncertainty, and the enhanced jamming suppression capabilities of the fully-polarimetric system.
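
As a toy illustration of the mechanism being countered (not of the paper's transceiver design), the sketch below builds an LFM pulse, forges an interrupted-sampling repeater copy by slicing and re-transmitting sub-segments, and matched-filters both; the jammed return yields a train of spurious peaks. All parameters (sampling rate, slice width, number of repeats) are illustrative assumptions.

```python
import numpy as np

# Toy illustration of Interrupted Sampling Repeater Jamming (ISRJ).
# Assumed parameters (illustrative only, not from the paper).
fs, T, B = 20e6, 50e-6, 5e6                    # sample rate, pulse width, LFM bandwidth
t = np.arange(int(fs * T)) / fs
pulse = np.exp(1j * np.pi * (B / T) * t**2)    # linear FM (chirp) pulse

# ISRJ: the jammer samples short slices of the intercepted pulse and
# retransmits each slice several times before sampling the next one.
slice_len, repeats = len(pulse) // 10, 3
jam = np.zeros_like(pulse)
for start in range(0, len(pulse), slice_len * (repeats + 1)):
    seg = pulse[start:start + slice_len]
    for r in range(1, repeats + 1):
        s = start + r * slice_len
        jam[s:s + len(seg)] += seg[:max(0, len(pulse) - s)]

# Matched filtering: the true target yields one peak, while the ISRJ copy
# yields a train of false-target peaks.
mf = np.conj(pulse[::-1])
out_target = np.abs(np.convolve(pulse, mf))
out_jam = np.abs(np.convolve(jam, mf))
print("true-target peak:", out_target.max().round(1))
print("largest false-target peak:", out_jam.max().round(1))
```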

MEMENTO: A Novel Approach for Class Incremental Learning of Encrypted Traffic

Elsevier Computer Networks, 2024

In the ever-changing digital environment, ensuring the ongoing effectiveness of traffic analysis and security measures is crucial. Therefore, Class Incremental Learning (CIL) in encrypted Traffic Classification (TC) is essential for adapting to evolving network behaviors and the rapid development of new applications. However, the application of CIL techniques in the TC domain is not straightforward, usually leading to unsatisfactory performance figures. Specifically, the goal is to reduce forgetting on old apps and increase the capacity to learn new ones, so as to improve overall classification performance and reduce the drop with respect to a model trained from scratch. The contribution of this work is the design of a novel fine-tuning approach called MEMENTO, obtained through the careful design of different building blocks: memory management, model training, and rectification strategies. In detail, we propose the application of traffic-biflow augmentation strategies to better capitalize on old apps' biflows, we introduce improvements in the distillation stage, and we design a general rectification strategy that encompasses several existing proposals. To assess our proposal, we leverage two publicly-available encrypted network traffic datasets, i.e., MIRAGE19 and CESNET-TLS22. On both datasets, MEMENTO achieves a significant improvement in classifying new apps (w.r.t. the best-performing alternative, i.e., BiC) while maintaining stable performance on old ones. Equally important, MEMENTO achieves satisfactory overall TC performance, filling the gap toward a trained-from-scratch model and offering a considerable gain in time (up to 10× speed-up) to obtain up-to-date, running classifiers. The experimental evaluation relies on a comprehensive performance-evaluation workbench for CIL proposals, based on a wider set of metrics than the existing TC literature.
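
MEMENTO's exact building blocks are detailed in the paper; as a hedged sketch of the generic pattern it refines, the PyTorch snippet below shows one fine-tuning step that mixes a replay memory of old-app biflows with new-app data and adds a distillation term against the frozen previous model. Tensor shapes, the helper name, the class ordering, and the loss weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def cil_finetune_step(model, old_model, new_batch, memory_batch,
                      optimizer, temperature=2.0, alpha=0.5):
    """One fine-tuning step mixing new-app data with a replay memory of
    old-app biflows, plus a distillation term against the frozen old model.
    Hypothetical helper sketching the generic pattern (not MEMENTO itself)."""
    x_new, y_new = new_batch
    x_old, y_old = memory_batch
    x = torch.cat([x_new, x_old])
    y = torch.cat([y_new, y_old])

    logits = model(x)
    ce = F.cross_entropy(logits, y)              # learn new apps + rehearse old

    with torch.no_grad():                        # teacher: pre-update model
        old_logits = old_model(x)
    n_old = old_logits.shape[1]                  # assumes old classes come first
    distill = F.kl_div(
        F.log_softmax(logits[:, :n_old] / temperature, dim=1),
        F.softmax(old_logits / temperature, dim=1),
        reduction="batchmean") * temperature**2  # preserve old-class behaviour

    loss = alpha * ce + (1 - alpha) * distill
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```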

Explainable Deep-Learning Approaches for Packet-level Traffic Prediction of Collaboration and Communication Mobile Apps

IEEE Open Journal of the Communications Society, 2024

Significant transformations in lifestyle have reshaped the Internet landscape, resulting in notable shifts in both the magnitude of Internet traffic and the diversity of apps utilized. The increased adoption of communication-and-collaboration apps, also fueled by lockdowns in the COVID-19 pandemic years, has heavily impacted the management of network infrastructures and their traffic. A notable characteristic of these apps is their multi-activity nature: for example, they can be used for chat and (interactive) audio/video in the same usage session, so predicting and managing the traffic they generate is an important but especially challenging task. In this study, we focus on real data from four popular apps belonging to the aforementioned category: Skype, Teams, Webex, and Zoom. First, we collect traffic data from these apps, reliably label it with both the app and the specific user activity, and analyze it from the perspective of traffic prediction. Second, we design data-driven models to predict this traffic at the finest granularity (i.e., at packet level), employing four advanced multitask deep learning architectures and investigating three different training strategies. The trade-off between performance and complexity is explored as well. We publish the dataset and release our code as open source to foster the replicability of our analysis. Third, we leverage the packet-level prediction approach to perform aggregate prediction at different timescales. Fourth, our study pioneers the trustworthiness analysis of these predictors via the application of eXplainable Artificial Intelligence to (a) interpret their forecasting results and (b) evaluate their reliability, highlighting the relative importance of different parts of the observed traffic and thus offering insights for future analyses and applications. The insights gained from this work have implications for various network management tasks, including monitoring, planning, resource allocation, and enforcing security policies.
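
As a minimal sketch of the explainability-probe family used for such trustworthiness analyses (not necessarily the exact attribution method of the paper), the snippet below estimates the relevance of each temporal segment of the input by occluding it and measuring the error increase of an arbitrary predictor; the function and argument names are hypothetical.

```python
import numpy as np

def occlusion_importance(predict, X, y, n_segments=8):
    """Model-agnostic relevance of input-time segments for a packet-level
    predictor: mask one segment at a time and measure the MAE increase.
    `predict` maps (samples, timesteps, features) -> predictions; this is a
    generic XAI probe, hypothetical names throughout."""
    base_err = np.mean(np.abs(predict(X) - y))
    T = X.shape[1]
    bounds = np.linspace(0, T, n_segments + 1).astype(int)
    scores = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        Xm = X.copy()
        Xm[:, a:b, :] = 0.0                     # occlude the segment
        scores.append(np.mean(np.abs(predict(Xm) - y)) - base_err)
    return np.array(scores)                     # high = segment matters
```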

Bayesian Fault Detection and Localization Through Wireless Sensor Networks in Industrial Plants

IEEE Internet of Things Journal, 2024

This work proposes a data fusion approach for quickest fault detection and localization within industrial plants via wireless sensor networks. Two approaches are proposed, each exploiting a different network architecture. In the first approach, multiple sensors monitor a plant section and individually report their local decisions to a fusion center, which provides a global decision after spatial aggregation of the local decisions. A post-processing center subsequently aggregates these global decisions over time, performing quickest detection and localization. Alternatively, the fusion center directly performs a spatio-temporal aggregation aimed at quickest detection, together with a possible estimation of the faulty item. Both architectures are provided with a feedback system in which the network's highest hierarchical level transmits parameters to the lower levels. The two proposed approaches model the faults according to a Bayesian criterion and exploit the knowledge of the reliability model of the plant under monitoring. Moreover, adaptations of the well-known Shewhart and CUSUM charts are provided to fit the different architectures and are used for comparison purposes. Finally, the algorithms are tested via simulation on an active Oil and Gas subsea production system, and their performance is reported.
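
For reference, a minimal one-sided CUSUM detector of the kind adapted in the paper is sketched below, assuming Gaussian observations with a known pre-/post-fault mean shift; the threshold and parameters are illustrative.

```python
import numpy as np

def cusum_alarm(x, mu0, mu1, sigma, h):
    """One-sided CUSUM chart for quickest change detection: cumulate the
    log-likelihood ratio of the pre-/post-fault Gaussian means and alarm
    when it exceeds h. A textbook baseline, not the paper's Bayesian rules."""
    llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2.0)
    g, alarm_at = 0.0, None
    for k, l in enumerate(llr):
        g = max(0.0, g + l)                  # reset at zero (one-sided test)
        if g > h:
            alarm_at = k
            break
    return alarm_at

rng = np.random.default_rng(0)
x = np.r_[rng.normal(0, 1, 200), rng.normal(1.0, 1, 200)]  # fault at k=200
print(cusum_alarm(x, mu0=0.0, mu1=1.0, sigma=1.0, h=8.0))
```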

AI-powered Internet Traffic Classification: Past, Present, and Future

IEEE Communications Magazine, 2023

Traffic classification (TC) is pivotal for network traffic management and security. Over time, TC solutions leveraging Artificial Intelligence (AI) have undergone significant advancements, primarily fueled by Machine Learning (ML). This paper analyzes the history and current state of AI-powered TC on the Internet, highlighting unresolved research questions. Indeed, despite extensive research, key desiderata for product-line implementations remain unmet. AI presents untapped potential for addressing the complex and evolving challenges of TC, drawing from successful applications in other domains. We identify novel ML topics and solutions that address unmet TC requirements, shaping a comprehensive research landscape for the future of TC. We also discuss the interdependence of TC desiderata and identify obstacles hindering AI-powered next-generation solutions. Overcoming these roadblocks will unlock two intertwined visions for future networks: self-managed and human-centered networks.

Deep Recurrent Graph Convolutional Architecture for Sensor Fault Detection, Isolation and Accommodation in Digital Twins

IEEE Sensors Journal, 2023

The rapid adoption of Internet-of-Things (IoT) and digital twin (DT) technologies within industrial environments has highlighted diverse critical issues related to safety and security. Sensor failure is one of the major threats compromising DT operations. In this paper, for the first time, we address the problem of sensor fault detection, isolation and accommodation (SFDIA) in large-size networked systems. Currently available machine-learning solutions are either based on shallow networks, unable to capture complex features from input graph data, or on deep networks whose complexity overshoots in the case of a large number of sensors. To overcome these challenges, we propose a new framework for sensor validation based on a deep recurrent graph convolutional architecture that jointly learns a graph structure and models spatiotemporal inter-dependencies. More specifically, the proposed two-block architecture (i) constructs virtual sensors in the first block, to refurbish the anomalous (i.e., faulty) behaviour of unreliable sensors and to accommodate the isolated faulty sensors, and (ii) performs the detection and isolation tasks in the second block by means of a classifier. Extensive analysis on two publicly-available datasets demonstrates the superiority of the proposed architecture over existing state-of-the-art solutions.
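
A minimal numpy sketch of the basic building block, a graph-convolution step over the sensor graph with symmetric normalization, is given below; the paper's architecture stacks such operations with recurrence and learns the graph, which this toy (with assumed sizes) does not attempt.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step over the sensor graph: the symmetrically
    normalized adjacency (with self-loops) aggregates each sensor's
    neighbourhood before the linear map W. A minimal building block of a
    recurrent graph-convolutional stack, not the paper's full architecture."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)   # ReLU

# 4 sensors in a line graph, 3 features each, 8 hidden units (toy sizes)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
H = np.random.default_rng(1).normal(size=(4, 3))
W = np.random.default_rng(2).normal(size=(3, 8))
print(gcn_layer(A, H, W).shape)              # (4, 8)
```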

Time-Aware Distributed Sequential Detection of Gas Dispersion via Wireless Sensor Networks

Optimization Based Sensor Placement for Multi-Target Localization with Coupling Sensor Clusters

IEEE Transactions on Signal and Information Processing over Networks, 2023

Since the Cramér-Rao lower bound (CRLB) of target localization depends explicitly on the sensor geometry, sensor placement becomes a crucial issue in many target or source localization applications. In the context of simultaneous time-of-arrival (TOA) based multi-target localization, we consider the sensor placement for multiple sensor clusters in the presence of shared sensors. To minimize the mean squared error (MSE) of target localization, we formulate the sensor placement problem as a minimization of the trace of the CRLB matrix (i.e., A-optimal design), subject to the coupling constraints corresponding to the freely-placed shared sensors. For the formulated nonconvex problem, we propose an optimization approach based on the combination of alternating minimization (AM), the alternating direction method of multipliers (ADMM) and majorization-minimization (MM), in which the AM alternates between sensor clusters and the integrated ADMM and MM are employed to solve the subproblems. The proposed algorithm monotonically minimizes the joint design criterion and converges to a stationary point of the objective. Unlike the state-of-the-art analytical approaches in the literature, the proposed algorithm can handle both non-uniform and correlated measurement noise in the simultaneous multi-target case. Through various numerical simulations under different scenario settings, we show the efficiency of the proposed method in designing the optimal sensor geometry.
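
As a sketch of the design criterion only (single cluster, uncorrelated noise, range-equivalent TOA), the snippet below evaluates the trace of the CRLB for a candidate sensor geometry; the sensor coordinates and noise level are illustrative assumptions, and the paper's optimizer over coupled clusters is not reproduced here.

```python
import numpy as np

def trace_crlb(sensors, target, sigma=1.0):
    """A-optimal criterion for (range-equivalent) TOA localization:
    FIM = (1/sigma^2) * sum_i u_i u_i^T, with u_i the unit vector from the
    target to sensor i; the score is trace(FIM^-1), the CRLB on position MSE."""
    u = sensors - target
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    fim = (u.T @ u) / sigma**2
    return np.trace(np.linalg.inv(fim))

target = np.zeros(2)
spread = np.array([[1, 0], [0, 1], [-1, -1]], float)      # well spread
clumped = np.array([[1, 0], [1, 0.1], [1, -0.1]], float)  # nearly collinear
print(trace_crlb(spread, target), "<", trace_crlb(clumped, target))
```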

Benchmarking Class Incremental Learning in Deep Learning Traffic Classification

IEEE Transactions on Network and Service Management, 2023

Traffic Classification (TC) is experiencing a renewed interest, fostered by the growing popularity of Deep Learning (DL) approaches. In exchange for their proven effectiveness, DL models are characterized by a computationally-intensive training procedure that badly matches the fast-paced release of new (mobile) applications, resulting in significantly limited efficiency of model updates. To address this shortcoming, in this work we systematically explore Class Incremental Learning (CIL) techniques, aimed at adding new apps/services to preexisting DL-based traffic classifiers without a full retraining, hence speeding up the model-update cycle. We investigate a large corpus of state-of-the-art CIL approaches for the DL-based TC task, and delve into their working principles to highlight relevant insights, aiming to understand whether there is a case for CIL in TC. We evaluate and discuss their performance while varying the number of incremental learning episodes and the number of new apps added in each episode. Our evaluation is based on the publicly available MIRAGE19 dataset, comprising the traffic of 40 popular Android applications, fostering reproducibility. Although our analysis reveals their infancy, CIL techniques are a promising research area on the roadmap towards automated DL-based traffic analysis systems.
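
One of the standard quantities such a benchmark tracks is the forgetting measure across incremental episodes; a minimal sketch with an assumed per-episode accuracy matrix is shown below.

```python
import numpy as np

def average_forgetting(acc):
    """Forgetting measure over CIL episodes. `acc[e, t]` is the accuracy on
    the classes introduced at episode t, evaluated after training episode e
    (lower triangle valid). Forgetting of task t = best past accuracy minus
    final accuracy; one of the standard CIL metrics. Values are assumed."""
    E = acc.shape[0]
    f = [acc[:E - 1, t].max() - acc[E - 1, t] for t in range(E - 1)]
    return float(np.mean(f))

# toy accuracy matrix: 3 episodes; rows = after-episode, cols = task
acc = np.array([[0.90, 0.00, 0.00],
                [0.80, 0.85, 0.00],
                [0.70, 0.75, 0.88]])
print(average_forgetting(acc))   # (0.90-0.70 + 0.85-0.75)/2 = 0.15
```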

Constant-Modulus Waveform Design with Polarization-Adaptive Power Allocation in Polarimetric Radar

IEEE Transactions on Signal Processing, 2023

In polarimetric radars, owing to the polarized antennas, exploiting waveform diversity along the polarization dimension becomes accessible. In this paper, we aim to maximize the signal-to-interference plus noise ratio (SINR) of a polarimetric radar by jointly optimizing the transmit polarimetric waveform, the power allocation on its horizontal and vertical polarization segments, and the receive filters, subject to separate (yet practical) unit-modulus and similarity constraints. To mitigate the sensitivity of the SINR to the Target-Aspect-Angle (TAA), the average Target-Impulse-Response Matrix (TIRM) within a certain TAA interval is employed as the target response, which leads to an average SINR as the metric to be maximized. For the formulated nonconvex fractional programming problem, we propose an efficient algorithm under the framework of the alternating optimization method. Within this framework, the alternating direction method of multipliers (ADMM) is deployed to solve the inner subproblems, with closed-form solutions obtained at each iteration. An analysis of the computational cost and convergence of the proposed algorithm is also provided. Experimental results show the effectiveness of the proposed algorithm, the robustness of the output SINR against TAA uncertainty, and the superior performance of polarimetric power adaptation.
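
The paper's joint waveform/power/filter optimization is far richer, but the role of polarimetric power adaptation can be sketched with a toy 1-D grid search over the H/V power split of a coherent-combining SINR; the channel gains and noise variance below are assumptions.

```python
import numpy as np

def best_power_split(h_h, h_v, noise_var, grid=1001):
    """1-D grid search for the transmit power split between the horizontal
    and vertical polarization channels, maximizing a toy coherent-combining
    SINR |sqrt(a)*h_H + sqrt(1-a)*h_V|^2 / noise_var. A stand-in for the
    paper's joint optimization, which also designs waveforms and filters."""
    a = np.linspace(0.0, 1.0, grid)
    sinr = np.abs(np.sqrt(a) * h_h + np.sqrt(1 - a) * h_v) ** 2 / noise_var
    return a[np.argmax(sinr)], sinr.max()

alpha, sinr = best_power_split(h_h=1.0 + 0.3j, h_v=0.4 - 0.2j, noise_var=0.1)
print(f"optimal H-share {alpha:.3f}, SINR {sinr:.2f}")
```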

Network Anomaly Detection Methods in IoT Environments via Deep Learning: A Fair Comparison of Performance and Robustness

Elsevier Computers and Security, 2023

The Internet of Things (IoT) is a key enabler in closing the loop in Cyber-Physical Systems, providing "smartness" and thus additional value to each monitored/controlled physical asset. Unfortunately, these devices are increasingly targeted by cyberattacks, because of their diffusion and their usually limited hardware and software resources. This calls for designing and evaluating new effective approaches for protecting IoT systems at the network level (Network Intrusion Detection Systems, NIDSs). These, in turn, are challenged by the heterogeneity of IoT devices and the growing volume of transmitted data. To tackle this challenge, we select a Deep Learning architecture to perform unsupervised early anomaly detection. With a data-driven approach, we explore in depth multiple design choices and exploit the appealing structural properties of the selected architecture to enhance its performance. The experimental evaluation is performed on two recent and publicly available IoT datasets (IoT-23 and Kitsune). Finally, we adopt an adversarial approach to investigate the robustness of our solution in the presence of Label Flipping poisoning attacks. The experimental results highlight the improved performance of the proposed architecture, in comparison to both well-known baselines and previous proposals.
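
As a hedged sketch of the unsupervised-detection idea (not the specific architecture selected in the paper), the snippet below defines a tiny autoencoder to be trained on benign traffic features, with the alarm threshold set as a quantile of the benign reconstruction error; layer sizes and the quantile are assumptions.

```python
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    """Minimal fully-connected autoencoder for unsupervised network anomaly
    detection: trained on benign traffic features only; at test time a high
    reconstruction error flags an anomaly. Hypothetical sizes, a stand-in
    for the (deeper) architecture studied in the paper."""
    def __init__(self, n_features=32, bottleneck=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                 nn.Linear(16, bottleneck))
        self.dec = nn.Sequential(nn.Linear(bottleneck, 16), nn.ReLU(),
                                 nn.Linear(16, n_features))

    def forward(self, x):
        return self.dec(self.enc(x))

def fit_threshold(model, benign, q=0.99):
    """Detection threshold = q-quantile of the reconstruction error on benign data."""
    with torch.no_grad():
        err = ((model(benign) - benign) ** 2).mean(dim=1)
    return torch.quantile(err, q).item()
```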

Improving Performance, Reliability, and Feasibility in Multimodal Multitask Traffic Classification with XAI

IEEE Transactions on Network and Service Management, 2023

The promise of Deep Learning (DL) in solving hard problems such as network Traffic Classification (TC) is being held back by the severe lack of transparency and explainability of this kind of approach. To cope with this strongly felt issue, the field of eXplainable Artificial Intelligence (XAI) has been recently founded, and is providing effective techniques and approaches. Accordingly, in this work we investigate interpretability via XAI-based techniques to understand and improve the behavior of state-of-the-art multimodal and multitask DL traffic classifiers. Using a publicly available security-related dataset (ISCX VPN-NONVPN), we explore and exploit XAI techniques to characterize the considered classifiers, providing global interpretations (rather than sample-based ones), and define a novel classifier, DISTILLER-EVOLVED, optimized along three objectives: performance, reliability, and feasibility. The proposed methodology proves highly appealing, allowing us to greatly simplify the architecture and obtain faster training and shorter classification times, as fewer packets must be collected. This comes at the expense of a negligible (or even positive) impact on classification performance, while understanding and controlling the interplay between inputs, model complexity, performance, and reliability.
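
A minimal sketch of the global-interpretation step is given below: assumed per-sample attribution scores are aggregated into a dataset-level input ranking, whose weakest entries become candidates for pruning when simplifying the architecture.

```python
import numpy as np

def global_input_ranking(per_sample_scores):
    """Aggregate per-sample attribution scores (samples x inputs) into a
    global importance ranking via mean absolute relevance; the weakest
    inputs are pruning candidates, trading negligible accuracy for a
    smaller, faster model. Scores here are hypothetical placeholders."""
    g = np.abs(per_sample_scores).mean(axis=0)
    order = np.argsort(g)[::-1]
    return order, g[order]

scores = np.random.default_rng(3).normal(size=(500, 4))  # 4 input modalities
order, weight = global_input_ranking(scores)
print("keep first, prune last:", order, weight.round(2))
```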

A Machine-Learning Architecture for Sensor Fault Detection, Isolation and Accommodation in Digital Twins

IEEE Sensors Journal, 2022

Sensor technologies empower Industry 4.0 by enabling the integration of in-field and real-time raw data into digital twins. However, sensors might be unreliable due to inherent issues and/or environmental conditions. This paper aims at instantaneously detecting anomalies in sensor measurements, identifying the faulty sensors and accommodating them with appropriate estimated data, thus paving the way to reliable digital twins. More specifically, a real-time, general, machine-learning-based architecture for sensor validation is proposed, built upon a series of neural-network estimators and a classifier. The estimators correspond to virtual sensors of all unreliable sensors (to reconstruct normal behaviour and replace the isolated faulty sensor within the system), whereas the classifier is used for the detection and isolation tasks. A comprehensive statistical analysis on three different real-world datasets is conducted, and the performance of the proposed architecture is validated under hard and soft synthetically-generated faults.
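
A linear stand-in for the estimator/classifier pair is sketched below: a ridge-regression "virtual sensor" learned on fault-free data, a residual threshold for detection/isolation, and accommodation by substitution. The data, threshold, and injected fault are illustrative, and the paper's estimators are neural networks rather than this linear toy.

```python
import numpy as np

def fit_virtual_sensor(X_neighbors, x_target, lam=1e-3):
    """Ridge-regression 'virtual sensor': predicts one sensor from its
    neighbours, learned on fault-free data (linear stand-in for the
    neural-network estimators of the proposed architecture)."""
    A = X_neighbors.T @ X_neighbors + lam * np.eye(X_neighbors.shape[1])
    return np.linalg.solve(A, X_neighbors.T @ x_target)

def validate(x_meas, x_hat, tau):
    """Detection/isolation/accommodation in one pass: flag a sample as faulty
    when the residual exceeds tau, and replace (accommodate) the measurement
    with the virtual-sensor estimate."""
    faulty = np.abs(x_meas - x_hat) > tau
    return faulty, np.where(faulty, x_hat, x_meas)

rng = np.random.default_rng(4)
Xn = rng.normal(size=(500, 3))                    # 3 neighbour sensors
x = Xn @ np.array([0.5, -0.2, 0.8]) + 0.05 * rng.normal(size=500)
w = fit_virtual_sensor(Xn, x)
x_meas = x.copy()
x_meas[100] += 5.0                                # inject a hard fault
faulty, x_acc = validate(x_meas, Xn @ w, tau=1.0)
print("isolated faults at:", np.flatnonzero(faulty))
```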

Joint Design of Horizontal and Vertical Polarization Waveforms for Polarimetric Radar via SINR Maximization

IEEE Transactions on Aerospace and Electronic Systems, 2022

For an extended target with different polarimetric responses, one way of improving the detection performance is to exploit waveform diversity in the polarization dimension. In this paper, we focus on the joint design of the transmit signal and receive filter for polarimetric radars under local waveform constraints. Considering the signal-to-interference-plus-noise ratio (SINR) as the figure of merit to optimize, where the average Target-Impulse-Response Matrix (TIRM) within a certain Target-Aspect-Angle (TAA) interval is employed as the target response, the waveform is decomposed and then designed for both horizontal and vertical polarization segments, subject to energy and similarity constraints. An iterative algorithm based on the majorization-minimization (MM) method is proposed to solve the formulated problem. The developed algorithm guarantees convergence to a B-stationary point, where in each iteration the optimal horizontal and vertical transmit waveforms are respectively obtained using the feasible point pursuit and successive convex approximation (FPP-SCA) technique. Experimental results show the effectiveness of the proposed algorithm, the robustness of the output SINR against TAA changes, and the advantages of polarization diversity and local design.
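
The MM engine can be illustrated on the canonical unimodular quadratic program: maximize x^H R x under |x_k| = 1, minorizing the objective at each iterate and solving the surrogate in closed form. The sketch below is this textbook special case, not the paper's constrained H/V design; R and the sizes are assumptions.

```python
import numpy as np

def mm_unimodular_max(R, n_iter=200, seed=0):
    """Majorization-minimization for max_x x^H R x s.t. |x_k| = 1, with R
    Hermitian. Since R - lambda_min(R) I >= 0, the quadratic is minorized at
    x_t by a linear term, and maximizing the minorizer has the closed form
    x <- exp(j * angle((R - lambda_min I) x_t)), a monotone ascent step."""
    n = R.shape[0]
    lam = np.linalg.eigvalsh(R)[0]               # smallest eigenvalue
    M = R - lam * np.eye(n)
    rng = np.random.default_rng(seed)
    x = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
    for _ in range(n_iter):
        x = np.exp(1j * np.angle(M @ x))         # closed-form surrogate max
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
R = A @ A.conj().T                               # Hermitian PSD test matrix
x = mm_unimodular_max(R)
print(np.real(x.conj() @ R @ x))                 # objective value reached
```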

Contextual Counters and Multimodal Deep Learning for Activity-Level Traffic Classification of Mobile Communication Apps during COVID-19 Pandemic

Elsevier Computer Networks, 2022

The COVID-19 pandemic has reshaped Internet traffic due to the huge modifications imposed on people's lifestyle, with users resorting more and more to collaboration and communication apps to accomplish daily tasks. Accordingly, these dramatic changes call for novel traffic management solutions to adequately counter such unexpected and massive shifts in traffic characteristics. In this paper, we focus on communication and collaboration apps whose traffic experienced a sudden growth during the last two years. Specifically, we consider nine apps whose traffic we collect, reliably label, and publicly release as a new dataset (MIRAGE-COVID-CCMA-2022) to the scientific community. First, we investigate the capability of state-of-the-art single-modal and multimodal Deep Learning-based classifiers in telling the specific app, the activity performed by the user, or both. While we highlight that state-of-the-art solutions report more-than-satisfactory performance in app classification (96%-98% F-measure), evident shortcomings emerge when tackling activity classification (56%-65% F-measure) with approaches that leverage the transport-layer payload and/or per-packet information attainable from the initial part of the biflows. In light of these limitations, we design a novel set of inputs (namely, Context Inputs) providing clues about the nature of a biflow by observing the biflows coexisting simultaneously. Based on these considerations, we propose Mimetic-All, a novel early-traffic-classification multimodal solution that leverages Context Inputs as an additional modality, achieving ≥82% F-measure in activity classification. Also, capitalizing on the multimodal nature of Mimetic-All, we evaluate different combinations of the inputs. Interestingly, experimental results witness that Mimetic-ConSeq (a variant that uses the Context Inputs but does not rely on payload information, thus gaining greater robustness to the more opaque encryption sub-layers possibly adopted in the future) experiences only a ≈1% F-measure drop w.r.t. Mimetic-All and results in a shorter training time.
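
A toy version of the Context Inputs idea, counting for each biflow the coexisting biflows and the bytes they generate near its start, is sketched below; the exact counters used in the paper differ, and all definitions here are assumptions.

```python
import numpy as np

def context_counters(starts, ends, bytes_, window=1.0):
    """Toy 'Context Input': for each biflow, count how many other biflows
    are simultaneously active, and how many of their bytes start within a
    +/- window around its own start. Hypothetical definition, sketching the
    idea of describing a biflow via the traffic coexisting with it."""
    n = len(starts)
    co_active = np.zeros(n, int)
    co_bytes = np.zeros(n)
    for i in range(n):
        others = np.arange(n) != i
        overlap = (starts <= ends[i]) & (ends >= starts[i]) & others
        co_active[i] = overlap.sum()
        near = others & (np.abs(starts - starts[i]) <= window)
        co_bytes[i] = bytes_[near].sum()
    return np.stack([co_active, co_bytes], axis=1)

starts = np.array([0.0, 0.2, 5.0])
ends = np.array([1.0, 3.0, 6.0])
print(context_counters(starts, ends, bytes_=np.array([10e3, 4e3, 7e3])))
```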

Generalized Locally Most Powerful Tests for Distributed Sparse Signal Detection

IEEE Transactions on Signal and Information Processing over Networks, 2022

In this paper we tackle the distributed detection of a localized phenomenon of interest (POI) whose signature is sparse, via a wireless sensor network. We assume that both the position and the emitted power of the POI are unknown, apart from the sparsity degree associated with its signature. We consider two communication scenarios, in which sensors send either (i) their compressed observations or (ii) a 1-bit quantization of them to the fusion center (FC). In the latter case, we consider non-ideal reporting channels between the sensors and the FC. We derive generalized (i.e., based on Davies' framework [1]) locally most powerful detectors for the considered problem, with the aim of obtaining computationally-efficient fusion rules. Moreover, we obtain their asymptotic performance and, based on such a result, we design the local quantization thresholds at the sensors by solving a 1-D optimization problem. Simulation results confirm the effectiveness of the proposed design and highlight only a negligible performance loss with respect to counterparts based on the (more complex) generalized likelihood ratio.
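
A simplified single-sensor version of the 1-D threshold design is sketched below: with Gaussian observations, choose the 1-bit quantizer threshold maximizing the deflection of the reported bit. The paper instead optimizes an asymptotic performance measure of the fused statistic; the Gaussian model and grid are assumptions.

```python
import numpy as np
from scipy.stats import norm

def design_threshold(mu=1.0, grid=np.linspace(-3, 3, 601)):
    """1-D design of the local 1-bit quantizer threshold: maximize the
    deflection of the bit b = 1{x > tau}, with x ~ N(0,1) under H0 and
    N(mu,1) under H1. A toy counterpart of the paper's asymptotic design."""
    p0 = norm.sf(grid)                 # P(b = 1 | H0)
    p1 = norm.sf(grid - mu)            # P(b = 1 | H1)
    defl = (p1 - p0) ** 2 / (p0 * (1 - p0) + 1e-12)
    return grid[np.argmax(defl)]

print(f"optimized threshold: {design_threshold():.2f}")
```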

Distributed Detection Fusion in Clustered Sensor Networks over Multiple Access Fading Channels

IEEE Transactions on Signal and Information Processing over Networks, 2022

In this paper, we tackle decision fusion for distributed detection in randomly-deployed clustered Wireless Sensor Networks (WSNs) operating over non-ideal multiple access channels (MACs), i.e., considering Rayleigh fading, path loss and additive noise. To mitigate fading, we propose distributed equal gain transmit combining (dEGTC) and distributed maximum ratio transmit combining (dMRTC). The first- and second-order statistics of the received signals are analytically computed via stochastic-geometry tools. Then the distribution of the received signal over the MAC is approximated by Gaussian and log-normal distributions via moment matching. This enables the derivation of moment-matching optimal fusion rules (MOR) for both distributions. Moreover, simpler suboptimal fusion rules are also proposed, in which all the CHs' data are equally weighed, termed moment-matching equal gain fusion rules (MER). It is shown by simulations that increasing the number of clusters improves the performance. Moreover, MOR-Gaussian based algorithms are better under free-space propagation, whereas their log-normal counterparts are more suited to the ground-reflection case. Also, the latter algorithms show better results in low-SNR and low-SN-number conditions. We prove that the received power at the CH over the MAC scales as O(λ²R²) in the free-space propagation case and as O(λ²ln²R) in the ground-reflection case, where λ is the SN deployment intensity and R is the cluster radius. This implies that having more clusters decreases the required transmission power for a given SNR at the receiver.
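
As a sketch of the moment-matching idea, the snippet below forms the Gaussian-flavoured fusion statistic from assumed matched moments of the received MAC signal under the two hypotheses; in the paper those moments come from the stochastic-geometry analysis.

```python
import numpy as np

def mor_gaussian(y, m0, v0, m1, v1):
    """Moment-matching optimal rule (Gaussian flavour): approximate the
    received MAC signal under each hypothesis as Gaussian with the matched
    first/second-order moments and fuse via the log-likelihood ratio,
    to be compared against a threshold set at the FC. Moments assumed given."""
    ll1 = -0.5 * np.log(2 * np.pi * v1) - (y - m1) ** 2 / (2 * v1)
    ll0 = -0.5 * np.log(2 * np.pi * v0) - (y - m0) ** 2 / (2 * v0)
    return ll1 - ll0

# toy check: the LLR grows with the observed aggregate when m1 > m0
for y in (0.5, 1.5, 3.0):
    print(y, round(float(mor_gaussian(y, m0=1.0, v0=0.5, m1=2.5, v1=0.8)), 2))
```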

Multi-bit & Sequential Decentralized Detection of a Noncooperative Moving Target Through a Generalized Rao Test

IEEE Transactions on Signal and Information Processing over Networks, 2021

We consider decentralized detection (DD) of an uncooperative moving target via wireless sensor networks (WSNs), measured in zero-mean unimodal noise. To address energy and bandwidth limitations, the sensors use multi-level quantizers. The encoded bits are then reported to a fusion center (FC) via binary symmetric channels. Herein, we propose a generalized Rao (G-Rao) test as a simpler alternative to the generalized likelihood ratio test (GLRT). Then, at the FC, a truncated one-sided sequential (TOS) test rule is considered in addition to the fixed-sample-size (FSS) mode of operation. Further, the asymptotic performance of a trajectory-clairvoyant (multi-bit) Rao test is leveraged to develop an offline, per-sensor quantizer design. Detection gain measures are also introduced to assess resolution improvements. Simulations show the appeal of the G-Rao test with respect to the GLRT, the gain in detection obtained by using multiple bits for quantization, and the advantage of the sequential detection approach.
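
The TOS rule's skeleton can be sketched independently of the G-Rao statistic it accumulates: run a one-sided cumulative test and truncate at n_max. In the toy below the per-sample statistics are simulated Gaussians, an assumption made purely for illustration.

```python
import numpy as np

def truncated_one_sided_test(stats, threshold, n_max):
    """Truncated one-sided sequential rule: accumulate the per-sample
    detection statistic, declare H1 as soon as the running sum crosses the
    threshold, and declare H0 if n_max samples elapse without a crossing.
    Generic skeleton of the TOS rule (the paper plugs in the G-Rao statistic)."""
    g = 0.0
    for k, s in enumerate(stats[:n_max]):
        g += s
        if g > threshold:
            return 1, k + 1                # decide H1, samples used
    return 0, min(len(stats), n_max)       # decide H0 at truncation

rng = np.random.default_rng(7)
decision, n_used = truncated_one_sided_test(rng.normal(0.4, 1.0, 100),
                                            threshold=8.0, n_max=100)
print(decision, n_used)
```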

Packet-Level Prediction of Mobile-App Traffic using Multitask Deep Learning

Elsevier Computer Networks, 2021

The prediction of network traffic characteristics helps in understanding this complex phenomenon and enables a number of practical applications, ranging from network planning and provisioning to management, with security implications as well. A significant corpus of work has so far focused on aggregated behavior, e.g., considering traffic volumes observed over a given time interval. Very limited attempts can instead be found tackling prediction at packet-level granularity. This much harder problem (whose solution extends trivially to the aggregated prediction) allows a finer-grained knowledge and wider possibilities of exploitation. The recent investigation and success of sophisticated Deep Learning algorithms now provide mature tools to face this challenging but promising goal. In this work, we investigate and specialize a set of architectures, selected among Convolutional, Recurrent, and Composite Neural Networks, to predict mobile-app traffic at the finest (packet-level) granularity. We discuss and experimentally evaluate the prediction effectiveness of the provided approaches, also assessing the benefits of a number of design choices such as memory size or multimodality, and investigating performance trends at the packet level focusing on the head and the tail of biflows. We compare the results with both Markovian and classic Machine Learning approaches, showing increased performance with respect to state-of-the-art predictors (high-order Markov chains and the Random Forest Regressor). For the sake of reproducibility and relevance to modern traffic, all evaluations are conducted on two real human-generated mobile traffic datasets including different categories of mobile apps. The experimental results witness remarkable variability in prediction performance among different app categories. The work also provides valuable analysis results and tools to compare different predictors and strike the best balance among the performance measures.
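
As a minimal sketch of a recurrent packet-level predictor of the kind investigated (the paper's architectures are multitask and multimodal; the sizes and fields below are assumptions), consider the following PyTorch snippet.

```python
import torch
import torch.nn as nn

class PacketLSTM(nn.Module):
    """Minimal recurrent predictor of the next packet's fields (e.g. size
    and direction) from the biflow history, in the spirit of the recurrent
    architectures compared in the paper; layer sizes are illustrative."""
    def __init__(self, n_fields=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_fields, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_fields)

    def forward(self, x):               # x: (batch, packets_so_far, fields)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # predict the next packet's fields

model = PacketLSTM()
history = torch.randn(8, 10, 2)         # 8 biflows, 10 packets observed each
print(model(history).shape)             # torch.Size([8, 2])
```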

Tracking a Low-Angle Isolated Target via Elevation-Angle Estimation Algorithm based on Extended Kalman Filter with Array Antenna

MDPI Remote Sensing, 2021

In a low-angle tracking situation, estimating the elevation angle is quite challenging, because multipath signals enter the antenna's main lobe. In this article, we propose two methods based on the extended Kalman filter (EKF) and a frequency diversity (FD) process to estimate the elevation angle of a low-angle isolated target. In the first case, a simple weighting of the per-frequency estimates is performed (termed WFD). Differently, in the second case, a matrix-based elaboration of the per-frequency estimates is proposed (termed MFD). The proposed methods are completely independent of prior knowledge of geometrical information and physical parameters. The simulation results show that both methods have excellent performance and guarantee accurate elevation-angle estimation in different multipath environments, even in very low SNR conditions. Hence, they are both suitable for low-peak-power radars.
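
A single EKF predict/update step for elevation-angle tracking is sketched below, with a toy scalar measurement model standing in for the array observation fused by the WFD/MFD methods; the dynamics, noise levels, and measurement function are assumptions.

```python
import numpy as np

def ekf_step(x, P, z, dt=0.1, q=1e-4, r=1e-3):
    """One EKF predict/update for elevation-angle tracking with state
    x = [angle, angle_rate] and a toy nonlinear measurement z = sin(angle),
    a stand-in for the array observation model."""
    F = np.array([[1.0, dt], [0.0, 1.0]])              # constant-rate dynamics
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x = F @ x                                          # predict
    P = F @ P @ F.T + Q
    H = np.array([[np.cos(x[0]), 0.0]])               # Jacobian of sin(angle)
    S = H @ P @ H.T + r                                # innovation variance
    K = P @ H.T / S                                    # Kalman gain
    x = x + (K * (z - np.sin(x[0]))).ravel()           # update
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.05, 0.0]), np.eye(2) * 0.1
x, P = ekf_step(x, P, z=np.sin(0.06) + 0.01)
print(x.round(4))
```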

Research paper thumbnail of Suppressing Interrupted Sampling Repeater Jamming by Transceiver Design of Fully-polarimetric Wideband Radars

IEEE Sensors Journal, 2024

The Interrupted Sampling Repeater Jamming (ISRJ) is adept at generating multiple false targets wi... more The Interrupted Sampling Repeater Jamming (ISRJ) is adept at generating multiple false targets with high fidelity at radar receivers through sub-sampling, leading to significant challenges in detecting actual targets. This paper presents a novel approach to mitigate such jamming by jointly designing the transmit waveform and receive filter of a fully-polarimetric wideband radar system. In this study, we aim to minimize the sum of the target's Integral SideLobe (ISL) energy and the jamming's total energy at the filter output. To ensure effective control over the mainlobe energy levels, we impose equality constraints on the peak values of both the target and jamming signals. Additionally, a constant-module constraint is applied to the transmit signal to prevent distortion at the transmitter. We incorporate the modulation of the Target Impulse Response Matrix (TIRM) to align with wideband illumination scenarios, utilizing the average TIRM over a specific Target-Aspect-Angle (TAA) interval to mitigate sensitivity in the Signal-to-Interference plus Noise Ratio (SINR) related to TAA variations. To address this nonconvex optimization problem, we propose an efficient algorithm based on an alternating optimization framework. Within this framework, the alternating direction method of multipliers (ADMM) is employed to tackle the inner subproblems, yielding closed-form solutions at each iteration. Experimental results demonstrate the effectiveness of the proposed algorithm, highlighting the benefits of wideband radar illumination, the resilience of output SINR to TAA uncertainty, and the enhanced jamming suppression capabilities of the fully-polarimetric system.

Research paper thumbnail of MEMENTO: A Novel Approach for Class Incremental Learning of Encrypted Traffic

Elsevier Computer Networks, 2024

In the ever-changing digital environment, ensuring the ongoing effectiveness of traffic analysis ... more In the ever-changing digital environment, ensuring the ongoing effectiveness of traffic analysis and security measures is crucial. Therefore, Class Incremental Learning (CIL) in encrypted Traffic Classification (TC) is essential for adapting to evolving network behaviors and the rapid development of new applications. However, the application of CIL techniques in the TC domain is not straightforward, usually leading to unsatisfactory performance figures. Specifically, the improvement goal is to reduce forgetting on old apps and increase the capacity in learning new ones, in order to improve overall classification performance-reducing the drop from a model "trained-from-scratch". The contribution of this work is the design of a novel fine-tuning approach called MEMENTO, which is obtained through the careful design of different building blocks: memory management, model training, and rectification strategies. In detail, we propose the application of traffic biflows augmentation strategies to better capitalize on old apps biflows, we introduce improvements in the distillation stage, and we design a general rectification strategy that includes several existing proposals. To assess our proposal, we leverage two publicly-available encrypted network traffic datasets, i.e., MIRAGE19 and CESNET-TLS22. As a result, on both datasets MEMENTO achieves a significant improvement in classifying new apps (w.r.t. the best-performing alternative, i.e., BiC) while maintaining stable performance on old ones. Equally important, MEMENTO achieves satisfactory overall TC performance, filling the gap toward a trained-from-scratch model and offering a considerable gain in terms of time (up to 10× speed-up) to obtain up-to-date and running classifiers. The experimental evaluation relies on a comprehensive performance evaluation workbench for CIL proposals, which is based on a wider set of metrics (as opposed to the existing literature in TC).

Research paper thumbnail of Explainable Deep-Learning Approaches for Packet-level Traffic Prediction of Collaboration and Communication Mobile Apps

IEEE Open Journal of the Communications Society, 2024

Significant transformations in lifestyle have reshaped the Internet landscape, resulting in notab... more Significant transformations in lifestyle have reshaped the Internet landscape, resulting in notable shifts in both the magnitude of Internet traffic and the diversity of apps utilized. The increased adoption of communication-and-collaboration apps, also fueled by lockdowns in the COVID pandemic years, has heavily impacted the management of network infrastructures and their traffic. A notable characteristic of these apps is their multi-activity nature, e.g., they can be used for chat and (interactive) audio/video in the same usage session: predicting and managing the traffic they generate is an important but especially challenging task. In this study, we focus on real data from four popular apps belonging to the aforementioned category: Skype, Teams, Webex, and Zoom. First, we collect traffic data from these apps, reliably label it with both the app and the specific user activity and analyze it from the perspective of traffic prediction. Second, we design data-driven models to predict this traffic at the finest granularity (i.e. at packet level) employing four advanced multitask deep learning architectures and investigating three different training strategies. The trade-off between performance and complexity is explored as well. We publish the dataset and release our code as open source to foster the replicability of our analysis. Third, we leverage the packet-level prediction approach to perform aggregate prediction at different timescales. Fourth, our study pioneers the trustworthiness analysis of these predictors via the application of eXplainable Artificial Intelligence to (a) interpret their forecasting results and (b) evaluate their reliability, highlighting the relative importance of different parts of observed traffic and thus offering insights for future analyses and applications. The insights gained from the analysis provided with this work have implications for various network management tasks, including monitoring, planning, resource allocation, and enforcing security policies.

Research paper thumbnail of Bayesian Fault Detection and Localization Through Wireless Sensor Networks in Industrial Plants

IEEE Internet of Things, 2024

This work proposes a data fusion approach for quickest fault detection and localization within in... more This work proposes a data fusion approach for quickest fault detection and localization within industrial plants via wireless sensor networks. Two approaches are proposed, each exploiting different network architectures. In the first approach, multiple sensors monitor a plant section and individually report their local decisions to a fusion center. The fusion center provides a global decision after spatial aggregation of the local decisions. A post-processing center subsequently processes these global decisions in time, which performs quick detection and localization. Alternatively, the fusion center directly performs a spatio-temporal aggregation directed at quickest detection, together with a possible estimation of the faulty item. Both architectures are provided with a feedback system where the network's highest hierarchical level transmits parameters to the lower levels. The two proposed approaches model the faults according to a Bayesian criterion and exploit the knowledge of the reliability model of the plant under monitoring. Moreover, adaptations of the well-known Shewhart and CUSUM charts are provided to fit the different architectures and are used for comparison purposes. Finally, the algorithms are tested via simulation on an active Oil and Gas subsea production system, and performances are provided.

Research paper thumbnail of AI-powered Internet Traffic Classification: Past, Present, and Future

IEEE Communications Magazine, 2023

Traffic classification (TC) is pivotal for network traffic management and security. Over time, TC... more Traffic classification (TC) is pivotal for network traffic management and security. Over time, TC solutions leveraging Artificial Intelligence (AI) have undergone significant advancements, primarily fueled by Machine Learning (ML). This paper analyzes the history and current state of AI-powered TC on the Internet, highlighting unresolved research questions. Indeed, despite extensive research, key desiderata goals to product-line implementations remain. AI presents untapped potential for addressing the complex and evolving challenges of TC, drawing from successful applications in other domains. We identify novel ML topics and solutions that address unmet TC requirements, shaping a comprehensive research landscape for the TC future. We also discuss the interdependence of TC desiderata and identify obstacles hindering AI-powered next-generation solutions. Overcoming these roadblocks will unlock two intertwined visions for future networks: self-managed and human-centered networks.

Research paper thumbnail of Deep Recurrent Graph Convolutional Architecture for Sensor Fault Detection, Isolation and Accommodation in Digital Twins

IEEE Sensors Journal, 2023

The rapid adoption of Internet-of-Things (IoT) and digital twins (DTs) technologies within indust... more The rapid adoption of Internet-of-Things (IoT) and digital twins (DTs) technologies within industrial environments has highlighted diverse critical issues related to safety and security. Sensor failure is one of the major threats compromising DTs operations. In this paper, for the first time, we address the problem of sensor fault detection, isolation and accommodation (SFDIA) in large-size networked systems. Current available machine-learning solutions are either based on shallow networks unable to capture complex features from input graph data or on deep networks with overshooting complexity in the case of large number of sensors. To overcome these challenges, we propose a new framework for sensor validation based on a deep recurrent graph convolutional architecture which jointly learns a graph structure and models spatiotemporal inter-dependencies. More specifically, the proposed twoblock architecture (i) constructs the virtual sensors in the first block to refurbish anomalous (i.e. faulty) behaviour of unreliable sensors and to accommodate the isolated faulty sensors and (ii) performs the detection and isolation tasks in the second block by means of a classifier. Extensive analysis on two publicly-available datasets demonstrates the superiority of the proposed architecture over existing state-of-the-art solutions.

Research paper thumbnail of Time-Aware Distributed Sequential Detection of Gas Dispersion via Wireless Sensor Networks

Research paper thumbnail of Optimization Based Sensor Placement for Multi-Target Localization with Coupling Sensor Clusters

IEEE Transactions on Signal and Information Processing over Networks, 2023

Since the Cramér-Rao lower bounds (CRLB) of target localization depends on the sensor geometry ex... more Since the Cramér-Rao lower bounds (CRLB) of target localization depends on the sensor geometry explicitly, sensor placement becomes a crucial issue in many target or source localization applications. In the context of simultaneous time-ofarrival (TOA) based multi-target localization, we consider the sensor placement for multiple sensor clusters in the presence of shared sensors. To minimize the mean squared error (MSE) of target localization, we formulate the sensor placement problem as a minimization of the trace of the Cramér-Rao lower bound (CRLB) matrix (i.e., A-optimal design), subject to the coupling constraints corresponding to the freely-placed shared sensors. For the formulated nonconvex problem, we propose an optimization approach based on the combination of alternating minimization (AM), alternating direction method of multipliers (ADMM) and majorization-minimization (MM), in which the AM alternates between sensor clusters and the integrated ADMM and MM are employed to solve the subproblems. The proposed algorithm monotonically minimizes the joint design criterion and converges to a stationary point of the objective. Unlike the state-of-the-art analytical approaches in the literature, the proposed algorithm can handle both the non-uniform and correlated measurement noise in the simultaneous multi-target case. Through various numerical simulations under different scenario settings, we show the efficiency of the proposed method to design the optimal sensor geometry.

Research paper thumbnail of Benchmarking Class Incremental Learning in Deep Learning Traffic Classification

IEEE Transactions on Network and Service Management, 2023

Traffic Classification (TC) is experiencing a renewed interest, fostered by the growing popularit... more Traffic Classification (TC) is experiencing a renewed interest, fostered by the growing popularity of Deep Learning (DL) approaches. In exchange for their proved effectiveness, DL models are characterized by a computationally-intensive training procedure that badly matches the fast-paced release of new (mobile) applications, resulting in significantly limited efficiency of model updates. To address this shortcoming, in this work we systematically explore Class Incremental Learning (CIL) techniques, aimed at adding new apps/services to preexisting DL-based traffic classifiers without a full retraining, hence speeding up the model's updates cycle. We investigate a large corpus of state-of-the-art CIL approaches for the DL-based TC task, and delve into their working principles to highlight relevant insight, aiming to understand if there is a case for CIL in TC. We evaluate and discuss their performance varying the number of incremental learning episodes, and the number of new apps added for each episode. Our evaluation is based on the publicly available MIRAGE19 dataset comprising traffic of 40 popular Android applications, fostering reproducibility. Despite our analysis reveals their infancy, CIL techniques are a promising research area on the roadmap towards automated DL-based traffic analysis systems.

Research paper thumbnail of Constant-Modulus Waveform Design with Polarization-Adaptive Power Allocation in Polarimetric Radar

IEEE Transactions on Signal Processing, 2023

In polarimetric radars, corresponding to the polarized antennas, exploiting waveform diversity al... more In polarimetric radars, corresponding to the polarized antennas, exploiting waveform diversity along the polarization dimension becomes accessible. In this paper, we aim to maximize the signal-to-interference plus noise ratio (SINR) of a polarimetric radar by optimizing the transmit polarimetric waveform, the power allocation on its horizontal and vertical polarization segments, and the receiving filters jointly, subject to separate (while practical) unit-modulus and similarity constraints. To mitigate the SINR sensitivity on Target-Aspect-Angle (TAA), the average Target-Impulse-Response Matrix (TIRM) within a certain (TAA) interval is employed as the target response, which leads to an average SINR as the metric to be maximized. For the formulated nonconvex fractional programming problem, we propose an efficient algorithm under the framework of the alternating optimization method. Within, the alternating direction method of multiplier (ADMM) is deployed to solve the inner subproblems with closed form solutions obtained at each iteration. The analysis on computational cost and convergence of the proposed algorithm is also provided. Experiment results show the effectiveness of the proposed algorithm, the robustness of the output SINR against the TAA uncertainty, and the superior performance of polarimetric power adaption.

Research paper thumbnail of Network Anomaly Detection Methods in IoT Environments via Deep Learning: A Fair Comparison of Performance and Robustness

Elsevier Computers and Security, 2023

The Internet of Things (IoT) is a key enabler in closing the loop in Cyber-Physical Systems, prov... more The Internet of Things (IoT) is a key enabler in closing the loop in Cyber-Physical Systems, providing "smartness" and thus additional value to each monitored/controlled physical asset. Unfortunately, these devices are more and more targeted by cyberattacks because of their diffusion and of the usually limited hardware and software resources. This calls for designing and evaluating new effective approaches for protecting IoT systems at the network level (Network Intrusion Detection Systems, NIDSs). These in turn are challenged by the heterogeneity of IoT devices and the growing volume of transmitted data. To tackle this challenge, we select a Deep Learning architecture to perform unsupervised early anomaly detection. With a data-driven approach, we explore in-depth multiple design choices and exploit the appealing structural properties of the selected architecture to enhance its performance. The experimental evaluation is performed on two recent and publicly available IoT datasets (IoT-23 and Kitsune). Finally, we adopt an adversarial approach to investigate the robustness of our solution in the presence of Label Flipping poisoning attacks. The experimental results highlight the improved performance of the proposed architecture, in comparison to both well-known baselines and previous proposals.

Research paper thumbnail of Improving Performance, Reliability, and Feasibility in Multimodal Multitask Traffic Classification with XAI

IEEE Transactions on Network and Service Management, 2023

The promise of Deep Learning (DL) in solving hard problems such as network Traffic Classification... more The promise of Deep Learning (DL) in solving hard problems such as network Traffic Classification (TC) is being held back by the severe lack of transparency and explainability of this kind of approaches. To cope with this strongly felt issue, the field of eXplainable Artificial Intelligence (XAI) has been recently founded, and is providing effective techniques and approaches. Accordingly, in this work we investigate interpretability via XAIbased techniques to understand and improve the behavior of state-of-the-art multimodal and multitask DL traffic classifiers. Using a publicly available security-related dataset (ISCX VPN-NONVPN), we explore and exploit XAI techniques to characterize the considered classifiers providing global interpretations (rather than sample-based ones), and define a novel classifier, DISTILLER-EVOLVED, optimized along three objectives: performance, reliability, feasibility. The proposed methodology proves as highly appealing, allowing to much simplify the architecture to get faster training time and shorter classification time, as fewer packets must be collected. This is at the expenses of negligible (or even positive) impact on classification performance, while understanding and controlling the interplay between inputs, model complexity, performance, and reliability.

Research paper thumbnail of A Machine-Learning Architecture for Sensor Fault Detection, Isolation and Accommodation in Digital Twins

IEEE Sensors Journal, 2022

Sensor technologies empower Industry 4.0 by enabling integration of in-field and real-time raw da... more Sensor technologies empower Industry 4.0 by enabling integration of in-field and real-time raw data into digital twins. However, sensors might be unreliable due to inherent issues and/or environmental conditions. This paper aims at detecting anomalies instantaneously in measurements from sensors, identifying the faulty ones and accommodating them with appropriate estimated data, thus paving the way to reliable digital twins. More specifically, a real-time general machine-learning-based architecture for sensor validation is proposed, built upon a series of neural-network estimators and a classifier. Estimators correspond to virtual sensors of all unreliable sensors (to reconstruct normal behaviour and replace the isolated faulty sensor within the system), whereas the classifier is used for detection and isolation tasks. A comprehensive statistical analysis on three different real-world data-sets is conducted and the performance of the proposed architecture is validated under hard and soft synthetically-generated faults.

Research paper thumbnail of Joint Design of Horizontal and Vertical Polarization Waveforms for Polarimetric Radar via SINR Maximization

IEEE Transactions on Aerospace and Electronic Systems, 2022

For an extended target with different polarimetric response, one way of improving the detection p... more For an extended target with different polarimetric response, one way of improving the detection performance is to exploit waveform diversity on the dimension of polarization. In this paper, we focus on joint design of transmit signal and receive filter for polarimetric radars with local waveform constraints. Considering the signal-to-interference-plus-noise ratio (SINR) as the figure of merit to optimize, where the average Target-Impulse-Response Matrix (TIRM) within a certain Target-Aspect-Angle (TAA) interval is employed as the target response, the waveform is decomposed and then designed for both horizontal and vertical polarization segments, subject to energy and similarity constraints. An iterative algorithm is proposed based on the majorization-minimization (MM) method to solve the formulated problem. The developed algorithm guarantees the convergence to a B-stationary point, where in each iteration, optimal horizontal and vertical transmit waveforms are respectively solved by using the feasible point pursuit and successive convex approximation (FPP-SCA) technique. Experiment results show the effectiveness of the proposed algorithm, the robustness of the output SINR against the TAA change, and the advantages of polarization diversity and local design.

Research paper thumbnail of Contextual Counters and Multimodal Deep Learning for Activity-Level Traffic Classification of Mobile Communication Apps during COVID-19 Pandemic

Elsevier Computer Networks, 2022

The COVID-19 pandemic has reshaped Internet traffic due to the huge modifications imposed to life... more The COVID-19 pandemic has reshaped Internet traffic due to the huge modifications imposed to lifestyle of people resorting more and more to collaboration and communication apps to accomplish daily tasks. Accordingly, these dramatic changes call for novel traffic management solutions to adequately countermeasure such unexpected and massive changes in traffic characteristics. In this paper, we focus on communication and collaboration apps whose traffic experienced a sudden growth during the last two years. Specifically, we consider nine apps whose traffic we collect, reliably label, and publicly release as a new dataset (MIRAGE-COVID-CCMA-2022) to the scientific community. First, we investigate the capability of state-of-art single-modal and multimodal Deep Learning-based classifiers in telling the specific app, the activity performed by the user, or both. While we highlight that state-of-art solutions reports a more-than-satisfactory performance in addressing app classification (96%-98% Fmeasure), evident shortcomings stem out when tackling activity classification (56%-65% F-measure) when using approaches that leverage the transport-layer payload and/or per-packet information attainable from the initial part of the biflows. In line with these limitations, we design a novel set of inputs (namely Context Inputs) providing clues about the nature of a biflow by observing the biflows coexisting simultaneously. Based on these considerations, we propose Mimetic-All a novel early traffic classification multimodal solution that leverages Context Inputs as an additional modality, achieving ≥ 82% F-measure in activity classification. Also, capitalizing the multimodal nature of Mimetic-All, we evaluate different combinations of the inputs. Interestingly, experimental results witness that Mimetic-ConSeq-a variant that uses the Context Inputs but does not rely on payload information (thus gaining greater robustness to more opaque encryption sub-layers possibly going to be adopted in the future)-experiences only ≈ 1% F-measure drop in performance w.r.t. Mimetic-All and results in a shorter training time.

Research paper thumbnail of Generalized Locally Most Powerful Tests for Distributed Sparse Signal Detection

IEEE Transactions on Signal and Information Processing over Networks, 2022

In this paper we tackle distributed detection of a localized phenomenon of interest (POI) whose signature is sparse, via a wireless sensor network. We assume that both the position and the emitted power of the POI are unknown, apart from the sparsity degree associated with its signature. We consider two communication scenarios in which sensors send either (i) their compressed observations or (ii) a 1-bit quantization of them to the fusion center (FC). In the latter case, we consider non-ideal reporting channels between the sensors and the FC. We derive generalized (i.e. based on Davies' framework [1]) locally most powerful detectors for the considered problem, with the aim of obtaining computationally-efficient fusion rules. Moreover, we obtain their asymptotic performance and, based on this result, we design the local quantization thresholds at the sensors by solving a 1-D optimization problem. Simulation results confirm the effectiveness of the proposed design and highlight only negligible performance loss with respect to counterparts based on the (more complex) generalized likelihood ratio.
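
For orientation, the (non-generalized) locally most powerful test is the normalized score statistic below, where θ is the POI signal strength and I(·) the Fisher information; the paper's generalization follows Davies' approach of handling the POI position, a nuisance parameter present only under the alternative:

```latex
% Standard locally most powerful (score) test around theta_0 = 0.
T_{\mathrm{LMP}}(\boldsymbol{y}) =
\frac{\left.\frac{\partial}{\partial\theta}\,\ln p(\boldsymbol{y};\theta)\right|_{\theta=0}}
     {\sqrt{I(0)}}
\;\underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\; \gamma
```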

Research paper thumbnail of Distributed Detection Fusion in Clustered Sensor Networks over Multiple Access Fading Channels

IEEE Transactions on Signal and Information Processing over Networks, 2022

In this paper, we tackle decision fusion for distributed detection in randomly-deployed clustered Wireless Sensor Networks (WSNs) operating over non-ideal multiple access channels (MACs), i.e. considering Rayleigh fading, path loss, and additive noise. To mitigate fading, we propose distributed equal-gain transmit combining (dEGTC) and distributed maximum-ratio transmit combining (dMRTC). The first- and second-order statistics of the received signals are analytically computed via stochastic-geometry tools. Then the distribution of the received signal over the MAC is approximated by Gaussian and log-normal distributions via moment matching. This enables the derivation of moment-matching optimal fusion rules (MOR) for both distributions. Moreover, simpler suboptimal fusion rules are also proposed, in which the data of all cluster heads (CHs) are equally weighted; these are termed moment-matching equal-gain fusion rules (MER). Simulations show that increasing the number of clusters improves performance. Moreover, MOR-Gaussian-based algorithms are better under free-space propagation, whereas their log-normal counterparts are more suited to the ground-reflection case. Also, the latter algorithms show better results under low-SNR and low-SN-number conditions. We prove that the received power at the CH over the MAC is proportional to O(λ²R²) and to O(λ²ln²R) in the free-space propagation and ground-reflection cases, respectively, where λ is the SN deployment intensity and R is the cluster radius. This implies that having more clusters decreases the transmission power required for a given SNR at the receiver.
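
The moment-matching step admits closed forms: given the first two moments (m, v) of the aggregate received signal, the Gaussian match is immediate, while the log-normal parameters follow from inverting its moment expressions. A minimal sketch with hypothetical moment values:

```python
import numpy as np

def match_gaussian(m, v):
    """Gaussian with the given mean/variance."""
    return {"mu": m, "sigma2": v}

def match_lognormal(m, v):
    """Log-normal whose first two moments equal (m, v); requires m > 0.
    If X ~ LogNormal(mu, sigma2): E[X] = exp(mu + sigma2/2) and
    Var[X] = (exp(sigma2) - 1) * exp(2*mu + sigma2)."""
    sigma2 = np.log1p(v / m**2)
    mu = np.log(m) - sigma2 / 2.0
    return {"mu": mu, "sigma2": sigma2}

# toy usage: moments of the aggregate received MAC signal (hypothetical values)
m, v = 3.2, 1.7
print(match_gaussian(m, v), match_lognormal(m, v))
```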

Research paper thumbnail of Multi-bit & Sequential Decentralized Detection of a Noncooperative Moving Target Through a Generalized Rao Test

IEEE Transactions on Signal and Information Processing over Networks, 2021

We consider decentralized detection (DD) of an uncooperative moving target via wireless sensor networks (WSNs), measured in zero-mean unimodal noise. To address energy and bandwidth limitations, the sensors use multi-level quantizers. The encoded bits are then reported to a fusion center (FC) via binary symmetric channels. Herein, we propose a generalized Rao (G-Rao) test as a simpler alternative to the generalized likelihood ratio test (GLRT). Then, at the FC, a truncated one-sided sequential (TOS) test rule is considered in addition to the fixed-sample-size (FSS) one. Further, the asymptotic performance of a trajectory-clairvoyant (multi-bit) Rao test is leveraged to develop an offline, per-sensor quantizer design. Detection gain measures are also introduced to assess resolution improvements. Simulations show the appeal of the G-Rao test with respect to the GLRT, the gain in detection achieved by using multiple bits for quantization, and the advantage of the sequential detection approach.
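
The truncated one-sided sequential logic at the FC can be sketched generically: keep accumulating a score, stop and declare the target present at the first threshold crossing, and fall back to the null decision at the truncation horizon. A schematic sketch (the per-sample score is a placeholder, not the paper's G-Rao statistic):

```python
import numpy as np

def truncated_one_sided_test(increments, threshold, T_max):
    """Schematic truncated one-sided sequential rule: accumulate a detection
    statistic over time; declare H1 as soon as it crosses `threshold`;
    declare H0 if the truncation horizon `T_max` is reached first."""
    stat = 0.0
    for t, inc in enumerate(increments[:T_max], start=1):
        stat += inc                    # per-sample score contribution
        if stat >= threshold:
            return "H1", t             # early stop: target declared present
    return "H0", min(len(increments), T_max)

rng = np.random.default_rng(1)
decision, stop_time = truncated_one_sided_test(
    rng.standard_normal(100) + 0.2, threshold=8.0, T_max=100)
```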

Research paper thumbnail of Packet-Level Prediction of Mobile-App Traffic using Multitask Deep Learning

Elsevier Computer Networks, 2021

The prediction of network traffic characteristics helps in understanding this complex phenomenon and enables a number of practical applications, ranging from network planning and provisioning to management, with security implications as well. A significant corpus of work has so far focused on aggregated behavior, e.g., considering traffic volumes observed over a given time interval. Very limited attempts can instead be found tackling prediction at packet-level granularity. This much harder problem (whose solution extends trivially to aggregated prediction) allows finer-grained knowledge and wider possibilities of exploitation. The recent success of sophisticated Deep Learning algorithms now provides mature tools to face this challenging but promising goal. In this work, we investigate and specialize a set of architectures, selected among Convolutional, Recurrent, and Composite Neural Networks, to predict mobile-app traffic at the finest (packet-level) granularity. We discuss and experimentally evaluate the prediction effectiveness of the provided approaches, also assessing the benefits of a number of design choices (such as memory size or multimodality) and investigating performance trends at packet level, focusing on the head and the tail of biflows. We compare the results with both Markovian and classic Machine Learning approaches, showing increased performance with respect to state-of-the-art predictors (high-order Markov chains and the Random Forest Regressor). For the sake of reproducibility and relevance to modern traffic, all evaluations are conducted on two real human-generated mobile traffic datasets including different categories of mobile apps. The experimental results witness remarkable variability in prediction performance among different app categories. The work also provides valuable analysis results and tools to compare different predictors and strike the best balance among the performance measures.
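
As a concrete framing of packet-level prediction, the Random Forest baseline mentioned above can be cast as supervised regression over a sliding window of past packets. A minimal sketch with synthetic packet sizes and a hypothetical window length (not the paper's tuned configuration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def lagged_dataset(series, order):
    """Turn a per-packet series (e.g. payload lengths) into a supervised
    dataset: predict packet t from the previous `order` packets."""
    X = np.stack([series[i:i + order] for i in range(len(series) - order)])
    y = series[order:]
    return X, y

rng = np.random.default_rng(2)
sizes = rng.integers(40, 1500, size=500).astype(float)  # synthetic packet sizes
X, y = lagged_dataset(sizes, order=5)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
next_size = model.predict(sizes[-5:][None, :])          # one-step-ahead forecast
```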

Research paper thumbnail of Tracking a Low-Angle Isolated target via Elevation-Angle Estimation Algorithm based on Extended Kalman Filter with Array Antenna

MDPI Remote Sensing, 2021

In a low-angle tracking situation, estimating the elevation angle is quite challenging because multipath signals enter the antenna's main lobe. In this article, we propose two methods based on the extended Kalman filter (EKF) and frequency diversity (FD) to estimate the elevation angle of a low-angle isolated target. In the first case, a simple weighting of the per-frequency estimates is performed (termed WFD). Differently, in the second case, a matrix-based elaboration of the per-frequency estimates is proposed (termed MFD). The proposed methods are completely independent of prior knowledge of geometrical information and physical parameters. The simulation results show that both methods have excellent performance and guarantee accurate elevation-angle estimation in different multipath environments, even in very-low-SNR conditions. Hence, they are both suitable for low-peak-power radars.
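
For reference, the textbook EKF recursion underlying both the WFD and MFD variants is sketched below; the paper's multipath-specific measurement model and Jacobians are abstracted behind the f/h/F/H placeholders:

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One generic extended-Kalman-filter iteration (textbook form, not the
    paper's specific model). f/h: nonlinear transition/measurement functions;
    F/H: their Jacobians at the relevant estimates; Q/R: process/measurement
    noise covariances."""
    # predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))     # state correction
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```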

Research paper thumbnail of A Comparison between Classical and Quantum Machine Learning for Mobile App Traffic Classification

1st Workshop on Communication and Networking for TinyML based Consumer Applications (INTERACT), within ACM/IEEE Symposium on Edge Computing (SEC 2024), 2024

Network traffic analysis is essential for modern communication systems, focusing on tasks like traffic classification, prediction, and anomaly detection. While classical Machine Learning (ML) and Deep Learning (DL) methods have proven effective, their scalability and real-time performance can be limited by evolving traffic patterns and computational demands. Quantum Machine Learning (QML) offers a promising alternative by utilizing quantum computing's parallelism. This paper examines QML's application to mobile traffic classification, comparing classical methods such as the Multi-layer Perceptron (MLP) and Convolutional Neural Networks (CNNs) with Quantum Neural Networks (QNNs) using different embedding types. Our experiments, conducted on the MIRAGE-COVID-CCMA-2022 dataset, show that QNNs achieve competitive performance, indicating QML's potential for efficient large-scale traffic classification in future networks.

Research paper thumbnail of MIRAGE-APP×ACT-2024: A Novel Dataset for Mobile App and Activity Traffic Analysis

International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), 2024

Research paper thumbnail of Explainable Few-Shot Class Incremental Learning for Mobile Network Traffic Classification

IEEE Global Communications Conference (GLOBECOM), 2024

Research paper thumbnail of Adaptive Intrusion Detection Systems: Class Incremental Learning for IoT Emerging Threats

1st IEEE International Workshop on Machine Learning for Securing IoT Systems Using Big Data, 2023

In the evolving landscape of Internet of Things (IoT) security, the need for continuous adaptation of defenses is critical. Class Incremental Learning (CIL) can provide a viable solution by enabling Machine Learning (ML) and Deep Learning (DL) models to (i) learn and adapt to new attack types (0-day attacks), (ii) retain their ability to detect known threats, and (iii) safeguard computational efficiency (i.e. no full retraining). In IoT security, where novel attacks frequently emerge, CIL offers an effective tool to enhance Intrusion Detection Systems (IDS) and secure network environments. In this study, we explore how CIL approaches empower DL-based IDS in IoT networks, using the publicly-available IoT-23 dataset. Our evaluation focuses on two essential aspects of an IDS: (a) attack classification and (b) misuse detection. A thorough comparison against a fully-retrained IDS, namely one trained from scratch, is carried out. Finally, we place emphasis on interpreting the predictions made by incremental IDS models through eXplainable AI (XAI) tools, offering insights into potential avenues for improvement.

Research paper thumbnail of Explainable Mobile Traffic Classification: the Case of Incremental Learning

Explainable and Safety Bounded, Fidelitous, Machine Learning for Networking (SAFE ’23), 2023

The surge in mobile network usage has contributed to the adoption of Deep Learning (DL) techniques for Traffic Classification (TC) to ensure efficient network management. However, DL-based classifiers still face challenges due to the frequent release of new apps (making them outdated) and the lack of interpretability (limiting their adoption). In this regard, Class Incremental Learning and eXplainable Artificial Intelligence have emerged as fundamental methodological tools. This work aims at reducing the gap between the DL models' performance and their interpretability in the TC domain. In this study, we examine from different perspectives the differences between classifiers when trained from scratch and incrementally. Using Deep SHAP, we derive global explanations to emphasize disparities in input importance. We comprehensively analyze the base classifiers' behavior to understand the starting point of the incremental process, and examine the updated models to uncover architectural features resulting from the incremental training. The analysis is based on MIRAGE19, an open dataset focused on mobile app traffic.
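
A minimal sketch of the Deep SHAP workflow used for such global explanations, with a tiny stand-in model (the actual classifiers and inputs differ; API usage per the shap package, assuming a PyTorch backend and that older versions return one attribution array per class):

```python
import numpy as np
import torch
import torch.nn as nn
import shap  # https://github.com/shap/shap

# Tiny stand-in classifier over 16 flat input features (purely illustrative).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 5))
X_background = torch.randn(100, 16)   # reference distribution for Deep SHAP
X_explain = torch.randn(8, 16)        # samples (e.g. biflows) to explain

explainer = shap.DeepExplainer(model, X_background)
shap_values = explainer.shap_values(X_explain)  # per-class attributions

# Global importance: mean |attribution| per input feature, per class.
global_importance = [np.mean(np.abs(sv), axis=0) for sv in shap_values]
```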

Research paper thumbnail of Sparse Bayesian Learning Assisted Decision Fusion in Millimeter Wave Massive MIMO Sensor Networks

IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023

This paper investigates decision fusion in millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) wireless sensor networks (WSNs), where the sparse Bayesian learning (SBL) algorithm is employed to estimate the channel between the sensors and the fusion center (FC). We present low-complexity fusion rules based on a hybrid combining architecture for the considered framework. Further, a deflection-coefficient-maximization-based optimization framework is developed to determine the transmit signaling matrix that improves detection performance. The performance of the proposed fusion rule is assessed through simulation results, validating the analytical findings.
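
For reference, the deflection coefficient maximized in the signaling design is the standard output-SNR-like figure of a fusion statistic Λ:

```latex
% Standard deflection coefficient of a fusion statistic \Lambda.
D(\Lambda) =
\frac{\bigl(\mathbb{E}[\Lambda\mid\mathcal{H}_1]-\mathbb{E}[\Lambda\mid\mathcal{H}_0]\bigr)^{2}}
     {\operatorname{var}[\Lambda\mid\mathcal{H}_0]}
```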

Research paper thumbnail of Fine-Grained Traffic Prediction of Communication-and-Collaboration Apps via Deep-Learning: a First Look at Explainability

IEEE International Conference on Communications (ICC), 2023

The lifestyle change brought about by the COVID-19 pandemic has caused a measurable impact on Internet traffic in terms of volume and application mix, with a sudden increase in the usage of communication-and-collaboration apps. In this work, we focus on four of these apps (Skype, Teams, Webex, and Zoom), whose traffic we collect, reliably label at fine (i.e. per-activity) granularity, and analyze from the viewpoint of traffic prediction. The outcome of this analysis is informative for a number of network management tasks, including monitoring, planning, resource provisioning, and (security) policy enforcement. To this aim, we employ state-of-the-art multitask deep learning approaches to assess to which degree the traffic generated by these apps and their different use cases (i.e. activities: audio-call, video-call, and chat) can be forecast at packet level. The experimental analysis investigates the performance of the considered deep learning architectures, in terms of both traffic-prediction accuracy and complexity, and the related trade-off. Equally important, our work is a first attempt at interpreting the results obtained by these predictors via eXplainable Artificial Intelligence (XAI).

Research paper thumbnail of Non-cooperative Distributed Detection via Federated Sensor Networks

IEEE Radar Conference (RadarConf), 2023

In this study, we address the challenge of non-cooperative target detection by federating two wireless sensor networks. The objective is to capitalize on the diversity achievable in both the sensing and reporting phases. The target's presence results in an unknown signal that is influenced by the unknown distances between the sensors and the target, as well as by symmetric, single-peaked noise. The fusion center, responsible for making more accurate decisions, receives quantized sensor observations through error-prone binary symmetric channels. This leads to a two-sided testing problem with nuisance parameters (the target position) present only under the alternative hypothesis. To tackle this challenge, we present a generalized likelihood ratio test and design a fusion rule based on a generalized Rao test to reduce the computational complexity. Our results demonstrate the efficacy of the Rao test in terms of detection/false-alarm rate and computational simplicity, highlighting the advantage of designing the system via federation.

Research paper thumbnail of Exploring a Modular Architecture for Sensor Validation in Digital Twins

IEEE Sensors Conference (Sensors), 2022

Decision-support systems rely on data exchange between digital twins (DTs) and physical twins (PTs). Faulty sensors (e.g. due to hardware/software failures) deliver unreliable data and can potentially generate critical damage. Prompt sensor fault detection, isolation and accommodation (SFDIA) plays a crucial role in DT design. In this respect, data-driven approaches to SFDIA have recently been shown to be effective. This work focuses on a modular SFDIA (M-SFDIA) architecture and explores the impact of using different types of neural-network (NN) building blocks. Numerical results for different choices are shown with reference to a publicly-available wireless sensor network dataset, demonstrating the validity of such an architecture.

Research paper thumbnail of On the use of Machine Learning Approaches for the Early Classification in Network Intrusion Detection

IEEE International Symposium on Measurements & Networking (M&N), 2022

Current intrusion detection techniques cannot keep up with the increasing amount and complexity of cyber attacks. In fact, most traffic is encrypted and does not allow the application of deep-packet-inspection approaches. In recent years, Machine Learning techniques have been proposed for post-mortem detection of network attacks, and many datasets have been shared by research groups and organizations for training and validation. Differently from the vast related literature, in this paper we propose an early classification approach conducted on the CSE-CIC-IDS2018 dataset, which contains both benign and malicious traffic, for the detection of malicious attacks before they can damage an organization. To this aim, we investigate different sets of features, and the sensitivity of the performance of five classification algorithms to the number of observed packets. Results show that ML approaches relying on ten packets provide satisfactory results.

Research paper thumbnail of Wireless Inference Gets Smarter: RIS-assisted Channel-Aware MIMO Decision Fusion

IEEE 12th Sensor Array and Multichannel Signal Processing Workshop (SAM), 2022

We study channel-aware binary-decision fusion over a shared flat-fading channel with multiple antennas at the Fusion Center (FC). This paper considers the aid of a Reconfigurable Intelligent Surface (RIS) to effectively convey the information on the phenomenon of interest to the FC and foster energy-efficient data analytics supporting the Internet of Things (IoT) paradigm. We present the optimal rule and derive a (sub-optimal) joint fusion rule & RIS design, representing an alternative with reduced complexity and lower required system knowledge. Simulation results show the benefit of RIS adoption even in the suboptimal case.

Research paper thumbnail of Prediction of Mobile-App Network-Video-Traffic Aggregates using Multi-task Deep Learning

IFIP Networking WKSHPS: the 4th International Workshop on Network Intelligence (NI), 2022

Traffic prediction has proven to be useful for several network management domains and represents one of the main enablers for instilling intelligence within future networks. Recent solutions have focused on predicting the behavior of traffic aggregates. Nonetheless, minimal attempts have tackled the prediction of mobile network traffic generated by different video application categories. To this end, in this work we apply multitask Deep Learning to predict network traffic aggregates generated by mobile video applications over short-term time scales. We investigate our approach leveraging state-of-the-art prediction models such as Convolutional Neural Networks, Gated Recurrent Units, and the Random Forest Regressor, showing some surprising results (e.g. NRMSE < 0.075 for upstream packet-count prediction and NRMSE < 0.15 for the downstream counterpart), including some variability in prediction performance among the examined video application categories. Furthermore, we show that using smaller time intervals when predicting traffic aggregates may achieve better performance for specific traffic profiles.
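
For clarity, NRMSE normalizes the root-mean-square error of the forecast ŷ against the observed series y; one common convention (range normalization) is shown below. The normalizer varies across works (range, mean, or max), and the abstract does not restate which one is used:

```latex
\mathrm{NRMSE} =
\frac{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(\hat{y}_{t}-y_{t}\right)^{2}}}
     {y_{\max}-y_{\min}}
```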

Research paper thumbnail of Sensor Fusion for Detection and Localization of Carbon Dioxide Releases for Industry 4.0

IEEE International Conference on Information Fusion (Fusion), 2022

This work tackles the distributed detection & localization of carbon dioxide (CO2) releases from storage tanks caused by the opening of pressure relief devices, via inexpensive sensor devices in an industrial context. A realistic model of the dispersion is put forward in this paper. Both full-precision and rate-limited setups for the sensors are considered, and fusion rules capitalizing on the dispersion model are derived. Simulations analyze the performance trends with realistic system parameters (e.g. wind direction).

Research paper thumbnail of Decision Fusion for Carbon Dioxide Release Detection from Pressure Relief Devices

IEEE 12th Sensor Array and Multichannel Signal Processing Workshop (SAM), 2022

This work investigates the distributed detection of carbon dioxide (CO2) releases from storage tanks caused by the opening of pressure relief devices, via inexpensive sensor devices in an industrial context. A realistic model of the dispersion is put forward in this paper. Both full-precision and rate-limited setups for the sensors are considered, and fusion rules capitalizing on the dispersion model are derived. Simulations analyze the performance trends with relevant system parameters.

Research paper thumbnail of Optimal Linear Fusion Rule for Distributed Detection in Clustered Wireless Sensor Networks

IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT), 2021

In this paper we consider the distributed detection of intruders in clustered wireless sensor networks (WSNs). The WSN is modelled by a homogeneous Poisson point process (PPP). The sensor nodes (SNs) compute local decisions about the intruder's presence and send them to the cluster heads (CHs). Hence, each CH collects the number of detecting SNs in its cluster. The fusion center (FC), in turn, combines the CHs' data in order to reach a global detection decision. We propose an optimal cluster-based linear fusion rule (OCLR), in which the CHs' data are linearly fused. Interestingly, the OCLR performance is very close to that of the optimal clustered fusion rule (OCR) previously proposed in the literature. Furthermore, the OCLR performance approaches that of the optimal Chair-Varshney fusion rule as the number of SNs increases.
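
The Chair-Varshney benchmark referenced above is the log-likelihood-ratio fusion of local binary decisions, where u_i ∈ {0,1} is the i-th local decision and P_{d,i}, P_{f,i} its detection and false-alarm probabilities:

```latex
\Lambda_{\mathrm{CV}} = \sum_{i=1}^{K}
\left[ u_i \ln\frac{P_{d,i}}{P_{f,i}}
     + (1-u_i)\ln\frac{1-P_{d,i}}{1-P_{f,i}} \right]
\;\underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\; \gamma
```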

Research paper thumbnail of Transmit Signal Design for Polarimetric Radars under Local Waveform Constraints

CIE Radar Conference, 2021

The joint design of transmit signals and receive filters for polarimetric radars is investigated. For an extended target with different polarimetric responses, the detection performance can be improved by leveraging the polarization diversity of the transmit waveform, i.e., matching the horizontal and vertical polarization signals to the counterparts of the target response. Besides, to meet some system requirements, each polarization signal is expected to approach a reference signal with good properties. Hence, we aim to maximize the signal-to-interference-plus-noise ratio (SINR) while maintaining closeness to a particular reference waveform, as well as an energy constraint. The resulting problem is solved based on the majorization-minimization (MM) method. Experimental results demonstrate the effectiveness of the proposed algorithm and the advantages of polarization diversity and local design.

Research paper thumbnail of Classification of Communication and Collaboration Apps via Advanced Deep-Learning Approaches

IEEE International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), 2021

The lockdowns and lifestyle changes during the COVID-19 pandemic have caused a measurable impact on Internet traffic in terms of volumes and application mix, with a sudden increase in the usage of communication and collaboration apps. In this work, we focus on five such apps, whose traffic we collect, reliably label at fine granularity (per-activity), and analyze from the viewpoint of traffic classification. To this aim, we employ state-of-the-art deep learning approaches to assess to which degree the apps, their different use cases (activities), and the app-activity pairs can be told apart from each other. We investigate the early behavior of the biflows composing the traffic and the effect of tuning the dimension of the input via a sensitivity analysis. The experimental analysis highlights the performance figures of the different architectures, in terms of both traffic-classification performance and complexity w.r.t. different classification tasks, and the related trade-off. The outcome of this analysis is informative for a number of network management tasks, including monitoring, planning, resource provisioning, and (security) policy enforcement.

Research paper thumbnail of Real-Time Sensor Fault Detection, Isolation and Accommodation for Industrial Digital Twins

IEEE International Conference on Networking, Sensing and Control (ICNSC), 2021

The development of Digital Twins (DTs) has bloomed significantly in recent years, and related use cases are now pervading several application domains. DTs are built upon Internet of Things (IoT) and Industrial IoT platforms and critically rely on the availability of reliable sensor data. To this aim, in this article, we propose a sensor fault detection, isolation and accommodation (SFDIA) architecture based on machine-learning methodologies. Specifically, our architecture exploits the available spatio-temporal correlation in the sensory data in order to detect, isolate, and accommodate faulty data via a bank of estimators, a bank of predictors, and one classifier, all implemented via multi-layer perceptrons (MLPs). Faulty data are detected and isolated using the classifier, while isolated sensors are accommodated using the estimators. Performance evaluation confirms the effectiveness of the proposed SFDIA architecture in detecting, isolating, and accommodating faulty data injected into a (real) wireless sensor network (WSN) dataset.

Research paper thumbnail of Characterizing and Modeling Traffic of Communication and Collaboration Apps Bloomed With COVID-19 Outbreak

IEEE 6th International Forum on Research and Technology for Society and Industry (RTSI), 2021

In this work, we address the characterization and modeling of the network traffic generated by communication and collaboration apps, which have been the object of a recent traffic surge due to the COVID-19 pandemic spread. In detail, focusing on five of the most popular mobile apps (collected via the MIRAGE architecture) used for working/studying during the pandemic time frame, we provide characterization at trace and flow level, and modeling by means of Multimodal Markov Chains for both apps and related activities. The results highlight interesting peculiarities related to both the running applications and the specific activities performed. The outcome of this analysis constitutes the stepping stone toward a number of tasks related to network management and traffic analysis, such as identification/classification and prediction, and modern IT management in general.
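
A first-order Markov chain of the kind used in such modeling can be fit by simple transition counting; a minimal sketch over an arbitrary 4-symbol alphabet (the actual multimodal state encoding is more elaborate):

```python
import numpy as np

def fit_markov_chain(states, n_states):
    """Maximum-likelihood first-order Markov chain: count transitions and
    row-normalize. `states` is a sequence of integer-coded symbols (e.g.
    quantized packet-length/direction bins -- a hypothetical encoding)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # unvisited states get a uniform row to keep the matrix stochastic
    return np.divide(counts, rows, out=np.full_like(counts, 1.0 / n_states),
                     where=rows > 0)

seq = np.random.default_rng(3).integers(0, 4, size=1000)  # toy symbol stream
P = fit_markov_chain(seq, n_states=4)                      # transition matrix
```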

Research paper thumbnail of Spatio-Temporal Decision Fusion for Quickest Fault Detection Within Industrial Plants: The Oil and Gas Scenario

24th International Conference on Information Fusion (FUSION), 2021

In this work, we present a spatio-temporal decision fusion approach aimed at performing quickest detection of faults within an Oil and Gas subsea production system. Specifically, a sensor network collectively monitors the state of different pieces of equipment and reports the collected decisions to a fusion center. Therein, a spatial aggregation is performed and a global decision is taken. Such decisions are then aggregated in time by a post-processing center, which performs quickest detection of a system fault according to a Bayesian criterion exploiting the change-time statistical distributions originated from system components' datasheets. The performance of our approach is analyzed in terms of both detection- and reliability-focused metrics, with a focus on (fast & inspection-cost-limited) leak detection in a real-world oil platform located in the Barents Sea.

Research paper thumbnail of A system for reliable and scalable ground truth generation and traffic classification of mobile apps encrypted traffic

Internet Measurement Conference (IMC), 2017

The process of associating (labeling) network traffic with specific applications or application types, known as Traffic Classification (TC), is increasingly challenged by the growing usage of smartphones, which is profoundly changing the kind of traffic that travels over home and enterprise networks and the Internet.

TC comes with its own challenges and requirements, which are even exacerbated in a mobile-traffic context, such as: (a) the adoption of encrypted protocols; (b) a large number of apps to discriminate from; (c) the dynamic nature of network traffic; and, more importantly, (d) the lack of a satisfactory flow-level Ground Truth (GT) to train the classification algorithms on, and to test and compare them against.

For this reason, this work proposes a novel self-supervised TC architecture composed of two main blocks: (i) an automatic GT generation tool and (ii) a Multi-Classifier System (MCS). The first block automatically produces a corpus of traffic traces with flow-level labeling, the label being the package name and version (uniquely identifying the mobile app); this is exploited to rapidly train (or re-train), in a supervised way, the proposed MCS, which is then employed for the classification of true (human-generated) mobile traffic.

In more detail, in the first block of the proposed system, each app package of interest is automatically installed and run on a (physical or virtual) device connected to a network where all traffic generated or received by the device can be captured. Then the Graphical User Interface (GUI) of the app is explored, generating events such as taps and keystrokes and thus causing the generation of network traffic. The GUI explorer is based on Android GUI Ripper, a tool implementing both Random and Active Learning techniques. The device is instrumented with a logger that records all network-related system calls originated by the exercised app, so as to properly associate traffic flows with the originating process names, thus avoiding mislabeling traffic from other apps or from the operating system. The traffic generated by the device is captured on a host (wireless access point) from which the device can also be controlled (e.g. via USB).

The second block is represented by the MCS, which intelligently combines decisions from state-of-the-art (base) classifiers specifically devised for mobile- and encrypted-traffic classification. The MCS is intended to overcome the deficiencies of each single classifier (not improvable beyond a certain bound, despite efforts in “tuning”) and provide improved performance with respect to any of the base classifiers. The proposed MCS is not restricted to a specific set of classification algorithms and also allows for modularity in the selection of the classifiers in the pool. Additionally, the MCS can adopt several types of combiners (based on both hard and soft approaches) developed in the literature, constituting a wide spectrum of achievable performance, operational complexity, and training-set requirements; two of the simplest such combiners are sketched below.
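
A minimal sketch of hard and soft combination, assuming per-classifier outputs are already available (illustrative only, not the specific combiners evaluated in the work):

```python
import numpy as np

def soft_combiner(probas):
    """Soft combination: average per-classifier posterior estimates.
    probas: shape (n_classifiers, n_samples, n_classes)."""
    return np.mean(probas, axis=0).argmax(axis=1)

def majority_vote(labels):
    """Hard combination: per-sample majority vote over classifier decisions.
    labels: shape (n_classifiers, n_samples), integer class labels."""
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, labels)

# toy usage: 3 classifiers, 4 samples, 2 classes
rng = np.random.default_rng(4)
print(soft_combiner(rng.random((3, 4, 2))))
print(majority_vote(rng.integers(0, 2, size=(3, 4))))
```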

Preliminary results show that our system is able to: (i) automatically run mobile apps, making them generate sufficient traffic to train an MCS; (ii) obtain promising results in terms of classification accuracy on new mobile apps' traffic.

Research paper thumbnail of Statistical Signal Processing for Data Fusion

Research paper thumbnail of Time-Aware Distributed Sequential Detection of Gas Dispersion via Wireless Sensor Networks

IEEE Transactions on Signal and Information Processing over Networks, 2023

This work addresses the problem of detecting gas dispersions through concentration sensors with wireless transmission capabilities organized as a distributed Wireless Sensor Network (WSN). The concentration sensors in the WSN perform local sequential detection (SD) and transmit their individual decisions to the Fusion Center (FC) according to a transmission rule designed to meet the low-energy requirements of a wireless setup. The FC receives the transmissions sent by the sensors and makes a more reliable global decision by employing an SD algorithm. Two variants of the SD algorithm, named Continuous Sampling Algorithm (CSA) and Decision-Triggered Sampling Algorithm (DTSA), each with its own transmission rule, are presented and compared against a fully-batch algorithm named Batch Sampling Algorithm (BSA). The CSA operates as a time-aware detector by incorporating the time of each transmission in the detection rule. The proposed framework encompasses the gas dispersion model in the FC's decision rule and leverages real-time weather measurements. The case study involves an accidental dispersion of carbon dioxide (CO2). System performance is evaluated in terms of the receiver operating characteristic (ROC) curve, as well as average decision delay and communication cost.

Research paper thumbnail of Mobile sensor networks based on autonomous platforms for homeland security

Research paper thumbnail of On MSE performance of time-reversal MUSIC

IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM), 2014