Aleksandar Ignjatovic - Academia.edu

Papers by Aleksandar Ignjatovic

Local Approximations Based on Orthogonal Differential Operators

Journal of Fourier Analysis and Applications, 2007

Let M be a symmetric positive definite moment functional and let {P_n^M(ω)}_{n∈ℕ} be the family of orthonormal polynomials that corresponds to M. We introduce a family of linear differential operators K_n = (−i)^n P_n^M(i d/dt), called the chromatic derivatives associated with M, which are orthonormal with respect to a suitably defined scalar product. We consider a Taylor type expansion of an analytic function f(t), with the values f^(n)(t_0) of the derivatives replaced by the values K_n[f](t_0) of these orthonormal operators, and with the monomials (t − t_0)^n/n! replaced by an orthonormal family of "special functions" of the form (−1)^n K_n[m](t − t_0), where m(t) = Σ_{n=0}^∞ (−1)^n M(ω^{2n}) t^{2n}/(2n)!. Such expansions are called the chromatic expansions. Our main results relate the convergence of the chromatic expansions to the asymptotic behavior of the coefficients appearing in the three term recurrence satisfied by the corresponding family of orthogonal polynomials P_n^M(ω). Like the truncations of the Taylor expansion, the truncations of a chromatic expansion at t = t_0 of an analytic function f(t) approximate f(t) locally, in a neighborhood of t_0. However, unlike the values of f^(n)(t_0), the values of the chromatic derivatives K_n[f](t_0) can be obtained in a noise robust way from sufficiently dense samples of f(t). The chromatic expansions have properties which make them useful in fields involving empirically sampled data, such as signal processing.
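For orientation, the expansion described in the abstract can be restated in display form; the truncation order N below is added only to parallel the truncated Taylor series, and the notation follows the abstract.

```latex
% Truncated Taylor expansion of f around t_0:
f(t) \;\approx\; \sum_{n=0}^{N} f^{(n)}(t_0)\,\frac{(t-t_0)^n}{n!}

% Truncated chromatic expansion: K_n[f](t_0) replaces f^{(n)}(t_0) and the
% "special functions" (-1)^n K_n[m](t-t_0) replace the monomials, where
f(t) \;\approx\; \sum_{n=0}^{N} K_n[f](t_0)\,(-1)^n K_n[m](t-t_0),
\qquad
m(t) \;=\; \sum_{n=0}^{\infty} (-1)^n\, M(\omega^{2n})\,\frac{t^{2n}}{(2n)!}
```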

Power to Pulse Width Modulation Sensor for Remote Power Analysis Attacks

IACR Transactions on Cryptographic Hardware and Embedded Systems

Field-programmable gate arrays (FPGAs) deployed on commercial cloud services are increasingly gaining popularity due to the cost and compute benefits they offer. Recent studies have discovered security threats that can be launched remotely on FPGAs that share the logic fabric between trusted and untrusted parties, posing a danger to designs deployed on cloud FPGAs. With remote power analysis (RPA) attacks, an attacker aims to deduce secret information present on a remote FPGA by deploying an on-chip sensor on the FPGA logic fabric. Information captured with the on-chip sensor is transferred off the chip for analysis, and existing on-chip sensors demand a significant amount of bandwidth for this task as a result of their wide output bit width. However, attackers are often left with the only option of using a covert communication channel, and the bandwidth of such channels is generally limited. This paper proposes a novel area-efficient on-chip power sensor named PPWM that integra...

Trust-Based Blockchain Authorization for IoT

IEEE Transactions on Network and Service Management, 2021

Authorization or access control limits the actions a user may perform on a computer system, based on predetermined access control policies, thus preventing access by illegitimate actors. Access control for the Internet of Things (IoT) should be tailored to take the inherent IoT network scale and device resource constraints into consideration. However, common authorization systems in IoT employ conventional schemes, which suffer from overheads and centralization. Recent research trends suggest that blockchain has the potential to tackle the issues of access control in IoT. However, proposed solutions overlook the importance of building dynamic and flexible access control mechanisms. In this paper, we design a decentralized attribute-based access control mechanism with an auxiliary Trust and Reputation System (TRS) for IoT authorization. Our system progressively quantifies the trust and reputation scores of each node in the network and incorporates the scores into the access control mechanism to achieve dynamic and flexible access control. We design our system to run on a public blockchain, but we separate the storage of sensitive information, such as users' attributes, to private sidechains for privacy preservation. We implement our solution on the public Rinkeby Ethereum test network interconnected with a lab-scale testbed. Our evaluations consider various performance metrics to highlight the applicability of our solution in IoT contexts.
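As a rough illustration of how a progressively updated trust score can gate an attribute-based decision, here is a minimal Python sketch; the Node class, the update rule, and the 0.6 threshold are all invented for this example and are not the paper's smart-contract interface or scoring formulas.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A network participant with attributes and an evolving trust/reputation score."""
    attributes: dict
    trust: float = 0.5        # progressively updated from observed behaviour
    reputation: float = 0.5   # aggregated feedback, here just smoothed trust

def update_scores(node: Node, interaction_ok: bool, alpha: float = 0.1) -> None:
    """Exponentially weighted update after each interaction (illustrative rule)."""
    outcome = 1.0 if interaction_ok else 0.0
    node.trust = (1 - alpha) * node.trust + alpha * outcome
    node.reputation = 0.5 * node.reputation + 0.5 * node.trust

def authorize(node: Node, policy: dict, min_trust: float = 0.6) -> bool:
    """Attribute-based check combined with a dynamic trust threshold."""
    attributes_ok = all(node.attributes.get(k) == v for k, v in policy.items())
    return attributes_ok and node.trust >= min_trust

# Example: a sensor node whose trust rises above the threshold after
# three successful interactions, so the same policy now grants access.
sensor = Node(attributes={"role": "sensor", "zone": "lab"})
policy = {"role": "sensor", "zone": "lab"}
for _ in range(3):
    update_scores(sensor, interaction_ok=True)
print(authorize(sensor, policy))
```

The point of such a design is that the same request can later be denied if the requester's trust decays, which is what makes the access control dynamic rather than purely attribute-based.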

Simultaneous Escape Routing using Network Flow Optimization

Malaysian Journal of Computer Science, 2016

With the advancement in technology, the size of electronic components and printed circuit boards (PCBs) is shrinking while the pin count of each component is increasing. This has necessitated the use of ball grid array (BGA) components, where pins are attached under the body of the component as a grid. The problem of routing pins from under the body of the component to its boundary is known as escape routing. It is often desirable to perform ordered simultaneous escape routing (SER) to facilitate area routing and produce an elegant PCB design. The task of SER is non-trivial, given the small size of components and the hundreds of pins, arranged in random order in each component, that need ordered connectivity. In this paper, we first propose flow models for different inter-pin capacities. We then propose a linear network flow optimization model that simultaneously solves the net ordering and net escape problems. The model routes the maximum possible number of nets between two components of the PCB while respecting the design rules. Comparative analysis shows that the proposed optimization model performs better than existing routing algorithms in terms of the number of nets routed.
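To illustrate the general idea of casting escape routing as a maximum-flow computation, here is a toy Python sketch using networkx; the grid, capacities, pins and exits are made up, and the sketch ignores net ordering and design rules, so it is not the paper's optimization model.

```python
import networkx as nx

def max_escaped_nets(pins, exits, neighbours, cap=1):
    """Toy max-flow formulation: unit-capacity edges between neighbouring grid
    nodes, a virtual source feeding the BGA pins and a virtual sink collecting
    the boundary exits. The max-flow value bounds the number of escapable nets."""
    g = nx.DiGraph()
    for u, vs in neighbours.items():
        for v in vs:
            g.add_edge(u, v, capacity=cap)
            g.add_edge(v, u, capacity=cap)
    for p in pins:
        g.add_edge("S", p, capacity=1)
    for e in exits:
        g.add_edge(e, "T", capacity=1)
    value, _ = nx.maximum_flow(g, "S", "T")
    return value

# 2x2 toy grid: pins at (0,0) and (1,1), exits on the right-hand boundary.
neighbours = {(0, 0): [(0, 1), (1, 0)], (1, 0): [(1, 1)], (0, 1): [(1, 1)]}
print(max_escaped_nets(pins=[(0, 0), (1, 1)], exits=[(0, 1), (1, 1)],
                       neighbours=neighbours))
```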

Simeon - Secure Federated Machine Learning Through Iterative Filtering

ArXiv, 2021

Federated learning enables a global machine learning model to be trained collaboratively by distributed, mutually non-trusting learning agents who desire to maintain the privacy of their training data and their hardware. A global model is distributed to clients, who perform training and submit their newly-trained model to be aggregated into a superior model. However, federated learning systems are vulnerable to interference from malicious learning agents who may desire to prevent training or induce targeted misclassification in the resulting global model. A class of Byzantine-tolerant aggregation algorithms has emerged, offering varying degrees of robustness against these attacks, often with the caveat that the number of attackers is bounded by some quantity known prior to training. This paper presents Simeon: a novel approach to aggregation that applies a reputation-based iterative filtering technique to achieve robustness even in the presence of attackers who can exhibit arbitrar...
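A schematic Python sketch of reputation-based iterative filtering in aggregation is given below; the inverse-distance weighting rule is illustrative only and is not Simeon's actual scheme.

```python
import numpy as np

def iterative_filter_aggregate(updates, rounds=5, eps=1e-8):
    """Aggregate client model updates (rows) by repeatedly re-estimating the
    aggregate and down-weighting clients whose updates lie far from it."""
    updates = np.asarray(updates, dtype=float)
    weights = np.ones(len(updates)) / len(updates)
    for _ in range(rounds):
        aggregate = weights @ updates                       # weighted mean
        dist = np.linalg.norm(updates - aggregate, axis=1)  # per-client distance
        weights = 1.0 / (dist + eps)                        # reputation ~ closeness
        weights /= weights.sum()
    return weights @ updates, weights

# Nine honest clients near the true update, one attacker far away.
honest = np.random.normal(0.0, 0.1, size=(9, 4))
attacker = np.full((1, 4), 10.0)
agg, w = iterative_filter_aggregate(np.vstack([honest, attacker]))
print(np.round(w, 3))   # the attacker's weight collapses towards zero
```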

VITI: A Tiny Self-Calibrating Sensor for Power-Variation Measurement in FPGAs

IACR Transactions on Cryptographic Hardware and Embedded Systems, 2021

On-chip sensors, built using reconfigurable logic resources in field programmable gate arrays (FPGAs), have been shown to sense variations in signal propagation delay, supply voltage and power consumption. These sensors have been successfully used to deploy security attacks called Remote Power Analysis (RPA) attacks on FPGAs. The sensors proposed thus far consume significant logic resources, and some of them could be used to deploy power viruses. In this paper, a sensor (named VITI) occupying a far smaller footprint than existing sensors is presented. VITI is a self-calibrating on-chip sensor design, constructed using adjustable delay elements, flip-flops and LUT elements instead of combinational loops, bulky carry chains or latches. Self-calibration enables VITI to adapt autonomously to differing conditions (such as increased power consumption, temperature changes or placement of the sensor far away from the circuit under attack). The efficacy of VITI for power consum...

UCloD: Small Clock Delays to Mitigate Remote Power Analysis Attacks

IEEE Access, 2021

This paper presents UCloD, a novel, robust and scalable countermeasure, based on random clock delays, against recently discovered remote power analysis (RPA) attacks. UCloD deploys very small clock delays (in the picosecond range) generated using tapped delay lines (TDLs) to mitigate RPA attacks, and provides the most robust countermeasure demonstrated thus far against them. RPA attacks use delay sensors, such as Time-to-Digital Converters (TDCs) or Ring Oscillators (ROs), to measure voltage fluctuations occurring in the power delivery networks (PDNs) of Field Programmable Gate Arrays (FPGAs). These voltage fluctuations reveal secret information, such as the secret keys of cryptographic circuits. The only countermeasure proposed thus far activates ROs to consume significant power and has managed to secure Advanced Encryption Standard (AES) circuits for up to 300,000 encryptions. Using TDLs available in FPGAs, UCloD randomly varies the clock to the cryptographic circuits under attack to induce noise in the adversary's delay sensor(s). We demonstrate correlation power analysis (CPA) attack resistance of UCloD AES implementations for up to one million encryptions. Compared to an unprotected AES circuit, UCloD implementations have minimal overheads (0.2% Slice LUT overhead and 4.8% Slice register overhead for Xilinx implementations, and 0.5% LogicCells overhead for Lattice Semiconductor implementations).

Chromatic Derivatives and Approximations in Practice—Part I: A General Framework

IEEE Transactions on Signal Processing, 2018

Chromatic derivatives are special, numerically robust differential operators that preserve the spectral features of a signal; the associated chromatic approximations accurately capture local features of a signal. For this reason, they often allow digital processing of continuous-time signals that is superior to processing of discrete samples of such signals. We introduce a new concept of "matched filter" chromatic approximations, where the underlying basis functions are chosen to match the spectral profile of the signals being approximated. We then derive a collection of formulas and theorems that form a general framework for practical applications of chromatic derivatives and approximations. In the second part of this paper, we use this general framework in several case studies of such applications, which aim to illustrate how chromatic derivatives and approximations can be used in signal processing, with the intention of motivating DSP engineers to find applications of these novel concepts in their own practice. Index Terms: chromatic derivatives, chromatic expansions, digital representation and processing of continuous time signals.

A Provenance-Aware Multi-dimensional Reputation System for Online Rating Systems

ACM Transactions on Internet Technology, 2018

Online rating systems are widely accepted as means for quality assessment on the web, and users increasingly rely on these systems when deciding to purchase an item online. This makes such rating systems frequent targets of attempted manipulation through the posting of unfair rating scores. Therefore, providing useful, realistic rating scores as well as detecting unfair behavior are both of very high importance. Existing solutions are mostly majority-based, also employing temporal analysis and clustering techniques. However, they are still vulnerable to unfair ratings. They also ignore distances between options, the provenance of information, and different dimensions of cast rating scores while computing aggregate rating scores and the trustworthiness of users. In this article, we propose a robust iterative algorithm which leverages information in the profile of users and provenance of information, and which takes into account the distance between options to provide both more robust and informative ...
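The flavor of such an iterative algorithm can be sketched as follows; this toy version alternates between trust-weighted item scores and per-user trust derived from rating discrepancy, and it omits the paper's use of provenance, rating dimensions and distances between options.

```python
import numpy as np

def robust_ratings(R, iters=10, eps=1e-9):
    """R[u, i] holds user u's rating of item i (NaN where the user did not rate).
    Alternate between trust-weighted item scores and per-user trust derived from
    each user's squared discrepancy with the current aggregate (illustrative)."""
    R = np.asarray(R, dtype=float)
    mask = ~np.isnan(R)
    vals = np.where(mask, R, 0.0)
    trust = np.ones(R.shape[0])
    for _ in range(iters):
        w = trust[:, None] * mask
        scores = (w * vals).sum(axis=0) / (w.sum(axis=0) + eps)
        discrepancy = np.where(mask, (vals - scores) ** 2, 0.0).sum(axis=1) / mask.sum(axis=1)
        trust = 1.0 / (discrepancy + eps)
        trust /= trust.max()
    return scores, trust

ratings = np.array([[4.0, 5.0, np.nan],
                    [4.0, 4.0, 3.0],
                    [1.0, 1.0, 5.0]])   # the third rater disagrees sharply
scores, trust = robust_ratings(ratings)
print(np.round(scores, 2), np.round(trust, 2))   # the dissenting rater ends up with low trust
```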

Mobility based Net Ordering for Simultaneous Escape Routing

International Journal of Advanced Computer Science and Applications, 2017

With the advancement in electronics technology, the number of pins under the ball grid array (BGA) is increasing while component sizes shrink. In small components, a challenging task is to solve the escape routing problem, in which BGA pins escape towards the component boundary. It is often desirable to perform ordered simultaneous escape routing (SER) to facilitate area routing and produce an elegant printed circuit board (PCB) design. Some heuristic techniques help in finding PCB routing solutions for SER, but for larger problems they are time consuming and produce sub-optimal results. This work proposes a solution that divides the problem into two parts: first, a novel net ordering algorithm for SER based on a network-theoretic approach, and then a linear optimization model for single-component ordered escape routing. The model routes the maximum possible number of nets between two components of the PCB by considering the design rules based on the given net ordering. Comparative analysis shows that the proposed net ordering algorithm and optimization model perform better than existing SER routing algorithms in terms of the number of nets routed. Moreover, the running time of the proposed approach reduces to O(2^(NE/2)) + O(2^(NE/2)) for ordered escape routing of both components, which is much less than O(2^(NE)) owing to the exponential reduction.

Interdependent Security Risk Analysis of Hosts and Flows

IEEE Transactions on Information Forensics and Security, 2015

Detection of high risk hosts and flows continues to be a significant problem in security monitoring of high throughput networks. A comprehensive risk assessment method should take into account the risk propagation among risky hosts and flows. In this paper this is achieved by introducing two novel concepts. The first is an interdependency relationship among the risk scores of a network flow and its source and destination hosts. On the one hand, the risk score of a host depends on the risky flows that the host initiates and is targeted by. On the other hand, the risk score of a flow depends on the risk scores of its source and destination hosts. The second concept, which we call flow provenance, represents risk propagation among network flows which takes into account the likelihood that a particular flow is caused by other flows. Based on these two concepts, we develop an iterative algorithm for computing the risk level of hosts and network flows. We give a rigorous proof that our algorithm rapidly converges to unique risk estimates, and provide its extensive empirical evaluation using two real-world datasets. Our evaluation demonstrates that our method is effective in detecting high risk hosts and flows and is sufficiently efficient to be deployed in high throughput networks.
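A minimal Python sketch of the interdependent iteration is shown below; the mixing rule and damping factor are invented for illustration, and flow provenance is omitted, so this is not the paper's algorithm or its convergence-guaranteed update.

```python
import numpy as np

def host_flow_risk(flows, base_risk, iters=20, damping=0.5):
    """flows: (src, dst) host-index pairs; base_risk: intrinsic per-flow risk
    derived from flow attributes. Alternate updates: a flow's risk mixes its
    intrinsic risk with its endpoints' risk, and a host's risk is the mean risk
    of the flows it initiates or receives (illustrative rule, not the paper's)."""
    flows = np.asarray(flows)
    base = np.asarray(base_risk, dtype=float)
    n_hosts = int(flows.max()) + 1
    flow_risk = base.copy()
    host_risk = np.zeros(n_hosts)
    for _ in range(iters):
        endpoint_risk = (host_risk[flows[:, 0]] + host_risk[flows[:, 1]]) / 2.0
        flow_risk = damping * base + (1.0 - damping) * endpoint_risk
        for h in range(n_hosts):
            touching = (flows[:, 0] == h) | (flows[:, 1] == h)
            host_risk[h] = flow_risk[touching].mean() if touching.any() else 0.0
    return host_risk, flow_risk

host_risk, flow_risk = host_flow_risk(flows=[(0, 1), (1, 2), (0, 2)],
                                      base_risk=[0.9, 0.1, 0.2])
print(np.round(host_risk, 2), np.round(flow_risk, 2))
```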

Iterative Security Risk Analysis for Network Flows Based on Provenance and Interdependency

2013 IEEE International Conference on Distributed Computing in Sensor Systems, 2013

Discovering high risk network flows and hosts in a high throughput network is a challenging network monitoring task. Emerging complicated attack scenarios, such as DDoS attacks, increase the complexity of tracking malicious and high risk network activities within a huge number of monitored network flows. To address this problem, we propose an iterative framework for assessing risk scores for hosts and network flows. To obtain the risk scores of flows, we take into account two properties: flow attributes and flow provenance. Our iterative risk assessment also measures the risk scores of hosts and flows based on an interdependency property, where the risk score of a flow influences the risk of its source and destination hosts, and the risk score of a host is evaluated from the risk scores of flows initiated by or terminated at the host. Moreover, the update mechanism in our framework allows flows to keep streaming into the system while our risk assessment method performs an online monitoring task. The experimental results show that our approach is effective in detecting high risk hosts and flows and, compared to other algorithms, sufficiently efficient to be deployed in high throughput networks.

Frequency estimation using time domain methods based on robust differential operators

IEEE 10th INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING PROCEEDINGS, 2010

Given a band limited signal which over some disjoint intervals of time I_n behaves as a corresponding linear combination f_n(t) of up to N damped sinusoids, we present a method which detects the intervals I_n, determines the number of the sinusoidal components over each interval and estimates their frequencies, with high accuracy and in the presence of noise which can be colored. The intervals I_n can have very short duration of just a dozen Nyquist rate intervals, hampering the use of Fourier transform based methods. Our method operates entirely in the time domain; to be applicable, the signal must be sampled at a rate higher than the Nyquist rate. It is based on analyzing local signal behavior using special, numerically robust linear differential operators, called the chromatic derivatives, which were introduced relatively recently, and which hold yet unexplored promise in signal and image processing.

κ-FSOM: Fair Link Scheduling Optimization for Energy-Aware Data Collection in Mobile Sensor Networks

Lecture Notes in Computer Science, 2014

We consider the problem of data collection from a continental-scale network of mobile sensors, specifically applied to wildlife tracking. Our application constraints favor a highly asymmetric solution, with heavily duty-cycled sensor nodes communicating with a network of powered base stations. Individual nodes move freely in the environment, resulting in low-quality radio links and hot-spot arrival patterns, with the available data exceeding the radio link capacity. We propose a novel scheduling algorithm, the κ-Fair Scheduling Optimization Model (κ-FSOM), which maximizes the amount of collected data under radio link quality and energy constraints while ensuring fair access to the radio channel. We show the problem is NP-complete and propose a heuristic to approximate the optimal scheduling solution in polynomial time. We use empirical link quality data to evaluate the κ-FSOM heuristic in a realistic setting and compare its performance to other heuristics, showing that it achieves high data reception rates under different fairness and node lifetime constraints.
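For intuition only, here is a generic greedy baseline for this kind of link-scheduling problem in Python; the yield function, the energy model and the per-node airtime cap are assumptions and do not correspond to the κ-FSOM model or its heuristic.

```python
def greedy_schedule(nodes, slots, energy_per_tx=1.0):
    """nodes: name -> {"data": payload units, "quality": link success prob,
    "energy": budget}. Each slot, transmit from the node with the largest
    expected yield that still has data and energy, while a per-node airtime
    cap enforces a toy notion of fair channel access."""
    fair_cap = max(1, slots // len(nodes))
    airtime = {n: 0 for n in nodes}
    collected = 0.0
    for _ in range(slots):
        eligible = [n for n, s in nodes.items()
                    if s["data"] > 0 and s["energy"] >= energy_per_tx
                    and airtime[n] < fair_cap]
        if not eligible:
            break
        best = max(eligible, key=lambda n: nodes[n]["quality"] * min(1.0, nodes[n]["data"]))
        s = nodes[best]
        collected += s["quality"] * min(1.0, s["data"])
        s["data"] -= min(1.0, s["data"])
        s["energy"] -= energy_per_tx
        airtime[best] += 1
    return collected, airtime

nodes = {"a": {"data": 5, "quality": 0.9, "energy": 3},
         "b": {"data": 2, "quality": 0.5, "energy": 4}}
print(greedy_schedule(nodes, slots=6))
```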

Provenance-aware security risk analysis for hosts and network flows

2014 IEEE Network Operations and Management Symposium (NOMS), 2014

Detection of high risk network flows and high risk hosts is becoming ever more important and more challenging. In order to selectively apply deep packet inspection (DPI), one has to isolate, in real time, high risk network activities within a huge number of monitored network flows. To help address this problem, we propose an iterative methodology for the simultaneous assessment of risk scores for both hosts and network flows. The proposed approach measures the risk scores of hosts and flows in an interdependent manner; thus, the risk score of a flow influences the risk score of its source and destination hosts, and the risk score of a host is evaluated by taking into account the risk scores of flows initiated by or terminated at the host. Our experimental results show that such an approach is not only effective in detecting high risk hosts and flows but, when deployed in high throughput networks, is also more efficient than PageRank based algorithms.

Fidelity metrics for estimation models

2010 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2010

Estimation models play a vital role in many aspects of day to day life. Extremely complex estimation models are employed in the design space exploration of SoCs, and the efficacy of these estimation models is usually measured by the absolute error of the models compared to known actual results. Such absolute error based metrics can often result in over-designed estimation models, with a number of researchers suggesting that the fidelity of an estimation model (the correlation between the ordering of the estimated points and the ordering of the actual points) should be examined instead of, or in addition to, the absolute error. In this paper, for the first time, we propose four metrics to measure the fidelity of an estimation model, in particular for use in design space exploration. The first two are based on two well known rank correlation coefficients. The other two are weighted versions of the first two metrics, giving more importance to points nearer the Pareto front. The proposed fidelity metrics range from −1 to 1, where a value of 1 reflects a perfect positive correlation while a value of −1 reflects a perfect negative correlation. The proposed fidelity metrics were calculated for a single processor estimation model and a multiprocessor estimation model to observe their behavior, and were compared against the models' absolute error. For the multiprocessor estimation model, even though the worst average and maximum absolute errors of 6.40% and 16.61% respectively can be considered reasonable in design automation, the worst fidelity of 0.753 suggests that the multiprocessor estimation model may not be as good a model (compared to an estimation model with the same or higher absolute errors but a fidelity of 0.95) as depicted by its absolute accuracy, leading to an over-designed estimation model.
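A fidelity computation of this kind can be sketched with the two classic rank correlation coefficients (Kendall's τ and Spearman's ρ, which the abstract presumably refers to); the weighted Pareto-front variants are not shown.

```python
from scipy.stats import kendalltau, spearmanr

def fidelity_metrics(estimated, actual):
    """Rank-correlation-based fidelity between estimated and measured design
    points: +1 means the estimator orders the points exactly as the actual
    results do, -1 means the ordering is exactly reversed."""
    tau, _ = kendalltau(estimated, actual)
    rho, _ = spearmanr(estimated, actual)
    return tau, rho

# Estimated vs. actual runtimes for four design points; the last two swap order.
print(fidelity_metrics([1.0, 2.1, 2.9, 4.2], [1.1, 2.0, 3.5, 3.4]))
```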

Efficient computation of robust average in wireless sensor networks using compressive sensing

Wireless sensor networks (WSNs) enable the collection of physical measurements over a large geographic area. It is often the case that we are interested in computing and tracking the spatial average of the sensor measurements over a region of the WSN. Unfortunately, the standard average operation is not robust because it is highly susceptible to sensor faults (e.g. offset, stuck-at errors, etc.) and variation of sensor measurement noises. In this paper, we propose a method to compute a robust average of sensor measurements, which appropriately takes sensor faults and sensor noise into consideration, in a bandwidth- and computationally efficient manner. At the same time, the proposed method can determine which sensors are likely to be faulty. Our method achieves bandwidth efficiency by exploiting compressive sensing. Instead of sending a block of sensor readings to the data fusion centre, each sensor performs random projections (as in compressive sensing) on the data block and sends the results of the projections (which we will refer to as the compressed data) to the data fusion centre. At the data fusion centre, we achieve computational efficiency by working directly with the compressed data, whose dimension is only a fraction of that of the original block of sensor data. In other words, our proposed method works on the compressed data without decompressing them. Using the compressed data, our proposed method determines which sensors are likely to be faulty, as well as a robust average of the compressed data, which, after decompression (or compressive sensing reconstruction), yields an approximation of the robust average of the original sensor readings. This means that the data fusion centre only needs to perform decompression once in order to obtain the robust average (rather than decompressing all the compressed data from all the sensors), thereby achieving computational efficiency. We apply our proposed method to data collected from a number of WSN deployments to demonstrate its efficiency and accuracy.
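The data flow can be sketched in Python as follows; the shared Gaussian projection matrix, the coordinate-wise trimmed mean used as the robust aggregate, and the outlier score are illustrative stand-ins for the paper's specific robust-average and fault-detection procedures, and the final CS reconstruction step is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sensors = 128, 32, 20          # block length, compressed length, number of sensors
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # shared random projection matrix

# Each sensor compresses its length-n data block locally and sends only y_i = phi @ x_i.
blocks = rng.standard_normal((sensors, n)) * 0.1 + np.sin(np.linspace(0, 3, n))
blocks[3] += 5.0                      # one faulty sensor with a large offset
compressed = blocks @ phi.T           # the compressed data sent to the fusion centre

# Fusion centre: a robust (trimmed-mean) average computed directly on the compressed
# data; distances to it also flag which sensors look faulty.
trimmed = np.sort(compressed, axis=0)[2:-2].mean(axis=0)
outlier_score = np.linalg.norm(compressed - trimmed, axis=1)
print("suspect sensor:", int(outlier_score.argmax()))
# A single CS reconstruction of `trimmed` (e.g. l1 minimisation) would then
# approximate the robust average of the original sensor readings.
```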

Model for Voter Scoring and Best Answer Selection in Community Q&A Services

2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, 2009

Community Question Answering (cQA) services, such as Yahoo! Answers and MSN QnA, facilitate knowledge sharing through question answering by an online community of users. These services include incentive mechanisms to entice participation and self-regulate the quality of the content contributed by the users. In order to encourage quality contributions, community members are asked to nominate the 'best' among the answers provided to a question. The service then awards extra points to the author who provided the winning answer and to the voters who cast their vote for that answer. The best answers are typically selected by plurality voting, a scheme that is simple, yet vulnerable to random voting and collusion. We propose a weighted voting method that incorporates information about the voters' behavior. It assigns a score to each voter that captures the level of agreement with other voters. It uses the voter scores to aggregate the votes and determine the best answer. The mathematical formulation leads to the application of the Brouwer Fixed Point Theorem, which guarantees the existence of a voter scoring function that satisfies the starting axiom. We demonstrate the robustness of our approach through simulations and analysis of real cQA service data.
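In the spirit of that description, a toy fixed-point iteration is sketched below: each voter's score is repeatedly set to the (normalised) weight of the answer they backed, so scores measure agreement with other voters; the paper's actual scoring axiom and update are not reproduced.

```python
import numpy as np

def score_voters(votes, n_answers, iters=50):
    """votes[v] is the answer index chosen by voter v. Iterate: weight each
    answer by the scores of its voters, then set each voter's score to the
    weight of the answer they backed, i.e. their level of agreement."""
    votes = np.asarray(votes)
    scores = np.ones(len(votes)) / len(votes)
    for _ in range(iters):
        answer_weight = np.bincount(votes, weights=scores, minlength=n_answers)
        scores = answer_weight[votes]
        scores /= scores.sum()
    best = int(answer_weight.argmax())
    return best, scores

# Answer 0 gets three agreeing voters, answer 1 a single dissenter.
best, scores = score_voters(votes=[0, 0, 0, 1], n_answers=2)
print(best, np.round(scores, 2))   # the dissenter's score shrinks towards zero
```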

Trajectory Approximation for Resource Constrained Mobile Sensor Networks

2014 IEEE International Conference on Distributed Computing in Sensor Systems, 2014

Low-power compact sensor nodes are being increasingly used to collect trajectory data from moving objects such as wildlife. The size of this data can easily overwhelm the data storage available on these nodes. Moreover, the transmission of this extensive data over the wireless channel may prove to be difficult. The memory and energy constraints of these platforms underscore the need for lightweight online trajectory compression that does not seriously affect the accuracy of the mobility data. In this paper, we present a novel online Polygon Based Approximation (PBA) algorithm that uses regular polygons, the size of which is determined by the allowed spatial error, as the smallest spatial unit for approximating the raw GPS samples. PBA only stores the first GPS sample as a reference. Each subsequent point is approximated to the centre of the polygon containing the point. Furthermore, a coding scheme is proposed that encodes the relative position (distance and direction) of each polygon with respect to the preceding polygon in the trajectory. The resulting trajectory is thus a series of bit codes that have pair-wise dependencies at the reference point. It is thus possible to easily reconstruct an approximation of the original trajectory by decoding the chain of codes starting with the first reference point. Encoding a single GPS sample is an O(1) operation, with an overall complexity of O(n). Moreover, PBA only requires the storage of two raw GPS samples in memory at any given time. The low complexity and small memory footprint of PBA make it particularly attractive for low-power sensor nodes. PBA is evaluated using GPS traces that capture the actual mobility of flying foxes in the wild. Our results demonstrate that PBA can achieve up to nine-fold memory savings compared to the Douglas-Peucker line simplification heuristic. While we present PBA in the context of low-power devices, it can be equally useful for other GPS-enabled devices such as smartphones and car navigation units.
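A simplified encoder/decoder pair along these lines is sketched below in Python; it uses a square grid rather than the paper's regular polygons and emits (dx, dy) cell steps instead of the paper's bit codes, so it only conveys the shape of the scheme.

```python
def pba_encode(samples, cell):
    """Simplified PBA-style encoder on a square grid (the paper uses regular
    polygons sized by the allowed spatial error). Keep the first raw sample as
    the reference and emit, per subsequent sample, the step between the grid
    cells containing consecutive samples."""
    def cell_of(p):
        return (round(p[0] / cell), round(p[1] / cell))
    ref = samples[0]
    codes, prev = [], cell_of(ref)
    for p in samples[1:]:
        cur = cell_of(p)
        codes.append((cur[0] - prev[0], cur[1] - prev[1]))
        prev = cur
    return ref, codes

def pba_decode(ref, codes, cell):
    """Reconstruct the approximate trajectory as a chain of cell centres."""
    cur = (round(ref[0] / cell), round(ref[1] / cell))
    points = [ref]
    for dx, dy in codes:
        cur = (cur[0] + dx, cur[1] + dy)
        points.append((cur[0] * cell, cur[1] * cell))
    return points

track = [(0.0, 0.0), (9.0, 2.0), (19.0, 12.0)]
ref, codes = pba_encode(track, cell=10.0)
print(codes, pba_decode(ref, codes, cell=10.0))
```

Encoding each sample touches only the current and previous cell, which mirrors the O(1)-per-sample cost and two-sample memory footprint claimed above.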

A novel instruction scratchpad memory optimization method based on concomitance metric

Proceedings of the 2006 conference on Asia South Pacific design automation - ASP-DAC '06, 2006

Scratchpad memory has been introduced as a replacement for cache memory as it improves the performance of certain embedded systems. Additionally, it has also been demonstrated that scratchpad memory can significantly reduce the energy consumption of the memory hierarchy of embedded systems. This is significant, as the memory hierarchy consumes a substantial proportion of the total energy of an embedded system. This paper deals with optimization of the instruction scratchpad memory based on a novel methodology that uses a metric which we call the concomitance. This metric is used to find basic blocks which are executed frequently and in close proximity in time. Once such blocks are found, they are copied into the scratchpad memory at appropriate times; this is achieved using a special instruction inserted into the code at appropriate places. For a set of benchmarks taken from Mediabench, our scratchpad system consumed just 59% (avg) of the energy of the cache system, and 73% (avg) of the energy of the state of the art scratchpad system, while improving the overall performance. Compared to the state of the art method, the number of instructions copied into the scratchpad memory from the main memory is reduced by 88%.
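As a rough illustration, the following Python sketch scores pairs of basic blocks by how often they execute within a small window of each other in a trace; this is only a proxy for the concomitance metric, whose exact definition is given in the paper.

```python
from collections import Counter
from itertools import combinations

def concomitance_like(trace, window=4):
    """From a basic-block execution trace, count how often each pair of blocks
    appears within `window` consecutive executions of each other. High-scoring
    pairs are executed both frequently and close together in time (an
    illustrative proxy for the paper's concomitance metric)."""
    pair_score = Counter()
    for i in range(len(trace) - window + 1):
        for a, b in combinations(sorted(set(trace[i:i + window])), 2):
            pair_score[(a, b)] += 1
    return pair_score

trace = ["B1", "B2", "B1", "B2", "B7", "B1", "B2", "B9"]
print(concomitance_like(trace).most_common(2))   # B1/B2 dominate: scratchpad candidates
```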
