Usman Khan - Academia.edu

Papers by Usman Khan

DILAND: An Algorithm for Distributed Sensor Localization with Noisy Distance Measurements

Computing Research Repository, 2009

We present DILAND, an algorithm for distributed sensor localization with noisy distance measurements that extends DLRE and makes it more robust. DLRE is a distributed sensor localization algorithm in R^m (m ≥ 1) introduced in our previous work (IEEE Trans. Signal Process., vol. 57, no. 5, pp. 2000-2016, May 2009). DILAND operates when: 1) the communication among the sensors is noisy; 2) the communication links in the network may fail with a nonzero probability; and 3) the measurements performed to compute distances among the sensors are corrupted with noise. The sensors (which do not know their locations) lie in the convex hull of at least m + 1 anchors (nodes that know their own locations). Under minimal assumptions on the connectivity and triangulation of each sensor in the network, we show that, under the broad random phenomena described above, DILAND converges almost surely (a.s.) to the exact sensor locations.

Distributed Algorithms in Sensor Networks

Haykin/Array Processing, 2010

A linear iterative algorithm for distributed sensor localization

2008 42nd Asilomar Conference on Signals, Systems and Computers, 2008

... 2008. [12] Usman Khan, Soummya Kar, and José M. F. Moura, "Distributed algorithms in sensor networks," in Handbook on Sensor and Array Processing, Simon Haykin and K. J. Ray Liu, Eds., Wiley-Interscience, New York, NY, 2009, to appear, 33 pages. ...

Higher dimensional consensus algorithms in sensor networks

2009 IEEE International Conference on Acoustics, Speech and Signal Processing, 2009

This paper introduces higher dimensional consensus, a framework that captures a number of different but related distributed, iterative, linear algorithms of interest in sensor networks. We show that, by suitably choosing the iteration matrix of the higher dimensional consensus, we can capture, besides standard average-consensus, a broad range of applications, including sensor localization, leader-follower, and the distributed Jacobi algorithm. We work with the concept of anchors, explicitly derive the consensus subspace, and provide the dimension of the limiting state of the sensors.
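The anchor/sensor split described in the abstract can be illustrated with a minimal numeric sketch (a toy example, not the paper's general iteration matrix): on a line network, two anchors hold their states fixed while three sensors repeatedly average their neighbors; the sensor states converge to convex combinations of the anchor states, as the consensus-subspace result predicts.

```python
import numpy as np

# Line network: anchor(0) - s1 - s2 - s3 - anchor(4).
# Anchors hold fixed scalar states; each sensor repeatedly
# replaces its state by the average of its two neighbors.
a_left, a_right = 0.0, 4.0
x = np.zeros(3)                      # sensor states, arbitrary init

for _ in range(200):
    left  = np.array([a_left, x[0], x[1]])
    right = np.array([x[1], x[2], a_right])
    x = 0.5 * (left + right)         # convex combination of neighbors

print(x)  # ≈ [1. 2. 3.]: each sensor settles on a convex
          # combination of the two anchor states
```

The limit here is the discrete harmonic interpolation between the anchors; with a general iteration matrix, the same mechanism yields the other applications named above.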

Distributed iterate-collapse inversion (DICI) algorithm for L-banded matrices

2008 IEEE International Conference on Acoustics, Speech and Signal Processing, 2008

In this paper, we present a distributed algorithm to invert L-banded matrices that are symmetric positive definite (SPD), when the submatrices in the band are distributed among several processing nodes. We provide a distributed iterate-collapse inversion (DICI) algorithm that converges, at each node, to the corresponding submatrices in the inverse of the L-banded matrix. The computational complexity of the DICI algorithm to invert an SPD L-banded n × n matrix can be shown, at each node, to be independent of the size, n, of the matrix. Local information exchange is carried out after each iteration to guarantee convergence. We apply this algorithm to invert the information matrices in a computationally efficient distributed implementation of the Kalman filter and show its application towards inverting arbitrary sparse SPD matrices.
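The DICI iterate and collapse steps are specified in the paper itself; as a loose stand-in for the general idea, inverting a banded SPD matrix iteratively with only band-local information exchange, the sketch below runs a plain Jacobi iteration on A Z = I for a diagonally dominant tridiagonal (1-banded) matrix. This is not the authors' DICI algorithm: it is a generic iteration in which each entry update only reads entries within the band, and Z converges to A^{-1}.

```python
import numpy as np

# Tridiagonal (L = 1 banded), diagonally dominant SPD matrix.
n = 6
A = np.diag(4.0 * np.ones(n)) \
    + np.diag(-1.0 * np.ones(n - 1), 1) \
    + np.diag(-1.0 * np.ones(n - 1), -1)

# Jacobi iteration for A Z = I: each column z_j solves A z_j = e_j,
# i.e., z_j is the j-th column of A^{-1}.  Each entry update only
# needs neighboring entries within the band (local exchange).
D = np.diag(A)            # diagonal part
R = A - np.diag(D)        # off-diagonal (band) part
Z = np.zeros((n, n))
for _ in range(100):
    Z = (np.eye(n) - R @ Z) / D[:, None]

print(np.max(np.abs(Z - np.linalg.inv(A))))  # near machine precision
```

Jacobi converges here because the matrix is strictly diagonally dominant; DICI's contribution, per the abstract, is keeping per-node cost independent of n, which this naive sketch does not attempt.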

On the stability and optimality of distributed Kalman filters with finite-time data fusion

Proceedings of the 2011 American Control Conference, 2011

Usman A. Khan, Ali Jadbabaie. Abstract: In this paper, we consider distributed estimation for discrete-time, linear systems, with ...

On connectivity, observability, and stability in distributed estimation

49th IEEE Conference on Decision and Control (CDC), 2010

Usman A. Khan, Soummya Kar, Ali Jadbabaie, José M. F. Moura. Abstract: We introduce a new model of social learning and distributed estimation ...

Coordinated networked estimation strategies using structured systems theory

IEEE Conference on Decision and Control and European Control Conference, 2011

Usman A. Khan and Ali Jadbabaie. Abstract: In this paper, we consider linear networked estimation strategies using results from structured systems theory. ...

Collaborative scalar-gain estimators for potentially unstable social dynamics with limited communication

Higher dimensional consensus: learning in large-scale networks

IEEE Transactions on Signal Processing, 2010

The paper presents higher dimension consensus (HDC) for large-scale networks. HDC generalizes the well-known average-consensus algorithm. It divides the nodes of the large-scale network into anchors and sensors. Anchors are nodes whose states are fixed over the HDC iterations, whereas sensors are nodes that update their states as a linear combination of the neighboring states. Under appropriate conditions, we show that the sensors' states converge to a linear combination of the anchors' states. Through the concept of anchors, HDC captures in a unified framework several interesting network tasks, including distributed sensor localization, leader-follower, distributed Jacobi to solve linear systems of algebraic equations, and, of course, average-consensus.

Cooperation for aggregating complex electric power networks to ensure system observability

International Conference on Infrastructure Systems and Services: Developing 21st Century Infrastructure Networks, 2008

Usman A. Khan, Student Member, IEEE, Marija D. Ilic, Fellow, IEEE, and José M. F. Moura, Fellow, IEEE, Carnegie Mellon University ...

Higher Dimensional Consensus: Learning in Large-Scale Networks

Computing Research Repository, 2009

The paper presents higher dimension consensus (HDC) for large-scale networks. HDC generalizes the well-known average-consensus algorithm. It divides the nodes of the large-scale network into anchors and sensors. Anchors are nodes whose states are fixed over the HDC iterations, whereas sensors are nodes that update their states as a linear combination of the neighboring states. Under appropriate conditions, we show that the sensor states converge to a linear combination of the anchor states. Through the concept of anchors, HDC captures in a unified framework several interesting network tasks, including distributed sensor localization, leader-follower, distributed Jacobi to solve linear systems of algebraic equations, and, of course, average-consensus. In many network applications, it is of interest to learn the weights of the distributed linear algorithm so that the sensors converge to a desired state. We term this inverse problem the HDC learning problem. We pose learning in HDC as a constrained non-convex optimization problem, which we cast in the framework of multi-objective optimization (MOP) and to which we apply Pareto optimality. We prove analytically relevant properties of the MOP solutions and of the Pareto front, from which we derive the solution to learning in HDC. Finally, the paper shows how the MOP approach resolves interesting tradeoffs (speed of convergence versus quality of the final state) arising in learning in HDC in resource-constrained networks.

On connectivity, observability, and stability in distributed estimation

Conference on Decision and Control, 2010

We introduce a new model of social learning and distributed estimation in which the state to be estimated is governed by a potentially unstable linear model driven by noise. The state is observed by a network of agents, each with its own linear noisy observation model. We assume the state to be globally observable, but no agent is able to estimate the state with its own observations alone. We propose a single consensus-step estimator that consists of an innovation step and a consensus step, both performed at the same time-step. We show that if the instability of the dynamics is strictly less than the Network Tracking Capacity (NTC), a function of network connectivity and the observation matrices, the single consensus-step estimator results in a bounded estimation error. We further quantify the trade-off between: (i) (in)stability of the parameter dynamics, (ii) connectivity of the underlying network, and (iii) the observation structure, in the context of single timescale algorithms. This contrasts with prior work on distributed estimation that either assumes scalar dynamics (which removes local observability issues) or assumes that enough iterates can be carried out for the consensus to converge between each innovation (observation) update.
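A minimal sketch of the single-timescale structure, with hand-picked weights and gain rather than the paper's NTC condition: two agents track a marginally stable 2-D rotation; each observes only one state component (so neither is locally observable), and every step performs exactly one consensus average and one innovation update.

```python
import numpy as np

th = 0.3
A = np.array([[np.cos(th), -np.sin(th)],     # marginally stable
              [np.sin(th),  np.cos(th)]])    # 2-D rotation dynamics
H = [np.array([[1.0, 0.0]]),   # agent 1 sees only the first component
     np.array([[0.0, 1.0]])]   # agent 2 sees only the second component
K = 0.5                        # scalar innovation gain (hand-tuned)

x = np.array([3.0, -2.0])          # true state
est = [np.zeros(2), np.zeros(2)]   # agents' estimates

for _ in range(100):
    y = [Hi @ x for Hi in H]                       # local measurements
    mix = 0.5 * (est[0] + est[1])                  # one consensus step
    est = [A @ (mix + K * H[i].T @ (y[i] - H[i] @ est[i]))
           for i in range(2)]                      # one innovation step
    x = A @ x                                      # state evolves

print(np.linalg.norm(est[0] - x), np.linalg.norm(est[1] - x))
```

In this noise-free toy the joint error contracts at each step even though the state never decays, illustrating how the consensus step circulates the information each agent lacks locally.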

Model Distribution for Distributed Kalman Filters: A Graph Theoretic Approach

Asilomar Conference on Signals, Systems & Computers, 2007

This paper discusses the distributed Kalman filter problem for the state estimation of sparse large-scale systems monitored by sensor networks. With limited computing resources at each sensor, no sensor has the ability to replicate locally the entire large-scale state-space model. We investigate techniques to distribute the model, i.e., to have at each sensor low-dimensional coupled local models that are computationally viable and provide an accurate representation of the local states. We implement local Kalman filters over these coupled reduced models. We use system digraphs and cut-point sets for model distribution. Under certain conditions, the local Kalman filters asymptotically guarantee the performance of the centralized Kalman filter.

Distributing the Kalman Filter for Large-Scale Systems

IEEE Transactions on Signal Processing, 2008

This paper presents a distributed Kalman filter to estimate the state of a sparsely connected, large-scale, n-dimensional, dynamical system monitored by a network of N sensors. Local Kalman filters are implemented on n_l-dimensional subsystems, n_l ≪ n, obtained by spatially decomposing the large-scale system. The distributed Kalman filter is optimal under an Lth-order Gauss-Markov approximation to the centralized filter. We quantify the information loss due to this Lth-order approximation by the divergence, which decreases as L increases. The order of the approximation L leads to a bound on the dimension of the subsystems, hence providing a criterion for subsystem selection. The (approximated) centralized Riccati and Lyapunov equations are computed iteratively with only local communication and low-order computation by a distributed iterate-collapse inversion (DICI) algorithm. We fuse the observations that are common among the local Kalman filters using bipartite fusion graphs and consensus averaging algorithms. The proposed algorithm achieves full distribution of the Kalman filter: nowhere in the network is storage, communication, or computation of n-dimensional vectors and matrices required; only n_l ≪ n dimensional vectors and matrices are communicated or used in the local computations at the sensors. In other words, knowledge of the state is itself distributed.
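The fusion-by-consensus-averaging ingredient can be sketched on its own: with doubly stochastic weights on a small cycle graph (an illustrative topology, not one from the paper), repeated neighbor averaging drives every node's value to the network-wide average.

```python
import numpy as np

# Cycle graph on 4 nodes; doubly stochastic weights W = I - 0.25 * Laplacian.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
v = np.array([1.0, 5.0, 3.0, 7.0])   # local values to fuse

for _ in range(100):
    v = W @ v                        # each node averages with its neighbors

print(v)  # ≈ [4. 4. 4. 4.], the network-wide average
```

Because W is doubly stochastic with second-largest eigenvalue below one, the iteration preserves the sum while contracting disagreement, which is exactly what makes it usable for fusing observations shared by several local filters.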

Distributed Sensor Localization in Euclidean Spaces: Dynamic Environments

In previous work, we presented an algorithm to localize sensors with unknown locations in m-dimensional Euclidean space R^m, assuming the following: 1) there are (m + 1) sensors that know their absolute coordinates, the anchors; 2) each sensor communicates with m + 1 of its neighbors; and 3) the sensors lie in the convex hull of the anchors. The localization algorithm is a generalization of consensus: it is a weighted, linear, iterative, and distributed algorithm. The weights are the barycentric coordinates of a sensor with respect to its neighbors, which are computed from the generalized volumes obtained from the inter-sensor distances in the Cayley-Menger determinants. This paper expands on this work to handle the cases where the number of available anchors possibly exceeds m + 1, a sensor can communicate with all sensors within its radius of communication, and the network communication topology may be dynamic, as, for example, when the network neighborhood structure changes over time. The paper shows that the algorithm converges to the exact sensor locations in the absence of noise.

Distributed sensor localization in random environments using minimal number of anchor nodes

IEEE Transactions on Signal Processing, 2009

The paper introduces DILOC, a distributed, iterative algorithm to locate M sensors (with unknown locations) in R^m, m ≥ 1, with respect to a minimal number of m + 1 anchors with known locations. The sensors and anchors, nodes in the network, exchange data with their neighbors only; no centralized data processing or communication occurs, nor is there a centralized fusion center to compute the sensors' locations. DILOC uses the barycentric coordinates of a node with respect to its neighbors; these coordinates are computed using the Cayley-Menger determinants, i.e., the determinants of matrices of internode distances. We show convergence of DILOC by associating with it an absorbing Markov chain whose absorbing states are the states of the anchors. We introduce a stochastic approximation version extending DILOC to random environments, i.e., when the communications among nodes are noisy, the communication links among neighbors may fail at random times, and the internode distances are subject to errors. We show a.s. convergence of the modified DILOC and characterize the error between the true values of the sensors' locations and their final estimates given by DILOC. Numerical studies illustrate DILOC under a variety of deterministic and random operating conditions.
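The distance-only weight computation can be sketched for m = 2 with an assumed toy geometry (the coordinates below are illustrative, not from the paper): the Cayley-Menger determinant gives a triangle's area from its pairwise distances, and ratios of such areas are the barycentric coordinates of an interior node with respect to its three neighbors.

```python
import numpy as np

def cm_area(d01, d02, d12):
    """Triangle area from pairwise distances via the Cayley-Menger
    determinant: 16 * area^2 = -det(CM)."""
    CM = np.array([[0, 1,      1,      1     ],
                   [1, 0,      d01**2, d02**2],
                   [1, d01**2, 0,      d12**2],
                   [1, d02**2, d12**2, 0     ]])
    return np.sqrt(-np.linalg.det(CM) / 16.0)

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
p = np.array([1.5, 1.0])          # node strictly inside the triangle

d = lambda a, b: np.linalg.norm(a - b)
A_total = cm_area(d(anchors[0], anchors[1]),
                  d(anchors[0], anchors[2]),
                  d(anchors[1], anchors[2]))

# w_i = area of the triangle with vertex i replaced by p, over total area.
w = np.empty(3)
for i in range(3):
    j, k = [t for t in range(3) if t != i]
    w[i] = cm_area(d(p, anchors[j]), d(p, anchors[k]),
                   d(anchors[j], anchors[k])) / A_total

print(w.sum())       # ≈ 1.0
print(w @ anchors)   # ≈ [1.5 1.0], reconstructs the node's position
```

The weights are computed from distances alone, which is why the update can run on a network that measures ranges but not coordinates; for interior nodes they are nonnegative and sum to one, matching the absorbing-Markov-chain interpretation in the abstract.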

Distributed Sensor Localization in Random Environments using Minimal Number of Anchor Nodes

Computing Research Repository, 2008

The paper develops DILOC, a distributed, iterative algorithm that locates M sensors in R^m, m ≥ 1, with respect to a minimal number of m + 1 anchors with known locations. The sensors exchange data with their neighbors only; no centralized data processing or communication occurs, nor is there centralized knowledge about the sensors' locations. DILOC uses the barycentric coordinates of a sensor with respect to its neighbors, which are computed using the Cayley-Menger determinants. These are the determinants of matrices of inter-sensor distances. We show convergence of DILOC by associating with it an absorbing Markov chain whose absorbing states are the anchors.

On the characterization of distributed observability from first principles

2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2014

Source localization in complex networks using a frequency-domain approach

2013 IEEE Global Conference on Signal and Information Processing, 2013
