Dynamic Graph Learning Based on Graph Laplacian

Dynamic Graph Learning: A Structure-Driven Approach

Mathematics, 2021

The purpose of this paper is to infer a dynamic graph as a global (collective) model of time-varying measurements at a set of network nodes. This model captures both pairwise and higher-order interactions (i.e., among more than two nodes). The motivation of this work lies in the search for a connectome model which properly captures brain functionality across all regions of the brain, and possibly at individual neurons. We formulate this as an optimization problem with a quadratic objective functional built from tensor information of observed node signals over short time intervals. Regularization constraints reflect graph smoothness and other dynamics involving the underlying graph’s Laplacian, as well as the smoothness of the graph’s time evolution. The resulting joint optimization is solved by a continuous relaxation of the weight parameters and a novel gradient-projection scheme. While the work may be applicable to any time-evolving data set...
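The quadratic objective above builds on the standard graph-smoothness functional tr(XᵀLX). A minimal numpy sketch of that term (the toy weights and signals here are illustrative, not the paper's formulation) shows it equals the familiar edge-wise sum 0.5 Σᵢⱼ Wᵢⱼ‖xᵢ − xⱼ‖²:

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W for a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def smoothness(X, W):
    """Quadratic smoothness functional tr(X^T L X) of node signals X (nodes x features)."""
    return np.trace(X.T @ laplacian(W) @ X)

# Small example: 3-node path graph with hypothetical weights and signals
W = np.array([[0., 1., 0.],
              [1., 0., 2.],
              [0., 2., 0.]])
X = np.array([[1.0], [1.1], [3.0]])

# tr(X^T L X) equals the edge-wise sum 0.5 * sum_ij W_ij ||x_i - x_j||^2,
# which is small when strongly connected nodes carry similar signals
edge_sum = 0.5 * sum(W[i, j] * np.sum((X[i] - X[j]) ** 2)
                     for i in range(3) for j in range(3))
print(np.isclose(smoothness(X, W), edge_sum))
```

Minimizing this term over W (under suitable constraints) favors graphs on which the observed signals vary slowly, which is the intuition behind smoothness-regularized graph learning.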

DBGSL: Dynamic Brain Graph Structure Learning

arXiv (Cornell University), 2022

Recently, graph neural networks (GNNs) have shown success at learning representations of brain graphs derived from functional magnetic resonance imaging (fMRI) data. The majority of existing GNN methods, however, assume brain graphs are static over time and the graph adjacency matrix is known prior to model training. These assumptions are at odds with neuroscientific evidence that brain graphs are time-varying with a connectivity structure that depends on the choice of functional connectivity measure. Noisy brain graphs that do not truly represent the underlying fMRI data can have a detrimental impact on the performance of GNNs. As a solution, we propose Dynamic Brain Graph Structure Learning (DBGSL), a novel method for learning the optimal time-varying dependency structure of fMRI data induced by a downstream prediction task. Experiments demonstrate DBGSL achieves state-of-the-art performance for sex classification using real-world resting-state and task fMRI data. Moreover, analysis of the learnt dynamic graphs highlights prediction-related brain regions which align with existing neuroscience literature.

Graph neural fields: A framework for spatiotemporal dynamical models on the human connectome

PLOS Computational Biology, 2021

Tools from the field of graph signal processing, in particular the graph Laplacian operator, have recently been successfully applied to the investigation of structure-function relationships in the human brain. The eigenvectors of the human connectome graph Laplacian, dubbed “connectome harmonics”, have been shown to relate to the functionally relevant resting-state networks. Whole-brain modelling of brain activity combines structural connectivity with local dynamical models to provide insight into the large-scale functional organization of the human brain. In this study, we employ the graph Laplacian and its properties to define and implement a large class of neural activity models directly on the human connectome. These models, consisting of systems of stochastic integrodifferential equations on graphs, are dubbed graph neural fields, in analogy with the well-established continuous neural fields. We obtain analytic predictions for harmonic and temporal power spectra, as well as fun...
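The "connectome harmonics" mentioned above are simply the eigenvectors of the connectome's graph Laplacian. A small numpy sketch on a toy graph (a 4-node ring standing in for the connectome, which is an assumption for illustration only):

```python
import numpy as np

# Toy "connectome": symmetric adjacency of a 4-node ring
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)  # columns of eigvecs = the graph harmonics

# For a connected graph the smallest eigenvalue is 0 with a constant harmonic;
# higher harmonics oscillate at increasing "spatial frequency" on the graph.
print(np.round(eigvals, 6))
```

On the real connectome the same eigendecomposition (typically of a normalized Laplacian on tens of thousands of vertices, computed with sparse solvers) yields the harmonics that the paper relates to resting-state networks.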

Estimating brain’s functional graph from the structural graph’s Laplacian

The interplay between the brain’s function and structure has been of immense interest to the neuroscience and connectomics communities. In this work we develop a simple linear model relating the structural network and the functional network. We propose that the two networks are related by the structural network’s Laplacian up to a shift. The model is simple to implement and gives accurate predictions of the functional network’s eigenvalues at the subject level and its eigenvectors at the group level.
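A key consequence of any model of this family is that the predicted functional matrix shares the structural Laplacian's eigenvectors. The sketch below illustrates that property with a hypothetical spectral mapping (a shifted reciprocal of the eigenvalues); the specific mapping and shift value are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

def predict_functional(A_struct, shift=0.5):
    """Illustrative structure-to-function map: keep the structural Laplacian's
    eigenvectors and pass its eigenvalues through a shifted reciprocal.
    The mapping and the shift are hypothetical stand-ins."""
    L = np.diag(A_struct.sum(axis=1)) - A_struct
    lam, V = np.linalg.eigh(L)
    return V @ np.diag(1.0 / (lam + shift)) @ V.T

# Toy structural adjacency (3-node star)
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
F = predict_functional(A)

# Because F is a function of L, it commutes with L: they share eigenvectors,
# which is what lets the model predict functional eigenvectors from structure.
L = np.diag(A.sum(axis=1)) - A
print(np.allclose(F @ L, L @ F))
```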

Learning Graphs From Smooth and Graph-Stationary Signals With Hidden Variables

IEEE Transactions on Signal and Information Processing over Networks, 2022

Network-topology inference from (vertex) signal observations is a prominent problem across data-science and engineering disciplines. Most existing schemes assume that observations from all nodes are available, but in many practical environments, only a subset of nodes is accessible. A natural (and sometimes effective) approach is to disregard the role of unobserved nodes, but this ignores latent network effects, deteriorating the quality of the estimated graph. In contrast, this paper investigates the problem of inferring the topology of a network from nodal observations while taking into account the presence of hidden (latent) variables. Our schemes assume the number of observed nodes is considerably larger than the number of hidden variables and build on recent graph signal processing models to relate the signals and the underlying graph. Specifically, we go beyond classical correlation and partial correlation approaches and assume that the signals are smooth and/or stationary on the sought graph. The assumptions are codified into different constrained optimization problems, with the presence of hidden variables being explicitly taken into account. Since the resulting problems are ill-conditioned and non-convex, the block matrix structure of the proposed formulations is leveraged and suitable convex-regularized relaxations are presented. Numerical experiments over synthetic and real-world datasets showcase the performance of the developed methods and compare them with existing alternatives.

Graph Signal Processing - Part III: Machine Learning on Graphs, from Graph Topology to Applications

ArXiv, 2020

Many modern data analytics applications on graphs operate on domains where graph topology is not known a priori, and hence its determination becomes part of the problem definition, rather than serving as prior knowledge which aids the problem solution. Part III of this monograph starts by addressing ways to learn graph topology, from the case where the physics of the problem already suggest a possible topology, through to the most general cases where the graph topology is learned from the data. A particular emphasis is on graph topology definition based on the correlation and precision matrices of the observed data, combined with additional prior knowledge and structural conditions, such as the smoothness or sparsity of graph connections. For learning sparse graphs (with a small number of edges), the least absolute shrinkage and selection operator, known as LASSO, is employed, along with its graph-specific variant, graphical LASSO. For completeness, both variants of LASSO are derived in an...
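The graphical LASSO mentioned above estimates a sparse precision (inverse covariance) matrix, whose zero pattern defines the learned graph's missing edges. A short sketch using scikit-learn's `GraphicalLasso` on synthetic data with one known conditional independence (the data, seed, and penalty value are illustrative choices):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Ground-truth sparse precision: variables 0 and 2 interact only through 1,
# so Theta[0, 2] = 0 (no edge between nodes 0 and 2)
Theta = np.array([[ 2.0, -0.8,  0.0],
                  [-0.8,  2.0, -0.8],
                  [ 0.0, -0.8,  2.0]])
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(Theta), size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)
P = model.precision_
# The L1 penalty shrinks the spurious (0, 2) entry toward zero, recovering
# the conditional-independence structure, i.e. the sparse graph
print(abs(P[0, 2]) < abs(P[0, 1]))
```

Thresholding the off-diagonal entries of `P` then gives the edge set of the learned graph.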

A Signal-Processing-Based Approach to Time-Varying Graph Analysis for Dynamic Brain Network Identification

In recent years, there has been a growing need to analyze the functional connectivity of the human brain. Previous studies have focused on extracting static or time-independent functional networks to describe the long-term behavior of brain activity. However, a static network is generally not sufficient to represent the communication patterns of the brain and is considered an unreliable snapshot of functional connectivity. In this paper, we propose a dynamic network summarization approach to describe the time-varying evolution of connectivity patterns in functional brain activity. The proposed approach is based on first identifying key event intervals by quantifying the change in the connectivity patterns across time and then summarizing the activity in each event interval by extracting the most informative network using principal component decomposition. The proposed method is evaluated for characterizing time-varying network dynamics from event-related potential (ERP) data indexing the error-related negativity (ERN) component related to cognitive control. The statistically significant connectivity patterns for each interval are presented to illustrate the dynamic nature of functional connectivity.
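The first stage of this pipeline, detecting event boundaries from changes in windowed connectivity, can be sketched simply. The following uses non-overlapping correlation windows and a Frobenius-distance change score, which is a simplified stand-in for the paper's detector; the windowing scheme and synthetic two-channel signal are assumptions:

```python
import numpy as np

def window_connectivity(X, win):
    """Correlation matrices over non-overlapping windows of a (time x channel) signal."""
    return [np.corrcoef(X[t:t + win].T) for t in range(0, len(X) - win + 1, win)]

def change_scores(mats):
    """Frobenius distance between consecutive connectivity matrices;
    peaks suggest event-interval boundaries."""
    return [np.linalg.norm(mats[k + 1] - mats[k]) for k in range(len(mats) - 1)]

rng = np.random.default_rng(1)
# Synthetic 2-channel recording whose coupling flips sign at t = 200
common = rng.standard_normal(200)
first = np.column_stack([common, common]) + 0.1 * rng.standard_normal((200, 2))
second = np.column_stack([common, -common]) + 0.1 * rng.standard_normal((200, 2))
X = np.vstack([first, second])

scores = change_scores(window_connectivity(X, win=50))
print(int(np.argmax(scores)))  # largest change: between windows 3 and 4 (t = 200)
```

The second stage would then apply a principal component decomposition to the connectivity matrices within each detected interval to extract its most informative network.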

Decoding Time-Varying Functional Connectivity Networks via Linear Graph Embedding Methods

Frontiers in computational neuroscience, 2017

An exciting avenue of neuroscientific research involves quantifying the time-varying properties of functional connectivity networks. As a result, many methods have been proposed to estimate the dynamic properties of such networks. However, one of the challenges associated with such methods involves the interpretation and visualization of high-dimensional, dynamic networks. In this work, we employ graph embedding algorithms to provide low-dimensional vector representations of networks, thus facilitating traditional objectives such as visualization, interpretation and classification. We focus on linear graph embedding methods based on principal component analysis and regularized linear discriminant analysis. The proposed graph embedding methods are validated through a series of simulations and applied to fMRI data from the Human Connectome Project.
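The PCA-based variant of the linear embeddings described above amounts to vectorizing each connectivity matrix and projecting onto a few principal components. A minimal sketch on synthetic data with two network "states" (the toy matrices, sizes, and noise level are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

def vectorize(nets):
    """Stack the upper-triangular entries of each connectivity matrix into rows."""
    n = nets[0].shape[0]
    iu = np.triu_indices(n, k=1)
    return np.array([net[iu] for net in nets])

rng = np.random.default_rng(2)
n = 8
# Two synthetic network "states" plus small noise, standing in for a sequence
# of time-varying fMRI connectivity estimates
base_a, base_b = rng.random((n, n)), rng.random((n, n))
nets = [(b + b.T) / 2 + 0.05 * rng.random((n, n))
        for b in [base_a] * 10 + [base_b] * 10]

# Low-dimensional vector representation of each network
Z = PCA(n_components=2).fit_transform(vectorize(nets))
print(Z.shape)  # (20, 2) -- one 2-D point per network, ready to plot or classify
```

In the 2-D embedding the two states separate cleanly along the first component, which is what makes such representations useful for visualization and downstream classification.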

Modeling Spatio-Temporal Dynamics in Brain Networks: A Comparison of Graph Neural Network Architectures

2021

Comprehending the interplay between spatial and temporal characteristics of neural dynamics can contribute to our understanding of information processing in the human brain. Graph neural networks (GNNs) provide a new possibility to interpret graph-structured signals like those observed in complex brain networks. In our study we compare different spatiotemporal GNN architectures and study their ability to replicate neural activity distributions obtained in functional MRI (fMRI) studies. We evaluate the performance of the GNN models on a variety of scenarios in MRI studies and also compare it to a VAR model, which is currently predominantly used for directed functional connectivity analysis. We show that by learning localized functional interactions on the anatomical substrate, GNN based approaches are able to robustly scale to large network studies, even when available data are scarce. By including anatomical connectivity as the physical substrate for information propagation, such GN...

Graph-based Learning under Perturbations via Total Least-Squares

IEEE Transactions on Signal Processing, 2020

Graphs are pervasive in different fields unveiling complex relationships between data. Two major graph-based learning tasks are topology identification and inference of signals over graphs. Among the possible models to explain data interdependencies, structural equation models (SEMs) accommodate a gamut of applications involving topology identification. Obtaining conventional SEMs though requires measurements across nodes. On the other hand, typical signal inference approaches 'blindly trust' a given nominal topology. In practice however, signal or topology perturbations may be present in both tasks, due to model mismatch, outliers, outages or adversarial behavior. To cope with such perturbations, this work introduces a regularized total least-squares (TLS) approach and iterative algorithms with convergence guarantees to solve both tasks. Further generalizations are also considered relying on structured and/or weighted TLS when extra prior information on the perturbation is available. Analyses with simulated and real data corroborate the effectiveness of the novel TLS-based approaches.
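The total least-squares idea underlying this work treats perturbations on both sides of a linear model, rather than only on the observations. The classical SVD-based TLS solver can be sketched in a few lines (the synthetic regression problem and noise levels below are illustrative; the paper's regularized, graph-structured variants build on this core):

```python
import numpy as np

def tls(X, y):
    """Total least squares: account for perturbations in both the regressors X
    and the observations y via the SVD of the stacked matrix [X | y]."""
    n = X.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([X, y]))
    V = Vt.T
    # Coefficients come from the right singular vector of the smallest singular value
    return -V[:n, n:] / V[n:, n:]

rng = np.random.default_rng(3)
b_true = np.array([[2.0], [-1.0]])
X_clean = rng.standard_normal((500, 2))
y = X_clean @ b_true
# Perturb both sides -- the errors-in-variables setting TLS is designed for
X_noisy = X_clean + 0.01 * rng.standard_normal(X_clean.shape)
y_noisy = y + 0.01 * rng.standard_normal(y.shape)

b_hat = tls(X_noisy, y_noisy)
print(np.round(b_hat.ravel(), 2))  # close to [2, -1]
```

Ordinary least squares would attribute all noise to `y`; TLS instead finds the smallest joint perturbation of `[X | y]` that makes the system consistent, which is why it is better suited to the "perturbed topology and signals" setting of the paper.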