Hierarchical multi-resolution mesh networks for brain decoding

A Hierarchical Multi-resolution Mesh Network for Brain Decoding

arXiv: Neural and Evolutionary Computing, 2016

We propose a new framework, called the Hierarchical Multi-resolution Mesh Network (HMMN), which establishes a dynamic brain network for each time resolution of the fMRI signal to represent the underlying cognitive process. The edge weights of the multi-resolution network are then used to train an ensemble learning architecture, called Fuzzy Stacked Generalization (FSG), for brain decoding. The suggested framework first decomposes the fMRI signal into various time resolutions using wavelet transforms. Then, for each time resolution, a local mesh is formed around each brain region, where locality is defined with respect to a neighborhood system based on functional connectivity. The edge weights of each mesh are estimated by ridge regression. The local meshes are then combined to form a dynamic network at each time resolution. In the final step, the edge weights of the networks are used for brain decoding; this is achieved by fusing the multi-resolution edge weights and feeding them to the FSG architecture. Our results on the Human Connectome Project task fMRI dataset show that the suggested model, HMMN, can successfully discriminate tasks by extracting complementary information from the mesh edge weights of multiple subbands.
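
To make the pipeline concrete, the following is a minimal sketch of the multi-resolution mesh idea, assuming PyWavelets and scikit-learn; the wavelet family, mesh size p and ridge penalty are illustrative choices, not the paper's settings.

```python
# Sketch of the HMMN feature extraction idea (not the authors' code):
# decompose each region's time series into wavelet subbands, then, per subband,
# regress each region on its functionally nearest neighbors with ridge regression
# and keep the coefficients as mesh edge weights.
import numpy as np
import pywt
from sklearn.linear_model import Ridge

def mesh_edge_weights(X, p=5, alpha=1.0):
    """X: (T, R) time series of R regions; returns (R, p) ridge coefficients."""
    corr = np.corrcoef(X.T)                      # functional connectivity
    np.fill_diagonal(corr, -np.inf)              # exclude self-connections
    weights = np.zeros((X.shape[1], p))
    for r in range(X.shape[1]):
        neighbors = np.argsort(corr[r])[-p:]     # p functionally closest regions
        model = Ridge(alpha=alpha).fit(X[:, neighbors], X[:, r])
        weights[r] = model.coef_                 # edge weights of the local mesh
    return weights

def hmmn_features(X, wavelet="db4", level=3, p=5):
    """Concatenate mesh edge weights estimated in every wavelet subband."""
    coeffs = pywt.wavedec(X, wavelet, level=level, axis=0)   # one array per subband
    return np.concatenate([mesh_edge_weights(c, p=p).ravel() for c in coeffs])

# Example: 200 time points, 90 anatomical regions (synthetic data)
features = hmmn_features(np.random.randn(200, 90))
```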

A New Representation of fMRI Signal by a Set of Local Meshes for Brain Decoding

IEEE Transactions on Signal and Information Processing over Networks, 2017

How neurons influence each other's firing depends on the strength of the synaptic connections among them. Motivated by the highly interconnected structure of the brain, in this study we propose a computational model to estimate the relationships among voxels and employ them as features for cognitive state classification. We represent the sequence of functional Magnetic Resonance Imaging (fMRI) measurements recorded during a cognitive stimulus by a set of local meshes. Then, we represent the corresponding cognitive state by the edge weights of these meshes, each of which is estimated assuming a regularized linear relationship among the voxel time series in a predefined locality. The estimated mesh edge weights provide a better representation of the information in the brain for cognitive state or task classification. We examine the representative power of our mesh edge weights on visual recognition and emotional memory retrieval experiments by training a Support Vector Machine classifier. We also use mesh edge weights as feature vectors for inter-subject classification on the Human Connectome Project task fMRI dataset and test their performance. We observe that mesh edge weights outperform popular fMRI features, such as raw voxel intensity values, pairwise correlations, and features extracted using PCA and ICA, for classifying the cognitive states.
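
A minimal sketch of the classification step described above, assuming scikit-learn; the feature arrays and cross-validation setting are placeholders, not the paper's experimental protocol.

```python
# Illustrative comparison (not the paper's exact pipeline): classify cognitive
# states with a linear SVM, once from raw voxel intensities and once from mesh
# edge weight features such as those returned by mesh_edge_weights() above.
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def classify(features, labels):
    """features: (n_samples, n_features); returns mean cross-validated accuracy."""
    return cross_val_score(SVC(kernel="linear"), features, labels, cv=5).mean()

# raw_intensity:  voxel intensities per sample, (n_samples, n_voxels)   -- placeholder
# mesh_features:  flattened edge weights per sample, (n_samples, R * p) -- placeholder
# labels:         cognitive state of each sample
# print("raw:", classify(raw_intensity, labels))
# print("mesh:", classify(mesh_features, labels))
```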

Mesh learning for object classification using fMRI measurements

2013 IEEE International Conference on Image Processing, 2013

Machine learning algorithms have been widely used as reliable methods for modeling and classifying cognitive processes using functional Magnetic Resonance Imaging (fMRI) data. In this study, we aim to classify fMRI measurements recorded during an object recognition experiment. Previous studies focus on Multi-Voxel Pattern Analysis (MVPA), which feeds a set of active voxels, in concatenated vector form, to a machine learning algorithm to train and classify the cognitive processes. In most MVPA methods, after an image preprocessing step, the voxel intensity values are fed to a classifier to train and recognize the underlying cognitive process. Sometimes the fMRI data is further processed for de-noising or feature selection, where techniques such as the Generalized Linear Model (GLM), Independent Component Analysis (ICA) or Principal Component Analysis (PCA) are employed. Although these techniques have proved useful in MVPA, they do not model the spatial connectivity among the voxels. In this study, we attempt to represent the local relations among the voxel intensity values by forming a mesh network around each voxel to model the relationship between a voxel and its surroundings. The degree of connectivity of a voxel to its surroundings is represented by the arc weights of each mesh. The arc weights, which are estimated by a linear regression model, are fed to a classifier to discriminate the brain states during an object recognition task. This approach, called Mesh Learning, provides a powerful tool to analyze various cognitive states using fMRI data. Compared to traditional studies, which focus either merely on multi-voxel pattern vectors or on their reduced-dimension versions, the suggested Mesh Learning provides a better representation of the object recognition task. Various machine learning algorithms are tested to compare the suggested Mesh Learning to state-of-the-art MVPA techniques. The performance of Mesh Learning is shown to be higher than that of the available MVPA techniques.
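
A minimal sketch of a spatial mesh with least-squares arc weights, as described above; the voxel coordinate array, neighborhood size p, and use of a k-d tree are assumptions for illustration.

```python
# Sketch of spatial Mesh Learning: regress each seed voxel on its p spatially
# nearest voxels and keep the least-squares coefficients as arc weights.
import numpy as np
from scipy.spatial import cKDTree

def spatial_mesh_arc_weights(X, coords, p=6):
    """X: (T, V) voxel time series; coords: (V, 3) voxel coordinates.
    Returns (V, p) arc weights of the star mesh around each voxel."""
    tree = cKDTree(coords)
    weights = np.zeros((X.shape[1], p))
    for v in range(X.shape[1]):
        # p spatially nearest voxels (the query returns the seed voxel itself first)
        _, idx = tree.query(coords[v], k=p + 1)
        neighbors = idx[1:]
        # least-squares fit of the seed voxel on its neighbors
        w, *_ = np.linalg.lstsq(X[:, neighbors], X[:, v], rcond=None)
        weights[v] = w
    return weights
```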

Mesh Learning for Classifying Cognitive Processes

2012

A relatively recent advance in cognitive neuroscience has been multi-voxel pattern analysis (MVPA), which enables researchers to decode brain states and/or the type of information represented in the brain during a cognitive operation. MVPA methods utilize machine learning algorithms to distinguish among types of information or cognitive states represented in the brain, based on distributed patterns of neural activity. In the current investigation, we propose a new approach for the representation of neural data for pattern analysis, namely a Mesh Learning Model. In this approach, at each time instant, a star mesh is formed around each voxel, such that the voxel corresponding to the center node is surrounded by its p-nearest neighbors. The arc weights of each mesh are estimated from the voxel intensity values by the least squares method. The estimated arc weights of all the meshes, called Mesh Arc Descriptors (MADs), are then used to train a classifier, such as Neural Networks, k-Nearest Neighbor, Naïve Bayes or Support Vector Machines. The proposed Mesh Model was tested on neuroimaging data acquired via functional magnetic resonance imaging (fMRI) during a recognition memory experiment using categorized word lists, employing a previously established experimental paradigm. Results suggest that the proposed Mesh Learning approach can provide an effective algorithm for pattern analysis of brain activity during cognitive processing.
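
A minimal sketch of the classifier comparison on Mesh Arc Descriptors, assuming scikit-learn; the classifier settings are library defaults rather than the paper's tuned configurations.

```python
# Train several standard classifiers on Mesh Arc Descriptor (MAD) feature
# vectors and report cross-validated accuracy for each.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

classifiers = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="linear"),
    "Neural Network": MLPClassifier(max_iter=1000),
}

def compare(mads, labels):
    """mads: (n_samples, n_features) MAD vectors; labels: class of each sample."""
    return {name: cross_val_score(clf, mads, labels, cv=5).mean()
            for name, clf in classifiers.items()}
```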

Modeling the Sequence of Brain Volumes by Local Mesh Models for Brain Decoding

ArXiv, 2016

We represent the sequence of fMRI (Functional Magnetic Resonance Imaging) brain volumes recorded during a cognitive stimulus by a graph which consists of a set of local meshes. The corresponding cognitive process, encoded in the brain, is then represented by these meshes, each of which is estimated assuming a linear relationship among the voxel time series in a predefined locality. First, we define the concept of locality in two neighborhood systems, namely the spatial and functional neighborhoods. Then, we construct spatially and functionally local meshes around each voxel, called the seed voxel, by connecting it either to its spatial or functional p-nearest neighbors. The mesh formed around a voxel is a directed sub-graph with a star topology, where the direction of the edges is taken towards the seed voxel at the center of the mesh. We represent the time series recorded at each seed voxel as a linear combination of the time series of its p-nearest neighbors in the mesh. The re...
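
A minimal sketch contrasting the two neighborhood systems described above, spatial and functional; the data shapes and the k-d tree search are assumptions for illustration.

```python
# Two ways of choosing the p-nearest neighbors of a seed voxel:
# spatially, from voxel coordinates, or functionally, from time-series correlation.
import numpy as np
from scipy.spatial import cKDTree

def spatial_neighbors(coords, p):
    """coords: (V, 3); returns (V, p) indices of the spatially closest voxels."""
    _, idx = cKDTree(coords).query(coords, k=p + 1)
    return idx[:, 1:]                        # drop the seed voxel itself

def functional_neighbors(X, p):
    """X: (T, V) voxel time series; returns (V, p) most correlated voxels."""
    corr = np.corrcoef(X.T)
    np.fill_diagonal(corr, -np.inf)          # exclude self-correlation
    return np.argsort(corr, axis=1)[:, -p:]
```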

Functional Mesh Learning for pattern analysis of cognitive processes

2013 IEEE 12th International Conference on Cognitive Informatics and Cognitive Computing, 2013

We propose a statistical learning model for classifying cognitive processes based on distributed patterns of neural activation in the brain, acquired via functional magnetic resonance imaging (fMRI). In the proposed learning machine, local meshes are formed around each voxel. The distance between voxels in the mesh is determined by using the functional neighborhood concept. In order to define the functional neighborhood, the similarities between the time series recorded for the voxels are measured and functional connectivity matrices are constructed. Then, the local mesh for each voxel is formed by including the functionally closest neighboring voxels in the mesh. The relationship between the voxels within a mesh is estimated by using a linear regression model. These relationship vectors, called Functional Connectivity aware Local Relational Features (FC-LRF), are then used to train a statistical learning machine. The proposed method was tested on a recognition memory experiment, including data pertaining to the encoding and retrieval of words belonging to ten different semantic categories. Two popular classifiers, namely k-Nearest Neighbor and Support Vector Machine, are trained in order to predict the semantic category of the item being retrieved, based on activation patterns during encoding. For the ten semantic categories, the classification performance of the Functional Mesh Learning model, which ranges between 62% and 68%, is superior to that of the classical multi-voxel pattern analysis (MVPA) methods, which ranges between 40% and 48%.
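
A minimal sketch of the encoding-to-retrieval prediction described above, assuming scikit-learn; the variable names, data shapes and the value of k are placeholders.

```python
# FC-LRF vectors from the encoding phase train the classifiers, and
# retrieval-phase vectors are labelled with the predicted semantic category.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def predict_retrieval_categories(fclrf_encoding, categories, fclrf_retrieval):
    """Train on encoding-phase FC-LRF features, predict categories at retrieval."""
    knn = KNeighborsClassifier(n_neighbors=5).fit(fclrf_encoding, categories)
    svm = SVC(kernel="linear").fit(fclrf_encoding, categories)
    return knn.predict(fclrf_retrieval), svm.predict(fclrf_retrieval)
```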

Encoding the Local Connectivity Patterns of fMRI for Cognitive State Classification

ArXiv, 2016

In this work, we propose a novel framework to encode the local connectivity patterns of the brain using Fisher Vectors (FV), Vector of Locally Aggregated Descriptors (VLAD) and Bag-of-Words (BoW) methods. We first obtain local descriptors, called Mesh Arc Descriptors (MADs), from fMRI data by forming local meshes around anatomical regions and estimating their relationships within a neighborhood. Then, we extract a dictionary of relationships, called a brain connectivity dictionary, by fitting a generative Gaussian mixture model (GMM) to a set of MADs and selecting the codewords at the mean of each component of the mixture. Codewords represent the connectivity patterns among anatomical regions. We also encode MADs by the VLAD and BoW methods using k-means clustering. We classify the cognitive states of the Human Connectome Project (HCP) task fMRI dataset, where we train support vector machines (SVM) on the encoded MADs. Results demonstrate that FV encoding of MADs can be successfull...
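
A minimal sketch of the dictionary construction and two of the encodings (BoW and VLAD), assuming scikit-learn; the full Fisher Vector gradient computation is omitted, and the dictionary size K is an illustrative choice.

```python
# Fit a GMM "brain connectivity dictionary" to pooled MADs, then encode each
# sample's set of MADs as a BoW histogram or a VLAD residual vector.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def connectivity_dictionary(mads, K=32):
    """Codewords are the means of a GMM fitted to the pooled MADs."""
    return GaussianMixture(n_components=K).fit(mads).means_

def bow_encode(mads, kmeans):
    """Normalised histogram of codeword assignments for one sample's MADs."""
    counts = np.bincount(kmeans.predict(mads), minlength=kmeans.n_clusters)
    return counts / counts.sum()

def vlad_encode(mads, kmeans):
    """Per-codeword sum of residuals, flattened and L2-normalised."""
    assign = kmeans.predict(mads)
    v = np.zeros((kmeans.n_clusters, mads.shape[1]))
    for k in range(kmeans.n_clusters):
        if np.any(assign == k):
            v[k] = (mads[assign == k] - kmeans.cluster_centers_[k]).sum(axis=0)
    v = v.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

# kmeans = KMeans(n_clusters=32).fit(all_mads)   # all_mads: pooled training MADs
```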

Large Scale Functional Connectivity for Brain Decoding

Biomedical Engineering / 817: Robotics Applications, 2014

Functional Magnetic Resonance Imaging (fMRI) data consists of time series for each voxel recorded during a cognitive task. In order to extract useful information from this noisy and redundant data, techniques have been proposed to select the voxels that are relevant to the underlying cognitive task. We propose a simple and efficient algorithm for decoding brain states by modelling the correlation patterns between the voxel time series. For each stimulus during the experiment, a separate functional connectivity matrix is computed at the voxel level. The elements of the connectivity matrices are then filtered by making use of a minimum spanning tree, formed from a global connectivity matrix for the entire experiment, in order to reduce dimensionality. For a recognition memory experiment with nine subjects, functional connectivity matrices are computed for the encoding and retrieval phases. The class labels of the retrieval samples are predicted within a k-nearest neighbour space constructed from the traversed entries of the functional connectivity matrices of the encoding samples. The proposed method is also adapted to large-scale functional connectivity tasks by making use of graphics processing hardware. Classification performance over ten categories is comparable to, and in some cases better than, that of both classical and enhanced multi-voxel pattern analysis techniques.
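
A minimal sketch of the MST-based selection described above, assuming SciPy and scikit-learn; the distance transform 1 - |corr| and the value of k are assumptions for illustration.

```python
# A minimum spanning tree of a global connectivity matrix picks which pairwise
# entries of the per-stimulus connectivity matrices are kept as features for k-NN.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.neighbors import KNeighborsClassifier

def mst_edge_mask(global_corr):
    """Boolean (V, V) mask of the voxel pairs traversed by the MST."""
    mst = minimum_spanning_tree(1.0 - np.abs(global_corr)).toarray()
    return (mst != 0) | (mst.T != 0)

def select_features(sample_corrs, mask):
    """sample_corrs: (n_samples, V, V) per-stimulus connectivity matrices."""
    return np.array([c[mask] for c in sample_corrs])

# mask = mst_edge_mask(global_corr)
# knn = KNeighborsClassifier(n_neighbors=5).fit(select_features(encoding_corrs, mask), labels)
# predictions = knn.predict(select_features(retrieval_corrs, mask))
```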

Discriminative Functional Connectivity Measures for Brain Decoding

ArXiv, 2014

We propose a statistical learning model for classifying cognitive processes based on distributed patterns of neural activation in the brain, acquired via functional magnetic resonance imaging (fMRI). In the proposed learning method, local meshes are formed around each voxel. The distance between voxels in the mesh is determined by using a functional neighbourhood concept. In order to define the functional neighbourhood, the similarities between the time series recorded for the voxels are measured and functional connectivity matrices are constructed. Then, the local mesh for each voxel is formed by including the functionally closest neighbouring voxels in the mesh. The relationship between the voxels within a mesh is estimated by using a linear regression model. These relationship vectors, called Functional Connectivity aware Local Relational Features (FC-LRF), are then used to train a statistical learning machine. The proposed method was tested on a recognition memory experiment, includi...

Enhancing Local Linear Models Using Functional Connectivity for Brain State Decoding

International Journal of Cognitive Informatics and Natural Intelligence, 2013

The authors propose a statistical learning model for classifying cognitive processes based on distributed patterns of neural activation in the brain, acquired via functional magnetic resonance imaging (fMRI). In the proposed learning machine, local meshes are formed around each voxel. The distance between voxels in the mesh is determined by using the functional neighborhood concept. In order to define the functional neighborhood, the similarities between the time series recorded for the voxels are measured and functional connectivity matrices are constructed. Then, the local mesh for each voxel is formed by including the functionally closest neighboring voxels in the mesh. The relationship between the voxels within a mesh is estimated by using a linear regression model. These relationship vectors, called Functional Connectivity aware Mesh Arc Descriptors (FC-MAD), are then used to train a statistical learning machine. The proposed method was tested on a recognition memory experiment, including d...