Hamid Soltanian-Zadeh | Henry Ford Health System

Papers by Hamid Soltanian-Zadeh

Research paper thumbnail of A fast and hardware mimicking analytic CT simulator

2013 IEEE Nuclear Science Symposium and Medical Imaging Conference (2013 NSS/MIC)

Different algorithms have been utilized for x-ray computed tomography (CT) simulation, based on the Monte Carlo technique, analytic calculation, or a combination of the two. Software packages based on Monte Carlo algorithms provide sophisticated calculations, but their time-consuming nature limits their applicability. Analytic calculation for CT simulation has also been evaluated in recent years; because analytic methods ignore basic physical processes, they have limited applications. In this study, a hardware-mimicking algorithm has been developed to accurately model the CT imaging chain using analytic calculation. The model includes x-ray spectrum generation according to the pre-defined scanning protocol. The detector is designed to acquire data in either integral or spectral mode. The CT geometry can be parallel or fan beam with different sizes. A Poisson noise model was applied to the acquired projection data. A variety of projection-based computerized phantoms have been designed and implemented in the simulator. The CT number and background noise of the simulated images have been compared with experimental data. On average, the relative differences between simulated and experimental HUs are 8.3%, 7.5%, and 8.0% for bone; 12.1%, 10.3%, and 7.8% for contrast agent; and 16.6%, 3.6%, and 5.2% for the background at 80 kVp/500 mAs, 120 kVp/250 mAs, and 140 kVp/125 mAs, respectively. The relative differences between simulated and experimental noise values vary between 2% and slightly less than 26%. For scanning and image generation on a computer equipped with an Intel Core2 Quad CPU and 2.0 GB of RAM, the simulator takes about 32 seconds to generate a 512×512 single-slice image when adjusted to acquire 900 projection angles with 20 mm slice thickness and a 140 kVp/200 mAs scanning protocol. The simulation time is independent of photon intensity.
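The Poisson noise step described in the abstract can be sketched in a few lines; this is a minimal pure-Python illustration (function names and the Knuth sampler are ours, not the authors' implementation), showing noisy detector counts converted to line integrals via the Beer-Lambert relation:

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson-distributed count via Knuth's algorithm (fine for modest lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def add_poisson_noise(expected_counts, rng=None):
    """Replace each expected detector count with a Poisson-distributed realization."""
    rng = rng or random.Random(0)
    return [poisson_sample(lam, rng) for lam in expected_counts]

def counts_to_line_integrals(noisy_counts, incident_count):
    """Convert detector counts back to attenuation line integrals: ln(I0 / I)."""
    return [math.log(incident_count / max(c, 1)) for c in noisy_counts]
```

A detector pixel expecting 100 counts would, on average, still read near 100, but with Poisson fluctuations that propagate into the reconstructed image noise.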

Research paper thumbnail of Using image-extracted features to determine heart rate and blink duration for driver sleepiness detection

ArXiv, 2019

Heart rate and blink duration are two vital physiological signals which give information about cardiac activity and consciousness. Monitoring these two signals is crucial for various applications such as driver drowsiness detection. As there are several problems posed by conventional systems used for continuous, long-term monitoring, a remote blink and ECG monitoring system can be used as an alternative. For estimating the blink duration, two strategies are used. In the first approach, pictures of open and closed eyes are fed into an Artificial Neural Network (ANN) to decide whether the eyes are open or closed. In the second approach, they are classified and labeled using Linear Discriminant Analysis (LDA). The labeled images are then used to determine the blink duration. For heart rate variability, two strategies are used to evaluate the passing blood volume: Independent Component Analysis (ICA) and a chrominance-based method. Eye recognition yielded 78-92% accuracy in...
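Once each frame is labeled open or closed, blink duration is just the length of each run of closed frames divided by the frame rate. A minimal sketch (our own helper, not the paper's code):

```python
def blink_durations(labels, fps):
    """Given per-frame open(0)/closed(1) labels and the camera frame rate,
    return the duration in seconds of each blink (run of closed frames)."""
    durations, run = [], 0
    for lab in labels:
        if lab == 1:
            run += 1
        elif run:
            durations.append(run / fps)
            run = 0
    if run:  # a blink still in progress when the sequence ends
        durations.append(run / fps)
    return durations
```

For example, at 30 fps a run of three closed frames corresponds to a 0.1 s blink.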

Research paper thumbnail of Automatic and Manual Segmentation of Hippocampus in Epileptic Patients MRI

ArXiv, 2016

The hippocampus is a seminal structure in the most common surgically-treated form of epilepsy. Accurate segmentation of the hippocampus aids in establishing asymmetry regarding size and signal characteristics in order to disclose the likely site of epileptogenicity. With sufficient refinement, it may ultimately aid in the avoidance of invasive monitoring with its expense and risk for the patient. To this end, a reliable and consistent method for segmentation of the hippocampus from magnetic resonance imaging (MRI) is needed. In this work, we present a systematic and statistical analysis approach for evaluation of automated segmentation methods in order to establish one that reliably approximates the results achieved by manual tracing of the hippocampus.
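A standard way to score how well an automated segmentation approximates a manual tracing is the Dice overlap coefficient. A minimal sketch over voxel-index sets (an illustration of the general metric, not necessarily the statistic the paper uses):

```python
def dice(mask_a, mask_b):
    """Dice overlap between two binary segmentations given as sets of voxel
    indices: 2|A ∩ B| / (|A| + |B|); 1.0 means perfect agreement."""
    if not mask_a and not mask_b:
        return 1.0  # both empty: trivially identical
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))
```

Two hippocampus masks sharing half their voxels would score 0.5; published automated-vs-manual hippocampus comparisons typically aim well above 0.8.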

Research paper thumbnail of Localization of Epileptic Foci Based on Simultaneous EEG–fMRI Data

Frontiers in Neurology

Combining functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) enables a non-invasive investigation of human brain function and evaluation of the correlation of these two important modalities of brain activity. This paper explores recent reports on advanced simultaneous EEG–fMRI methods proposed to map the regions and networks involved in focal epileptic seizure generation. One valuable clinical application of the EEG and fMRI combination is the pre-surgical evaluation of patients with epilepsy to map and localize the precise brain regions associated with epileptiform activity. In conventional analysis of EEG–fMRI data, the interictal epileptiform discharges (IEDs) are visually extracted from the EEG data and convolved as binary events with a predefined hemodynamic response function (HRF) to provide a model of epileptiform BOLD activity, used as a regressor for general linear model (GLM) analysis of the fMRI ...
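The conventional pipeline described above (binary IED events convolved with an HRF to form a GLM regressor) can be sketched as follows; the double-gamma parameters are the common SPM-style defaults, assumed here for illustration rather than taken from the paper:

```python
import math

def hrf(t, a1=6.0, a2=16.0, ratio=1 / 6):
    """Canonical double-gamma HRF sampled at time t (seconds); a1/a2/ratio are
    assumed SPM-style defaults, not parameters reported by the paper."""
    if t < 0:
        return 0.0
    g = lambda t, a: t ** (a - 1) * math.exp(-t) / math.gamma(a)
    return g(t, a1) - ratio * g(t, a2)

def ied_regressor(events, n_scans, tr):
    """Convolve a binary IED event train (one entry per scan) with the HRF
    to build a GLM regressor for epileptiform BOLD activity."""
    kernel = [hrf(i * tr) for i in range(int(32 / tr) + 1)]  # ~32 s of HRF
    reg = [0.0] * n_scans
    for i, ev in enumerate(events[:n_scans]):
        if ev:
            for j, k in enumerate(kernel):
                if i + j < n_scans:
                    reg[i + j] += k
    return reg
```

With a TR of 2 s, a single IED produces a regressor bump peaking roughly 5-6 seconds after the event, as expected for the canonical HRF.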

Research paper thumbnail of Prospective Quantitative Neuroimaging Analysis of Putative Temporal Lobe Epilepsy

Frontiers in Neurology

Purpose: A prospective study of individual and combined quantitative imaging applications for lateralizing epileptogenicity was performed in a cohort of consecutive patients with a putative diagnosis of mesial temporal lobe epilepsy (mTLE). Methods: Quantitative metrics were applied to MRI and nuclear medicine imaging studies as part of a comprehensive presurgical investigation. The neuroimaging analytics were conducted remotely to remove bias. All quantitative lateralizing tools were trained using a separate dataset. Outcomes were determined after 2 years. Of those treated, some underwent resection, and others were implanted with a responsive neurostimulation (RNS) device. Results: Forty-eight consecutive cases underwent evaluation using nine attributes of individual or combinations of neuroimaging modalities: 1) hippocampal volume, 2) FLAIR signal, 3) PET profile, 4) multistructural analysis (MSA), 5) multimodal model analysis (MMM), 6) DTI uncertainty analysis, 7) DTI connectivity,...

Research paper thumbnail of Automatic Detection of Coronavirus (COVID-19) from Chest CT Images using VGG16-Based Deep-Learning

2020 27th National and 5th International Iranian Conference on Biomedical Engineering (ICBME)

In recent months, coronavirus disease 2019 (COVID-19) has infected millions of people worldwide. In addition to clinical tests like reverse transcription-polymerase chain reaction (RT-PCR), medical imaging techniques such as computed tomography (CT) can be used as a rapid technique to detect and evaluate patients infected by COVID-19. Conventionally, CT-based COVID-19 classification is done by a radiology expert. In this paper, we present a deep learning-based Convolutional Neural Network (CNN) model we developed for the classification of COVID-19 positive patients from healthy subjects using chest CT. We used 10979 chest CT images of 131 patients with COVID-19 and 150 healthy subjects for training, validating, and testing the proposed model. Evaluation of the results showed a precision of 92%, sensitivity of 90%, specificity of 91%, F1-score of 0.91, and accuracy of 90%. We used the regions of infection segmented by a radiologist to increase the generalization and reliability of the results. The plotted heatmaps show that the developed model focused only on the lung regions infected by COVID-19 to make its decisions.
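The five evaluation metrics quoted above all derive from the same confusion-matrix counts; a minimal sketch of those definitions (our helper, shown for clarity rather than taken from the paper's code):

```python
def classification_metrics(tp, fp, tn, fn):
    """Precision, sensitivity (recall), specificity, F1-score, and accuracy
    from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)        # true positive rate
    specificity = tn / (tn + fp)        # true negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, sensitivity, specificity, f1, accuracy
```

For a balanced test set, an accuracy of 90% with precision 92% and sensitivity 90% implies specificity close to the reported 91%, which is consistent with the abstract's numbers.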

Research paper thumbnail of Cloud-based deep learning of big EEG data for epileptic seizure prediction

2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP)

Developing a Brain-Computer Interface (BCI) for seizure prediction can help epileptic patients have a better quality of life. However, there are many difficulties and challenges in developing such a system as a real-life support for patients. Because of the nonstationary nature of EEG signals, normal and seizure patterns vary across different patients. Thus, finding a group of manually extracted features for the prediction task is not practical. Moreover, massive amounts of data are produced when implanted electrodes are used for brain recording. This big data calls for safe storage and high computational resources for real-time processing. To address these challenges, a cloud-based BCI system for the analysis of this big EEG data is presented. First, a dimensionality-reduction technique is developed to increase classification accuracy as well as to decrease the communication bandwidth and computation time. Second, following a deep-learning approach, a stacked autoencoder is trained in two steps for unsupervised feature extraction and classification. Third, a cloud-computing solution is proposed for real-time analysis of big EEG data. The results on a benchmark clinical dataset illustrate the superiority of the proposed patient-specific BCI as an alternative method and its expected usefulness in real-life support of epilepsy patients.

Research paper thumbnail of Improved dynamic connection detection power in estimated dynamic functional connectivity considering multivariate dependencies between brain regions

Human Brain Mapping

To estimate dynamic functional connectivity (dFC), the conventional method of sliding window correlation (SWC) suffers from poor detection of dynamic connections. This stems from its equal weighting of observations, suboptimal time scale, nonsparse output, and the fact that it is bivariate. To overcome these limitations, we exploited the kernel-reweighted logistic regression (KELLER) algorithm, a method common in genetic studies, to estimate dFC in resting-state functional magnetic resonance imaging (rs-fMRI) data. KELLER can estimate dFC by estimating both spatial and temporal patterns of functional connectivity between brain regions. This paper compares the performance of the proposed KELLER method with current methods (SWC and tapered SWC (T-SWC) with different window lengths) based on both simulated and real rs-fMRI data. Estimated dFC networks were assessed for detecting dynamically connected brain region pairs with hypothesis testing. Simulation results revealed that KELLER can detect dynamic connections with a statistical power of 87.35%, compared with 70.17% and 58.54% for T-SWC (p-value = .001) and SWC (p-value < .001), respectively. Results of these different methods applied to real rs-fMRI data were investigated in two respects: the similarity between the identified mean dynamic patterns, and the dynamic pattern identified in the default mode network (DMN). In 68% of subjects, the results of T-SWC with a window length of 100 s demonstrated the highest similarity to those of KELLER among the different window lengths. With regard to the DMN, KELLER estimated previously reported dynamic connection pairs between dorsal and ventral DMN, whereas the SWC-based method was unable to detect these dynamic connections.
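The SWC baseline that KELLER is compared against is straightforward: a Pearson correlation recomputed in each (equally weighted) window as it slides along two regional time series. A minimal pure-Python sketch of that baseline, not of KELLER itself:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def sliding_window_correlation(ts1, ts2, win, step=1):
    """dFC estimate between two regional time series: one correlation value
    per window position -- the equally-weighted SWC the paper improves on."""
    return [pearson(ts1[s:s + win], ts2[s:s + win])
            for s in range(0, len(ts1) - win + 1, step)]
```

Tapered SWC (T-SWC) differs only in weighting samples within each window, e.g. by a Gaussian taper, instead of weighting them equally.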


Research paper thumbnail of A Framework for Intracranial Saccular Aneurysm Detection and Quantification using Morphological Analysis of Cerebral Angiograms

IEEE Access

Reliable early prediction of aneurysm rupture can greatly help neurosurgeons treat aneurysms at the right time, saving lives as well as providing significant cost reduction. Most research efforts in this respect involve statistical analysis of collected data or simulation of hemodynamic factors to predict the risk of aneurysmal rupture, whereas morphological analysis of cerebral angiogram images for locating and estimating unruptured aneurysms is rarely considered. Since digital subtraction angiography (DSA) is regarded as a standard test by the American Stroke Association and the American College of Radiology for identification of aneurysms, this paper aims to perform morphological analysis of DSA to accurately detect saccular aneurysms, precisely determine their sizes, and estimate the probability of their rupture. The proposed diagnostic framework, intracranial saccular aneurysm detection and quantification, first extracts cerebrovascular structures by denoising angiogram images and delineates regions of interest (ROIs) using watershed segmentation and distance transformation. It then identifies saccular aneurysms among segmented ROIs using a multilayer perceptron neural network trained on robust Haralick texture features, and finally quantifies aneurysm rupture by geometrical analysis of the identified aneurysmal ROI. A de-identified data set of 59 angiograms is used to evaluate the performance of the algorithms for aneurysm detection and rupture-risk quantification. The proposed framework achieves high accuracies of 98% and 86% for aneurysm classification and quantification, respectively. INDEX TERMS: Computer-assisted diagnosis, digital subtraction angiography (DSA), intracranial saccular aneurysm, rupture quantification, Haralick features, GLCM, GLRLM, multilayer perceptron (MLP) neural network.
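The Haralick features mentioned in the index terms are statistics of a gray-level co-occurrence matrix (GLCM). A minimal sketch of a GLCM for one pixel offset and two classic Haralick statistics (our simplified illustration; the paper's feature set is larger and its offsets/levels are not specified here):

```python
def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one offset; `img` is a
    2D list of integer gray levels in [0, levels)."""
    mat = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                mat[img[y][x]][img[y2][x2]] += 1
                total += 1
    return [[v / total for v in row] for row in mat]

def haralick_contrast(p):
    """Large when co-occurring levels differ strongly (edgy texture)."""
    return sum(p[i][j] * (i - j) ** 2 for i in range(len(p)) for j in range(len(p)))

def haralick_homogeneity(p):
    """Large when co-occurring levels are similar (smooth texture)."""
    return sum(p[i][j] / (1 + abs(i - j)) for i in range(len(p)) for j in range(len(p)))
```

A flat region yields contrast 0 and homogeneity 1, while a checkerboard maximizes contrast, which is why such features help separate vessel texture from aneurysm texture.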

Research paper thumbnail of Computer-Aided Diagnosis System for the Evaluation of Thyroid Nodules on Ultrasonography: Prospective Non-Inferiority Study according to the Experience Level of Radiologists

Korean Journal of Radiology

Objective: To determine whether a computer-aided diagnosis (CAD) system for the evaluation of thyroid nodules is non-inferior to radiologists with different levels of experience. Materials and Methods: Patients with thyroid nodules with a decisive diagnosis of benign or malignant nodule were consecutively enrolled from November 2017 to September 2018. Three radiologists with different levels of experience (1 month, 4 years, and 7 years) in thyroid ultrasound (US) reviewed the thyroid US with and without the CAD system. Statistical analyses included non-inferiority testing of the diagnostic accuracy for malignant thyroid nodules between the CAD system and the three radiologists with a non-inferiority margin of 10%, comparison of diagnostic performance, and the added value of the CAD system to the radiologists. Results: Altogether, 197 patients were included in the study cohort. The diagnostic accuracy of the CAD system (88.5%, 95% confidence interval [CI] = 82.7-92.5) was non-inferior to that of the radiologists with less experience (1 month and 4 years) of thyroid US (83.0%, 95% CI = 76.5-88.0; p < 0.001), whereas it was inferior to that of the experienced radiologist (7 years) (95.8%, 95% CI = 91.4-98.0; p = 0.138). The sensitivity and negative predictive value of the CAD system were significantly higher than those of the less-experienced radiologists, whereas no significant difference was found with those of the experienced radiologist. A combination of US and the CAD system significantly improved sensitivity and negative predictive value, although the specificity and positive predictive value deteriorated for the less-experienced radiologists. Conclusion: The CAD system may offer support for decision-making in the diagnosis of malignant thyroid nodules for operators who have less experience with thyroid US.
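The non-inferiority logic with a 10% margin can be sketched as a confidence-interval check on the accuracy difference. This is a deliberately simplified illustration (a normal approximation treating the two proportions as independent, which is not the paper's paired analysis); the accuracy values below merely echo the magnitudes reported above:

```python
import math

def non_inferior(acc_test, acc_ref, n, margin=0.10, z=1.96):
    """Approximate 95% CI for the accuracy difference; the test method is
    declared non-inferior if the CI's lower bound stays above -margin.
    Simplification: unpaired normal approximation, not the study's method."""
    diff = acc_test - acc_ref
    se = math.sqrt(acc_test * (1 - acc_test) / n + acc_ref * (1 - acc_ref) / n)
    lower = diff - z * se
    return lower > -margin, lower
```

With accuracies like 88.5% vs 83.0% in ~197 patients, the lower bound of the difference comfortably clears a -10% margin, matching the qualitative conclusion of non-inferiority to the less-experienced readers.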

Research paper thumbnail of Enhancing performance of subject-specific models via subject-independent information for SSVEP-based BCIs

PLOS ONE

Recently, steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) have attracted much attention due to their high information transfer rate (ITR) and increasing number of targets. However, the performance of SSVEP-based methods in terms of accuracy and the time length required for target detection can be improved. We propose a new canonical correlation analysis (CCA)-based method to integrate subject-specific models and subject-independent information and enhance BCI performance. To optimize the hyperparameters of the CCA-based model for a specific subject, we propose using the training data of other subjects. An ensemble version of the proposed method is also developed and used for a fair comparison with ensemble task-related component analysis (TRCA). A publicly available 35-subject SSVEP benchmark dataset is used to evaluate the different methods. The proposed method is compared with the TRCA and extended CCA methods as references. Performance is evaluated using classification accuracy and ITR. Offline analysis results show that the proposed method reaches the highest ITR compared with TRCA and extended CCA. The proposed method also significantly improves on the performance of extended CCA in all conditions and on TRCA for time windows greater than 0.3 s. In addition, the proposed method outperforms TRCA for low numbers of training blocks and electrodes. This study illustrates that adding subject-independent information to subject-specific models can improve the performance of SSVEP-based BCIs.
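The core of CCA-based SSVEP detection is correlating the EEG against sine/cosine references at each candidate stimulus frequency and its harmonics, then picking the best-matching frequency. The sketch below is a single-channel simplification (summed squared correlations instead of true multichannel CCA) and is ours, not the paper's method:

```python
import math

def references(freq, fs, n, harmonics=2):
    """Sin/cos reference signals at the stimulus frequency and its harmonics."""
    refs = []
    for h in range(1, harmonics + 1):
        refs.append([math.sin(2 * math.pi * h * freq * i / fs) for i in range(n)])
        refs.append([math.cos(2 * math.pi * h * freq * i / fs) for i in range(n)])
    return refs

def detect_frequency(signal, candidates, fs):
    """Pick the candidate frequency whose reference set best matches the
    signal -- a single-channel stand-in for CCA-based target detection."""
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0
    scores = {f: sum(corr(signal, r) ** 2 for r in references(f, fs, len(signal)))
              for f in candidates}
    return max(scores, key=scores.get)
```

True CCA additionally learns a spatial filter across electrodes, which is where subject-specific and subject-independent information can be injected.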

Research paper thumbnail of Data mining MR image features of select structures for lateralization of mesial temporal lobe epilepsy

PloS one, 2018

This study systematically investigates the predictive power of volumetric imaging feature sets extracted from select neuroanatomical sites in lateralizing the epileptogenic focus in mesial temporal lobe epilepsy (mTLE) patients. A cohort of 68 unilateral mTLE patients who had achieved an Engel class I outcome postsurgically was studied retrospectively. The volumes of multiple brain structures were extracted from preoperative magnetic resonance (MR) images in each patient. The MR image data set consisted of 54 patients with imaging evidence of hippocampal sclerosis (HS-P) and 14 patients without (HS-N). Data mining techniques (i.e., feature extraction, feature selection, machine learning classifiers) were applied to provide measures of the relative contributions of structures and their correlations with one another. After removing redundant correlated structures, a minimum set of structures was determined as a marker for mTLE lateralization. Using a logistic regression classifier, the volum...

Research paper thumbnail of Neonatal brain resting-state functional connectivity imaging modalities

Photoacoustics, 2018

Infancy is the most critical period in human brain development. Studies demonstrate that subtle brain abnormalities during this stage of life may greatly affect the developmental processes of newborn infants. One of the rapidly developing methods for early characterization of abnormal brain development is functional connectivity of the brain at rest. While the majority of resting-state studies have been conducted using magnetic resonance imaging (MRI), there is clear evidence that resting-state functional connectivity (rs-FC) can also be evaluated using other imaging modalities. The aim of this review is to compare the advantages and limitations of the different modalities used for mapping infants' brain functional connectivity at rest. In addition, we introduce photoacoustic tomography, a novel functional neuroimaging modality, as a complementary modality for functional mapping of infants' brains.

Research paper thumbnail of Random ensemble learning for EEG classification

Artificial intelligence in medicine, 2018

Real-time detection of seizure activity in epilepsy patients is critical in averting seizures and improving patients' quality of life. Accurate evaluation, presurgical assessment, seizure prevention, and emergency alerts all depend on the rapid detection of seizure onset. A new method of feature selection and classification for rapid and precise seizure detection is discussed, wherein informative components of electroencephalogram (EEG)-derived data are extracted and an automatic method is presented using infinite independent component analysis (I-ICA) to select independent features. The feature space is divided into subspaces via random selection, and multichannel support vector machines (SVMs) are used to classify these subspaces. The result of each classifier is then combined by majority voting to establish the final output. In addition, a random subspace ensemble using a combination of SVM, multilayer perceptron (MLP) neural network and an extended k-nearest neighbors ...
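The combination step, majority voting over the per-subspace classifiers, is simple to state precisely. A minimal sketch (our helper functions; each "classifier" is any callable mapping a sample to a label):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine classifier outputs for one sample by majority voting; ties
    break in favor of the label seen first (Counter preserves insertion order)."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(classifiers, sample):
    """Run every subspace classifier on the sample and vote on the result."""
    return majority_vote([clf(sample) for clf in classifiers])
```

In the paper's setting, each voter is an SVM trained on one randomly selected feature subspace; mixing SVM, MLP, and k-NN voters uses the same combination rule.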

Research paper thumbnail of Spatiotemporal features of DCE-MRI for breast cancer diagnosis

Computer methods and programs in biomedicine, 2018

Breast cancer is a major cause of mortality among women if not treated in its early stages. Previous works developed non-invasive diagnosis methods using imaging data, focusing on specific sets of features that can be called spatial features or temporal features. However, a limited set of features carries limited information, requiring complex classification methods to diagnose the disease. For non-invasive diagnosis, different imaging modalities can be used. DCE-MRI is one of the best imaging techniques in that it provides temporal information about the kinetics of the contrast agent in suspicious lesions along with acceptable spatial resolution. We have extracted and studied a comprehensive set of features from the spatiotemporal space to obtain the maximum available information from the DCE-MRI data. Then, we have applied a feature fusion technique to remove common information and extract a feature set with maximum information to be used by a simple classification method. We have also implemented conv...

Research paper thumbnail of A time local subset feature selection for prediction of sudden cardiac death from ECG signal

Medical & biological engineering & computing, Jan 14, 2017

Prediction of sudden cardiac death (SCD) continues to gain universal attention as a promising approach to saving millions of lives threatened by it. This study attempts to move the literature beyond mere feature extraction analysis toward strategies for manipulating the extracted features to improve classification accuracy. To this end, a novel approach to local feature subset selection is applied using methodologies developed in this team's previous studies for extracting features from non-linear, time-frequency, and classical processes. We are therefore able to select features that differ from one another in each 1-min interval before the incident. Using the proposed algorithm, SCD can be predicted 12 min before onset; thus, more propitious results are achieved. Additionally, through defining a utility function and employing statistical analysis, the alarm threshold has effectively been determined as 83%. Having selected t...

Research paper thumbnail of Improved prediction of outcome in Parkinson's disease using radiomics analysis of longitudinal DAT SPECT images

NeuroImage: Clinical

No disease-modifying therapies for Parkinson's disease (PD) have been found effective to date. To properly power clinical trials for the discovery of such therapies, the ability to predict outcome in PD is critical, and there is a significant need for prognostic biomarkers of PD. Dopamine transporter (DAT) SPECT imaging is widely used for diagnostic purposes in PD. In the present work, we aimed to evaluate whether longitudinal DAT SPECT imaging can significantly improve prediction of outcome in PD patients. In particular, we investigated whether radiomics analysis of DAT SPECT images, in addition to conventional non-imaging and imaging measures, could be used to predict motor severity at year 4 in PD subjects. We selected 64 PD subjects (38 male, 26 female; age at baseline (year 0): 61.9 ± 7.3, range [46,78]) from the Parkinson's Progression Markers Initiative (PPMI) database. Inclusion criteria included (i) having had at least 2 SPECT scans at years 0 and 1 acquired on a similar scanner, (ii) having undergone a high-resolution 3 T MRI scan, and (iii) having a motor assessment (MDS-UPDRS-III) available in year 4, used as the outcome measure. Image analysis included automatic region-of-interest (ROI) extraction on MRI images, registration of SPECT images onto the corresponding MRI images, and extraction of radiomic features. Non-imaging predictors included demographics, disease duration, and motor and non-motor clinical measures in years 0 and 1. The image predictors included 92 radiomic features extracted from the caudate, putamen, and ventral striatum of DAT SPECT images at years 0 and 1 to quantify heterogeneity and texture in uptake. Random forest (RF) analysis with 5000 trees was used to combine both non-imaging and imaging variables to predict motor outcome (UPDRS-III: 27.3 ± 14.7, range [3,77]).
The RF prediction was evaluated using leave-one-out cross-validation. Our results demonstrated that addition of radiomic features to conventional measures significantly improved (p < 0.001) prediction of outcome, reducing the absolute error of predicting MDS-UPDRS-III from 9.00 ± 0.88 to 4.12 ± 0.43. This shows that radiomics analysis of DAT SPECT images has a significant potential towards development of effective prognostic biomarkers in PD.
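The leave-one-out evaluation reported above has a simple shape: train on all subjects but one, predict the held-out subject, and record the absolute error. A minimal sketch with a deliberately trivial "model" (predict the training mean) standing in for the random forest, which is outside this sketch's scope:

```python
def loocv_abs_errors(features, targets, fit, predict):
    """Leave-one-out cross-validation: for each subject, train on the rest
    and record the absolute prediction error on the held-out subject."""
    errors = []
    for i in range(len(targets)):
        train_x = features[:i] + features[i + 1:]
        train_y = targets[:i] + targets[i + 1:]
        model = fit(train_x, train_y)
        errors.append(abs(predict(model, features[i]) - targets[i]))
    return errors

# Trivial stand-in model: ignore features, predict the training-target mean.
fit_mean = lambda xs, ys: sum(ys) / len(ys)
predict_mean = lambda model, x: model
```

Averaging the returned errors gives the mean absolute error figure of merit; swapping in a real regressor only changes the `fit`/`predict` callables.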

Research paper thumbnail of Structured and Sparse Canonical Correlation Analysis as a Brain-Wide Multi-Modal Data Fusion Approach

IEEE Transactions on Medical Imaging, 2017

Multi-modal data fusion has recently emerged as a comprehensive neuroimaging analysis approach, which usually uses canonical correlation analysis (CCA). However, current CCA-based fusion approaches face problems such as high dimensionality, multicollinearity, unimodal feature selection, asymmetry, and loss of spatial information when reshaping the imaging data into vectors. This paper proposes a structured and sparse CCA (ssCCA) technique as a novel CCA method to overcome these problems. To investigate the performance of the proposed algorithm, we have compared three data fusion techniques: standard CCA, regularized CCA, and ssCCA, and evaluated their ability to detect multi-modal data associations. We have used simulations to compare the performance of these approaches and to probe the effects of the non-negativity constraint, the dimensionality of features, sample size, and noise power. The results demonstrate that ssCCA outperforms the existing standard and regularized CCA-based fusion approaches. We have also applied the methods to real functional magnetic resonance imaging (fMRI) and structural MRI data of Alzheimer's disease (AD) patients (n = 34) and healthy control (HC) subjects (n = 42) from the ADNI database. The results illustrate that the proposed unsupervised technique differentiates the transition pattern between the subject-course of AD patients and HC subjects.

Research paper thumbnail of Multiscale cancer modeling: In the line of fast simulation and chemotherapy

Mathematical and Computer Modelling, 2009

Although multiscale cancer modeling provides a realistic view of the process of tumor growth, its numerical algorithm is time consuming. It is therefore problematic to run and to find the best treatment plan for chemotherapy, even for a small tissue size. Using an artificial neural network, this paper simulates the multiscale cancer model faster than its numerical algorithm. To find the best treatment plan, it suggests applying a simpler avascular model, the Gompertz model. Using these proposed methods, multiscale cancer modeling may be extendable to chemotherapy for a realistic tissue size. To simulate the multiscale model, a hierarchical neural network called the Nested Hierarchical Self-Organizing Map (NHSOM) is used. The basis of the NHSOM is an enhanced version of the SOM with an adaptive vigilance parameter. This parameter and the overall bottom-up design guarantee the quality of clustering, and the embedded top-down architecture reduces computational complexity. Although simulation with the NHSOM runs faster than the numerical algorithm, it is still not possible to search even a simple treatment search space exhaustively. As a result, a set containing the best treatment plans of a simpler model (Gompertz) is used. Additionally, it is assumed in this paper that the distribution of drug in vessels has a linear relation with the blood flow rate. The technical advantage of this assumption is that, through a simple linear relation, a given diffusion of a drug dosage may be scaled to the desired one. By extracting a proper feature vector from the multiscale model and using the NHSOM, the scaled best treatment plans of the Gompertz model are applied to a small tissue size. In addition, simulating the effect of stress reduction on normal tissue after chemotherapy is another advantage of using the NHSOM, which is a kind of "emergent" behavior.
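The Gompertz model used as the simpler avascular surrogate has a closed form, which is exactly why its treatment-plan search space is cheap to explore. A minimal sketch (parameter names are ours; the paper's fitted values are not given here):

```python
import math

def gompertz_volume(v0, carrying_capacity, alpha, t):
    """Gompertz tumor growth: V(t) = K * exp(ln(V0/K) * exp(-alpha * t)),
    where K is the carrying capacity and alpha the growth-rate decay."""
    k = carrying_capacity
    return k * math.exp(math.log(v0 / k) * math.exp(-alpha * t))
```

The curve starts at V0, grows sigmoidally, and saturates at K; evaluating it is a single expression, so sweeping candidate chemotherapy schedules against it is orders of magnitude cheaper than re-running the multiscale numerical model.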

Research paper thumbnail of A fast and hardware mimicking analytic CT simulator

2013 IEEE Nuclear Science Symposium and Medical Imaging Conference (2013 NSS/MIC)

Abstract-Different algorithms have been utilized for x-ray computed tomography (CT) simulation b... more Abstract-Different algorithms have been utilized for x-ray computed tomography (CT) simulation based on Monte Carlo technique, analytic calculation, or combination of them. Software packages based on Monte Carlo algorithm provide sophisticated calculations but the time consuming nature of them limits its applicability. Analytic calculation for CT simulation has been also evaluated in recent years. Due to ignoring basic physical processes, analytic methods have limited applications. In this study, a hardware mimicking algorithm has been developed to accurately model the CT imaging chain using analytic calculation. The model includes x-ray spectrum generation according to the pre-defined scanning protocol. The detector is designed to acquire the data either in integral or spectral modes. CT geometry can be used as parallel or fan beam with different sizes. Poisson noise model was applied to the acquired projection data. Varieties of projection-based computerized phantoms have been designed and implemented in the simulator. CT number and background noise of the simulated images have been compared with experimental data. On average, the relative difference between simulated and experimental HUs are 8.3%, 7.5%, and 8.0% for bone; 12.1%, 10.3%, and 7.8% for contrast agent; and 16.6%, 3.6%, and 5.2% for the background at 80 kVp/500 mAs, 120 kVp/250 mAs, and 140 kVp/125 mAs, respectively. The relative difference between simulated and experimental noise values vary between 2% to slightly less than 26%. For scanning and image generation with a computer equipped with Intel Core2 Quad CPU and 2.0 GB of RAM, the simulator takes about 32 seconds for generating a 512×512 single slice image when it is adjusted to acquire 900 projection angles with 20 mm slice thickness and 140kVp/200 mAs scanning protocol. The simulation time is independent of photon intensity.

Research paper thumbnail of Using image-extracted features to determine heart rate and blink duration for driver sleepiness detection

ArXiv, 2019

Heart rate and blink duration are two vital physiological signals which give information about cardiac activity and consciousness. Monitoring these two signals is crucial for various applications such as driver drowsiness detection. As conventional systems pose several problems for continuous, long-term monitoring, a remote blink and ECG monitoring system can be used as an alternative. For estimating the blink duration, two strategies are used. In the first approach, pictures of open and closed eyes are fed into an Artificial Neural Network (ANN) to decide whether the eyes are open or closed. In the second approach, they are classified and labeled using Linear Discriminant Analysis (LDA). The labeled images are then used to determine the blink duration. For heart rate variability, two strategies are used to evaluate the passing blood volume: Independent Component Analysis (ICA) and a chrominance-based method. Eye recognition yielded 78-92% accuracy in...
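The second stage described above — turning per-frame open/closed labels into blink durations — can be sketched as follows. The run-length logic and the frame rate are illustrative assumptions, not the paper's exact pipeline:

```python
def blink_durations(labels, fps):
    """Durations (s) of runs of closed-eye frames (label 1).
    `labels` is a hypothetical per-frame output of the ANN/LDA classifier."""
    durations, run = [], 0
    for lab in labels:
        if lab == 1:
            run += 1
        elif run:
            durations.append(run / fps)
            run = 0
    if run:  # blink still in progress at the end of the sequence
        durations.append(run / fps)
    return durations

# Two blinks: 3 frames and 2 frames at 10 fps
print(blink_durations([0, 0, 1, 1, 1, 0, 0, 1, 1, 0], fps=10))  # → [0.3, 0.2]
```

A real system would additionally need to distinguish blinks from sustained eye closure (the drowsiness event itself), e.g. by thresholding the duration.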

Research paper thumbnail of Automatic and Manual Segmentation of Hippocampus in Epileptic Patients MRI

ArXiv, 2016

The hippocampus is a seminal structure in the most common surgically-treated form of epilepsy. Accurate segmentation of the hippocampus aids in establishing asymmetry regarding size and signal characteristics in order to disclose the likely site of epileptogenicity. With sufficient refinement, it may ultimately aid in the avoidance of invasive monitoring with its expense and risk for the patient. To this end, a reliable and consistent method for segmentation of the hippocampus from magnetic resonance imaging (MRI) is needed. In this work, we present a systematic and statistical analysis approach for evaluation of automated segmentation methods in order to establish one that reliably approximates the results achieved by manual tracing of the hippocampus.
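Agreement between an automated segmentation and manual tracing is commonly quantified with an overlap metric such as the Dice coefficient; the abstract does not name the metric, so this is a minimal sketch under that assumption:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy 2-D masks standing in for manual and automated hippocampus labels
manual = np.array([[1, 1, 0],
                   [1, 0, 0]])
auto   = np.array([[1, 0, 0],
                   [1, 1, 0]])
print(dice(manual, auto))  # 2 overlapping voxels out of 3+3 → 2/3
```

In practice this would be computed per subject and per hemisphere, and the distribution of Dice values across subjects compared statistically between automated methods.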

Research paper thumbnail of Localization of Epileptic Foci Based on Simultaneous EEG–fMRI Data

Frontiers in Neurology

Combining functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) enables a non-invasive investigation of the human brain function and evaluation of the correlation of these two important modalities of brain activity. This paper explores recent reports on using advanced simultaneous EEG–fMRI methods proposed to map the regions and networks involved in focal epileptic seizure generation. One of the applications of EEG and fMRI combination as a valuable clinical approach is the pre-surgical evaluation of patients with epilepsy to map and localize the precise brain regions associated with epileptiform activity. In the process of conventional analysis using EEG–fMRI data, the interictal epileptiform discharges (IEDs) are visually extracted from the EEG data to be convolved as binary events with a predefined hemodynamic response function (HRF) to provide a model of epileptiform BOLD activity and used as a regressor for general linear model (GLM) analysis of the fMRI ...
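The regressor-building step described above — convolving binary IED events with a predefined HRF — can be sketched as follows. The double-gamma HRF parameters, the TR, and the event times are illustrative assumptions:

```python
import numpy as np
from math import gamma

def hrf(t):
    """Canonical double-gamma HRF (SPM-style peak at ~6 s, undershoot at ~16 s)."""
    peak = t**5 * np.exp(-t) / gamma(6)
    undershoot = t**15 * np.exp(-t) / gamma(16)
    return peak - undershoot / 6.0

tr = 2.0                        # repetition time in seconds (assumed)
t = np.arange(0, 32, tr)        # HRF sampled at the TR grid
events = np.zeros(30)           # one entry per fMRI volume
events[[5, 14, 22]] = 1         # visually marked IED onsets (hypothetical)

# Convolve the binary event train with the HRF; trim to the scan length.
regressor = np.convolve(events, hrf(t))[:len(events)]
print(regressor.round(3))       # peaks ~6 s (3 TRs) after each IED
```

This `regressor` would then enter the GLM design matrix, and voxels whose BOLD time course loads significantly on it are candidate epileptogenic regions.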

Research paper thumbnail of Prospective Quantitative Neuroimaging Analysis of Putative Temporal Lobe Epilepsy

Frontiers in Neurology

Purpose: A prospective study of individual and combined quantitative imaging applications for lateralizing epileptogenicity was performed in a cohort of consecutive patients with a putative diagnosis of mesial temporal lobe epilepsy (mTLE). Methods: Quantitative metrics were applied to MRI and nuclear medicine imaging studies as part of a comprehensive presurgical investigation. The neuroimaging analytics were conducted remotely to remove bias. All quantitative lateralizing tools were trained using a separate dataset. Outcomes were determined after 2 years. Of those treated, some underwent resection, and others were implanted with a responsive neurostimulation (RNS) device. Results: Forty-eight consecutive cases underwent evaluation using nine attributes of individual or combinations of neuroimaging modalities: 1) hippocampal volume, 2) FLAIR signal, 3) PET profile, 4) multistructural analysis (MSA), 5) multimodal model analysis (MMM), 6) DTI uncertainty analysis, 7) DTI connectivity,...

Research paper thumbnail of Automatic Detection of Coronavirus (COVID-19) from Chest CT Images using VGG16-Based Deep-Learning

2020 27th National and 5th International Iranian Conference on Biomedical Engineering (ICBME)

In recent months, coronavirus disease 2019 (COVID-19) has infected millions of people worldwide. In addition to clinical tests like reverse transcription-polymerase chain reaction (RT-PCR), medical imaging techniques such as computed tomography (CT) can be used as a rapid technique to detect and evaluate patients infected by COVID-19. Conventionally, CT-based COVID-19 classification is done by a radiology expert. In this paper, we present a deep learning-based Convolutional Neural Network (CNN) model we developed for the classification of COVID-19 positive patients from healthy subjects using chest CT. We used 10979 chest CT images of 131 patients with COVID-19 and 150 healthy subjects for training, validating, and testing of the proposed model. Evaluation of the results showed a precision of 92%, sensitivity of 90%, specificity of 91%, F1-score of 0.91, and accuracy of 90%. We have used the regions of infection segmented by a radiologist to increase the generalization and reliability of the results. The plotted heatmaps show that the developed model has focused only on the lung regions infected by COVID-19 to make decisions.
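The reported evaluation metrics all follow directly from the confusion counts; a sketch with hypothetical counts chosen only to roughly echo the reported values (the actual test-set counts are not given in the abstract):

```python
def classification_metrics(tp, fp, tn, fn):
    """Precision, sensitivity (recall), specificity, F1, and accuracy
    from true/false positive/negative counts."""
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, sensitivity, specificity, f1, accuracy

# Hypothetical held-out test counts
print([round(m, 2) for m in classification_metrics(tp=90, fp=8, tn=91, fn=10)])
# → [0.92, 0.9, 0.92, 0.91, 0.91]
```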

Research paper thumbnail of Cloud-based deep learning of big EEG data for epileptic seizure prediction

2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP)

Developing a Brain-Computer Interface (BCI) for seizure prediction can help epileptic patients have a better quality of life. However, there are many difficulties and challenges in developing such a system as a real-life support for patients. Because of the nonstationary nature of EEG signals, normal and seizure patterns vary across different patients. Thus, finding a group of manually extracted features for the prediction task is not practical. Moreover, when using implanted electrodes for brain recording, massive amounts of data are produced. This big data calls for the need for safe storage and high computational resources for real-time processing. To address these challenges, a cloud-based BCI system for the analysis of this big EEG data is presented. First, a dimensionality-reduction technique is developed to increase classification accuracy as well as to decrease the communication bandwidth and computation time. Second, following a deep-learning approach, a stacked autoencoder is trained in two steps for unsupervised feature extraction and classification. Third, a cloud-computing solution is proposed for real-time analysis of big EEG data. The results on a benchmark clinical dataset illustrate the superiority of the proposed patient-specific BCI as an alternative method and its expected usefulness in real-life support of epilepsy patients.
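The unsupervised feature-extraction idea — an autoencoder compressing high-dimensional EEG features into a small code — can be illustrated with a single linear layer trained by gradient descent. This is a minimal stand-in, not the paper's stacked autoencoder; the data and dimensions are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                 # stand-in for EEG feature vectors
X[:, :4] += 3 * rng.normal(size=(200, 1))      # shared low-dimensional structure

# One linear autoencoder layer with tied weights (16 -> 4 -> 16).
W = rng.normal(scale=0.1, size=(16, 4))
lr = 1e-3

def loss(W):
    R = X @ W @ W.T - X                        # reconstruction residual
    return (R ** 2).mean()

before = loss(W)
for _ in range(500):                           # plain gradient descent
    R = X @ W @ W.T - X
    grad = 2 * (X.T @ R @ W + R.T @ X @ W) / X.size
    W -= lr * grad
after = loss(W)
print(after < before)  # reconstruction error shrinks as the code layer learns
```

With tied linear weights this converges toward the principal subspace; a real stacked autoencoder adds nonlinearities and layer-wise pretraining, then fine-tunes with labels.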

Research paper thumbnail of Improved dynamic connection detection power in estimated dynamic functional connectivity considering multivariate dependencies between brain regions

Human Brain Mapping

To estimate dynamic functional connectivity (dFC), the conventional method of sliding window correlation (SWC) suffers from poor performance of dynamic connection detection. This stems from the equal weighting of observations, suboptimal time scale, nonsparse output, and the fact that it is bivariate. To overcome these limitations, we exploited the kernel‐reweighted logistic regression (KELLER) algorithm, a method that is common in genetic studies, to estimate dFC in resting state functional magnetic resonance imaging (rs‐fMRI) data. KELLER can estimate dFC through estimating both spatial and temporal patterns of functional connectivity between brain regions. This paper compares the performance of the proposed KELLER method with current methods (SWC and tapered‐SWC (T‐SWC) with different window lengths) based on both simulated and real rs‐fMRI data. Estimated dFC networks were assessed for detecting dynamically connected brain region pairs with hypothesis testing. Simulation results revealed that KELLER can detect dynamic connections with a statistical power of 87.35% compared with 70.17% and 58.54% associated with T‐SWC (p‐value = .001) and SWC (p‐value < .001), respectively. Results of these different methods applied on real rs‐fMRI data were investigated for two aspects: calculating the similarity between identified mean dynamic patterns and identifying the dynamic pattern in the default mode network (DMN). In 68% of subjects, the results of T‐SWC with a window length of 100 s, among different window lengths, demonstrated the highest similarity to those of KELLER. With regard to the DMN, KELLER estimated previously reported dynamic connection pairs between dorsal and ventral DMN while the SWC‐based method was unable to detect these dynamic connections.
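The baseline SWC method that KELLER is compared against can be sketched in a few lines; the synthetic pair of signals, whose coupling flips sign mid-scan, is illustrative:

```python
import numpy as np

def sliding_window_corr(x, y, win):
    """Pearson correlation of x and y inside each sliding window
    (equal weighting of samples, one region pair at a time — the
    limitations the abstract attributes to SWC)."""
    return np.array([np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
                     for i in range(len(x) - win + 1)])

rng = np.random.default_rng(1)
t = np.arange(200)
x = np.sin(0.3 * t) + 0.3 * rng.normal(size=200)
y = np.where(t < 100, 1, -1) * np.sin(0.3 * t) + 0.3 * rng.normal(size=200)

swc = sliding_window_corr(x, y, win=30)
print(swc[0].round(2), swc[-1].round(2))  # coupling flips sign mid-scan
```

Tapered SWC replaces the rectangular window with a smooth taper; KELLER instead reweights all observations with a kernel around each time point and fits a sparse multivariate model.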

Research paper thumbnail of A Framework for Intracranial Saccular Aneurysm Detection and Quantification using Morphological Analysis of Cerebral Angiograms

IEEE Access

Reliable early prediction of aneurysm rupture can greatly help neurosurgeons to treat aneurysms at the right time, thus saving lives as well as providing significant cost reduction. Most of the research efforts in this respect involve statistical analysis of collected data or simulation of hemodynamic factors to predict the risk of aneurysmal rupture. In contrast, morphological analysis of cerebral angiogram images for locating and estimating unruptured aneurysms is rarely considered. Since digital subtraction angiography (DSA) is regarded as a standard test by the American Stroke Association and American College of Radiology for identification of aneurysm, this paper aims to perform morphological analysis of DSA to accurately detect saccular aneurysms, precisely determine their sizes, and estimate the probability of their ruptures. The proposed diagnostic framework, intracranial saccular aneurysm detection and quantification, first extracts cerebrovascular structures by denoising angiogram images and delineates regions of interest (ROIs) by using watershed segmentation and distance transformation. Then, it identifies saccular aneurysms among segmented ROIs using a multilayer perceptron neural network trained upon robust Haralick texture features, and finally quantifies aneurysm rupture by geometrical analysis of the identified aneurysmic ROI. A de-identified data set of 59 angiograms is used to evaluate the performance of the algorithms for aneurysm detection and risk of rupture quantification. The proposed framework achieves high accuracies of 98% and 86% for aneurysm classification and quantification, respectively. Index terms: computer-assisted diagnosis, digital subtraction angiography (DSA), intracranial saccular aneurysm, rupture quantification, Haralick features, GLCM, GLRLM, multilayer perceptron (MLP) neural network.
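The Haralick texture features used to train the classifier derive from a gray-level co-occurrence matrix (GLCM); a minimal sketch computing one GLCM offset and two standard features (the tiny 4-level image and the horizontal offset are illustrative, not the paper's configuration):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset,
    normalized to joint probabilities."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[img[i, j], img[i + dy, j + dx]] += 1
    return P / P.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
i, j = np.indices(P.shape)
contrast    = (P * (i - j) ** 2).sum()         # Haralick contrast
homogeneity = (P / (1 + (i - j) ** 2)).sum()   # inverse difference moment
print(round(contrast, 3), round(homogeneity, 3))  # → 0.583 0.808
```

A full Haralick set (energy, correlation, entropy, etc.), usually averaged over several offsets and angles, would form the feature vector fed to the MLP.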

Research paper thumbnail of Computer-Aided Diagnosis System for the Evaluation of Thyroid Nodules on Ultrasonography: Prospective Non-Inferiority Study according to the Experience Level of Radiologists

Korean Journal of Radiology

Objective: To determine whether a computer-aided diagnosis (CAD) system for the evaluation of thyroid nodules is non-inferior to radiologists with different levels of experience. Materials and Methods: Patients with thyroid nodules with a decisive diagnosis of benign or malignant nodule were consecutively enrolled from November 2017 to September 2018. Three radiologists with different levels of experience (1 month, 4 years, and 7 years) in thyroid ultrasound (US) reviewed the thyroid US with and without using the CAD system. Statistical analyses included non-inferiority testing of the diagnostic accuracy for malignant thyroid nodules between the CAD system and the three radiologists with a non-inferiority margin of 10%, comparison of the diagnostic performance, and the added value of the CAD system to the radiologists. Results: Altogether, 197 patients were included in the study cohort. The diagnostic accuracy of the CAD system (88.5%, 95% confidence interval [CI] = 82.7-92.5) was non-inferior to that of the radiologists with less experience (1 month and 4 years) of thyroid US (83.0%, 95% CI = 76.5-88.0; p < 0.001), whereas it was inferior to that of the experienced radiologist (7 years) (95.8%, 95% CI = 91.4-98.0; p = 0.138). The sensitivity and negative predictive value of the CAD system were significantly higher than those of the less-experienced radiologists, whereas no significant difference was found with those of the experienced radiologist. A combination of US and the CAD system significantly improved sensitivity and negative predictive value, although the specificity and positive predictive value deteriorated for the less-experienced radiologists. Conclusion: The CAD system may offer support for decision-making in the diagnosis of malignant thyroid nodules for operators who have less experience with thyroid US.
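The non-inferiority logic with a 10% margin can be sketched with a normal-approximation confidence interval on the accuracy difference. This unpaired approximation is an assumption for illustration; the study's exact (likely paired) procedure may differ:

```python
from math import sqrt

def noninferior(acc_new, acc_ref, n, margin=0.10, z=1.96):
    """Declare non-inferiority if the lower 95% bound of (new - ref)
    exceeds -margin. Unpaired normal approximation (illustrative only;
    the same 197 patients were read by all readers, i.e. paired data)."""
    diff = acc_new - acc_ref
    se = sqrt(acc_new * (1 - acc_new) / n + acc_ref * (1 - acc_ref) / n)
    lower = diff - z * se
    return lower > -margin, round(lower, 3)

print(noninferior(0.885, 0.830, n=197))  # CAD vs less-experienced readers
print(noninferior(0.885, 0.958, n=197))  # CAD vs the experienced reader
```

Even under this rough approximation the two comparisons land on the same side as the abstract's conclusions: non-inferior to the less-experienced readers, not non-inferior to the experienced one.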

Research paper thumbnail of Enhancing performance of subject-specific models via subject-independent information for SSVEP-based BCIs

PLOS ONE

Recently, steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) has attracted much attention due to its high information transfer rate (ITR) and increasing number of targets. However, the performance of SSVEP-based methods in terms of accuracy and time length required for target detection can be improved. We propose a new canonical correlation analysis (CCA)-based method to integrate subject-specific models and subject-independent information and enhance BCI performance. To optimize hyperparameters for the CCA-based model of a specific subject, we propose to use training data of other subjects. An ensemble version of the proposed method is also developed and used for a fair comparison with ensemble task-related component analysis (TRCA). A publicly available 35-subject SSVEP benchmark dataset is used to evaluate different methods. The proposed method is compared with TRCA and extended CCA methods as reference methods. The performance of the methods is evaluated using classification accuracy and ITR. Offline analysis results show that the proposed method reaches the highest ITR compared with TRCA and extended CCA. Also, the proposed method significantly improves the performance of extended CCA in all conditions and of TRCA for time windows greater than 0.3 s. In addition, the proposed method outperforms TRCA for low numbers of training blocks and electrodes. This study illustrates that adding subject-independent information to subject-specific models can improve the performance of SSVEP-based BCIs.
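The core computation underlying such SSVEP detectors — the first canonical correlation between multichannel EEG and sinusoidal reference signals at a candidate frequency — can be sketched via QR whitening and an SVD. The synthetic 12 Hz data below are illustrative, not the benchmark dataset:

```python
import numpy as np

def cca_first_corr(X, Y):
    """First canonical correlation between the columns of X and Y.
    Center, take orthonormal bases (QR), then the singular values of
    Qx'Qy are the canonical correlations."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return s[0]

rng = np.random.default_rng(2)
t = np.arange(250) / 250.0                     # 1 s at 250 Hz (assumed)
ref = np.column_stack([np.sin(2 * np.pi * 12 * t),
                       np.cos(2 * np.pi * 12 * t)])   # 12 Hz reference pair
mix = rng.normal(size=(2, 8))                  # random projection to 8 channels
eeg = 0.8 * ref @ mix + 0.5 * rng.normal(size=(250, 8))
print(cca_first_corr(eeg, ref))                # close to 1: response tracks the reference
```

A detector evaluates this correlation for each target frequency's reference pair and picks the maximum; the paper's contribution is regularizing such subject-specific spatial filters with other subjects' data.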

Research paper thumbnail of Data mining MR image features of select structures for lateralization of mesial temporal lobe epilepsy

PloS one, 2018

This study systematically investigates the predictive power of volumetric imaging feature sets extracted from select neuroanatomical sites in lateralizing the epileptogenic focus in mesial temporal lobe epilepsy (mTLE) patients. A cohort of 68 unilateral mTLE patients who had achieved an Engel class I outcome postsurgically was studied retrospectively. The volumes of multiple brain structures were extracted from preoperative magnetic resonance (MR) images in each patient. The MR image data set consisted of 54 patients with imaging evidence for hippocampal sclerosis (HS-P) and 14 patients without (HS-N). Data mining techniques (i.e., feature extraction, feature selection, machine learning classifiers) were applied to provide measures of the relative contributions of structures and their correlations with one another. After removing redundant correlated structures, a minimum set of structures was determined as a marker for mTLE lateralization. Using a logistic regression classifier, the volum...
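A logistic-regression lateralization step of this kind can be sketched on synthetic asymmetry features; the feature definition (left-right volume asymmetry), the data, and the plain gradient-descent fit are all hypothetical, not the study's cohort or software:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
side = rng.integers(0, 2, n)          # 0 = left-onset, 1 = right-onset (labels)

# Hypothetical asymmetry indices (left - right)/(left + right) for two structures;
# only the first carries signal (the atrophic side has the smaller volume).
X = rng.normal(scale=0.05, size=(n, 2))
X[:, 0] += np.where(side == 0, -0.12, 0.12)
X = np.column_stack([np.ones(n), X])  # bias term

w = np.zeros(3)
for _ in range(2000):                 # gradient descent on mean log-loss
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - side) / n

acc = ((1 / (1 + np.exp(-X @ w)) > 0.5) == side).mean()
print(acc)                            # near-perfect on this separable toy data
```

On real data the equivalent step would follow feature selection, with accuracy reported under cross-validation rather than on the training set.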

Research paper thumbnail of Neonatal brain resting-state functional connectivity imaging modalities

Photoacoustics, 2018

Infancy is the most critical period in human brain development. Studies demonstrate that subtle brain abnormalities during this stage of life may greatly affect the developmental processes of newborn infants. One of the rapidly developing methods for early characterization of abnormal brain development is functional connectivity of the brain at rest. While the majority of resting-state studies have been conducted using magnetic resonance imaging (MRI), there is clear evidence that resting-state functional connectivity (rs-FC) can also be evaluated using other imaging modalities. The aim of this review is to compare the advantages and limitations of different modalities used for the mapping of infants' brain functional connectivity at rest. In addition, we introduce photoacoustic tomography, a novel functional neuroimaging modality, as a complementary modality for functional mapping of infants' brain.

Research paper thumbnail of Random ensemble learning for EEG classification

Artificial intelligence in medicine, 2018

Real-time detection of seizure activity in epilepsy patients is critical in averting seizure activity and improving patients' quality of life. Accurate evaluation, presurgical assessment, seizure prevention, and emergency alerts all depend on the rapid detection of seizure onset. A new method of feature selection and classification for rapid and precise seizure detection is discussed wherein informative components of electroencephalogram (EEG)-derived data are extracted and an automatic method is presented using infinite independent component analysis (I-ICA) to select independent features. The feature space is divided into subspaces via random selection and multichannel support vector machines (SVMs) are used to classify these subspaces. The result of each classifier is then combined by majority voting to establish the final output. In addition, a random subspace ensemble using a combination of SVM, multilayer perceptron (MLP) neural network and an extended k-nearest neighbors ...
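The random subspace ensemble with majority voting can be sketched with a simple stand-in base classifier (nearest centroid rather than the paper's SVM/MLP/k-NN); the synthetic "EEG features" and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def nearest_centroid_fit(X, y):
    """Class centroids for binary labels 0/1."""
    return np.array([X[y == c].mean(0) for c in (0, 1)])

def nearest_centroid_predict(cent, X):
    d = np.stack([((X - c) ** 2).sum(1) for c in cent])
    return d.argmin(0)

# Synthetic features: 40 dims, only the first 6 informative.
n, d = 300, 40
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :6] += y[:, None] * 2.0

# Random subspace ensemble: each member sees a random feature subset;
# the final label is the majority vote across members.
n_members, sub = 25, 10
votes = np.zeros(n)
for _ in range(n_members):
    idx = rng.choice(d, size=sub, replace=False)
    cent = nearest_centroid_fit(X[:, idx], y)
    votes += nearest_centroid_predict(cent, X[:, idx])
pred = (votes > n_members / 2).astype(int)
print((pred == y).mean())  # vote beats most individual weak members
```

The design point is the same as in the paper: individual members trained on random subspaces are weak and diverse, and aggregating their votes recovers accuracy while keeping each member cheap.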

Research paper thumbnail of Spatiotemporal features of DCE-MRI for breast cancer diagnosis

Computer methods and programs in biomedicine, 2018

Breast cancer is a major cause of mortality among women if not treated in early stages. Previous works developed non-invasive diagnosis methods using imaging data, focusing on specific sets of features that can be called spatial features or temporal features. However, a limited set of features carries limited information, requiring complex classification methods to diagnose the disease. For non-invasive diagnosis, different imaging modalities can be used. DCE-MRI is one of the best imaging techniques that provides temporal information about the kinetics of the contrast agent in suspicious lesions along with acceptable spatial resolution. We have extracted and studied a comprehensive set of features from spatiotemporal space to obtain maximum available information from the DCE-MRI data. Then, we have applied a feature fusion technique to remove common information and extract a feature set with maximum information to be used by a simple classification method. We have also implemented conv...

Research paper thumbnail of A time local subset feature selection for prediction of sudden cardiac death from ECG signal

Medical & biological engineering & computing, Jan 14, 2017

Prediction of sudden cardiac death continues to gain universal attention as a promising approach to saving millions of lives threatened by sudden cardiac death (SCD). This study attempts to promote the literature from mere feature extraction analysis to developing strategies for manipulating the extracted features to target improvement of classification accuracy. To this end, a novel approach to local feature subset selection is applied using meticulous methodologies developed in previous studies of this team for extracting features from non-linear, time-frequency, and classical processes. We are therefore enabled to select features that differ from one another in each 1-min interval before the incident. Using the proposed algorithm, SCD can be predicted 12 min before the onset; thus, more propitious results are achieved. Additionally, through defining a utility function and employing statistical analysis, the alarm threshold has effectively been determined as 83%. Having selected t...

Research paper thumbnail of Improved prediction of outcome in Parkinson's disease using radiomics analysis of longitudinal DAT SPECT images

NeuroImage: Clinical

No disease-modifying therapies for Parkinson's disease (PD) have been found effective to date. To properly power clinical trials for discovery of such therapies, the ability to predict outcome in PD is critical, and there is a significant need for discovery of prognostic biomarkers of PD. Dopamine transporter (DAT) SPECT imaging is widely used for diagnostic purposes in PD. In the present work, we aimed to evaluate whether longitudinal DAT SPECT imaging can significantly improve prediction of outcome in PD patients. In particular, we investigated whether radiomics analysis of DAT SPECT images, in addition to use of conventional non-imaging and imaging measures, could be used to predict motor severity at year 4 in PD subjects. We selected 64 PD subjects (38 male, 26 female; age at baseline (year 0): 61.9 ± 7.3, range [46,78]) from the Parkinson's Progressive Marker Initiative (PPMI) database. Inclusion criteria included (i) having had at least 2 SPECT scans at years 0 and 1 acquired on a similar scanner, (ii) having undergone a high-resolution 3 T MRI scan, and (iii) having motor assessment (MDS-UPDRS-III) available in year 4 used as outcome measure. Image analysis included automatic region-of-interest (ROI) extraction on MRI images, registration of SPECT images onto the corresponding MRI images, and extraction of radiomic features. Non-imaging predictors included demographics, disease duration as well as motor and non-motor clinical measures in years 0 and 1. The image predictors included 92 radiomic features extracted from the caudate, putamen, and ventral striatum of DAT SPECT images at years 0 and 1 to quantify heterogeneity and texture in uptake. Random forest (RF) analysis with 5000 trees was used to combine both non-imaging and imaging variables to predict motor outcome (UPDRS-III: 27.3 ± 14.7, range [3,77]).
The RF prediction was evaluated using leave-one-out cross-validation. Our results demonstrated that addition of radiomic features to conventional measures significantly improved (p < 0.001) prediction of outcome, reducing the absolute error of predicting MDS-UPDRS-III from 9.00 ± 0.88 to 4.12 ± 0.43. This shows that radiomics analysis of DAT SPECT images has a significant potential towards development of effective prognostic biomarkers in PD.
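The leave-one-out evaluation can be sketched generically: fit on all-but-one subject, predict the held-out one, and accumulate the absolute error. The 3-nearest-neighbour predictor below is a stand-in for the paper's random forest, and the data are synthetic:

```python
import numpy as np

def loocv_mae(X, y, fit_predict):
    """Leave-one-out CV: mean absolute error of held-out predictions."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        errs.append(abs(fit_predict(X[mask], y[mask], X[i]) - y[i]))
    return float(np.mean(errs))

def knn3(Xtr, ytr, x):
    """Tiny stand-in predictor: mean outcome of the 3 nearest training subjects."""
    dist = ((Xtr - x) ** 2).sum(1)
    return ytr[np.argsort(dist)[:3]].mean()

rng = np.random.default_rng(5)
X = rng.normal(size=(64, 6))                       # 64 subjects, 6 predictors
y = 27 + 10 * X[:, 0] + 2 * rng.normal(size=64)    # outcome driven by one predictor
print(loocv_mae(X, y, knn3))                       # beats predicting the mean outcome
```

With n = 64 subjects, LOOCV is a natural choice because it leaves the training set nearly full-sized while keeping every prediction strictly out-of-sample.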

Research paper thumbnail of Structured and Sparse Canonical Correlation Analysis as a Brain-Wide Multi-Modal Data Fusion Approach

IEEE Transactions on Medical Imaging, 2017

Multi-modal data fusion has recently emerged as a comprehensive neuroimaging analysis approach, which usually uses canonical correlation analysis (CCA). However, the current CCA-based fusion approaches face problems like high-dimensionality, multi-collinearity, unimodal feature selection, asymmetry, and loss of spatial information in reshaping the imaging data into vectors. This paper proposes a structured and sparse CCA (ssCCA) technique as a novel CCA method to overcome the above problems. To investigate the performance of the proposed algorithm, we have compared three data fusion techniques: standard CCA, regularized CCA, and ssCCA, and evaluated their ability to detect multi-modal data associations. We have used simulations to compare the performance of these approaches and probe the effects of non-negativity constraint, the dimensionality of features, sample size, and noise power. The results demonstrate that ssCCA outperforms the existing standard and regularized CCA-based fusion approaches. We have also applied the methods to real functional magnetic resonance imaging (fMRI) and structural MRI data of Alzheimer's disease (AD) patients (n = 34) and healthy control (HC) subjects (n = 42) from the ADNI database. The results illustrate that the proposed unsupervised technique differentiates the transition pattern between the subject-course of AD patients and HC subjects.

Research paper thumbnail of Multiscale cancer modeling: In the line of fast simulation and chemotherapy

Mathematical and Computer Modelling, 2009

Although multiscale cancer modeling provides a realistic view of the process of tumor growth, its numerical algorithm is time consuming. Therefore, it is problematic to run and to find the best treatment plan for chemotherapy, even in the case of a small tissue size. Using an artificial neural network, this paper simulates the multiscale cancer model faster than its numerical algorithm. In order to find the best treatment plan, it suggests applying a simpler avascular model called Gompertz. By using these proposed methods, multiscale cancer modeling may be extendable to chemotherapy for a realistic tissue size. In order to simulate the multiscale model, a hierarchical neural network called Nested Hierarchical Self Organizing Map (NHSOM) is used. The basis of the NHSOM is an enhanced version of SOM with an adaptive vigilance parameter. This parameter and the overall bottom-up design guarantee the quality of clustering, and the embedded top-down architecture reduces computational complexity. Although applying NHSOM makes the simulation run faster than the numerical algorithm, it is still not possible to check even a simple search space. As a result, a set containing the best treatment plans of a simpler model (Gompertz) is used. Additionally, it is assumed in this paper that the distribution of drug in vessels has a linear relation with the blood flow rate. The technical advantage of this assumption is that, by using a simple linear relation, a given diffusion of a drug dosage may be scaled to the desired one. By extracting a proper feature vector from the multiscale model and using NHSOM, the scaled best treatment plans of the Gompertz model are applied to a small tissue size. In addition, simulating the effect of stress reduction on normal tissue after chemotherapy is another advantage of using NHSOM, which is a kind of "emergent" behavior.