Uwe Kruger | Rensselaer Polytechnic Institute
Papers by Uwe Kruger
IFAC Proceedings Volumes, 2004
This paper describes the application of nonlinear principal component analysis (NLPCA) for the detection of air leaks in the intake manifold of internal combustion engines. Such faults, if undetected, can lead to inefficient engine operation, result in expensive repairs and cause an increased level of exhaust emissions. Using a VW 1.9L TDI diesel engine, several data sets were recorded that describe normal steady-state behaviour and the influence of air leaks at various speed and pedal positions. These data sets were then analysed using an NLPCA-based monitoring scheme. It was found that even small air leaks could be detected.
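The abstract does not detail the NLPCA monitoring scheme, but its linear counterpart is standard in multivariate statistical process control: project standardized sensor data onto principal components and flag samples whose Hotelling's T² or squared prediction error (SPE) statistics become abnormal. The following is a minimal linear-PCA sketch of that idea (not the paper's NLPCA model); the toy two-channel "engine" data are an illustrative assumption.

```python
import numpy as np

def fit_pca_monitor(X_train, n_components=2):
    """Fit a linear-PCA monitoring model (simplified analogue of NLPCA).

    X_train: (n_samples, n_vars) of normal operating data.
    Returns mean, std, retained loadings P, and component variances."""
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    Z = (X_train - mu) / sigma                      # standardize
    cov = np.cov(Z, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)            # ascending order
    idx = np.argsort(eigval)[::-1][:n_components]
    return mu, sigma, eigvec[:, idx], eigval[idx]

def monitoring_stats(x, mu, sigma, P, lam):
    """Hotelling's T^2 on retained scores and SPE (Q) on the residual."""
    z = (x - mu) / sigma
    t = z @ P                                       # scores
    T2 = float(np.sum(t ** 2 / lam))
    residual = z - t @ P.T
    SPE = float(residual @ residual)
    return T2, SPE

# Toy demo: two correlated "sensor" channels under normal operation.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(500, 1)),
               2 * base + 0.1 * rng.normal(size=(500, 1))])
model = fit_pca_monitor(X, n_components=1)
T2_ok, SPE_ok = monitoring_stats(X[0], *model)
# A simulated "air leak" breaks the learned correlation -> large SPE.
T2_f, SPE_f = monitoring_stats(np.array([1.0, -2.0]), *model)
print(SPE_f > SPE_ok)
```

A leak that breaks the learned correlation between, say, manifold pressure and air-flow signals would show up primarily in the SPE statistic, which measures departure from the correlation structure captured by the principal components.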
2017 Chinese Automation Congress (CAC), 2017
Monitoring of dynamic industrial processes has become increasingly important due to ever stricter safety and reliability requirements. Popular methods such as time-lagged-arrangement-based and subspace-based approaches perform well in fault detection; however, they have difficulty accurately isolating faulty variables and diagnosing fault types. To alleviate this difficulty, this article considers a state-space model whose joint probability is decomposed hierarchically into the product of several conditional densities and a low-dimensional density. Two nonparametric kernel density estimation methods are used to estimate these decomposed densities. By analyzing which density exceeds its confidence limit, information on faulty variables and fault types can be obtained. Application studies on simulation examples show that the proposed method is more efficient at isolating and diagnosing process faults than competing methods.
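The diagnostic logic — estimate densities from normal operating data and flag a sample when its estimated density falls outside a confidence limit — can be illustrated with a plain one-dimensional kernel density estimate. This is an illustrative sketch, not the paper's hierarchical decomposition; the bandwidth and the 1st-percentile limit are arbitrary choices for the demo.

```python
import math
import random

def gaussian_kde(data, bandwidth):
    """1-D kernel density estimate with a Gaussian kernel."""
    n = len(data)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                          for xi in data)
    return pdf

random.seed(1)
normal_data = [random.gauss(0.0, 1.0) for _ in range(1000)]
pdf = gaussian_kde(normal_data, bandwidth=0.3)

# Confidence limit: the 1st percentile of densities on the training data,
# so roughly 99% of normal samples lie above it.
densities = sorted(pdf(x) for x in normal_data)
limit = densities[len(densities) // 100]

print(pdf(0.0) > limit)   # typical point: density above the limit
print(pdf(6.0) > limit)   # far outlier: density below the limit -> fault
```

In the paper's setting, the same thresholding is applied separately to each conditional density in the hierarchical decomposition, and the identity of the density that violates its limit points to the faulty variables.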
Introduction into Multivariate Statistical Process Control (MSPC)
15th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, 2019
Over the past few years, deep neural networks have made significant progress in denoising low-dose CT images. A trained denoising network, however, may not generalize well to different dose levels, because the noise distribution is dose-dependent. In practice, a trained network therefore requires re-training before it can be applied to a new dose level, which limits the generalization ability of deep neural networks for clinical applications. This article introduces a deep learning approach that does not require re-training and relies on a transfer learning strategy. More precisely, the transfer learning framework utilizes a progressive denoising model, where an elementary neural network serves as a basic denoising unit. The basic units are cascaded so that denoising proceeds successively; i.e., the output of one network unit is the input to the next basic unit. The denoised image is then a linear combination of the outputs of the individual network units. To demonstrate this transfer learning approach, a basic CNN unit is trained using the Mayo low-dose CT dataset. Then, the linear parameters of the successive denoising units are trained using a different image dataset, i.e. the MGH low-dose CT dataset, containing CT images that were acquired at four different dose levels. Compared to a commercial iterative reconstruction approach, the transfer learning framework produced substantially better denoising performance.
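As a sketch of the cascade-and-recombine structure, the toy example below uses a fixed moving-average filter as a stand-in for the trained CNN unit and fits only the linear combination weights to a target signal, mirroring how only the linear parameters are re-trained for a new dose level. The 1-D signal, the filter, and the unit count are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def smooth(x):
    """Toy denoising unit: a 3-tap moving average (stand-in for a trained CNN)."""
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(x, kernel, mode="same")

def progressive_denoise(noisy, clean, n_units=4):
    """Cascade the basic unit; fit linear combination weights to a target."""
    outputs, current = [], noisy
    for _ in range(n_units):
        current = smooth(current)          # output of one unit feeds the next
        outputs.append(current)
    A = np.stack(outputs, axis=1)          # (n_samples, n_units)
    w, *_ = np.linalg.lstsq(A, clean, rcond=None)   # only w is (re-)trained
    return A @ w, w

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = clean + 0.3 * rng.normal(size=clean.size)
denoised, w = progressive_denoise(noisy, clean)
err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
print(err_denoised < err_noisy)
```

Adapting to a new dose level then amounts to re-solving the small least-squares problem for `w` on a handful of images, while the basic unit itself stays fixed.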
There are a number of excellent tools for multivariable and nonlinear classification and regression tasks that are commonly employed in chemical process control and related disciplines such as chemometrics. However, these tools have not made a broad impact in biomedical and human health applications, where the majority of researchers resort to traditional, linear, univariate statistics to attempt to describe very complex phenomena. In complicated diseases such as autism spectrum disorder, it is imperative that researchers look beyond these traditional techniques and embrace more appropriate statistical techniques in order to better describe and report their findings. Furthermore, the disparity in findings between research groups encourages the use of validation procedures such as cross-validation to better ensure that results generalize to new data sets. This work showcases the use of partial least squares and Fisher discriminant analysis, as well as their kernel counterparts, for b...
Nature Machine Intelligence, 2019
Commercial iterative reconstruction techniques help to reduce the radiation dose of computed tomography (CT), but altered image appearance and artefacts can limit their adoptability and potential use. Deep learning has been investigated for low-dose CT (LDCT). Here, we design a modularized neural network for LDCT and compare it with commercial iterative reconstruction methods from three leading CT vendors. Although popular networks are trained for an end-to-end mapping, our network performs an end-to-process mapping so that intermediate denoised images are obtained with associated noise reduction directions towards a final denoised image. The learned workflow allows radiologists in the loop to optimize the denoising depth in a task-specific fashion. Our network was trained with the Mayo LDCT Dataset and tested on separate chest and abdominal CT exams from Massachusetts General Hospital. The best deep learning reconstructions were systematically compared to the best iterative reconst...
International Journal of Computer Assisted Radiology and Surgery, 2021
Purpose: Severity scoring is a key step in managing patients with COVID-19 pneumonia. However, manual quantitative analysis by radiologists is a time-consuming task, while qualitative evaluation may be fast but highly subjective. This study aims to develop artificial intelligence (AI)-based methods to quantify disease severity and predict COVID-19 patient outcome. Methods: We develop an AI-based framework that employs deep neural networks to efficiently segment lung lobes and pulmonary opacities. The volume ratio of pulmonary opacities inside each lung lobe gives the severity scores of the lobes, which are then used to predict ICU admission and mortality with three different machine learning methods. The developed methods were evaluated on datasets from two hospitals (site A: Firoozgar Hospital, Iran, 105 patients; site B: Massachusetts General Hospital, USA, 88 patients). Results: AI-based severity scores are strongly associated with those evaluated by radiologists (Spearman's rank correlation 0.837, p < 0.001). Using AI-based scores produced significantly higher (p < 0.05) area under the ROC curve (AUC) values. The developed AI method achieved the best performance of AUC = 0.813 (95% CI [0.729, 0.886]) in predicting ICU admission and AUC = 0.741 (95% CI [0.640, 0.837]) in mortality estimation on the two datasets. Conclusions: Accurate severity scores can be obtained using the developed AI methods over chest CT images. The computed severity scores achieved better performance than radiologists in predicting COVID-19 patient outcome by consistently quantifying image features. Such developed techniques of severity assessment may be extended to other lung diseases beyond the current pandemic.
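The severity score defined in the abstract — the volume ratio of pulmonary opacities inside each lung lobe — reduces to a mask computation once the lobe and opacity segmentations are available. A minimal sketch, assuming an integer-labelled lobe mask and a boolean opacity mask (the toy 3-D volume is illustrative):

```python
import numpy as np

def lobe_severity_scores(lobe_mask, opacity_mask):
    """Per-lobe severity: fraction of each lobe's voxels covered by opacity.

    lobe_mask: integer volume, 0 = background, 1..N = lung lobe labels.
    opacity_mask: boolean volume of segmented pulmonary opacities."""
    scores = {}
    for label in range(1, int(lobe_mask.max()) + 1):
        lobe = lobe_mask == label
        scores[label] = float((lobe & opacity_mask).sum() / lobe.sum())
    return scores

# Toy 3-D volume: two "lobes", with opacity covering half of lobe 2.
lobe_mask = np.zeros((4, 4, 4), dtype=int)
lobe_mask[:2] = 1
lobe_mask[2:] = 2
opacity_mask = np.zeros_like(lobe_mask, dtype=bool)
opacity_mask[2] = True                     # one of lobe 2's two slabs
scores = lobe_severity_scores(lobe_mask, opacity_mask)
print(scores)   # {1: 0.0, 2: 0.5}
```

The per-lobe fractions then serve as a low-dimensional feature vector for the downstream ICU-admission and mortality classifiers.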
Visual Computing for Industry, Biomedicine, and Art, 2020
One example of an artificial intelligence ethical dilemma is the autonomous vehicle situation presented by Massachusetts Institute of Technology researchers in the Moral Machine Experiment. To solve such dilemmas, the MIT researchers used a classic statistical method known as the hierarchical Bayesian (HB) model. This paper builds upon previous work for modeling moral decision making, applies a deep learning method to learn human ethics in this context, and compares it to the HB approach. These methods were tested to predict moral decisions of simulated populations of Moral Machine participants. Overall, test results indicate that deep neural networks can be effective in learning the group morality of a population through observation, and outperform the Bayesian model in the cases of model mismatches.
Journal of Surgical Research, 2020
Background: Discriminating the performance of learners with varying experience is essential to developing and validating a surgical simulator. For rare and emergent procedures such as cricothyrotomy (CCT), the criteria to establish such groups are unclear. This study investigates the impact of surgeons' actual CCT experience on their VR simulator performance and determines the minimum number of actual CCTs that significantly discriminates simulator scores. Our hypothesis is that surgeons who have performed more actual CCT cases would perform better on a VR CCT simulator.
Machine Vision and Applications, 2020
The establishment of image correspondence through robust image registration is critical to many clinical tasks such as image fusion, organ atlas creation, and tumor growth monitoring, and is a very challenging problem. Since the beginning of the recent deep learning renaissance, the medical imaging research community has developed deep learning based approaches and achieved the state of the art in many applications, including image registration. The rapid adoption of deep learning for image registration applications over the past few years necessitates a comprehensive summary and outlook, which is the main scope of this survey. This requires placing a focus on the different research areas as well as highlighting challenges that practitioners face. This survey, therefore, outlines the evolution of deep learning based medical image registration in the context of both research challenges and relevant innovations in the past few years. Further, this survey highlights future research directions to show how this field may move forward to the next level.
Journal of Biomechanics, 2017
Osteoarthritis (OA) is a degenerative joint disease resulting in the deterioration of articular cartilage, a tissue with minimal ability to self-repair. Early diagnosis of OA with non-invasive imaging techniques such as magnetic resonance imaging (MRI) could provide an opportunity to intervene and slow or reverse this degeneration process. This study examines the classification of degradation states using MRI measurements. Enzymatic degradation was used to specifically target proteoglycans alone, collagen alone, and both cartilage components sequentially. The resulting degradation was evaluated using MRI techniques (T1, T2, diffusion tensor imaging, and gadolinium-enhanced T1) and derived measures of water, glycosaminoglycan and collagen content. We compared the classification ability of full-thickness averages of these parameters with zonal averages (superficial, medial, and deep). Finally, we determined minimum variable sets to identify the smallest number of variables that allowed for complete separation of all degradation groups and ranked them by impact on the separation. Zonal analysis was much more sensitive than full-thickness averages and allowed perfect separation of all four groups. Superficial zone cartilage was more sensitive to enzymatic degradation than the medial or deep zone, or the full-thickness average. Variable ranking consistently identified collagen content and organization as the most impactful variables in the classification algorithm. The aim of this study is to classify cartilage degradation using only non-invasive MRI parameters that could be applied to OA diagnosis. Our results highlight the importance of zonal variation in the diagnosis of cartilage degeneration. Our novel, non-invasive collagen content measurement was crucial for complete separation of degraded groups from control cartilage.
These findings have significant implications for clinical cartilage MRI for disease diagnosis.
IEEE Transactions on Cybernetics, 2018
This paper introduces a framework to deal with the distribution of descriptive features, which preserves the advantages of the vectorial representation and computational efficiency of histogram-based techniques, and inherits the rigorous theoretical guarantee and competitive performance of metric-based ones. The methods developed under this framework describe the underlying distribution of a set of features as a vectorial feature by utilizing random features. Moreover, the proposed methods asymptotically converge to metric-based methods in terms of the similarity and distance and, depending on a specific kernel function, reduce to histogram-based methods. The experimental results show the benefits of a comparable performance on categorization tasks compared to conventional metric-based methods at a significantly reduced computational cost.
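The random-feature construction in this abstract is in the spirit of random Fourier features, where an explicit feature map approximates a kernel so that plain inner products replace expensive pairwise kernel evaluations. Below is a sketch of that general idea for the RBF kernel; the paper's exact construction for feature-set distributions is not reproduced here, and the feature count and gamma are arbitrary demo values.

```python
import numpy as np

def random_fourier_features(X, n_features=5000, gamma=1.0, seed=0):
    """Map samples so that inner products approximate the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2)  (Rahimi-Recht random features)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density, random phases.
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(2, 3))
Z = random_fourier_features(X)
approx = Z[0] @ Z[1]                                  # explicit inner product
exact = np.exp(-1.0 * np.sum((X[0] - X[1]) ** 2))     # true kernel value
print(abs(approx - exact) < 0.1)
```

Averaging such feature maps over all descriptors in a set yields a single fixed-length vector per set, which is what makes histogram-style vectorial processing possible while retaining a kernel-level guarantee.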
Frontiers in Cellular Neuroscience, 2018
…treatment, including placebo, into the regression analysis yields an R² of 0.471 after cross-validation when using changes in six metabolic measurements as predictors. These results are suggestive of an ability to effectively improve pathway-wide FOCM/TS metabolic and behavioral abnormalities in ASD with clinical treatment.
IFAC Proceedings Volumes, 2003
IEEE Transactions on Cybernetics, Jan 24, 2015
Space partitioning trees, which sequentially divide and subdivide a space into disjoint subsets using splitting hyperplanes, play a key role in accelerating the query of samples in the cybernetics and computer vision domains. Associated methods, however, suffer from the curse of dimensionality or stringent assumptions on the data distribution. This paper presents a new concept, termed the kernel dimension reduction tree (KDR-tree), that relies on linear projections computed using an unsupervised kernel dimension reduction approach. The proposed concept does not rely on any assumption on the data distribution and can capture higher-order statistical information encapsulated within the data. This paper then develops two variants of the KDR-tree concept: 1) to handle residual data [i.e., the residual-based KDR-tree (rKDR-tree) algorithm] and 2) to cope with larger datasets [i.e., the sampling-based KDR-tree (sKDR-tree) algorithm]. By directly comparing the KDR-tree concept to competiti...
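The KDR-tree splits the data along learned linear projections rather than coordinate axes. As a simplified sketch, the tree below splits each node at the median of its top principal direction — a stand-in for the paper's unsupervised kernel-dimension-reduction projection; the leaf size and the PCA surrogate are illustrative assumptions.

```python
import numpy as np

class ProjectionTree:
    """Space-partitioning tree splitting on learned linear projections."""

    def __init__(self, points, leaf_size=8):
        self.points = points
        self.leaf = len(points) <= leaf_size
        if self.leaf:
            return
        # Top principal direction of this node's points (KDR stand-in).
        centered = points - points.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        self.direction = vt[0]
        proj = points @ self.direction
        self.threshold = np.median(proj)        # balanced split
        self.left = ProjectionTree(points[proj <= self.threshold], leaf_size)
        self.right = ProjectionTree(points[proj > self.threshold], leaf_size)

    def query(self, x):
        """Descend to the leaf containing x; return candidate neighbours."""
        node = self
        while not node.leaf:
            if x @ node.direction <= node.threshold:
                node = node.left
            else:
                node = node.right
        return node.points

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 5))
tree = ProjectionTree(data)
candidates = tree.query(data[0])
contains_query = any(np.allclose(c, data[0]) for c in candidates)
print(contains_query, len(candidates) <= 8)
```

Because each split direction adapts to the local data, the partition stays informative even when the ambient dimension is high, which is the motivation the abstract gives over axis-aligned kd-trees.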
npj Science of Learning, 2021
Online education is important in the COVID-19 pandemic, but online exams taken at individual homes invite students to cheat in various ways, especially collusion. While physical proctoring is impossible during social distancing, online proctoring is costly, compromises privacy, and can still leave collusion prevalent. Here we develop an optimization-based anti-collusion approach for distanced online testing (DOT) by minimizing the collusion gain, which can be coupled with other techniques for cheating prevention. With prior knowledge of student competences, our DOT technology optimizes sequences of questions and assigns them to students in synchronized time slots, reducing the collusion gain by 2–3 orders of magnitude relative to the conventional exam in which students receive their common questions simultaneously. Our DOT theory allows control of the collusion gain to a sufficiently low level. Our recent final exam in the DOT format has been successful, as evidenced by statistical tests and...
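The core mechanism — synchronized time slots with per-student question sequences, so that colluding students are rarely working on the same question at the same moment — can be illustrated with random permutations. The overlap measure below is a crude proxy for collusion opportunity, not the paper's formal collusion gain, and the optimization over student competences is omitted.

```python
import random

def simultaneous_overlap(schedules):
    """Fraction of (student pair, time slot) combinations where two students
    face the same question at the same time -- a rough proxy for the
    opportunity to collude."""
    n, T = len(schedules), len(schedules[0])
    pairs = same = 0
    for i in range(n):
        for j in range(i + 1, n):
            for t in range(T):
                pairs += 1
                same += schedules[i][t] == schedules[j][t]
    return same / pairs

random.seed(0)
n_students, n_questions = 30, 10
# Conventional exam: everyone receives the common questions in lockstep.
conventional = [list(range(n_questions)) for _ in range(n_students)]
# DOT-style exam: each student gets a different question sequence.
dot = [random.sample(range(n_questions), n_questions)
       for _ in range(n_students)]
print(simultaneous_overlap(conventional))   # 1.0: maximal overlap
print(simultaneous_overlap(dot) < 0.2)      # roughly 1/n_questions
```

Even this unoptimized permutation scheme cuts simultaneous overlap by an order of magnitude; the paper's optimization goes further by weighting assignments with prior knowledge of student competences.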
Optics in the Life Sciences Congress, 2017
2015 41st Annual Northeast Biomedical Engineering Conference (NEBEC), 2015
An important objective in stem cell research is controlling differentiation of pluripotent stem cells to a desired fate. Previous research in this area has focused on directing differentiation by manipulating morphogens and substrate material/mechanics. However, cell-cell signaling, whether by contact or paracrine signaling, also influences differentiation. One promising direction by which cell-cell signaling can be manipulated is through cellular patterning. Using mouse embryonic stem cells (mESCs) as a model system, this study investigates patterning mESCs in colonies of controlled size and spacing, to examine the effect of patterning on differentiation. Laser direct-write was used to pattern mESCs in an array of small colonies, and cells were permitted to spontaneously differentiate for five days. Expression levels of seven select genes were compared to those of randomly seeded mESCs, and mESCs from conventional hanging drop culture. Analysis of variance showed significant differences in some genes examined, including mesoderm and ectoderm markers, indicating that the initial spatial arrangement of cells influences differentiation of pluripotent stem cells. A multivariate linear discriminant analysis was used to classify input populations, and suggested how genes may be affected by spatial patterning.
IEEE Transactions on Biomedical Engineering, 2020
Currently, there is a dearth of objective metrics for assessing bi-manual motor skills, which are critical for high-stakes professions such as surgery. Recently, functional near-infrared spectroscopy (fNIRS) has been shown to be effective at classifying motor task types, which can potentially be used for assessing motor performance level. In this work, we use fNIRS data for predicting the performance scores in a standardized bi-manual motor task used in surgical certification and propose a deep-learning framework, 'Brain-NET', to extract features from the fNIRS data. Our results demonstrate that Brain-NET is able to accurately predict bi-manual surgical motor skills based on neuroimaging data (R² = 0.73). Furthermore, the classification ability of the Brain-NET model is demonstrated based on receiver operating characteristic (ROC) curves and area under the curve (AUC) values of 0.91. Hence, these results establish that fNIRS combined with deep learning analysis is a promising method for a bedside, quick and cost-effective assessment of bi-manual skill levels.
Research in Autism Spectrum Disorders, 2018
Compliance with Ethical Standards: All procedures performed in studies involving human participants were in accordance with ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study. There have been no changes to author affiliation subsequent to the time of the study.
IFAC Proceedings Volumes, 2004
This paper describes the application of nonlinear principal component analysis (NLPCA) for the de... more This paper describes the application of nonlinear principal component analysis (NLPCA) for the detection of air leaks in the intake manifold of internal combustion engines. Such faults, if undetected, can lead to inefficient engine operation, result in expensive repairs and cause an increased level of exhaust emissions. Using a VW 1.9L TDI diesel engine, several data sets were recorded that describe nonnal steady-state behaviour and the influence of air leaks at various speed and pedal positions. These data sets were then analysed using an NLPCA based monitoring scheme. It was found that even small air leaks could be detected.
2017 Chinese Automation Congress (CAC), 2017
Monitoring of dynamic industrial process has been increasingly important due to more and more str... more Monitoring of dynamic industrial process has been increasingly important due to more and more strict safety and reliability requirements. Popular methods like time lagged arrangement-based and subspace-based approaches exhibit good performance in fault detection, however, they suffer from difficulty in accurately isolating faulty variables and diagnosing fault types. To alleviate this difficulty, this article considers a state space model whose joint probability is decomposed hierarchically into the multiplication of several conditional densities and a low dimensional density. Two nonparametric kernel density estimation methods are used to estimate these decomposed densities. By analyzing which density exceeds the confidence limit, information on faulty variables and fault types can be obtained. Application study to simulation examples show that the proposed method is more efficient in isolating and diagnosing process fault than competitive methods.
Introduction into Multivariate Statistical Process Control (MSPC)
15th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, 2019
Over the past few years, deep neural networks have made significant processes in denoising low-do... more Over the past few years, deep neural networks have made significant processes in denoising low-dose CT images. A trained denoising network, however, may not generalize very well to different dose levels, which follows from the dose-dependent noise distribution. To address this practically, a trained network requires re-training to be applied to a new dose level, which limits the generalization abilities of deep neural networks for clinical applications. This article introduces a deep learning approach that does not require re-training and relies on a transfer learning strategy. More precisely, the transfer learning framework utilizes a progressive denoising model, where an elementary neural network serves as a basic denoising unit. The basic units are then cascaded to successively process towards a denoising task; i.e. the output of one network unit is the input to the next basic unit. The denoised image is then a linear combination of outputs of the individual network units. To demonstrate the application of this transfer learning approach, a basic CNN unit is trained using the Mayo low- dose CT dataset. Then, the linear parameters of the successive denoising units are trained using a different image dataset, i.e. the MGH low-dose CT dataset, containing CT images that were acquired at four different dose levels. Compared to a commercial iterative reconstruction approach, the transfer learning framework produced a substantially better denoising performance.
There are a number of excellent tools for multivariable and nonlinear classification and regressi... more There are a number of excellent tools for multivariable and nonlinear classification and regression tasks that are commonly employed in chemical process control and related disciplines such as chemometrics. However, these tools have not made a broad impact in biomedical and human health applications, where the majority of researchers resort to traditional, linear, univariate statistics to attempt to describe very complex phenomena. In complicated diseases such as autism spectrum disorder, it is imperative that researchers look beyond these traditional techniques and embrace more appropriate statistical techniques in order to better describe and report their findings. Furthermore, the disparity in findings between research groups encourages the use of validation procedures such as cross-validation to better ensure that results generalize to new data sets. This work showcases the use of partial least squares and Fisher discriminant analysis, as well as their kernel counterparts, for b...
Nature Machine Intelligence, 2019
Commercial iterative reconstruction techniques help to reduce the radiation dose of computed tomo... more Commercial iterative reconstruction techniques help to reduce the radiation dose of computed tomography (CT), but altered image appearance and artefacts can limit their adoptability and potential use. Deep learning has been investigated for low-dose CT (LDCT). Here, we design a modularized neural network for LDCT and compare it with commercial iterative reconstruction methods from three leading CT vendors. Although popular networks are trained for an end-to-end mapping, our network performs an end-to-process mapping so that intermediate denoised images are obtained with associated noise reduction directions towards a final denoised image. The learned workflow allows radiologists in the loop to optimize the denoising depth in a task-specific fashion. Our network was trained with the Mayo LDCT Dataset and tested on separate chest and abdominal CT exams from Massachusetts General Hospital. The best deep learning reconstructions were systematically compared to the best iterative reconst...
International Journal of Computer Assisted Radiology and Surgery, 2021
Purpose Severity scoring is a key step in managing patients with COVID-19 pneumonia. However, man... more Purpose Severity scoring is a key step in managing patients with COVID-19 pneumonia. However, manual quantitative analysis by radiologists is a time-consuming task, while qualitative evaluation may be fast but highly subjective. This study aims to develop artificial intelligence (AI)-based methods to quantify disease severity and predict COVID-19 patient outcome. Methods We develop an AI-based framework that employs deep neural networks to efficiently segment lung lobes and pulmonary opacities. The volume ratio of pulmonary opacities inside each lung lobe gives the severity scores of the lobes, which are then used to predict ICU admission and mortality with three different machine learning methods. The developed methods were evaluated on datasets from two hospitals (site A: Firoozgar Hospital, Iran, 105 patients; site B: Massachusetts General Hospital, USA, 88 patients). Results AI-based severity scores are strongly associated with those evaluated by radiologists (Spearman's rank correlation 0.837, p < 0.001). Using AI-based scores produced significantly higher (p < 0.05) area under the ROC curve (AUC) values. The developed AI method achieved the best performance of AUC = 0.813 (95% CI [0.729, 0.886]) in predicting ICU admission and AUC = 0.741 (95% CI [0.640, 0.837]) in mortality estimation on the two datasets. Conclusions Accurate severity scores can be obtained using the developed AI methods over chest CT images. The computed severity scores achieved better performance than radiologists in predicting COVID-19 patient outcome by consistently quantifying image features. Such developed techniques of severity assessment may be extended to other lung diseases beyond the current pandemic.
Visual Computing for Industry, Biomedicine, and Art, 2020
One example of an artificial intelligence ethical dilemma is the autonomous vehicle situation pre... more One example of an artificial intelligence ethical dilemma is the autonomous vehicle situation presented by Massachusetts Institute of Technology researchers in the Moral Machine Experiment. To solve such dilemmas, the MIT researchers used a classic statistical method known as the hierarchical Bayesian (HB) model. This paper builds upon previous work for modeling moral decision making, applies a deep learning method to learn human ethics in this context, and compares it to the HB approach. These methods were tested to predict moral decisions of simulated populations of Moral Machine participants. Overall, test results indicate that deep neural networks can be effective in learning the group morality of a population through observation, and outperform the Bayesian model in the cases of model mismatches.
Journal of Surgical Research, 2020
Background: Discriminating performance of learners with varying experience is essential to develo... more Background: Discriminating performance of learners with varying experience is essential to developing and validating a surgical simulator. For rare and emergent procedures such as cricothyrotomy (CCT), the criteria to establish such groups are unclear. This study is to investigate the impact of surgeons' actual CCT experience on their VR simulator performance and to determine the minimum number of actual CCTs that significantly discriminates simulator scores. Our hypothesis is that surgeons that performed more actual CCT cases would perform better on a VR CCT simulator.
Machine Vision and Applications, 2020
The establishment of image correspondence through robust image registration is critical to many c... more The establishment of image correspondence through robust image registration is critical to many clinical tasks such as image fusion, organ atlas creation, and tumor growth monitoring, and is a very challenging problem. Since the beginning of the recent deep learning renaissance, the medical imaging research community has developed deep learning based approaches and achieved the stateof-the-art in many applications, including image registration. The rapid adoption of deep learning for image registration applications over the past few years necessitates a comprehensive summary and outlook, which is the main scope of this survey. This requires placing a focus on the different research areas as well as highlighting challenges that practitioners face. This survey, therefore, outlines the evolution of deep learning based medical image registration in the context of both research challenges and relevant innovations in the past few years. Further, this survey highlights future research directions to show how this field may be possibly moved forward to the next level.
Journal of Biomechanics, 2017
Osteoarthritis (OA) is a degenerative joint disease resulting in the deterioration of articular c... more Osteoarthritis (OA) is a degenerative joint disease resulting in the deterioration of articular cartilage, a tissue with minimal ability to self-repair. Early diagnosis of OA with non-invasive imaging techniques such as magnetic resonance imaging (MRI) could provide an opportunity to intervene and slow or reverse this degeneration process. This study examines the classification of degradation states using MRI measurements. Enzymatic degradation was used to specifically target proteoglycans alone, collagen alone and both cartilage components sequentially. The resulting degradation was evaluated using MRI imaging techniques (T 1 , T 2 , diffusion tensor imaging, and gadolinium enhanced T 1) and derived measures of water, glycosaminoglycan and collagen content. We compared the classification ability of full thickness averages of these parameters with zonal averages (superficial, medial, and deep). Finally, we determined minimum variables sets to identify the smallest number of variables that allowed for complete separation of all degradation groups and ranked them by impact on the separation. Zonal analysis was much more sensitive than full thickness averages and allowed perfect separation of all four groups. Superficial zone cartilage was more sensitive to enzymatic degradation than the medial or deep zone, or the full thickness average. Variable ranking consistently identified collagen content and organization as the most impactful variables in the classification algorithm. The aim of this study is to classify cartilage degradation using only non-invasive MRI parameters that could be applied to OA diagnosis. Our results highlight the importance of zonal variation in the diagnosis of cartilage degeneration. Our novel, non-invasive collagen content measurement was crucial for complete separation of degraded groups from control cartilage. 
These findings have significant implications for clinical cartilage MRI for disease diagnosis.
IEEE Transactions on Cybernetics, 2018
This paper introduces a framework to deal with the distribution of descriptive features, which preserves the advantages of the vectorial representation and computational efficiency of histogram-based techniques, and inherits the rigorous theoretical guarantees and competitive performance of metric-based ones. The methods developed under this framework describe the underlying distribution of a set of features as a vectorial feature by utilizing random features. Moreover, the proposed methods asymptotically converge to metric-based methods in terms of similarity and distance and, depending on the specific kernel function, reduce to histogram-based methods. The experimental results show a performance on categorization tasks comparable to conventional metric-based methods at a significantly reduced computational cost.
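The random-feature construction this abstract alludes to can be sketched as follows. This is a minimal illustration, assuming an RBF kernel approximated by random Fourier features and a feature set summarized by its mean embedding; the dimensions, bandwidth, and tolerance are chosen for the demo and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 3, 500, 1.0  # input dim, number of random features, RBF bandwidth

# Random Fourier features z(x) with z(x)·z(y) ≈ exp(-||x - y||² / (2σ²))
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def z(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

def set_feature(X):
    # Vectorial descriptor of a feature *set*: the mean of its random features.
    return np.mean([z(x) for x in X], axis=0)

X = rng.normal(size=(100, d))
Y = rng.normal(size=(120, d))

# Inner product of the two set descriptors approximates the mean pairwise
# kernel similarity between the sets, at cost O(D) instead of O(|X||Y|).
approx = set_feature(X) @ set_feature(Y)
exact = np.mean([np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma**2))
                 for x in X for y in Y])
```

The descriptor is a fixed-length vector, so downstream categorization can use ordinary linear methods on it, while the inner product converges to the kernel-based set similarity as D grows.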
Frontiers in Cellular Neuroscience, 2018
Incorporating treatment, including placebo, into the regression analysis yields an R² of 0.471 after cross-validation when using changes in six metabolic measurements as predictors. These results are suggestive of an ability to effectively improve pathway-wide FOCM/TS metabolic and behavioral abnormalities in ASD with clinical treatment.
IFAC Proceedings Volumes, 2003
IEEE Transactions on Cybernetics, Jan 24, 2015
Space partitioning trees, which sequentially divide and subdivide a space into disjoint subsets using splitting hyperplanes, play a key role in accelerating the query of samples in the cybernetics and computer vision domains. Associated methods, however, suffer from the curse of dimensionality or stringent assumptions on the data distribution. This paper presents a new concept, termed kernel dimension reduction-tree (KDR-tree), that relies on linear projections computed based on an unsupervised kernel dimension reduction approach. The proposed concept does not rely on any assumption on the data distribution and can capture higher-order statistical information encapsulated within the data. This paper then develops two variants of the KDR-tree concept: 1) to handle residual data [i.e., the residual-based KDR-tree (rKDR-tree) algorithm] and 2) to cope with larger datasets [i.e., the sampling-based KDR-tree (sKDR-tree) algorithm]. By directly comparing the KDR-tree concept to competiti...
npj Science of Learning, 2021
Online education is important in the COVID-19 pandemic, but online exams taken at individual homes invite students to cheat in various ways, especially collusion. While physical proctoring is impossible during social distancing, online proctoring is costly, compromises privacy, and can still leave collusion prevalent. Here we develop an optimization-based anti-collusion approach for distanced online testing (DOT) by minimizing the collusion gain, which can be coupled with other techniques for cheating prevention. With prior knowledge of student competences, our DOT technology optimizes sequences of questions and assigns them to students in synchronized time slots, reducing the collusion gain by 2–3 orders of magnitude relative to the conventional exam in which students receive their common questions simultaneously. Our DOT theory allows control of the collusion gain to a sufficiently low level. Our recent final exam in the DOT format has been successful, as evidenced by statistical tests and...
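The synchronized-slot idea can be illustrated with a toy schedule. This is a hypothetical sketch, not the paper's optimization: the actual DOT method additionally uses student-competence priors to minimize the collusion gain, whereas a plain cyclic Latin square only guarantees that no two students hold the same question in the same time slot.

```python
# schedule[student][slot] = question index; a cyclic Latin square ensures
# every slot assigns a distinct question to each student, so sharing an
# answer in real time never helps a peer on their current question.
def latin_square(n):
    return [[(student + slot) % n for slot in range(n)]
            for student in range(n)]

schedule = latin_square(4)
for slot in range(4):
    questions_in_slot = {schedule[s][slot] for s in range(4)}
    assert len(questions_in_slot) == 4  # all different in this slot
```

Each row is also a permutation of all questions, so every student still answers the complete exam, just in a different order.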
Optics in the Life Sciences Congress, 2017
2015 41st Annual Northeast Biomedical Engineering Conference (NEBEC), 2015
An important objective in stem cell research is controlling differentiation of pluripotent stem cells to a desired fate. Previous research in this area has focused on directing differentiation by manipulating morphogens and substrate material/mechanics. However, cell-cell signaling, whether by contact or paracrine signaling, also influences differentiation. One promising direction by which cell-cell signaling can be manipulated is through cellular patterning. Using mouse embryonic stem cells (mESCs) as a model system, this study investigates patterning mESCs in colonies of controlled size and spacing, to examine the effect of patterning on differentiation. Laser direct-write was used to pattern mESCs in an array of small colonies, and cells were permitted to spontaneously differentiate for five days. Expression levels of seven select genes were compared to those of randomly seeded mESCs, and mESCs from conventional hanging-drop culture. Analysis of variance showed significant differences in some genes examined, including mesoderm and ectoderm markers, indicating that the initial spatial arrangement of cells influences differentiation of pluripotent stem cells. A multivariate linear discriminant analysis was used to classify input populations, and suggested how genes may be affected by spatial patterning.
IEEE Transactions on Biomedical Engineering, 2020
Currently, there is a dearth of objective metrics for assessing bi-manual motor skills, which are critical for high-stakes professions such as surgery. Recently, functional near-infrared spectroscopy (fNIRS) has been shown to be effective at classifying motor task types, which can potentially be used for assessing motor performance level. In this work, we use fNIRS data to predict the performance scores in a standardized bi-manual motor task used in surgical certification, and propose a deep-learning framework, 'Brain-NET', to extract features from the fNIRS data. Our results demonstrate that Brain-NET is able to predict bi-manual surgical motor skills from neuroimaging data accurately (R² = 0.73). Furthermore, the classification ability of the Brain-NET model is demonstrated by receiver operating characteristic (ROC) curves and area under the curve (AUC) values of 0.91. Hence, these results establish that fNIRS combined with deep-learning analysis is a promising method for a quick, cost-effective bedside assessment of bimanual skill levels.
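The two figures of merit quoted here, R² for score regression and AUC for classification, can be computed from model predictions as follows. This is a generic sketch with made-up numbers, not the study's data or model.

```python
import numpy as np

def r_squared(y_true, y_pred):
    # Coefficient of determination: 1 - residual SS / total SS.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def auc(labels, scores):
    # AUC = probability a random positive is scored above a random negative,
    # counting ties as half.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(round(r_squared(y_true, y_pred), 3))  # 0.98

labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
print(auc(labels, scores))  # 0.75
```

R² near 1 means the predicted scores track the true scores closely; AUC near 1 means the classifier ranks skilled performers above unskilled ones almost always.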
Research in Autism Spectrum Disorders, 2018
Compliance with Ethical Standards: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study. There have been no changes to author affiliation subsequent to the time of the study.