Gunnar Rätsch | Memorial Sloan-Kettering Cancer Center
Address: New York City
Papers by Gunnar Rätsch
Cornell University - arXiv, Jul 9, 2019
Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years. This raises the question whether deep learning methodologies can outperform classical data imputation methods in this domain. However, naïve applications of deep learning fall short in giving reliable confidence estimates and lack interpretability. We propose a new deep sequential latent variable model for dimensionality reduction and data imputation. Our modeling assumption is simple and interpretable: the high dimensional time series has a lower-dimensional representation which evolves smoothly in time according to a Gaussian process. The non-linear dimensionality reduction in the presence of missing data is achieved using a VAE approach with a novel structured variational approximation. We demonstrate that our approach outperforms several classical and deep learning-based data imputation methods on high-dimensional data from the domains of computer vision and healthcare, while additionally improving the smoothness of the imputations and providing interpretable uncertainty estimates.
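The abstract's central modeling assumption — a low-dimensional latent trajectory that evolves smoothly in time under a Gaussian process prior and is then decoded to the observed space — can be sketched in a few lines of NumPy. The dimensions, RBF kernel, and linear decoder below are illustrative choices, not the paper's actual architecture:

```python
import numpy as np

def rbf_kernel(ts, length_scale=1.0):
    # Squared-exponential covariance: nearby time points are highly
    # correlated, which is what makes the sampled trajectories smooth.
    diffs = ts[:, None] - ts[None, :]
    return np.exp(-0.5 * (diffs / length_scale) ** 2)

rng = np.random.default_rng(0)
T, latent_dim, data_dim = 50, 2, 10   # hypothetical sizes

# Draw each latent dimension as one GP sample over the time grid.
ts = np.linspace(0.0, 5.0, T)
K = rbf_kernel(ts) + 1e-6 * np.eye(T)          # jitter for a stable Cholesky
z = np.linalg.cholesky(K) @ rng.standard_normal((T, latent_dim))

# A decoder (linear here, purely for illustration) lifts the smooth
# latent trajectory back to the high-dimensional observation space.
W_dec = rng.standard_normal((latent_dim, data_dim))
x = z @ W_dec

print(z.shape, x.shape)   # (50, 2) (50, 10)
```

In the paper's setting the decoder is a neural network and the latent trajectory is inferred from partially observed data via the structured variational approximation; this sketch only shows the generative assumption that makes the imputations smooth.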
Cornell University - arXiv, Sep 28, 2019
Metagenomic studies have increasingly utilized sequencing technologies in order to analyze DNA fragments found in environmental samples. One important step in this analysis is the taxonomic classification of the DNA fragments. Conventional read classification methods require large databases and vast amounts of memory to run, with recent deep learning methods suffering from very large model sizes. We therefore aim to develop a more memory-efficient technique for taxonomic classification. A task of particular interest is abundance estimation in metagenomic samples. Current attempts rely on classifying single DNA reads independently from each other and are therefore agnostic to co-occurrence patterns between taxa. In this work, we also attempt to take these patterns into account. We develop a novel memory-efficient read classification technique, combining deep learning and locality-sensitive hashing. We show that this approach outperforms conventional mapping-based and other deep learning methods for single-read taxonomic classification when restricting all methods to a fixed memory footprint. Moreover, we formulate the task of abundance estimation as a Multiple Instance Learning (MIL) problem and we extend current deep learning architectures with two different types of permutation-invariant MIL pooling layers: a) deepsets and b) attention-based pooling. We illustrate that our architectures can exploit the co-occurrence of species in metagenomic read sets and outperform the single-read architectures in predicting the distribution over taxa at higher taxonomic ranks.
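The two permutation-invariant MIL pooling layers named in the abstract — deepsets-style and attention-based pooling — reduce a set of per-read embeddings to a single bag representation regardless of read order. A minimal NumPy sketch (the embedding sizes and the single-vector attention scorer are hypothetical simplifications of the learned networks):

```python
import numpy as np

def deepsets_pool(reads):
    # Deep-sets pooling: an order-agnostic sum over per-read embeddings.
    return reads.sum(axis=0)

def attention_pool(reads, w):
    # Attention pooling: softmax-normalized scores weight each embedding;
    # the weighted sum is still invariant to read order.
    scores = reads @ w
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ reads

rng = np.random.default_rng(1)
reads = rng.standard_normal((8, 4))   # 8 read embeddings of dimension 4
w = rng.standard_normal(4)            # toy attention scorer
perm = rng.permutation(8)

# Shuffling the reads leaves both pooled representations unchanged,
# which is exactly the property a MIL pooling layer must have.
print(np.allclose(deepsets_pool(reads), deepsets_pool(reads[perm])))          # True
print(np.allclose(attention_pool(reads, w), attention_pool(reads[perm], w)))  # True
```

Permutation invariance is what lets the bag-level model see all reads in a sample jointly — and hence exploit taxon co-occurrence — without committing to any ordering of the reads.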
人工知能学会論文誌 (Transactions of the Japanese Society for Artificial Intelligence), Nov 1, 2001
Empirical Inference, 2013
We present an overview of the field of regularization-based multi-task learning, which is a relatively recent offshoot of statistical machine learning. We discuss the foundations as well as some of the recent advances of the field, including strategies for learning or refining the measure of task relatedness. We present an example from the application domain of Computational Biology, where multi-task learning has been successfully applied, and give some practical guidelines for assessing a priori, for a given dataset, whether or not multi-task learning is likely to pay off.
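A minimal instance of regularization-based multi-task learning is joint ridge regression in which each task's weight vector is additionally pulled toward the across-task mean — one simple way to encode task relatedness. The NumPy sketch below (synthetic data, hypothetical hyperparameters, not taken from the survey) fits three related tasks by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(2)
n_tasks, n, d = 3, 40, 5
w_shared = rng.standard_normal(d)

# Related tasks: each task's true weights are a small perturbation
# of a shared vector.
tasks = []
for _ in range(n_tasks):
    X = rng.standard_normal((n, d))
    y = X @ (w_shared + 0.1 * rng.standard_normal(d))
    tasks.append((X, y))

W = np.zeros((n_tasks, d))
lam, mu, lr = 0.01, 0.5, 0.05   # hypothetical hyperparameters
for _ in range(1000):
    mean_w = W.mean(axis=0)
    for t, (X, y) in enumerate(tasks):
        grad = X.T @ (X @ W[t] - y) / n   # squared-error loss
        grad += lam * W[t]                # ridge penalty
        grad += mu * (W[t] - mean_w)      # relatedness: pull toward the mean
        W[t] -= lr * grad

# Each per-task estimate lands near the shared weight vector.
print(np.max(np.abs(W - w_shared)))
```

The `mu` term is the task-relatedness regularizer; setting `mu = 0` recovers independent ridge regressions, and richer variants replace the uniform mean with a learned relatedness structure, which is one of the strategies the overview discusses.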