Gunnar Rätsch | Memorial Sloan-Kettering Cancer Center

Gunnar Rätsch

Address: New York City

Papers by Gunnar Rätsch

GP-VAE: Deep Probabilistic Time Series Imputation

arXiv, Jul 9, 2019

Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years. This raises the question whether deep learning methodologies can outperform classical data imputation methods in this domain. However, naïve applications of deep learning fall short in giving reliable confidence estimates and lack interpretability. We propose a new deep sequential latent variable model for dimensionality reduction and data imputation. Our modeling assumption is simple and interpretable: the high dimensional time series has a lower-dimensional representation which evolves smoothly in time according to a Gaussian process. The non-linear dimensionality reduction in the presence of missing data is achieved using a VAE approach with a novel structured variational approximation. We demonstrate that our approach outperforms several classical and deep learning-based data imputation methods on high-dimensional data from the domains of computer vision and healthcare, while additionally improving the smoothness of the imputations and providing interpretable uncertainty estimates.
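The core modeling idea, a VAE whose latent trajectory is tied together across time by a Gaussian process prior, can be sketched in a few lines. The toy below is not the paper's implementation (which uses a structured, non-diagonal variational posterior and convolutional encoders); it assumes a mean-field Gaussian posterior per time step, an RBF kernel over time indices, and hypothetical dimensions, purely to illustrate how the GP prior enters the ELBO.

```python
import torch
import torch.nn as nn
from torch.distributions import MultivariateNormal, kl_divergence

def rbf_kernel(T, length_scale=5.0, variance=1.0, jitter=1e-4):
    # Squared-exponential covariance over the time indices 0..T-1.
    t = torch.arange(T, dtype=torch.float32).unsqueeze(1)
    K = variance * torch.exp(-0.5 * (t - t.T) ** 2 / length_scale ** 2)
    return K + jitter * torch.eye(T)

class ToyGPVAE(nn.Module):
    # Per-time-step MLP encoder/decoder with a GP prior tying each latent
    # dimension together across time (a simplification of the paper's model).
    def __init__(self, x_dim, z_dim, T):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
        self.register_buffer("K", rbf_kernel(T))  # (T, T) prior covariance in time

    def elbo(self, x, mask):
        # x: (B, T, x_dim); mask is 1 where observed, 0 where missing.
        mu, log_var = self.enc(x * mask).chunk(2, dim=-1)      # (B, T, z_dim) each
        var = log_var.exp()
        z = mu + var.sqrt() * torch.randn_like(mu)             # reparameterised sample
        x_hat = self.dec(z)

        # Gaussian reconstruction term, counted on observed entries only.
        rec = -0.5 * ((x_hat - x) ** 2 * mask).sum(dim=(1, 2))

        # KL(q(z_d | x) || GP(0, K)) per latent dimension d; here q factorises
        # over time, unlike the structured posterior described in the abstract.
        q = MultivariateNormal(mu.transpose(1, 2),
                               covariance_matrix=torch.diag_embed(var.transpose(1, 2)))
        prior = MultivariateNormal(torch.zeros(self.K.shape[0]), covariance_matrix=self.K)
        kl = kl_divergence(q, prior).sum(dim=-1)               # (B,)
        return (rec - kl).mean()

# Hypothetical usage: train by maximising the ELBO, then impute missing
# entries by decoding the posterior mean.
model = ToyGPVAE(x_dim=10, z_dim=4, T=50)
x = torch.randn(8, 50, 10)
mask = (torch.rand_like(x) > 0.3).float()
loss = -model.elbo(x, mask)
loss.backward()
```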

META²: Memory-efficient taxonomic classification and abundance estimation for metagenomics with deep learning

arXiv, Sep 28, 2019

Metagenomic studies have increasingly utilized sequencing technologies in order to analyze DNA fragments found in environmental samples. One important step in this analysis is the taxonomic classification of the DNA fragments. Conventional read classification methods require large databases and vast amounts of memory to run, with recent deep learning methods suffering from very large model sizes. We therefore aim to develop a more memory-efficient technique for taxonomic classification. A task of particular interest is abundance estimation in metagenomic samples. Current attempts rely on classifying single DNA reads independently from each other and are therefore agnostic to co-occurrence patterns between taxa. In this work, we also attempt to take these patterns into account. We develop a novel memory-efficient read classification technique, combining deep learning and locality-sensitive hashing. We show that this approach outperforms conventional mapping-based and other deep learning methods for single-read taxonomic classification when restricting all methods to a fixed memory footprint. Moreover, we formulate the task of abundance estimation as a Multiple Instance Learning (MIL) problem and we extend current deep learning architectures with two different types of permutation-invariant MIL pooling layers: a) deepsets and b) attention-based pooling. We illustrate that our architectures can exploit the co-occurrence of species in metagenomic read sets and outperform the single-read architectures in predicting the distribution over taxa at higher taxonomic ranks.
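The two permutation-invariant MIL pooling layers mentioned in the abstract (deepsets-style and attention-based) are simple to write down. The sketch below assumes per-read embeddings have already been produced by some upstream encoder; the class names, dimensions, and data are placeholders for illustration, not the META² architecture itself.

```python
import torch
import torch.nn as nn

class DeepSetsPool(nn.Module):
    # Deep-sets-style bag classifier: embed each read, average, then map the
    # pooled representation to per-taxon logits. Averaging is order-invariant.
    def __init__(self, d_read, d_hidden, n_taxa):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_read, d_hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, n_taxa))

    def forward(self, reads):                         # reads: (B, n_reads, d_read)
        return self.rho(self.phi(reads).mean(dim=1))  # (B, n_taxa)

class AttentionPool(nn.Module):
    # Attention-based MIL pooling: a learned score weights each read's
    # embedding in the bag-level summary (still permutation-invariant).
    def __init__(self, d_read, d_hidden, n_taxa):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(d_read, d_hidden), nn.Tanh())
        self.score = nn.Linear(d_hidden, 1)
        self.head = nn.Linear(d_hidden, n_taxa)

    def forward(self, reads):                         # reads: (B, n_reads, d_read)
        h = self.embed(reads)                         # (B, n_reads, d_hidden)
        a = torch.softmax(self.score(h), dim=1)       # attention weights over reads
        return self.head((a * h).sum(dim=1))          # (B, n_taxa)

# Hypothetical usage: random stand-ins for read embeddings; a softmax over the
# logits gives an estimated distribution over taxa for each bag of reads.
reads = torch.randn(4, 256, 64)
abundance = torch.softmax(DeepSetsPool(64, 128, 20)(reads), dim=-1)
```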

An Arcing Algorithm with Intuitive Learning Control Parameters (直観的な学習制御パラメータを有する Arcing アルゴリズム)

Transactions of the Japanese Society for Artificial Intelligence (人工知能学会論文誌), Nov 1, 2001

Schölkopf

A. J. (1999). Input Space Versus Feature Space in Kernel-Based Methods

Systematic evaluation of spliced aligners for RNA-seq data

Mika, S., Rätsch, G., et al. An introduction to kernel-based learning algorithms

Boosting, Margins and Beyond

Advanced Methods for Sequence Analysis in Bioinformatics

Müller. Soft Margins for AdaBoost

An Asymptotic Analysis and Improvement of AdaBoost in the Binary Classification Case

Kernel Fisher discriminant analysis

mTiM: margin-based transcript mapping from RNA-seq

Multi-task Learning for Computational Biology: Overview and Outlook

Empirical Inference, 2013

We present an overview of the field of regularization-based multi-task learning, which is a relatively recent offshoot of statistical machine learning. We discuss the foundations as well as some of the recent advances of the field, including strategies for learning or refining the measure of task relatedness. We present an example from the application domain of Computational Biology, where multi-task learning has been successfully applied, and give some practical guidelines for assessing a priori, for a given dataset, whether or not multi-task learning is likely to pay off.
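As a minimal illustration of the regularization-based formulation, the sketch below fits per-task linear models whose weights are encouraged to stay close to a shared vector; the penalty strength plays the role of an assumed task-relatedness measure. The function names and synthetic data are hypothetical and not tied to any specific method described in the chapter.

```python
import torch

def mean_regularized_mtl(Xs, ys, lam_shared=1e-2, lam_dev=1.0, steps=500, lr=0.05):
    # Fit w_t = w0 + v_t per task, penalising how far each task deviates from
    # the shared weight vector w0; lam_dev encodes assumed task relatedness.
    d = Xs[0].shape[1]
    w0 = torch.zeros(d, requires_grad=True)
    vs = [torch.zeros(d, requires_grad=True) for _ in Xs]
    opt = torch.optim.Adam([w0, *vs], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = lam_shared * w0.pow(2).sum()
        for X, y, v in zip(Xs, ys, vs):
            loss = loss + ((X @ (w0 + v) - y) ** 2).mean() + lam_dev * v.pow(2).sum()
        loss.backward()
        opt.step()
    return w0.detach(), [v.detach() for v in vs]

# Two synthetic tasks that share most of their true weights.
torch.manual_seed(0)
w_true = torch.randn(5)
Xs = [torch.randn(40, 5), torch.randn(40, 5)]
ys = [X @ (w_true + 0.1 * torch.randn(5)) for X in Xs]
w0, vs = mean_regularized_mtl(Xs, ys)
```

Whether such coupling helps depends on how related the tasks actually are, which is exactly the a priori assessment the chapter discusses.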

Schölkopf, B., and Rätsch, G. (2008). Support vector machines and kernels for computational biology

Machine Learning Open Source Software, 2007

Rätsch, G. (2008). Support vector machines and kernels for computational biology

Ida, and G. Rätsch. Large scale learning with string kernels

Schölkopf, B., Rätsch, G., and Weigel, D. (2008). At-TAX: a whole genome tiling array resource for developmental expression analysis and transcript identification in Arabidopsis thaliana

NIPS Workshop on Machine Learning Open Source Software
