Ozan Öktem - Academia.edu

Papers by Ozan Öktem

Deep learning-based segmentation of multisite disease in ovarian cancer

European Radiology Experimental, Dec 6, 2023

Neural incomplete factorization: learning preconditioners for the conjugate gradient method

arXiv (Cornell University), May 25, 2023

Finding suitable preconditioners to accelerate iterative solution methods, such as the conjugate gradient method, is an active area of research. In this paper, we develop a computationally efficient data-driven approach to replace the typically hand-engineered algorithms with neural networks. Optimizing the condition number of the linear system directly is computationally infeasible. Instead, our method generates an incomplete factorization of the matrix and is therefore referred to as neural incomplete factorization (NeuralIF). For efficient training, we utilize a stochastic approximation of the Frobenius loss which only requires matrix-vector multiplications. At the core of our method is a novel message-passing block, inspired by sparse matrix theory, that aligns with the objective of finding a sparse factorization of the matrix. By replacing the conventional preconditioners used within the conjugate gradient method with data-driven models based on graph neural networks, we accelerate the iterative solving procedure. We evaluate our proposed method on both a synthetic and a real-world problem arising from scientific computing and show its ability to reduce the solving time while remaining computationally efficient.
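The stochastic Frobenius approximation mentioned in the abstract can be sketched in a few lines; the function name and probe count below are illustrative, not the paper's API. For a candidate triangular factor L, the squared Frobenius norm of the residual A - L·Lᵀ is estimated Hutchinson-style, using only matrix-vector products:

```python
import numpy as np

def stochastic_frobenius_loss(A, L, num_probes=8, rng=None):
    # Hutchinson-style estimate of ||A - L @ L.T||_F^2 using only
    # matrix-vector products: E_z ||(A - L L^T) z||^2 for z ~ N(0, I).
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    total = 0.0
    for _ in range(num_probes):
        z = rng.standard_normal(n)
        r = A @ z - L @ (L.T @ z)   # two matvecs; L L^T is never formed
        total += r @ r
    return total / num_probes
```

The estimator is unbiased because E‖Mz‖² = ‖M‖²_F for standard normal z, which is what makes training feasible for large sparse systems.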

Publications, etc. 2008

Deep Learning for Material Decomposition in Photon-Counting CT

arXiv (Cornell University), Aug 5, 2022

Photon-counting CT (PCCT) offers improved diagnostic performance through better spatial and energy resolution, but developing high-quality image reconstruction methods that can deal with these large datasets is challenging. Model-based solutions incorporate models of the physical acquisition in order to reconstruct more accurate images, but are dependent on an accurate forward operator and present difficulties with finding good regularization. Another approach is deep-learning reconstruction, which has shown great promise in CT. However, fully data-driven solutions typically need large amounts of training data and lack interpretability. To combine the benefits of both methods while minimizing their respective drawbacks, it is desirable to develop reconstruction algorithms that combine model-based and data-driven approaches. In this work, we present a novel deep-learning solution for material decomposition in PCCT, based on an unrolled/unfolded iterative network. We evaluate two cases: a learned post-processing, which implicitly utilizes model knowledge, and a learned gradient-descent, which has explicit model-based components in the architecture. With our proposed techniques, we solve a challenging PCCT simulation case: three-material decomposition in abdomen imaging with low dose, iodine contrast, and a very small training sample support. In this scenario, our approach outperforms a maximum likelihood estimation, a variational method, as well as a fully-learned network. Index Terms: Deep learning, photon-counting CT, unrolled gradient-descent, ill-conditioned inverse problems, three-material decomposition.
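The "learned gradient-descent" idea in the abstract can be illustrated with a minimal sketch; shapes and the form of the learned component are assumptions here. Each unrolled iteration applies the explicit data-fidelity gradient, and the learned part (a per-iteration step size below, standing in for a CNN in the actual architecture) controls the update:

```python
import numpy as np

def unrolled_gradient_descent(A, y, theta, num_iters=5):
    # Sketch of an unrolled learned gradient-descent scheme: each of the
    # num_iters iterations uses the model-based gradient A^T (A x - y),
    # while theta[k] is a learned parameter (a scalar here; a network in
    # the paper's architecture) that shapes the update.
    x = np.zeros(A.shape[1])
    for k in range(num_iters):
        grad = A.T @ (A @ x - y)     # gradient of 0.5 * ||A x - y||^2
        x = x - theta[k] * grad      # learned update replaces a fixed step
    return x
```

Training would fit theta (or the network it stands in for) end-to-end against reference decompositions; the unrolling depth is fixed at training time.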

3D helical CT reconstruction with memory efficient invertible Learned Primal-Dual method

arXiv (Cornell University), May 24, 2022

Deep learning based computed tomography (CT) reconstruction has demonstrated outstanding performance on simulated 2D low-dose CT data. This applies in particular to domain-adapted neural networks, which incorporate a handcrafted physics model for CT imaging. Empirical evidence shows that employing such architectures reduces the demand for training data and improves generalisation. However, their training requires large computational resources that quickly become prohibitive in 3D helical CT, which is the most common acquisition geometry used for medical imaging. Furthermore, clinical data comes with additional challenges not accounted for in simulations, such as errors in flux measurement, resolution mismatch and, most importantly, the absence of real ground truth. The need for computationally feasible training, combined with the need to address these issues, has made it difficult to evaluate deep learning based reconstruction on clinical 3D helical CT. This paper modifies a domain-adapted neural network architecture, the Learned Primal-Dual (LPD), so that it can be trained and applied to reconstruction in this setting. We achieve this by splitting the helical trajectory into sections and applying the unrolled LPD iterations to those sections sequentially. To the best of our knowledge, this work is the first to apply an unrolled deep learning architecture for reconstruction on full-sized clinical data, such as those in the Low Dose CT Image and Projection Data set (LDCT). Moreover, training and testing are done on a single GPU card with 24 GB of memory.
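The section-wise strategy described above can be sketched as follows; the helper names and data shapes are illustrative, and `recon_step` stands in for one unrolled LPD pass. The key point is that the volume estimate is carried from one trajectory section to the next, so only one section's projections need to be in memory at a time:

```python
import numpy as np

def reconstruct_by_sections(projections, num_sections, recon_step):
    # Split the projection data along the helical trajectory into
    # contiguous sections and apply the unrolled iterations to each
    # section in turn, carrying the volume estimate between sections.
    volume = None
    for section in np.array_split(projections, num_sections, axis=0):
        volume = recon_step(volume, section)  # one unrolled LPD pass (stand-in)
    return volume
```

This sequential design is what bounds peak GPU memory by a single section rather than the full helical scan.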

Calibrating Ensembles for Scalable Uncertainty Quantification in Deep Learning-based Medical Segmentation

arXiv (Cornell University), Sep 20, 2022

Uncertainty quantification in automated image analysis is highly desired in many applications. Typically, machine learning models in classification or segmentation are only developed to provide binary answers; however, quantifying the uncertainty of the models can play a critical role, for example in active learning or human-machine interaction. Uncertainty quantification is especially difficult when using deep learning-based models, which are the state of the art in many imaging applications. Current uncertainty quantification approaches do not scale well to high-dimensional real-world problems. Scalable solutions often rely on classical techniques, such as dropout during inference, or on training ensembles of identical models with different random seeds to obtain a posterior distribution. In this paper, we show that these approaches fail to approximate the classification probability. In contrast, we propose a scalable and intuitive framework for calibrating ensembles of deep learning models so that they produce uncertainty quantification measurements that approximate the classification probability. On unseen test data, we demonstrate improved calibration, sensitivity (in two out of three cases) and precision when compared with the standard approaches. We further motivate the…
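The abstract does not spell out the calibration map itself, so the sketch below uses temperature scaling, a standard calibration baseline, purely to illustrate the ensemble-calibration idea; the function name and the single-temperature parametrisation are assumptions:

```python
import numpy as np

def calibrate_ensemble(logits_per_model, temperature):
    # Illustrative calibration of an ensemble for binary segmentation:
    # average the member logits (axis 0 indexes ensemble members),
    # rescale by a fitted temperature, and apply a sigmoid to obtain a
    # calibrated foreground probability per pixel.
    mean_logits = np.mean(logits_per_model, axis=0)
    return 1.0 / (1.0 + np.exp(-mean_logits / temperature))
```

The temperature would be fitted on held-out data so that the output probabilities match observed event frequencies; temperatures above 1 soften overconfident ensembles.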

Reply to Wang and Yu: Both electron lambda tomography and interior tomography have their uses

Proceedings of the National Academy of Sciences of the United States of America, May 12, 2010

A deep learning one-step solution to material image reconstruction in photon counting spectral CT

Medical Imaging 2022: Physics of Medical Imaging, Mar 31, 2022

Adversarially learned iterative reconstruction for imaging inverse problems

arXiv (Cornell University), Mar 30, 2021

In numerous practical applications, especially in medical image reconstruction, it is often infeasible to obtain a large ensemble of ground-truth/measurement pairs for supervised learning. Therefore, it is imperative to develop unsupervised learning protocols that are competitive with supervised approaches in performance. Motivated by the maximum-likelihood principle, we propose an unsupervised learning framework for solving ill-posed inverse problems. Instead of seeking pixel-wise proximity between the reconstructed and the ground-truth images, the proposed approach learns an iterative reconstruction network whose output matches the ground-truth in distribution. Considering tomographic reconstruction as an application, we demonstrate that the proposed unsupervised approach not only performs on par with its supervised variant in terms of objective quality measures, but also successfully circumvents the issue of over-smoothing that supervised approaches tend to suffer from. The improvement in reconstruction quality comes at the expense of higher training complexity, but, once trained, the reconstruction time remains the same as for its supervised counterpart.
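The distribution-matching objective can be sketched in the Wasserstein-GAN style; the exact losses are not given in the abstract, so the function below is a hedged illustration, not the paper's formulation. The reconstruction network is trained so that a critic cannot separate its outputs from reference images:

```python
import numpy as np

def adversarial_generator_loss(critic, reconstructions, references):
    # Illustrative distribution-matching loss: the generator (the
    # iterative reconstruction network) minimises
    # E[critic(reference)] - E[critic(reconstruction)],
    # driving its output distribution towards the reference distribution.
    return np.mean([critic(x) for x in references]) - np.mean(
        [critic(x) for x in reconstructions])
```

In training, this term alternates with a critic update; no paired ground-truth/measurement data is needed, only unpaired samples from the two distributions.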

Regularizing Orientation Estimation in Cryogenic Electron Microscopy Three-Dimensional Map Refinement through Measure-Based Lifting over Riemannian Manifolds

SIAM Journal on Imaging Sciences, Aug 10, 2023

Regularising orientation estimation in Cryo-EM 3D map refinement through measure-based lifting over Riemannian manifolds

arXiv (Cornell University), Sep 7, 2022

Motivated by the trade-off between noise-robustness and data-consistency for joint 3D map reconstruction and rotation estimation in single-particle cryogenic electron microscopy (Cryo-EM), we propose ellipsoidal support lifting (ESL), a measure-based lifting scheme for regularising and approximating the global minimiser of a smooth function over a Riemannian manifold. Under a uniqueness assumption on the minimiser, we show several theoretical results, in particular well-posedness of the method and an error bound due to the induced bias with respect to the global minimiser. Additionally, we use the developed theory to integrate the measure-based lifting scheme into an alternating update method for joint homogeneous 3D map reconstruction and rotation estimation, where typically tens of thousands of manifold-valued minimisation problems have to be solved and where regularisation is necessary because of the high noise levels in the data. The joint recovery method is used to test both the theoretical predictions and the algorithmic performance through numerical experiments with Cryo-EM data. In particular, owing to its regularising bias, ESL empirically estimates better rotations, i.e. rotations closer to the ground truth, than global optimisation would.

Learned Reconstruction Methods With Convergence Guarantees: A survey of concepts and applications

IEEE Signal Processing Magazine, 2023

Learned convex regularizers for inverse problems

arXiv (Cornell University), Aug 6, 2020

We consider the variational reconstruction framework for inverse problems and propose to learn a data-adaptive input-convex neural network (ICNN) as the regularization functional. The ICNN-based convex regularizer is trained adversarially to discern ground-truth images from unregularized reconstructions. Convexity of the regularizer is desirable since (i) one can establish analytical convergence guarantees for the corresponding variational reconstruction problem and (ii) devise efficient and provable algorithms for reconstruction. In particular, we show that the optimal solution to the variational problem converges to the ground-truth if the penalty parameter decays sub-linearly with respect to the norm of the noise. Further, we prove the existence of a sub-gradient-based algorithm that leads to a monotonically decreasing error in the parameter space with iterations. To demonstrate the performance of our approach for solving inverse problems, we consider the tasks of deblurring natural images and reconstructing images in computed tomography (CT), and show that the proposed convex regularizer is at least competitive with and sometimes superior to state-of-the-art data-driven techniques for inverse problems.
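A minimal forward pass of an input-convex network can be sketched as follows; layer shapes and naming are assumptions, and the real architecture adds skip connections and learned activations. Convexity in the input x holds because ReLU is convex and non-decreasing and the recurrent weights Wz are constrained to be entrywise non-negative:

```python
import numpy as np

def icnn_forward(x, Wx_list, Wz_list, b_list):
    # Minimal ICNN sketch: z_{k+1} = relu(Wz_k z_k + Wx_k x + b_k).
    # Each Wz_k must be entrywise non-negative so that the composition
    # of convex, non-decreasing maps stays convex in x.
    z = np.zeros(Wz_list[0].shape[1])
    for Wx, Wz, b in zip(Wx_list, Wz_list, b_list):
        z = np.maximum(0.0, Wz @ z + Wx @ x + b)
    return float(np.sum(z))   # scalar regulariser value
```

With this structure the variational problem "data fit + ICNN regulariser" is convex in the image, which is what enables the convergence guarantees discussed in the abstract.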

Spectral decomposition of atomic structures in heterogeneous cryo-EM

Inverse Problems

We consider the problem of recovering the three-dimensional atomic structure of a flexible macromolecule from a heterogeneous cryogenic electron microscopy (cryo-EM) dataset. The dataset contains noisy tomographic projections of the electrostatic potential of the macromolecule, taken from different viewing directions, and in the heterogeneous case, each cryo-EM image corresponds to a different conformation of the macromolecule. Under the assumption that the macromolecule can be modelled as a chain, or discrete curve (as is for instance the case for a protein backbone with a single chain of amino acids), we introduce a method to estimate the deformation of the atomic model with respect to a given conformation, which is assumed to be known a priori. Our method consists of estimating the torsion and bond angles of the atomic model in each conformation as a linear combination of the eigenfunctions of the Laplace operator in the manifold of conformations. These eigenfunctions can be a…
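The eigenfunction expansion can be illustrated with a discrete stand-in: the chain structure suggests a path-graph Laplacian, whose lowest-frequency eigenvectors form a smooth basis for angle deformations along the backbone. The sketch below is an assumption-laden analogue of the construction, not the paper's operator:

```python
import numpy as np

def angle_deformation_basis(num_residues, num_modes):
    # Build the graph Laplacian of a chain (path graph) and keep its
    # lowest-frequency eigenvectors as smooth deformation modes for
    # torsion/bond angles along the backbone (shapes illustrative).
    L = 2.0 * np.eye(num_residues)
    L -= np.eye(num_residues, k=1) + np.eye(num_residues, k=-1)
    L[0, 0] = L[-1, -1] = 1.0           # chain endpoints have one neighbour
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues sorted ascending
    return eigvecs[:, :num_modes]        # columns: smooth deformation modes

def deformation(coeffs, basis):
    # Angle deformation expressed as a linear combination of eigen-modes.
    return basis @ coeffs
```

Truncating to a few modes regularises the estimation: only smooth, low-frequency deformations of the backbone angles are representable.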

odlgroup/odl: ODL 0.7.0

This release is a big one, as it includes the cumulative work over a period of one and a half years. It is planned to be the last release before version 1.0.0, where we expect to land a number of exciting new features. What follows are the highlights of the release; for a more detailed list of all changes, please refer to the release notes in the documentation. Native multi-indexing of ODL space elements: the DiscreteLpElement and Tensor (renamed from FnBaseVector) data structures now natively support almost all kinds of NumPy "fancy" indexing. At the same time, the spaces DiscreteLp and TensorSpace (renamed from FnBase) have more advanced indexing capabilities as well. Up to a few exceptions, elem[indices] in space[indices] is always fulfilled. Alongside, ProductSpace…
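The indexing invariant elem[indices] in space[indices] has a plain-NumPy analogue (shown below; this is not ODL code): indexing an element and indexing its space stay consistent, so the sub-element always "lives in" the sub-space:

```python
import numpy as np

# NumPy analogue of the ODL indexing invariant: the sub-element produced
# by mixed slice / "fancy" indexing has exactly the shape of the indexed
# "space", so elem[indices] belongs to space[indices].
elem = np.arange(24.0).reshape(4, 6)
indices = (slice(1, 3), [0, 2, 5])      # mixed basic and advanced indexing
sub = elem[indices]
assert sub.shape == (2, 3)              # shape of the indexed "space"
```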

Operator Discretization Library (ODL)

Operator Discretization Library (ODL) is a Python library for fast prototyping focusing on (but not restricted to) inverse problems. The main intent of ODL is to enable mathematicians and applied scientists to use different numerical methods on real-world problems without having to implement all necessary parts from the bottom up. This is achieved through an Operator structure which encapsulates all application-specific parts, and a high-level formulation of solvers which usually expect an operator, data and additional parameters. The main advantages of this approach are that: different problems can be solved with the same method (e.g. TV regularization) by simply switching operator and data; the same problem can be solved with different methods by simply calling into different solvers; and solvers and application-specific code need to be written only once, in one place, and can be tested individually. Adding new applications or solution methods…
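The operator/solver separation described above can be sketched as follows; this is a minimal illustration of the design idea, not ODL's actual API (ODL operators carry domain/range spaces and richer structure). A generic solver is written once against the interface, and swapping the forward model means swapping the operator:

```python
import numpy as np

class Operator:
    # Interface encapsulating the application-specific forward model.
    def __call__(self, x):
        raise NotImplementedError

    def adjoint(self, y):
        raise NotImplementedError

class MatrixOperator(Operator):
    # One concrete forward model: multiplication by a fixed matrix.
    def __init__(self, matrix):
        self.matrix = matrix

    def __call__(self, x):
        return self.matrix @ x

    def adjoint(self, y):
        return self.matrix.T @ y

def landweber(op, data, x0, step, num_iters):
    # A generic solver written once against the Operator interface:
    # x <- x - step * A^T (A x - data).
    x = x0
    for _ in range(num_iters):
        x = x - step * op.adjoint(op(x) - data)
    return x
```

Any other inverse problem plugs in by defining a new `Operator` subclass; `landweber` (and every other solver) is unchanged.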

Iterated variational regularization combined with componentwise regularization

Range characterization of the generalized exponential Radon transform

Comparing Range Characterizations of the exponential Radon Transform

The object classes and the discretization problem

Research paper thumbnail of Adversarial Regularizers in Inverse Problems

Research paper thumbnail of Learning to solve inverse problems using Wasserstein loss

Wavelets and their associated transforms are highly efficient when approximating and analyzing one-dimensional signals. However, multivariate signals such as images or videos typically exhibit curvilinear singularities, which wavelets provably fail to sparsely approximate and to analyze in the sense of, for instance, detecting their direction. Shearlets are a directional representation system extending the wavelet framework, which overcomes those deficiencies. Similar to wavelets, shearlets allow a faithful implementation and fast associated transforms. In this paper, we will introduce a comprehensive, carefully documented software package coined ShearLab 3D (www.ShearLab.org) and discuss its algorithmic details. This package provides MATLAB code for a novel faithful algorithmic realization of the 2D and 3D shearlet transforms (and their inverses) associated with compactly supported universal shearlet systems, incorporating the option of using CUDA. We will present extensive numerical experiments in 2D and 3D concerning denoising, inpainting, and feature extraction, comparing the performance of ShearLab 3D with similar transform-based algorithms such as curvelets, contourlets, or surfacelets. In the spirit of reproducible research, all scripts are accessible on www.ShearLab.org.
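Transform-domain thresholding of the kind evaluated in those denoising experiments can be illustrated with a single-level 2D Haar transform in NumPy. This is purely an illustration of the principle (decompose, threshold detail coefficients, reconstruct), not ShearLab code; in ShearLab the Haar system would be replaced by a shearlet system:

```python
import numpy as np

def haar2(x):
    """One level of the orthonormal 2D Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # row averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                      # piecewise-constant image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

# Hard-threshold the detail bands; keep the low-pass band intact.
ll, lh, hl, hh = haar2(noisy)
thr = 0.3
lh_t, hl_t, hh_t = (np.where(np.abs(c) > thr, c, 0.0) for c in (lh, hl, hh))
denoised = ihaar2(ll, lh_t, hl_t, hh_t)
```

Directional systems such as shearlets improve on this scheme precisely where the Haar basis struggles: along curved edges, where the sparse approximation rates cited in the abstract matter.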

Research paper thumbnail of Task adapted reconstruction for inverse problems

Research paper thumbnail of Deep posterior sampling: Uncertainty quantification for large scale inverse problems

Research paper thumbnail of Spatiotemporal PET reconstruction using ML-EM with learned diffeomorphic deformation

Research paper thumbnail of Group Equivariant Convolutional Networks

Research paper thumbnail of Extension of Separately Analytic Functions and Applications to Range Characterization of the Exponential Radon Transform

Research paper thumbnail of Electron tomography: A short overview with an emphasis on the absorption potential model for the forward problem

Research paper thumbnail of A component-wise iterated relative entropy regularization method with updated prior and regularization parameter

Research paper thumbnail of Inversion of the X-ray transform from limited angle parallel beam region of interest data with applications to electron tomography

Research paper thumbnail of Molecular cryo-electron tomography of vitreous tissue sections: current challenges

Research paper thumbnail of Electron lambda-tomography

Research paper thumbnail of Simulation of Transmission Electron Microscope Images of Biological Specimens

Research paper thumbnail of Spectral transfer from phase to intensity in Fresnel diffraction

Research paper thumbnail of Shape-based image reconstruction using linearized deformations

Research paper thumbnail of Measuring true localization accuracy in super resolution microscopy with DNA-origami nanostructures

Research paper thumbnail of Tunable Ampere phase plate for low dose imaging of biomolecular complexes

Research paper thumbnail of Image formation modeling in cryo-electron microscopy

Research paper thumbnail of Solving ill-posed inverse problems using iterative deep neural networks

Research paper thumbnail of Learned Primal-dual Reconstruction

Research paper thumbnail of Task adapted reconstruction for inverse problems

Research paper thumbnail of Data-driven nonsmooth optimization

Research paper thumbnail of Image reconstruction through metamorphosis

Research paper thumbnail of Indirect image registration with large diffeomorphic deformations

Research paper thumbnail of Deep Bayesian Inversion: Computational uncertainty quantification for large scale inverse problems

Research paper thumbnail of Reordering for improving global Arnoldi-Tikhonov method in image restoration problems

Research paper thumbnail of Reconstruction methods in electron tomography

Research paper thumbnail of Infinite Dimensional Optimization Models and PDEs for Dejittering

In this paper we do a systematic investigation of continuous methods for pixel, line pixel and line dejittering. The basis for these investigations are the discrete line dejittering algorithm of Nikolova and the partial differential equation of Lenzen et al. for pixel dejittering. To put these two different worlds in perspective, we find infinite dimensional optimization algorithms linking to the finite dimensional optimization problems, and formal flows associated with the infinite dimensional optimization problems. Two different kinds of optimization problems will be considered: dejittering algorithms for determining the displacement, and displacement error correction formulations, which correct the jittered image without estimating the jitter. As a by-product we find novel variational methods for displacement error regularization and unify them into one family. The second novelty is a comprehensive comparison of the different models for different types of jitter, in terms of efficiency of reconstruction and numerical complexity.
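The line dejittering setting can be illustrated with a toy NumPy experiment: each row of an image is displaced horizontally by a random amount, and a greedy correlation-based correction realigns each row to the one above it. This is a simple stand-in for the discrete setting, not one of the algorithms compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter_rows(img, max_shift):
    """Circularly shift each row by a random integer offset (the jitter)."""
    shifts = rng.integers(-max_shift, max_shift + 1, size=img.shape[0])
    out = np.stack([np.roll(row, s) for row, s in zip(img, shifts)])
    return out, shifts

def dejitter_rows(img, max_shift):
    """Greedy correction: shift each row to maximize its correlation
    with the (already corrected) row above."""
    out = img.copy()
    for i in range(1, img.shape[0]):
        candidates = range(-2 * max_shift, 2 * max_shift + 1)
        best = max(candidates,
                   key=lambda s: np.dot(np.roll(out[i], s), out[i - 1]))
        out[i] = np.roll(out[i], best)
    return out

# Test image whose rows are identical, so misalignment is pure jitter.
clean = np.tile(np.sin(np.linspace(0, 2 * np.pi, 64)), (32, 1))
noisy, _ = jitter_rows(clean, max_shift=3)
restored = dejitter_rows(noisy, max_shift=3)
```

After correction all rows agree again (up to one global shift inherited from the first row), which is the displacement-estimation viewpoint; the displacement error correction formulations in the paper instead repair the image without estimating the shifts at all.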

Research paper thumbnail of Mathematics of electron tomography

This survey starts with a brief description of the scientific relevance of electron tomography in life sciences, followed by a survey of image formation models. In the latter, the scattering of electrons against a specimen is modeled by the Schrödinger equation, and the image formation model is completed by adding a description of the transmission electron microscope optics and detector. Electron tomography can then be phrased as an inverse scattering problem, and attention is turned to describing mathematical approaches for solving that reconstruction problem. This part starts out by explaining challenges associated with the aforementioned inverse problem, such as the extremely low signal-to-noise ratio in the data and the severe ill-posedness due to incomplete data, which naturally brings up the issue of choosing a regularization method for reconstruction. Here, the review surveys methods that have been developed as well as pointing to new promising approaches. Some of the regularization methods are also tested on simulated and experimental data. As a final note, this is not a traditional mathematical review in the sense that the focus here is on the application to electron tomography rather than on describing the mathematical techniques that underlie proofs of key theorems.
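The role of regularization for such ill-posed problems can be illustrated with a generic one-dimensional deconvolution toy problem. The Tikhonov sketch below is a standard textbook construction, not a method from the survey: naive inversion of a smoothing operator amplifies noise, while penalizing the solution norm damps the unstable components:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 50
t = np.linspace(0, 1, n)
# Severely ill-conditioned forward operator: Gaussian blurring matrix.
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)

x_true = np.where((t > 0.3) & (t < 0.6), 1.0, 0.0)   # piecewise-constant object
b = A @ x_true + 0.01 * rng.standard_normal(n)        # noisy data

# Naive inversion amplifies the noise; Tikhonov regularization solves
#   x_reg = argmin ||A x - b||^2 + alpha ||x||^2,
# i.e. (A^T A + alpha I) x = A^T b, which damps small singular values.
alpha = 1e-3
x_naive = np.linalg.solve(A, b)
x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(err_naive, err_reg)
```

In electron tomography the effect is far more pronounced, since the data are both extremely noisy and incomplete, which is why the choice of regularization method occupies much of the survey.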

Research paper thumbnail of Accessing the molecular organization of the stratum corneum using high-resolution electron microscopy and computer simulation

Research paper thumbnail of Image reconstruction in dynamic inverse problems with temporal models

Research paper thumbnail of Prototyping with ODL and ASTRA: Extending the TVR-DART algorithm

Research paper thumbnail of Recent Approaches for Using Machine Learning in Image Reconstruction

Research paper thumbnail of Operator Discretization Library (ODL)

Research paper thumbnail of Extension of separately analytic functions and applications to range characterization of the exponential Radon transform

Research paper thumbnail of Extension of separately analytic functions and applications to mathematical tomography