Suhas Sreehari | Purdue University
Papers by Suhas Sreehari
arXiv (Cornell University), Nov 28, 2016
Many material and biological samples in scientific imaging are characterized by non-local repeating structures. These are studied using scanning/transmission electron microscopy and electron tomography. Sparse sampling of individual pixels in a 2D image acquisition geometry, sparse sampling of projection images with large tilt increments in a tomography experiment, or low-resolution data acquisition can enable high-speed data acquisition and minimize sample damage caused by the electron beam. However, accurate reconstruction from such sparse/low-resolution data is often challenging. In this work, we present algorithms for electron tomographic reconstruction, sparse image interpolation (or inpainting), and super-resolution that exploit the non-local redundancy in images. We adapt a framework, termed plug-and-play priors, to solve these imaging problems in a regularized inversion setting. The power of the plug-and-play approach is that it allows a wide array of modern denoising algorithms to be used as a prior model for a variety of inverse problems. We also present sufficient mathematical conditions that ensure convergence of the plug-and-play approach, and we use these insights to design a new non-local means denoising algorithm. We then study 4x, 8x, and 16x super-resolution reconstruction using a library-based non-local means (LB-NLM) denoiser as a prior model within plug-and-play, to accurately characterize high-resolution textures and edge features using high-resolution library patches acquired over a small field of view of the microscopy sample. Finally, we demonstrate that our algorithms produce higher-quality reconstructions on both simulated and real electron microscope data, along with improved convergence properties, compared to other methods.
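To make the plug-and-play structure concrete, below is a minimal sketch of an ADMM-style plug-and-play loop for a generic linear inverse problem y = Ax + noise, with an arbitrary denoiser standing in for the prior model. The function and parameter names, the dense-matrix forward model, and the fixed penalty parameter are illustrative assumptions of this sketch, not the implementation used in the paper.

```python
import numpy as np

def plug_and_play_admm(y, A, denoise, lam=1.0, n_iters=50):
    """Sketch of an ADMM-style plug-and-play reconstruction.

    y       : measured data (e.g., sparse pixels or projections)
    A       : forward operator as a dense matrix (illustrative; real
              systems use matrix-free operators)
    denoise : any denoiser d(v) -> x_hat, used as the prior model
    """
    n = A.shape[1]
    x = np.zeros(n)   # reconstruction estimate
    v = np.zeros(n)   # denoised auxiliary variable
    u = np.zeros(n)   # scaled dual variable
    AtA, Aty = A.T @ A, A.T @ y
    H = AtA + lam * np.eye(n)  # normal equations for the data-fit step
    for _ in range(n_iters):
        # Data-fidelity step: proximal map of the quadratic forward model
        x = np.linalg.solve(H, Aty + lam * (v - u))
        # Prior step: the denoiser replaces the proximal map of the prior
        v = denoise(x + u)
        # Dual update pushes x and v toward consensus
        u = u + x - v
    return x
```

Any sufficiently well-behaved denoiser (for instance, a non-expansive non-local means variant of the kind the convergence conditions call for) can be passed as `denoise` without changing the rest of the loop.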
arXiv (Cornell University), Nov 28, 2016
arXiv (Cornell University), May 24, 2017
International Journal of Cyber-Security and Digital Forensics, 2012
Microscopy and Microanalysis, Jul 1, 2016
Originally published in Optics Express on 06 October 2014 (oe-22-20-24224).
Many material and biological samples in scientific imaging are characterized by non-local repeating structures. These are studied using scanning electron microscopy and electron tomography. Sparse sampling of individual pixels in a 2D image acquisition geometry, or sparse sampling of projection images with large tilt increments in a tomography experiment, can enable high-speed data acquisition and minimize sample damage caused by the electron beam. In this paper, we present an algorithm for electron tomographic reconstruction and sparse image interpolation that exploits the non-local redundancy in images. We adapt a framework, termed plug-and-play (P&P) priors, to solve these imaging problems in a regularized inversion setting. The power of the P&P approach is that it allows a wide array of modern denoising algorithms to be used as a "prior model" for tomography and image interpolation. We also present sufficient mathematical conditions that ensure convergence of the P&P approach...
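For readers unfamiliar with the denoisers that P&P plugs in, here is a toy pixel-wise non-local means routine illustrating the non-local redundancy these priors exploit. The patch and search-window sizes and the Gaussian weighting are assumptions for the sketch, and a practical implementation would be vectorized.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Toy non-local means for a 2D grayscale image: each pixel becomes a
    weighted average of pixels whose surrounding patches look similar,
    which is the non-local redundancy the P&P prior exploits."""
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, values = [], []
            # scan the search window around the current pixel
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)        # patch distance
                    weights.append(np.exp(-d2 / (h * h)))  # Gaussian weight
                    values.append(padded[ni, nj])
            w = np.asarray(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out
```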
Regularized inversion methods for image reconstruction are used widely due to their tractability and ability to combine complex physical sensor models with useful regularity criteria. Such methods motivated the recently developed Plug-and-Play prior method, which provides a framework for using advanced denoising algorithms as regularizers in inversion. However, the need to formulate regularized inversion as the solution to an optimization problem limits the possible regularity conditions and physical sensor models. In this paper, we introduce Consensus Equilibrium (CE), which generalizes regularized inversion to include a much wider variety of both forward components and prior components without the need for either to be expressed with a cost function. CE is based on the solution of a set of equilibrium equations that balance data fit and regularity. In this framework, the problem of MAP estimation in regularized inversion is replaced by the problem of solving these equilibrium equations...
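As a sketch of how such equilibrium equations can be solved in practice, the following implements a Mann iteration on the map T = (2G - I)(2F - I), one standard route to a consensus-equilibrium fixed point. The equal agent weights, the damping parameter rho, and all names are assumptions of this sketch rather than the paper's prescribed solver.

```python
import numpy as np

def consensus_equilibrium(agents, w0, rho=0.5, n_iters=100):
    """Sketch of CE solved by Mann iteration on T = (2G - I)(2F - I).

    agents : list of maps F_i, e.g., a data-fit proximal map and one or
             more denoisers; none needs to be the prox of a cost function
    w0     : list of initial state vectors, one per agent
    rho    : damping parameter in (0, 1)
    """
    w = [wi.astype(float).copy() for wi in w0]
    for _ in range(n_iters):
        # F applies each agent to its own copy of the state
        Fw = [F(wi) for F, wi in zip(agents, w)]
        # reflected F: (2F - I)(w)
        RF = [2 * fi - wi for fi, wi in zip(Fw, w)]
        # G projects onto consensus (equal-weight average); reflect it too
        xbar = sum(RF) / len(RF)
        RG = [2 * xbar - ri for ri in RF]
        # Mann averaging damps the update so the iteration converges
        w = [(1 - rho) * wi + rho * gi for wi, gi in zip(w, RG)]
    # at equilibrium all F_i(w_i) agree; return their average as the estimate
    return sum(F(wi) for F, wi in zip(agents, w)) / len(w)
```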
International Journal of Cyber-Security and Digital Forensics, 2012
This paper attempts to speed up modular reduction as an independent step of modular multiplication, which is the central operation in public-key cryptosystems. Based on the properties of Mersenne and Quasi-Mersenne primes, we describe four distinct sets of moduli that convert the single-precision multiplication prevalent in many of today's techniques into an addition operation and a few simple shift operations. We propose a novel revision to the Modified Barrett algorithm presented in [3]. With the backing of the special moduli sets, the proposed algorithm is shown to outperform (speed-wise) the Modified Barrett algorithm by 80% for operands of length 700 bits, with the smallest speed-up being around 70% for shorter operands of around 100 bits. Keywords: large integer modular reduction; Mersenne primes; Quasi-Mersenne primes; Barrett-based reduction.
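The core trick behind such special moduli is that reduction modulo a Mersenne number 2^k - 1 needs only shifts, masks, and additions. The following is a minimal sketch of that folding step, illustrating the general principle rather than the paper's specific revision of the Modified Barrett algorithm:

```python
def mersenne_reduce(x, k):
    """Reduce x modulo the Mersenne number m = 2**k - 1 using only shifts,
    masks, and additions. Because 2**k is congruent to 1 (mod m), the high
    bits of x can simply be folded (added) onto the low bits."""
    m = (1 << k) - 1
    while x >> k:                # while x is wider than k bits
        x = (x & m) + (x >> k)   # fold the high part onto the low part
    return 0 if x == m else x    # x == m represents 0 mod m

# Example with the Mersenne prime 2**13 - 1 = 8191:
assert mersenne_reduce(123456789, 13) == 123456789 % 8191
```

Each fold shortens x by roughly k bits, so the loop runs only a handful of times even for multi-hundred-bit operands.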
Many datasets in important fields like healthcare and finance are in a tabular format, where each observation is expressed as a vector of feature values. While there exist several competitive algorithms for such data, such as random forests and gradient boosting, convolutional neural networks (CNNs) are making tremendous strides in terms of new research and applications. In order to exploit the power of convolutional neural networks for these tabular datasets, we propose two vector-to-image transformations. One is a direct transformation, while the other is an indirect mechanism that first modulates the latent space of a trained generative adversarial network (GAN) with the observation vectors and then generates the images using the generator. On both simulated and real datasets, we show that CNNs trained on images produced by our proposed transforms achieve better predictive performance than random forests and neural networks trained on the raw tabular data.
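As an illustration of what a direct vector-to-image transformation might look like, the sketch below min-max scales a feature vector and reshapes it into a square single-channel image; the scaling, zero-padding, and row-major layout are assumptions for the sketch, not the paper's exact recipe.

```python
import numpy as np

def vector_to_image(v, side=None):
    """Illustrative 'direct' transform: min-max scale a feature vector,
    zero-pad it to a perfect square, and reshape it into a single-channel
    image a CNN can consume."""
    v = np.asarray(v, dtype=np.float64)
    vmin, vmax = v.min(), v.max()
    scaled = (v - vmin) / (vmax - vmin) if vmax > vmin else np.zeros_like(v)
    if side is None:
        side = int(np.ceil(np.sqrt(scaled.size)))  # smallest square that fits
    padded = np.zeros(side * side)
    padded[:scaled.size] = scaled                  # trailing pixels stay zero
    return padded.reshape(side, side)

# A 10-feature observation becomes a 4x4 image with 6 zero-padded pixels.
print(vector_to_image(np.random.rand(10)).shape)  # (4, 4)
```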
Regularized inversion methods for image reconstruction are used widely due to their tractability and their ability to combine complex physical sensor models with useful regularity criteria. Such methods motivated the recently developed Plug-and-Play prior method, which provides a framework for using advanced denoising algorithms as regularizers in inversion. However, the need to formulate regularized inversion as the solution to an optimization problem limits the expressiveness of the possible regularity conditions and physical sensor models. In this paper, we introduce the idea of consensus equilibrium (CE), which generalizes regularized inversion to include a much wider variety of both forward (or data fidelity) components and prior (or regularity) components without the need for either to be expressed using a cost function. CE is based on the solution of a set of equilibrium equations that balance data fit and regularity. In this framework, the problem of MAP estimation in regularized inversion is replaced by the problem of solving these equilibrium equations...
Many samples imaged in structural biology and material science contain several similar particles at random locations and orientations. Model-based iterative reconstruction (MBIR) methods can in principle exploit such redundancies in images through log prior probabilities that accurately account for non-local similarity between the particles. However, determining such a log prior term can be challenging. Several denoising algorithms, such as non-local means (NLM), successfully capture such non-local redundancies, but the problem is two-fold: NLM is not explicitly formulated as a cost function, and it cannot capture similarity between randomly oriented particles. In this paper, we propose a rotationally-invariant non-local means (RINLM) algorithm and describe a method to implement RINLM as a prior model using a novel framework that we call plug-and-play priors. We introduce the idea of patch pre-rotation to make RINLM computationally tractable. Finally, we showcase image denoising...
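To illustrate the rotation-invariance that RINLM targets, the sketch below computes a patch distance minimized over a discrete set of rotations; pre-rotating every patch to a canonical orientation, as the paper proposes, amortizes this search so it is not repeated for every patch pair. The angle set and interpolation order are assumptions of the sketch.

```python
import numpy as np
from scipy.ndimage import rotate

def rotinv_distance(p, q, n_angles=8):
    """Rotationally-invariant patch distance: the minimum mean-squared
    difference between patch p and rotated copies of patch q, so two
    identical particles at different orientations compare as similar."""
    angles = np.linspace(0.0, 360.0, n_angles, endpoint=False)
    return min(
        float(np.mean((p - rotate(q, a, reshape=False, order=1)) ** 2))
        for a in angles
    )
```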