Suhas Sreehari - Profile on Academia.edu
Papers by Suhas Sreehari
Bias-mitigated face ID using secure generative transforms
Measuring transform invertibility with computational metrics
arXiv (Cornell University), Nov 28, 2016
Figure 1: An illustration of the proposed multi-resolution data fusion (MDF) system, in which a transmission electron microscope is specially configured to collect both low-resolution data over a large field-of-view (FoV) and a small set of high-resolution patches from the same sample. The MDF algorithm uses the high-resolution patches as an image model within the "plug-and-play" framework to synthesize the 8x interpolated super-resolution output image over the full FoV. We demonstrate the MDF system using an aberration-corrected FEI Titan transmission electron microscope and show that it can dramatically speed up image acquisition and reduce dosage compared to conventional homogeneous raster-scanning microscopy.
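A minimal sketch of the plug-and-play loop the caption refers to, assuming a linear forward model and treating the patch-based prior as an opaque `denoise` callable. The dense solve, the parameter names, and the stopping rule are illustrative simplifications, not the authors' implementation:

```python
import numpy as np

def plug_and_play_admm(y, A, denoise, sigma_lambda=1.0, n_iters=50):
    """Generic plug-and-play ADMM loop (sketch).

    y        : measured data
    A        : forward-model matrix (dense here for simplicity)
    denoise  : any denoiser, e.g. a library-based non-local means
    """
    n = A.shape[1]
    x = np.zeros(n)          # reconstruction estimate
    v = np.zeros(n)          # denoised splitting variable
    u = np.zeros(n)          # scaled dual variable
    # Precompute the proximal-step system matrix (A^T A + (1/sigma_lambda) I)
    H = A.T @ A + np.eye(n) / sigma_lambda
    for _ in range(n_iters):
        # Inversion (data-fidelity proximal) step
        x = np.linalg.solve(H, A.T @ y + (v - u) / sigma_lambda)
        # Prior step: the denoiser plays the role of a proximal map
        v = denoise(x + u)
        # Dual update
        u = u + x - v
    return v
```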
Advanced Prior Modeling for Nano-scale Imaging
Many material and biological samples in scientific imaging are characterized by non-local repeating structures. These are studied using scanning/transmission electron microscopy and electron tomography. Sparse sampling of individual pixels in a 2D image acquisition geometry, sparse sampling of projection images with large tilt increments in a tomography experiment, or low-resolution data acquisition can enable high-speed data acquisition and minimize sample damage caused by the electron beam. However, accurate reconstruction from such sparse/low-resolution data is often challenging. In this work, we present algorithms for electron tomographic reconstruction, sparse image interpolation (or inpainting), and super-resolution that exploit the non-local redundancy in images. We adapt a framework, termed plug-and-play priors, to solve these imaging problems in a regularized inversion setting. The power of the plug-and-play approach is that it allows a wide array of modern denoising algorithms to be used as a prior model for a variety of inverse problems. We also present sufficient mathematical conditions that ensure convergence of the plug-and-play approach, and we use these insights to design a new non-local means denoising algorithm. We then examine 4x, 8x, and 16x super-resolution reconstruction using a library-based non-local means (LB-NLM) denoiser as a prior model within plug-and-play, to accurately characterize high-resolution textures and edge features using high-resolution library patches acquired over a small field-of-view of the microscopy sample. Finally, we demonstrate that our algorithms produce higher-quality reconstructions on both simulated and real electron microscope data, along with improved convergence properties compared to other methods.
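To make the LB-NLM idea concrete, here is a toy sketch of denoising a single patch against a fixed library of high-resolution patches. The flattened-patch layout, the bandwidth `h`, and the Gaussian weighting are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def lb_nlm_patch(noisy_patch, library, h=0.1):
    """Denoise one patch by a weighted average over a patch library.

    noisy_patch : (p*p,) flattened patch from the noisy image
    library     : (N, p*p) array of flattened high-resolution patches
    h           : bandwidth controlling how quickly weights decay
    """
    # Squared distance from the noisy patch to every library patch
    d2 = np.sum((library - noisy_patch) ** 2, axis=1)
    # Exponential NLM weights, normalized to sum to one
    w = np.exp(-d2 / (h ** 2))
    w /= w.sum()
    # The denoised patch is the weighted average of library patches
    return w @ library
```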
arXiv (Cornell University), May 24, 2017
Regularized inversion methods for image reconstruction are used widely due to their tractability and their ability to combine complex physical sensor models with useful regularity criteria. Such methods motivated the recently developed Plug-and-Play prior method, which provides a framework for using advanced denoising algorithms as regularizers in inversion. However, the need to formulate regularized inversion as the solution to an optimization problem limits the expressiveness of possible regularity conditions and physical sensor models. In this paper, we introduce the idea of Consensus Equilibrium (CE), which generalizes regularized inversion to include a much wider variety of both forward (or data fidelity) components and prior (or regularity) components, without the need for either to be expressed using a cost function. Consensus equilibrium is based on the solution of a set of equilibrium equations that balance data fit and regularity. In this framework, the problem of MAP estimation in regularized inversion is replaced by the problem of solving these equilibrium equations, which can be approached in multiple ways. The key contribution of CE is to provide a novel framework for fusing multiple heterogeneous models of physical sensors or models learned from data. We describe the derivation of the CE equations and prove that the solution of the CE equations generalizes the standard MAP estimate under appropriate circumstances. We also discuss algorithms for solving the CE equations, including a version of the Douglas-Rachford (DR)/ADMM algorithm with a novel form of preconditioning, and Newton's method in both its standard form and a Jacobian-free form using Krylov subspaces. We give several examples to illustrate the idea of consensus equilibrium and the convergence properties of these algorithms, and we demonstrate the method on toy problems and on a denoising example in which we use an array of convolutional neural network denoisers, none of which is tuned to match the noise level in the noisy image, but which in consensus achieve a better result than any of them individually.
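For reference, the equilibrium conditions described here can be written compactly. The following is a paraphrase in the notation commonly used for consensus equilibrium (agents F_i, such as forward-model proximal maps or denoisers, with weights mu_i summing to one), not a quotation from the paper:

```latex
% Consensus equilibrium (paraphrased): find a common point x* and
% offsets u_1*, ..., u_N* such that every agent F_i maps its shifted
% input back to x*, while the weighted offsets cancel:
\[
F_i(x^* + u_i^*) = x^*, \quad i = 1, \dots, N,
\qquad \sum_{i=1}^{N} \mu_i\, u_i^* = 0 .
\]
```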
International Journal of Cyber-Security and Digital Forensics, 2012
This paper attempts to speed up modular reduction as an independent step of modular multiplication, which is the central operation in public-key cryptosystems. Based on the properties of Mersenne and Quasi-Mersenne primes, we describe four distinct sets of moduli that convert the single-precision multiplication prevalent in many of today's techniques into an addition operation and a few simple shift operations. We propose a novel revision to the Modified Barrett algorithm presented in [3]. With the backing of the special moduli sets, the proposed algorithm is shown to outperform (speed-wise) the Modified Barrett algorithm by 80% for operands of length 700 bits, the least speed-up being around 70% for smaller operands in the range of around 100 bits. Keywords: large integer modular reduction; Mersenne primes; Quasi-Mersenne primes; Barrett-based reduction.
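The shift-and-add reduction enabled by Mersenne-type moduli is easy to illustrate. The sketch below handles only the basic case p = 2^k - 1, relying on the identity 2^k ≡ 1 (mod p); the paper's moduli sets and Barrett revision are more involved:

```python
def mersenne_reduce(x, k):
    """Compute x mod (2**k - 1) using only shifts, masks, and adds.

    Relies on 2**k ≡ 1 (mod 2**k - 1): the high bits of x can be
    folded back onto the low bits until the result fits in k bits.
    """
    p = (1 << k) - 1
    while x >> k:                    # while x has more than k bits
        x = (x & p) + (x >> k)       # fold high part onto low part
    return 0 if x == p else x        # x == p represents 0 mod p

# Example: reduce a 64-bit value modulo the Mersenne prime 2**31 - 1
assert mersenne_reduce(123456789012345, 31) == 123456789012345 % (2**31 - 1)
```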
Many samples imaged in structural biology and materials science contain many similar particles at random locations and orientations. Model-based iterative reconstruction (MBIR) methods can in principle exploit such redundancies in images through log prior probabilities that accurately account for non-local similarity between the particles. However, determining such a log prior term can be challenging. Several denoising algorithms like non-local means (NLM) successfully capture such non-local redundancies, but the problem is twofold: NLM is not explicitly formulated as a cost function, nor can it capture similarity between randomly oriented particles. In this paper, we propose a rotationally-invariant non-local means (RINLM) algorithm and describe a method to implement RINLM as a prior model using a novel framework that we call plug-and-play priors. We introduce the idea of patch pre-rotation to make RINLM computationally tractable. Finally, we showcase image denoising and 2D tomography results using the proposed RINLM algorithm, highlighting high reconstruction quality, image sharpness, and artifact suppression.
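The rotation-invariant matching idea can be sketched by searching over candidate orientations before computing the NLM weight. This toy version restricts the search to 90-degree rotations via `np.rot90`; the paper's patch pre-rotation is more general than this:

```python
import numpy as np

def rinlm_weight(ref_patch, cand_patch, h=0.1):
    """Rotation-invariant NLM weight between two square patches (sketch).

    Tries each 90-degree rotation of the candidate and keeps the best
    match, so similarly shaped but differently oriented particles still
    receive a large weight.
    """
    best_d2 = min(
        np.sum((ref_patch - np.rot90(cand_patch, r)) ** 2)
        for r in range(4)
    )
    return np.exp(-best_d2 / (h ** 2))
```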
Microscopy and Microanalysis, Jul 1, 2016
DECLARATION OF ORIGINALITY I hereby certify that I am the sole author of this thesis and that no part of this thesis has been published or submitted for publication. I certify that, to the best of my knowledge, my thesis does not infringe upon anyone's copyright nor violate any proprietary rights and that any ideas, techniques, quotations, or any other material from the work of other people included in my thesis, published or otherwise, are fully acknowledged in accordance with the standard referencing practices. Furthermore, to the extent that I have included copyrighted material that surpasses the bounds of fair dealing within the meaning of the Canada Copyright Act, I certify that I have obtained a written permission from the copyright owner(s) to include such material(s) in my thesis and have included copies of such copyright clearances to my appendix. I declare that this is a true copy of my thesis, including any final revisions, as approved by my thesis committee and the Graduate Studies office, and that this thesis has not been
Media 1: High frame-rate multichannel beam-scanning microscopy based on Lissajous trajectories
Originally published in Optics Express on 06 October 2014 (oe-22-20-24224)
Media 5: High frame-rate multichannel beam-scanning microscopy based on Lissajous trajectories
Originally published in Optics Express on 06 October 2014 (oe-22-20-24224)
Media 2: High frame-rate multichannel beam-scanning microscopy based on Lissajous trajectories
Originally published in Optics Express on 06 October 2014 (oe-22-20-24224)
Many material and biological samples in scientific imaging are characterized by non-local repeating structures. These are studied using scanning electron microscopy and electron tomography. Sparse sampling of individual pixels in a 2D image acquisition geometry, or sparse sampling of projection images with large tilt increments in a tomography experiment, can enable high-speed data acquisition and minimize sample damage caused by the electron beam. In this paper, we present an algorithm for electron tomographic reconstruction and sparse image interpolation that exploits the non-local redundancy in images. We adapt a framework, termed plug-and-play (P&P) priors, to solve these imaging problems in a regularized inversion setting. The power of the P&P approach is that it allows a wide array of modern denoising algorithms to be used as a "prior model" for tomography and image interpolation. We also present sufficient mathematical conditions that ensure convergence of the P&P approach.
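One sufficient condition of this kind that appears in the plug-and-play literature is that the denoiser behave like a proximal map; the statement below is paraphrased from memory, so treat its exact form as an approximation rather than the paper's theorem:

```latex
% Sufficient condition for P&P convergence (paraphrased): if a denoiser
% f : R^n -> R^n is non-expansive and has a symmetric Jacobian with
% eigenvalues in [0, 1],
\[
\nabla f(x) = \nabla f(x)^{\mathsf{T}},
\qquad 0 \preceq \nabla f(x) \preceq I \quad \text{for all } x,
\]
% then f is the proximal map of some convex function, and the
% ADMM-based P&P iterations converge to a fixed point.
```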
Visualization transforms of non-spatial data for convolutional neural networks
Datasets in important fields like healthcare and finance are often in a tabular format, where each observation is expressed as a vector of feature values. While there exist several competitive algorithms for such data, such as random forests and gradient boosting, convolutional neural networks (CNNs) are making tremendous strides in terms of new research and applications. In order to exploit the power of convolutional neural networks for these tabular datasets, we propose two vector-to-image transformations. One is a direct transformation, while the other is an indirect mechanism that first modulates the latent space of a trained generative adversarial network (GAN) with the observation vectors and then generates the images using the generator. On both simulated and real datasets, we show that CNNs trained on images produced by our proposed transforms achieve better predictive performance than random forests and neural networks trained on the raw tabular datasets.
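A direct vector-to-image transformation can be as simple as normalizing the feature vector and laying it out on a square grid for a CNN to convolve over. The zero-padding and row-major ordering below are illustrative choices, not necessarily the paper's mapping:

```python
import numpy as np

def vector_to_image(v, side=None):
    """Map a 1-D feature vector to a square grayscale image (sketch).

    Features are min-max normalized to [0, 1], zero-padded to the next
    perfect square, and reshaped row-major onto a side x side grid.
    """
    v = np.asarray(v, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)   # normalize to [0, 1]
    if side is None:
        side = int(np.ceil(np.sqrt(v.size)))
    img = np.zeros(side * side)
    img[: v.size] = v                                  # pad with zeros
    return img.reshape(side, side)

# Example: a 7-feature observation becomes a 3x3 image
print(vector_to_image([3.1, 0.2, 5.0, 1.7, 2.2, 4.4, 0.9]))
```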