Vivek Goyal - Profile on Academia.edu

Papers by Vivek Goyal

Seeing around corners with edge-resolved transient imaging

Nature Communications

Non-line-of-sight (NLOS) imaging is a rapidly growing field seeking to form images of objects outside the field of view, with potential applications in autonomous navigation, reconnaissance, and even medical imaging. The critical challenge of NLOS imaging is that diffuse reflections scatter light in all directions, resulting in weak signals and a loss of directional information. To address this problem, we propose a method for seeing around corners that derives angular resolution from vertical edges and longitudinal resolution from the temporal response to a pulsed light source. We introduce an acquisition strategy, scene response model, and reconstruction algorithm that enable the formation of 2.5-dimensional representations—a plan view plus heights—and a 180° field of view for large-scale scenes. Our experiments demonstrate accurate reconstructions of hidden rooms up to 3 meters in each dimension despite a small scan aperture (1.5-centimeter radius) and only 45 measurement locations.
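
To make the geometry concrete, here is a minimal sketch (Python, with a hypothetical `histograms` array of photon counts per scan position and time bin) of how edge-resolved transient data can be turned into a rough polar plan view: differencing consecutive scan positions isolates angular wedges, and the round-trip time of the strongest return in each wedge gives a radial distance via r = ct/2. This is only an illustration of the acquisition geometry, not the paper's scene response model or reconstruction algorithm.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def plan_view_sketch(histograms, bin_width_s):
    """Form a crude polar plan view from edge-resolved transient histograms.

    `histograms` is assumed to be an (n_positions, n_bins) array of photon
    counts, with scan positions ordered so that each successive position sees
    one additional angular wedge of the hidden scene past the vertical edge.
    Differencing consecutive histograms isolates each wedge's response; the
    time bin of the differenced peak gives a round-trip range r = c*t/2.
    This is a simplification of the paper's model-based reconstruction,
    intended only to illustrate the geometry.
    """
    wedge_responses = np.diff(histograms, axis=0)        # isolate angular wedges
    wedge_responses = np.clip(wedge_responses, 0, None)  # counts cannot be negative
    peak_bins = wedge_responses.argmax(axis=1)           # strongest return per wedge
    ranges_m = C * peak_bins * bin_width_s / 2.0         # round-trip time -> distance
    return ranges_m                                      # one radial estimate per wedge
```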

Offline Secondary Electron Counting and Conditional Re-illumination in SEM

Microscopy and Microanalysis

This is an Accepted Manuscript for the Microscopy and Microanalysis 2020 Proceedings. This version may be subject to change during the production process.

A Few Photons Among Many: Unmixing Signal and Noise for Photon-Efficient Active Imaging

IEEE Transactions on Computational Imaging

Conventional LIDAR systems require hundreds or thousands of photon detections per pixel to form accurate depth and reflectivity images. Recent photon-efficient computational imaging methods are remarkably effective with only 1.0 to 3.0 detected photons per pixel, but they are not demonstrated at signal-to-background ratio (SBR) below 1.0 because their imaging accuracies degrade significantly in the presence of high background noise. We introduce a new approach to depth and reflectivity estimation that emphasizes the unmixing of contributions from signal and noise sources. At each pixel in an image, short-duration range gates are adaptively determined and applied to remove detections likely to be due to noise. For pixels with too few detections to perform this censoring accurately, data are combined from neighboring pixels to improve depth estimates, where the neighborhood formation is also adaptive to scene content. Algorithm performance is demonstrated on experimental data at varying levels of noise. Results show improved performance of both reflectivity and depth estimates over state-of-the-art methods, especially at low SBR. In particular, accurate imaging is demonstrated with SBR as low as 0.04. This validation of a photon-efficient, noise-tolerant method demonstrates the viability of rapid, long-range, and low-power LIDAR imaging.
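
As a rough illustration of the unmixing idea, the sketch below (Python; the gate width and mode-finding rule are illustrative assumptions, not the paper's exact procedure) censors per-pixel detections with a short range gate around the densest cluster of arrival times; the adaptive sharing of data across neighboring pixels is omitted.

```python
import numpy as np

def censor_noise_detections(times_s, pulse_width_s, gate_half_width=3.0):
    """Keep detections near the dominant arrival time; discard likely noise.

    `times_s` holds the raw photon detection times at one pixel. Signal
    detections cluster around the true round-trip time, while background
    detections are roughly uniform over the acquisition window, so a short
    range gate centered on the densest cluster removes most noise. The gate
    width (here `gate_half_width` pulse widths) and the simple histogram mode
    are illustrative choices rather than the paper's exact rule.
    """
    times_s = np.asarray(times_s, dtype=float)
    if times_s.size == 0:
        return times_s
    # Histogram at roughly the pulse-width scale and locate the densest bin.
    n_bins = max(1, int(np.ptp(times_s) / pulse_width_s))
    counts, edges = np.histogram(times_s, bins=n_bins)
    mode = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])
    gate = gate_half_width * pulse_width_s
    return times_s[np.abs(times_s - mode) <= gate]
```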

Photon-efficient imaging with a single-photon camera

Nature Communications, Jan 24, 2016

Reconstructing a scene's 3D structure and reflectivity accurately with an active imaging system operating in low-light-level conditions has wide-ranging applications, spanning biological imaging to remote sensing. Here we propose and experimentally demonstrate a depth and reflectivity imaging system with a single-photon camera that generates high-quality images from ∼1 detected signal photon per pixel. Previous achievements of similar photon efficiency have been with conventional raster-scanning data collection using single-pixel photon counters capable of ∼10-ps time tagging. In contrast, our camera's detector array requires highly parallelized time-to-digital conversions with photon time-tagging accuracy limited to ∼ns. Thus, we develop an array-specific algorithm that converts coarsely time-binned photon detections to highly accurate scene depth and reflectivity by exploiting both the transverse smoothness and longitudinal sparsity of natural scenes. By overcoming the coa...

System for Reconstructing MRI Images Acquired in Parallel

Method and apparatus for reduced complexity entropy coding

Malleable Coding with Fixed Segment Reuse

CoRR, 2008

In cloud computing, storage area networks, remote backup storage, and similar settings, stored data is modified with updates from new versions. Representing information and modifying the representation are both expensive. Therefore it is desirable for the data to not only be compressed but to also be easily modified during updates. A malleable coding scheme considers both compression efficiency and ease of alteration, promoting codeword reuse. We examine the trade-off between compression efficiency and malleability cost (the difficulty of synchronizing compressed versions), measured as the length of a reused prefix portion. Through a coding theorem, the region of achievable rates and malleability is expressed as a single-letter optimization. Relationships to common information problems are also described.

Malleable Coding: Compressed Palimpsests

arXiv:0806.4722, Jun 28, 2008

A malleable coding scheme considers not only compression efficiency but also the ease of alteration, thus encouraging some form of recycling of an old compressed version in the formation of a new one. Malleability cost is the difficulty of synchronizing compressed versions, and malleable codes are of particular interest when representing information and modifying the representation are both expensive. We examine the trade-off between compression efficiency and malleability cost under a malleability metric defined with respect to a string edit distance. This problem introduces a metric topology to the compressed domain. We characterize the achievable rates and malleability as the solution of a subgraph isomorphism problem. This can be used to argue that allowing conditional entropy of the edited message given the original message to grow linearly with block length creates an exponential increase in code length.

Digital Fountain Technical Report DF2002-09-001: On WEBRC Wave Design and Sender Implementation

Quantization for Compressed Sensing Reconstruction

Multiple description lattice vector quantization: variations and extensions

Proceedings of the Conference on Data Compression, Feb 1, 2000

Multiple description lattice vector quantization (MDLVQ) is a technique for two-channel multiple description coding. We observe that MDLVQ, in the form introduced by Servetto, Vaishampayan and Sloane in 1999, is inherently optimized for the central decoder; i.e., for a zero probability of a lost description. With a nonzero probability of description loss, performance is improved by modifying the encoding rule (using nearest neighbors with respect to "multiple description distance") and by perturbing the lattice codebook. The perturbation maintains many symmetries and hence does not significantly affect encoding or decoding complexity. An extension to more than two descriptions with attractive decoding properties is outlined.
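
The modified encoding rule can be illustrated with a toy scalar example: instead of mapping the input to the nearest central codeword, the encoder minimizes the distortion expected under the description-loss probability. The sketch below (Python) assumes a given index assignment `central`/`side1`/`side2` and a loss probability `p_loss`; it does not reproduce the lattice construction or the codebook perturbation.

```python
import numpy as np

def md_encode(x, central, side1, side2, p_loss):
    """Pick the multiple-description index that minimizes expected distortion.

    `central[i]` is the reconstruction when both descriptions arrive, and
    `side1[i]`, `side2[i]` are the reconstructions when only one arrives
    (an index assignment is assumed to be given). A plain MDLVQ encoder would
    choose the nearest central codeword; weighting the side distortions by
    the loss probability `p_loss` mimics the modified encoding rule described
    above, here in a toy scalar setting rather than on a lattice.
    """
    central, side1, side2 = map(np.asarray, (central, side1, side2))
    cost = ((1 - p_loss) ** 2 * (x - central) ** 2
            + p_loss * (1 - p_loss) * ((x - side1) ** 2 + (x - side2) ** 2))
    return int(cost.argmin())
```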

Adaptive Transform Coding Using LMS-like Principal Component Tracking

A new set of algorithms for transform adaptation in adaptive transform coding is presented. These algorithms are inspired by standard techniques in adaptive finite impulse response (FIR) Wiener filtering and demonstrate that similar algorithms with simple updates exist for tracking principal components (eigenvectors of a correlation matrix). For coding an N-dimensional source, the transform adaptation problem is posed as an unconstrained minimization over K = N(N-1)/2 parameters, for each of two possible performance measures. Performing this minimization through gradient descent gives an algorithm analogous to LMS. Step size bounds for stability similar in form to those for LMS are proven. Linear and fixed-step random search methods are also considered. The stochastic gradient descent algorithm is simulated for both time-invariant and slowly varying sources. A "backward-adaptive" mode, where the adaptation is based on quantized data so that the decoder and encoder can maintain the same state without side information, is also considered.
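
A minimal sketch of the update style, for a two-dimensional source where the transform is a single Givens rotation angle: each sample contributes a stochastic gradient step that drives the instantaneous cross term of the transform coefficients toward zero, so the rotation tracks the principal components. The step size `mu` and the N = 2 restriction are illustrative assumptions; the paper's algorithms use N(N-1)/2 rotation parameters and consider other performance measures.

```python
import numpy as np

def track_transform_2d(samples, mu=1e-3):
    """LMS-like tracking of a 2x2 decorrelating transform (one Givens angle).

    For each incoming sample (x1, x2), rotate by angle theta and take a
    stochastic gradient step that reduces the instantaneous cross term
    (y1*y2)^2, which tends to align the rotation with the source's
    principal components. Toy N = 2 instance for illustration only.
    """
    theta = 0.0
    for x1, x2 in samples:
        c, s = np.cos(theta), np.sin(theta)
        y1 = c * x1 + s * x2
        y2 = -s * x1 + c * x2
        # d/dtheta of (y1*y2)^2, using dy1/dtheta = y2 and dy2/dtheta = -y1.
        grad = 2.0 * y1 * y2 * (y2 ** 2 - y1 ** 2)
        theta -= mu * grad
    return theta
```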

Nonlinear Digital Post-Processing to Mitigate Jitter in Sampling

This paper describes several new algorithms for estimating the parameters of a periodic bandlimited signal from samples corrupted by jitter (timing noise) and additive noise. Both classical (non-random) and Bayesian formulations are considered: an Expectation-Maximization (EM) algorithm is developed to compute the maximum likelihood (ML) estimator for the classical estimation framework, and two Gibbs samplers are proposed to approximate the Bayes least squares (BLS) estimate for parameters independently distributed according to a uniform prior. Simulations are performed to demonstrate the significant performance improvement achievable using these algorithms as compared to linear estimators. The ML estimator is also compared to the Cramér-Rao lower bound to determine the range of jitter for which the estimator is approximately efficient. These simulations provide evidence that the nonlinear algorithms derived here can tolerate 1.4-2 times more jitter than linear estimators, reducing on-chip ADC power consumption by 50-75 percent.
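
A toy instance of the EM structure, reduced to estimating a single sinusoid's amplitude from jittered, noisy samples with the jitter posterior evaluated on a grid (Python; the model, grid, and initialization are assumptions made for brevity, and the paper's full periodic bandlimited setting and Gibbs samplers are not reproduced):

```python
import numpy as np

def em_amplitude(x, f, sigma_z, sigma_w, n_iter=30, grid=np.linspace(-3, 3, 121)):
    """Toy EM estimate of a sinusoid's amplitude from jittered samples.

    Simplified model: x[n] = A*sin(2*pi*f*(n + z[n])) + w[n], with jitter
    z[n] ~ N(0, sigma_z^2) (in sample periods) and noise w[n] ~ N(0, sigma_w^2),
    f in cycles per sample. The E-step evaluates the jitter posterior on a
    grid; the M-step is a closed-form update of A.
    """
    x = np.asarray(x, dtype=float)
    n = np.arange(x.size)
    prior = np.exp(-0.5 * (grid / sigma_z) ** 2)               # unnormalized jitter prior
    s = np.sin(2 * np.pi * f * (n[:, None] + grid[None, :]))   # basis for every grid offset
    A = np.max(np.abs(x))                                      # crude initial amplitude
    for _ in range(n_iter):
        # E-step: posterior weights over the jitter grid, per sample.
        log_lik = -0.5 * ((x[:, None] - A * s) / sigma_w) ** 2
        w = prior[None, :] * np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # M-step: maximize the expected complete-data log-likelihood over A.
        A = np.sum(x * np.sum(w * s, axis=1)) / np.sum(w * s ** 2)
    return A
```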

Research paper thumbnail of Luby And Goyal: Webrc Using Mrtt 1

Luby And Goyal: Webrc Using Mrtt 1

Wave and Equation Based Rate Control (WEBRC) is a new equation-based, multiple rate congestion control protocol that is naturally suited to multicast but also applicable to unicast. No previous multiple rate congestion control algorithm is equation based. A main impediment to extending equation-based rate control to multiple rate multicast was until now the lack of a suitable analogue to the round trip time (RTT) in unicast. This paper introduces an analogue of unicast RTT, called multicast round trip time (MRTT), that can be measured by receivers without placing any added message processing burden on the server or intermediate network elements.
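
For a sense of what "equation based" means, the sketch below computes a target rate from the standard TFRC-style throughput equation with MRTT substituted for the unicast RTT. The equation choice, the t_RTO ≈ 4·MRTT approximation, and the packet size are illustrative assumptions; WEBRC's actual wave design uses its own rate rules described in the paper.

```python
import math

def equation_based_rate(mrtt_s, loss_event_rate, packet_size_bytes=1500):
    """Target reception rate (bytes/s) from a TCP-throughput-style equation.

    Uses the standard TFRC response function with the multicast round trip
    time (MRTT) in place of the unicast RTT, purely to illustrate the
    equation-based approach. Assumes loss_event_rate > 0 and mrtt_s > 0.
    """
    p, r = loss_event_rate, mrtt_s
    t_rto = 4.0 * r  # common coarse approximation of the retransmit timeout
    denom = (r * math.sqrt(2.0 * p / 3.0)
             + t_rto * (3.0 * math.sqrt(3.0 * p / 8.0)) * p * (1.0 + 32.0 * p * p))
    return packet_size_bytes / denom
```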

Method and apparatus for wireless transmission using multiple description coding

Method For Joint Sparsity-Enforced K-Space Trajectory and Radiofrequency Pulse Design

Ordered and disordered source coding

Benefiting from Disorder: Source Coding for Unordered Data

arXiv:0708.2310, Aug 17, 2007

The order of letters is not always relevant in a communication task. This paper discusses the implications of order irrelevance on source coding, presenting results in several major branches of source coding theory: lossless coding, universal lossless coding, rate-distortion, high-rate quantization, and universal lossy coding. The main conclusions demonstrate that there is a significant rate savings when order is irrelevant. In particular, lossless coding of n letters from a finite alphabet requires Θ(log n) bits and universal lossless coding requires n + o(n) bits for many countable alphabet sources. However, there are no universal schemes that can drive a strong redundancy measure to zero. Results for lossy coding include distribution-free expressions for the rate savings from order irrelevance in various high-rate quantization schemes. Rate-distortion bounds are given, and it is shown that the analogue of the Shannon lower bound is loose at all finite rates.
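
A small worked example of the Θ(log n) claim: when order is irrelevant, the decoder only needs the vector of letter counts, and there are C(n+k-1, k-1) such vectors for n letters from a k-ary alphabet, so roughly (k-1) log2 n bits suffice. The values n = 1000 and k = 4 below are arbitrary illustrations.

```python
import math

# Bits needed to convey n letters from a k-ary alphabet when order is irrelevant:
# the decoder only needs the multiset, i.e. the vector of letter counts, and there
# are C(n+k-1, k-1) such vectors, giving Theta(log n) bits for fixed k.
n, k = 1000, 4
unordered_bits = math.log2(math.comb(n + k - 1, k - 1))  # about 27 bits
ordered_bits = n * math.log2(k)                          # 2000 bits
print(f"unordered: {unordered_bits:.1f} bits, ordered: {ordered_bits:.0f} bits")
```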

Compressed History Matching: Exploiting Transform-Domain Sparsity for Regularization of Nonlinear Dynamic Data Integration Problems

Mathematical Geosciences, 2010

In this paper, we present a new approach for estimating spatially-distributed reservoir properties from scattered nonlinear dynamic well measurements by promoting sparsity in an appropriate transform domain where the unknown properties are believed to have a sparse approximation. The method is inspired by recent advances in sparse signal reconstruction, formalized under the celebrated compressed sensing paradigm. Here, we
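
A minimal sketch of the sparsity-promoting regularization (Python): an l1-penalized least-squares estimate of the transform coefficients, solved by iterative soft-thresholding, for an assumed linearized measurement operator `G` and sparsifying transform `Phi`. The nonlinear flow simulation and the iterative relinearization used in actual history matching are not reproduced.

```python
import numpy as np

def ista_sparse_estimate(G, d, Phi, lam, n_iter=200):
    """Sparsity-regularized estimate of a property field in a transform domain.

    Solves min_v ||d - G @ Phi @ v||^2 + lam * ||v||_1 by iterative
    soft-thresholding (ISTA), then returns m = Phi @ v. Here `G` stands for a
    linearized measurement operator mapping the property field to the well
    data `d`, and `Phi` for the sparsifying transform (e.g. an inverse DCT
    basis stacked as a matrix); both are assumptions for this sketch.
    """
    A = G @ Phi
    L = np.linalg.norm(A, 2) ** 2          # squared spectral norm (step-size scale)
    v = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ v - d)           # gradient of the data-fit term (up to a factor 2)
        z = v - grad / L
        v = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # soft threshold
    return Phi @ v
```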

First-Photon Imaging: Scene Depth and Reflectance Acquisition from One Detected Photon per Pixel

Capturing depth and reflectance images using active illumination despite the detection of little light backscattered from the scene has wide-ranging applications in computer vision. Conventionally, even with single-photon detectors, a large number of detected photons is needed at each pixel location to mitigate Poisson noise. Here, using only the first detected photon at each pixel location, we capture both the 3D structure and reflectivity of the scene, demonstrating greater photon efficiency than previous work. Our computational imager combines physically accurate photon-counting statistics with exploitation of spatial correlations present in real-world scenes. We experimentally achieve millimeter-accurate, sub-pulse-width depth resolution and 4-bit reflectivity contrast, simultaneously, using only the first photon detection per pixel, even in the presence of high background noise. Our technique enables rapid, low-power, and noise-tolerant active optical imaging.
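
A minimal sketch of the per-pixel statistics behind first-photon imaging (Python): with a small per-pulse detection probability, the number of pulses until the first detection is roughly geometric, so its reciprocal serves as a relative reflectivity estimate, and the detection's time of flight gives depth. The background-noise censoring and spatial regularization that make the paper's results accurate are omitted.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def first_photon_estimates(n_pulses, first_time_s):
    """Raw per-pixel estimates from only the first detected photon.

    `n_pulses[i]` is the number of illumination pulses fired at pixel i before
    its first detection, and `first_time_s[i]` is that detection's time of
    flight. With a low per-pulse detection probability, n_pulses is roughly
    geometric, so 1/n_pulses is a natural (maximum-likelihood) proxy for
    relative reflectivity, and c*t/2 gives depth. These are the noisy
    pixelwise estimates only; denoising and regularization are omitted.
    """
    n_pulses = np.asarray(n_pulses, dtype=float)
    reflectivity = 1.0 / n_pulses                 # relative scale only
    depth_m = C * np.asarray(first_time_s) / 2.0  # round-trip time of flight
    return reflectivity, depth_m
```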
