Kenneth Hanson | Los Alamos National Laboratory
Papers by Kenneth Hanson
Conference Record of the 1991 IEEE Nuclear Science Symposium and Medical Imaging Conference, 1991
This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. By acceptance of this article, the publisher recognizes that the U.S. Government retains a nonexclusive, royalty-free license to publish or reproduce the published form of this contribution, or to allow others to do so, for U.S. Government purposes.
With a change of variables the Abel transform gives the projection of a 2-D distribution with circular symmetry. Thus the Abel inversion formula allows one to determine the radial dependence of such a distribution from its projection. However, this inversion formula is very sensitive to noise in the projection data. When the projection data are derived from radiographic measurements, further
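The forward direction of the transform this abstract refers to can be sketched numerically. The snippet below is an illustrative sketch, not the paper's method: it evaluates the projection of a circularly symmetric profile using the substitution r = sqrt(y^2 + t^2), which removes the 1/sqrt(r^2 - y^2) singularity of the Abel integrand; the function names and the midpoint-rule discretization are my own choices.

```python
import math

def abel_projection(f, y, t_max=2.0, n=4000):
    """Projection of a circularly symmetric 2-D profile f(r) at impact
    parameter y.  With r = sqrt(y^2 + t^2) the singular Abel integral
        F(y) = 2 * integral_y^inf f(r) r / sqrt(r^2 - y^2) dr
    becomes the smooth integral
        F(y) = 2 * integral_0^inf f(sqrt(y^2 + t^2)) dt,
    approximated here by the midpoint rule on [0, t_max]."""
    h = t_max / n
    return 2.0 * h * sum(f(math.hypot(y, (i + 0.5) * h)) for i in range(n))

# Uniform disk of radius 1: the projection is the chord length 2*sqrt(1 - y^2).
disk = lambda r: 1.0 if r < 1.0 else 0.0
print(abel_projection(disk, 0.6))   # close to 2*sqrt(1 - 0.36) = 1.6
```

The noise sensitivity the abstract mentions enters in the inverse direction, where a derivative of the (noisy) projection appears inside the integrand.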
Maximum-Entropy and Bayesian Spectral Analysis and Estimation Problems, 1987
A method of evaluating image-recovery algorithms is presented that is based on the numerical computation of how well a specified visual task can be performed on the basis of the reconstructed images. A Monte Carlo technique is used to simulate the complete imaging process including generation of scenes appropriate to the desired application, subsequent data taking, image recovery, and performance of the stated task based on the final image. The pseudorandom-simulation process permits one to assess the response of an image-recovery algorithm to many different realizations of the same type of scene. The usefulness of this method is demonstrated through a study of the algebraic reconstruction technique (ART), a tomographic reconstruction algorithm that reconstructs images from their projections. The task chosen for this study is the detection of disks of known size and position. Task performance is rated on the basis of the detectability index derived from the area under the receiver operating characteristic curve. In the imaging situations explored, the use of the nonnegativity constraint in the ART dramatically increases the detectability of objects in some instances, particularly when the data consist of a limited number of noiseless projections. Conversely, the nonnegativity constraint does not improve detectability when the data are complete but noisy.
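The detectability index mentioned above can be computed from observer scores on signal-present and signal-absent trials. The sketch below is illustrative only (names and sample sizes are my own): it estimates the area under the ROC curve with the rank-sum (Mann-Whitney) statistic and converts it to a d' index under the standard equal-variance Gaussian assumption, AUC = Phi(d'/sqrt(2)).

```python
import math, random, statistics

def auc(noise_scores, signal_scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    pooled = sorted([(s, 0) for s in noise_scores] + [(s, 1) for s in signal_scores])
    rank_sum = sum(rank for rank, (_, label) in enumerate(pooled, 1) if label)
    n1, n0 = len(signal_scores), len(noise_scores)
    return (rank_sum - n1 * (n1 + 1) / 2) / (n1 * n0)

def detectability(noise_scores, signal_scores):
    """d' index: for equal-variance Gaussian scores, AUC = Phi(d'/sqrt(2))."""
    return math.sqrt(2) * statistics.NormalDist().inv_cdf(auc(noise_scores, signal_scores))

# Synthetic observer scores separated by one standard deviation (true d' = 1).
rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(5000)]
signal = [rng.gauss(1.0, 1.0) for _ in range(5000)]
print(detectability(noise, signal))   # near 1.0
```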
The multitude of commercial computed tomographic (CT) scanners which have recently been introduced for use in diagnostic radiology has given rise to a need to compare these different machines in terms of image quality and dose to the patient. It is therefore desirable to arrive at a figure of merit for a CT image which gives a measure of the diagnostic efficacy of that image. This figure of merit may well be dependent upon the specific visual task being performed. It is clearly important that the capabilities and deficiencies of the human observer as well as the interface between man and machine, namely the viewing system, be taken into account in formulating the figure of merit. Since the CT reconstruction is the result of computer processing, it is possible to use this processing to alter the characteristics of the displayed images. This image processing may improve or degrade the figure of merit.
When the available CT projection data are incomplete, there exists a null space in the space of possible reconstructions about which the data provide no information. Deterministic CT reconstructions are impotent in regard to this null space. Furthermore, it is shown that consistency conditions based on projection moments do not provide the missing projections. When the projection data consist of a set of parallel projections that do not encompass a complete 180° rotation, the null space corresponds to a missing sector in the Fourier transform of the original 2-D function. The long-range streak artifacts created by the missing sector can be reduced by attenuating the Fourier transform of the reconstruction smoothly to zero at the sector boundary. It is shown that the Fourier transform of a reconstruction obtained under a maximum entropy constraint is nearly zero in the missing sector. Hence, maximum entropy does not overcome the basic lack of information. It is suggested that some p...
ABSTRACT. We have previously described how imaging systems and image reconstruction algorithms can be evaluated on the basis of how well binary-discrimination tasks can be performed by a machine algorithm that “views” the reconstructions [1, 2]. The present work examines the performance of a family of algorithmic observers viewing tomographic images reconstructed using the Cambridge Maximum Entropy software, MEMSYS 3. We investigate the effects on the performance of these observers due to varying the parameter α, which controls the strength of the prior in the
A method for optimizing image-recovery algorithms is presented that is based on how well the specified task of object localization can be performed using the reconstructed images. The task performance is numerically assessed by a Monte Carlo simulation of the complete imaging process including the generation of scenes appropriate to the desired application, subsequent data taking, image recovery, and performance of the stated task based on the final image. This method is used to optimize the constrained Algebraic Reconstruction Technique (ART), which reconstructs images from their projections under a nonnegativity constraint by means of an iterative updating procedure. The optimization is performed by finding the relaxation factor, which is employed in the updating procedure, that yields the minimum rms error in estimating the position of discs in the reconstructed images. It is found that the optimum operating points for the best object localization are essentially the same as thos...
A method for optimizing image-recovery algorithms is presented that is based on how well a specified visual task can be performed using the reconstructed images. Visual task performance is numerically assessed by a Monte Carlo simulation of the complete imaging process including the generation of scenes appropriate to the desired application, subsequent data taking, image recovery, and performance of the stated task based on the final image. This method is used to optimize the Algebraic Reconstruction Technique (ART), which reconstructs images from their projections, by varying the relaxation factor employed in the updating procedure. In some of the imaging situations studied, it is found that the optimization of constrained ART, in which a nonnegativity constraint is invoked, can vastly increase the detectability of objects. There is little improvement attained for unconstrained ART.
This work demonstrates the application of a method to optimize image reconstruction algorithms on the basis of the performance of specific visual tasks that are to be accomplished using the reconstructed images. The evaluation of task performance is numerically realized by a Monte Carlo simulation of the complete imaging chain, including the final inference based on the reconstructions. Fundamental to this evaluation is that it yields an average response by consideration of many initial scenes. It is shown that the use of the nonnegativity constraint in the Algebraic Reconstruction Technique can significantly improve performance in situations where there is a severe lack of measurements when the relaxation factor is optimized. There is no indication in any of the cases studied hitherto that the nonnegativity constraint can improve performance in situations where the data are complete, but noisy. 10 refs., 5 figs.
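The constrained ART update that these abstracts optimize can be sketched in a few lines. This is a generic Kaczmarz-style illustration under my own naming, not the papers' implementation: each row update moves the estimate toward the hyperplane a_i · x = b_i, scaled by the relaxation factor, and the nonnegativity constraint is applied by clipping negative values.

```python
def art(A, b, relaxation=1.0, sweeps=500, nonneg=True):
    """Constrained ART: cycle over the measurement rows, applying the
    relaxed Kaczmarz update x += relaxation * (b_i - a_i.x)/|a_i|^2 * a_i,
    then (optionally) clip negative pixel values to zero."""
    x = [0.0] * len(A[0])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            resid = b_i - sum(a * v for a, v in zip(a_i, x))
            scale = relaxation * resid / sum(a * a for a in a_i)
            x = [v + scale * a for v, a in zip(x, a_i)]
            if nonneg:
                x = [max(v, 0.0) for v in x]
    return x

# Toy consistent 2x2 system whose exact solution is x = [1, 3] (nonnegative).
A, b = [[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]
print(art(A, b))   # converges to [1.0, 3.0]
```

In the papers, the quantity being tuned is the relaxation factor itself, scored by task performance (rms localization error or detectability) rather than by data residual.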
A technique to estimate the radial dependence of the noise power spectrum of images is proposed in which the calculations are conducted solely in the spatial domain of the noise image. The noise power spectrum averaged over a radial spatial-frequency interval is obtained from the variance of a noise image that has been convolved with a small kernel that approximates a Laplacian operator. Recursive consolidation of the image by factors of two in each dimension yields estimates of the noise power spectrum over that full range of spatial frequencies.
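The core identity behind this spatial-domain estimate can be illustrated directly: the variance of a filtered noise image equals the integral of the noise power spectrum weighted by the squared magnitude of the filter's transfer function, so for white noise of variance sigma^2 it is simply sigma^2 times the sum of squared kernel weights. The code below is a minimal sketch of that check (kernel choice, image size, and function names are my own), not the paper's full radial-band procedure.

```python
import random

# Discrete Laplacian kernel; its squared weights sum to 1+1+16+1+1 = 20.
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]

def filtered_variance(img, kernel):
    """Variance of the valid-region convolution of img with kernel.
    For white noise of variance sigma^2 this equals sigma^2 * sum(k^2),
    because the filter weights the flat noise power spectrum by |K(f)|^2."""
    kh, kw = len(kernel), len(kernel[0])
    vals = [sum(kernel[u][v] * img[i + u][j + v]
                for u in range(kh) for v in range(kw))
            for i in range(len(img) - kh + 1)
            for j in range(len(img[0]) - kw + 1)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

rng = random.Random(1)
noise = [[rng.gauss(0.0, 1.0) for _ in range(64)] for _ in range(64)]
print(filtered_variance(noise, LAPLACIAN) / 20)   # near 1 for unit-variance white noise
```

The Laplacian's band-pass |K(f)|^2 is what confines the estimate to a high-frequency radial band; the paper's recursive 2x2 consolidation shifts that band downward to cover the rest of the spectrum.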
Physica D: Nonlinear Phenomena, 1999
A probabilistic framework is presented for assessing the uncertainties in simulation predictions that arise from model parameters derived from uncertain measurements. A probabilistic network facilitates both conceptualizing and computationally implementing an analysis of a large number of experiments in terms of many intrinsic models in a logically consistent manner. This approach permits one to improve one's knowledge about the underlying models at every level of the hierarchy of validation experiments.
Suppose that it is desired to estimate certain parameters associated with a model of an object that is contained within a larger scene and that only indirect measurements of the scene are available. The optimal solution is provided by a Bayesian approach, which is founded on the posterior probability density distribution. The complete Bayesian procedure requires an integration of the posterior probability over all possible values of the image exterior to the local region being analyzed. In the present work, the full treatment is approximated by simultaneously estimating the reconstruction outside the local region and the parameters of the model within the local region that maximize the posterior probability. A Monte Carlo procedure is employed to evaluate the usefulness of the technique in a signal-known-exactly detection task.
Applied Optics, 1982
The magnitude of a measurement of the noise power spectrum (NPS) of a radiographic screen-film system is affected by the light-scattering properties of the film studied and the optical characteristics of the microdensitometer used to sample the image. To facilitate absolute NPS intercomparisons among laboratories, NPS in terms of instrument density must be converted to diffuse density. Conversion in terms of a Callier Q factor was found to be inadequate due to nonlinearity of the density response of the microdensitometer. By establishing instrument-to-diffuse density characteristic curves for the microdensitometers at two laboratories and correcting the NPS by the square of the slopes of these curves at the density of the image, good agreement was achieved for independent NPS measurements of a given film sample.
Applied Optics, 1985
functions, which are arranged on a rectangular grid and possess a local region of support. The basis functions considered here are positive and may overlap. It is found that basis functions based on cubic B-splines offer significant improvements in the calculational accuracy that can be achieved with iterative tomographic reconstruction algorithms. By employing repetitive basis functions, the computational effort involved in these algorithms can be minimized through the use of tabulated values for the line or strip integrals over a single basis function. The local nature of the basis functions reduces the difficulties associated with applying local constraints on reconstruction values, such as upper and lower limits. Since a reconstruction is specified everywhere by a set of coefficients, display of a coarsely represented image does not require an arbitrary choice of an interpolation function.
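The cubic B-spline kernel the abstract refers to is a standard piecewise cubic with support (-2, 2). As a minimal illustration (my own code, not the paper's), the snippet below evaluates it and verifies the partition-of-unity property that makes overlapping, grid-shifted copies represent a constant image exactly with constant coefficients.

```python
def bspline3(x):
    """Normalized cubic B-spline kernel, supported on (-2, 2)."""
    ax = abs(x)
    if ax < 1:
        return (4 - 6 * ax ** 2 + 3 * ax ** 3) / 6
    if ax < 2:
        return (2 - ax) ** 3 / 6
    return 0.0

# On a unit grid the shifted splines overlap, yet their values sum to one at
# every point, so constant coefficients reproduce a constant image exactly.
x = 0.37
print(sum(bspline3(x - k) for k in range(-3, 4)))   # 1.0 up to rounding
```

The same local support is what allows line or strip integrals over a single basis function to be tabulated once and reused everywhere on the grid.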
We present an approach to uncertainty quantification for nuclear applications, which combines the covariance evaluation of differential cross-section data and the error propagation from matching a criticality experiment using a neutron-transport calculation. We have studied the reduction in uncertainty of ²³⁹Pu fission cross sections by using a one-dimensional neutron-transport calculation with the PARTISN code. The evaluation of ²³⁹Pu differential cross-section data is combined with a criticality measurement (Jezebel) using a Bayesian method. To quantify the uncertainty in such calculations, we generate a set of random samples of the cross sections, which represents the covariance matrix, and estimate the distribution of calculated quantities, such as criticality. We show that inclusion of the Jezebel data reduces uncertainties in estimating neutron multiplicity.
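Generating random samples that "represent the covariance matrix" is commonly done by drawing x = mean + L z, where L is the Cholesky factor of the covariance and z is standard normal. The sketch below illustrates that idea for a toy 2x2 case; the matrices, sizes, and function names are my own, not the paper's cross-section data.

```python
import math, random

def chol2(cov):
    """Lower-triangular Cholesky factor of a 2x2 covariance matrix."""
    l00 = math.sqrt(cov[0][0])
    l10 = cov[1][0] / l00
    l11 = math.sqrt(cov[1][1] - l10 * l10)
    return [[l00, 0.0], [l10, l11]]

def sample(mean, cov, rng):
    """One correlated draw x = mean + L z, with z standard normal."""
    L = chol2(cov)
    z = [rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)]
    return [mean[i] + sum(L[i][j] * z[j] for j in range(2)) for i in range(2)]

rng = random.Random(0)
cov = [[4.0, 2.0], [2.0, 3.0]]                 # toy evaluated covariance
draws = [sample([10.0, 5.0], cov, rng) for _ in range(20000)]
```

Propagating each draw through the transport calculation then yields a distribution of computed quantities (e.g. criticality) whose spread is the predicted uncertainty.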
It is well known that image assessment is task dependent. This is demonstrated in the context of images reconstructed from sparse data using MEMSYS 3. We demonstrate that the problem of determining the regularization (or hyper-) parameter has a task-dependent character, independent of whether the images are viewed by human observers or by classical or neural-net classifiers. This issue is not addressed by Bayesian image analysts. We suggest, however, that knowledge of the task, or the use to which the images are to be put, is a form of prior knowledge that should be incorporated into a Bayesian analysis. We sketch a frequentist approach that may serve as a guide to a Bayesian solution.
Computational Imaging, 2003
The goal in Quasi-Monte Carlo (QMC) is to improve the accuracy of integrals estimated by the Monte Carlo technique through a suitable specification of the sample point set. Indeed, the errors from N samples typically drop as N^(-1) with QMC, which is much better than the N^(-1/2) dependence obtained with Monte Carlo estimates based on random point sets. The heuristic reasoning behind selecting QMC point sets is similar to that in halftoning, that is, to spread the points out as evenly as possible, consistent with the desired point density. I will outline the parallels between QMC and halftoning, and describe a halftoning-inspired algorithm for generating a sample set with uniform density, which yields smaller integration errors than standard QMC algorithms in two dimensions.
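A standard QMC baseline of the kind the abstract compares against is the Halton sequence, built from radical-inverse digit reversals in coprime bases. The sketch below (my own illustration, not the paper's halftoning-inspired algorithm) uses it to estimate a simple 2-D integral.

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse: reflect the base-b digits of i
    about the radix point, giving a point in [0, 1)."""
    x, f = 0.0, 1.0 / base
    while i:
        i, d = divmod(i, base)
        x += d * f
        f /= base
    return x

def halton2d(n):
    """First n points of the 2-D Halton sequence (coprime bases 2 and 3)."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3)) for i in range(1, n + 1)]

# Estimate the integral of x*y over the unit square (exact value 1/4).
pts = halton2d(1024)
est = sum(x * y for x, y in pts) / len(pts)
print(est)   # much closer to 0.25 than a 1024-point random estimate typically is
```

The even spread of such low-discrepancy points is exactly the property shared with good halftone patterns, which is the parallel the paper develops.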