Image Reconstruction in Seismic and Medical Tomography
Related papers
Adaptation of back projection tomography to seismic travel time problems
Journal of Geophysical Research, 1988
A back projection method of reconstruction is adapted to invert seismic travel time data for velocity structure. Adaptations are made so that the inhomogeneous and anisotropic ray sets and the three-dimensional geometries commonly dealt with in seismic experiments can be handled with greater success. Jacobi iteration, deconvolution, and ray weighting work well in augmenting the basic back projection method to produce a well-focused image. These methods succeed by amounts that depend on the quality of the ray coverage. The ability to reconstruct an accurate image when the data include moderate amounts of noise is also shown to be good. Comparison of inversions produced with back projection tomography and with damped least squares indicates that the two methods are comparable in their ability to reconstruct an image of the actual structure. The back projection approach, however, is much more computationally efficient, which in practice allows for the construction of more detailed inversions.

INTRODUCTION

The theoretical basis for tomography can be traced to Radon [1917], who formulated the forward and inverse transform pair that now bears his name. The Radon transform can often be related to the projection of a two-dimensional scalar field, such as the projection of a slowness field along a parallel ray set to produce the travel time delays associated with the rays. One of the first significant applications of this theory to a physical problem was that of Bracewell [1956], who devised a method to image celestial bodies with radio signals. Seismologists have also long used the principle of the Radon transform in the construction of "slant stacks." But it has been the medical researchers who have had the most remarkable success, and it is they who coined the word tomography for the high-resolution, two-dimensional "tomograph" (slice picture) through a patient. Fundamental to their success has been the discretization of the space to be imaged into cells, thus posing the problem in a manner well suited to digital computers. High resolution is achieved by dividing the space into many small cells. The key inversion algorithm employed by this method is a back projection scheme in which each ray is individually traced and the signal associated with the ray is distributed in the region along the ray path. The algorithm relies on the back projection of many rays and the superposition of the associated streaks to reconstruct an image. By itself, simple back projection produces a rather blurred image. The nature of the blurring is well understood, though, and the application of procedures specifically designed to compensate for it produces a high-quality image. It is this capability of back projection tomography to produce a highly resolved image that makes it attractive for seismic application. However, such application requires the ability to handle ray sets that are poorly distributed and that often fill all three spatial dimensions. These are problems carefully avoided in the medical application of the technique. In this paper, adaptations to back projection tomography are discussed that allow us to deal with these more general ray geometries.
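The basic back projection step described above lends itself to a compact illustration. The Python/NumPy sketch below distributes each travel-time residual over the cells crossed by its ray and averages the contributions by ray coverage; the function name, argument layout, and the per-ray dictionary of cell path lengths are illustrative assumptions, not the authors' implementation, and the Jacobi iteration, deconvolution, and ray-weighting refinements mentioned in the abstract are omitted.

```python
import numpy as np

def back_project(residuals, ray_cell_lengths, n_cells):
    """Distribute each ray's travel-time residual over the cells it crosses.

    residuals        : array of travel-time residuals, one per ray (s)
    ray_cell_lengths : list, one dict per ray, of {cell_index: path_length_in_cell}
    n_cells          : total number of slowness cells in the model
    """
    numerator = np.zeros(n_cells)    # residual contributions, weighted by path length
    denominator = np.zeros(n_cells)  # accumulated path length (ray coverage) per cell

    for dt, cells in zip(residuals, ray_cell_lengths):
        total_length = sum(cells.values())
        for j, lij in cells.items():
            # spread the residual along the ray in proportion to the segment
            # length in each cell, as in simple back projection
            numerator[j] += lij * dt / total_length
            denominator[j] += lij

    # average over all rays crossing each cell; uncovered cells stay zero
    slowness_update = np.divide(numerator, denominator,
                                out=np.zeros(n_cells),
                                where=denominator > 0)
    return slowness_update
```

Superposing many such back-projected rays reproduces the blurred image the abstract refers to; the focusing procedures (deconvolution, ray weighting, iteration) would then be applied on top of this update.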
The importance of the spread system geometry on the image reconstruction of seismic tomography
Journal of Geophysics and Engineering, 2008
The influence of spread and shot systems is investigated using common measuring geometries such as conventional crosshole and multi-offset VSP. Both synthetic and field data are used to better understand the importance of commonly used spread and shot systems in the image reconstruction process. In the synthetic case, a faulted structure beneath a near-surface low-velocity layer is used. The resulting seismic images are compared with the original model, and the similarities in structure and in velocities are investigated. Tomograms derived from the real data are compared with ground truth. Our study shows that realistic spread systems can produce images of nearly the same quality as the ideal ones.
Model parametrization in seismic tomography: a choice of consequence for the solution quality
Physics of the Earth and Planetary Interiors, 2001
To better assess the quality of three-dimensional (3-D) tomographic images and to better define possible improvements to tomographic inversion procedures, one must consider not only data quality and the numerical precision of forward and inverse solvers but also the appropriateness of the model parametrization and the display of results. The quality of the forward solution, in particular, strongly depends on the parametrization of the velocity field and is of great importance both for the calculation of travel times and for the partial derivatives that characterize the inverse problem. To achieve a quality of model parametrization appropriate to high-precision forward and inverse algorithms and to high-quality data, we propose a three-grid approach encompassing a seismic, a forward, and an inversion grid. The seismic grid is set up in such a way that it can account for the highest resolution capability of the data set (i.e. optimal data) and that the 3-D velocity structure is adequately represented down to the smallest resolvable detail a priori known to exist in real earth structure. Generally, the seismic grid has uneven grid spacing, and it provides the basis for later display and interpretation. The forward (numerical) grid allows a numerically stable computation of travel times and partial derivatives; its specifications are defined by the individual forward solver and may vary between numerical techniques. The inversion grid is based on the seismic grid but must be large enough to guarantee uniform and fair resolution in most areas. For optimal data sets the inversion grid may eventually equal the seismic grid, but in reality the spacing of this grid will depend on the illumination quality of the data set (ray sampling) and on the maximum matrix size we can invert. The use of the three-grid approach in seismic tomography allows one to adequately and evenly account for the characteristics of the forward and inverse solution algorithms, a priori knowledge of the earth's structure, and the resolution capability of the available data set. This results in possibly more accurate and certainly more reliable tomographic images, since the inversion process may be well tuned to the particular application and since the three-grid approach allows better assessment of solution quality.
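As a rough illustration of how the three grids might be represented in practice, here is a minimal Python sketch; the node spacings, the grouping of seismic-grid nodes into inversion cells, and the averaging and interpolation rules are hypothetical choices made for the example, not the scheme proposed by the authors.

```python
import numpy as np

# Hypothetical 1-D node positions for each grid (km); real grids are 3-D and data-driven.
seismic_nodes = np.array([0.0, 1.0, 2.0, 3.0, 5.0, 8.0, 12.0])  # uneven spacing, finest a priori detail
forward_nodes = np.arange(0.0, 12.1, 0.25)                      # regular spacing required by the forward solver
inversion_cells = [[0, 1, 2], [3, 4], [5, 6]]                   # seismic nodes merged into coarser inversion cells

def forward_from_seismic(values):
    """Interpolate a field sampled on the seismic grid onto the forward grid
    for travel-time and partial-derivative computation."""
    return np.interp(forward_nodes, seismic_nodes, values)

def seismic_to_inversion(values):
    """Map a seismic-grid field to the inversion grid by averaging the
    seismic-grid values that fall inside each inversion cell."""
    return np.array([values[np.array(idx)].mean() for idx in inversion_cells])
```

In a real application the inversion-cell grouping would be adapted to the ray sampling and to the maximum invertible matrix size, as the abstract notes.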
Methodology of Seismic Tomography
In this chapter, we introduce the tomographic methods which are widely used to study three-dimensional (3-D) seismic velocity, attenuation and anisotropy structures of the Earth's interior. The fundamental mathematical equations of these methods are presented for a better understanding of the principles of seismic tomography.
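For reference, the basic travel-time relations that such treatments typically start from can be written as follows; this is the standard linearized formulation of travel-time tomography, not a reproduction of the chapter's own equations.

```latex
% Travel time of ray i through a slowness field s(\mathbf{r}):
t_i = \int_{L_i} s(\mathbf{r})\,\mathrm{d}l .
% Discretizing the model into J cells with ray path length l_{ij} in cell j,
% and linearizing about a reference slowness model, the residuals satisfy
\delta t_i = \sum_{j=1}^{J} l_{ij}\,\delta s_j ,
% i.e. the linear tomographic system  \delta\mathbf{t} = \mathbf{L}\,\delta\mathbf{s},
% which is solved (usually with damping or smoothing) for the slowness update.
```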
Seismic tomography in civil engineering investigation
1994
Seismic tomography is beginning to appear as a new method not only in deep applied geophysics but also in geophysics for civil engineering, hydrogeology and the environment. The application of seismic tomography in civil engineering investigation is described. The processing was implemented for PC computers, and the main ideas of the mathematical and software solution are given. The resulting solution enables not only the tomographic processing of crosshole measurements but also the modelling of various geological situations. Examples of cross-gallery processing and results from extended seismic logging are given; these types of measurement are the most common in civil engineering investigation. One example from a non-inline measurement is also included. The output of the described solution is not only a velocity contour-line map but also a map of the curved seismic rays between shot points and receivers.
Crosshole seismic waveform tomography – II. Resolution analysis
In an accompanying paper, we used waveform tomography to obtain a velocity model between two boreholes from a real crosshole seismic experiment. As for all inversions of geophysical data, it is important to make an assessment of the final model, to determine which parts of the model are well resolved and can confidently be used for geological interpretation. In this paper we use checkerboard tests to provide a quantitative estimate of the performance of the inversion and the reliability of the final velocity model. We use the output from the checkerboard tests to determine resolvability across the velocity model. Such tests can act as good guides for designing appropriate inversion strategies. Here we discovered that, by including both reference-model and smoothing constraints in initial inversions, and then relaxing the smoothing constraint for later inversions, an optimum velocity image was obtained. Additionally, we noticed that the performance of the inversion depended on a relationship between the velocity perturbation and the checkerboard grid size: larger velocity perturbations were better resolved when the grid size was also increased. Our results suggest that model assessment is an essential step prior to interpreting features in waveform tomographic images.

Waveform tomography is a powerful tool that can yield quantitative images of the physical properties of earth media. Compared to traveltime tomography, a velocity image generated by waveform tomography has significantly better resolution. However, one significant question remains: how reliable is the velocity image, and can we use it to make a direct geological interpretation? Can we use the observed velocity contrasts to distinguish individual geological layers? In this paper, we conduct a series of checkerboard tests on a waveform tomographic velocity model that was obtained from a real crosshole seismic data set. The aims of these tests are to reveal the resolving power of the inversion when dealing with real data, and to provide an indication of the reliability of the inversion result. Generally speaking, a geophysical tomographic solution is not unique. It depends on the quality of the data, data selection, the inversion method employed, and the model parametrization. Tomographic resolution can be very poor in regions where the distribution of sources and receivers is irregular. Additionally, it is common to apply model constraints to the inverse problem to produce a practical solution. The effect of these factors upon the inversion solution is difficult to quantify, especially when dealing with real seismic data. It is thus questionable whether we can use the final velocity model to infer the earth's properties correctly. In this paper we use checkerboard tests to verify the final velocity model obtained from a waveform tomographic inversion. Checkerboard testing has been used commonly in traveltime tomography (Inoue et al. 1990; Zelt 1998; Zelt & Barton 1998; Morgan et al. 2002), but has not yet been applied to waveform tomography of real seismic data. We set up a checkerboard consisting of rows and columns of alternating positive and negative velocity anomalies, superimposed on the final velocity model. The velocity perturbations are a percentage of the actual velocity value, and thus are spatially varying.
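As a concrete illustration of the checkerboard construction just described, the following Python/NumPy sketch superimposes alternating positive and negative anomalies, expressed as a percentage of the local velocity, on a final model. The function name, array layout, and default amplitude are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def add_checkerboard(velocity, cell_nx, cell_nz, amplitude=0.05):
    """Superimpose alternating +/- velocity anomalies on a final model.

    velocity  : 2-D array (nz, nx) of the final tomographic velocities (m/s)
    cell_nx   : checkerboard cell width in grid points
    cell_nz   : checkerboard cell height in grid points
    amplitude : fractional perturbation, e.g. 0.05 for +/- 5 per cent
    """
    nz, nx = velocity.shape
    iz, ix = np.indices((nz, nx))
    # sign flips between adjacent checkerboard cells (+1 / -1 pattern)
    sign = np.where(((iz // cell_nz) + (ix // cell_nx)) % 2 == 0, 1.0, -1.0)
    # the perturbation is a percentage of the local velocity, so it varies spatially
    return velocity * (1.0 + amplitude * sign)
```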
Based on the checkerboard model, we generate a synthetic data set using the same frequency-domain finite-difference scheme as in the inversion itself, and then invert these data using exactly the same method, procedure, constraints, and parametrization as used for the tomographic inversion of the real data. Resolvability at any point of the model space is defined as the ratio of the recovered velocity anomaly to the imposed velocity perturbation. In the resolution analysis tests, we test the effects of the reference-model constraint, the model-smoothness constraint, and the combination of the two that we applied in the inversion of the real data. We also mimic the real data acquisition with an irregular source/receiver geometry, and then test the effect of the irregular ray coverage in the real experiment by setting up an ideal crosshole configuration, consisting of regular sources for each of the cells in one borehole and regular receivers spanning all cells in the other borehole. Finally, we test the effect of varying the magnitude of the velocity perturbation and the cell size of the checkerboard.
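One simple way to compute the resolvability defined above, point by point, as the ratio of recovered anomaly to imposed perturbation, is sketched below; this is an illustrative point-wise implementation under our own naming, not necessarily the exact measure used in the paper (which may, for instance, average over checkerboard cells).

```python
import numpy as np

def resolvability(recovered, background, perturbed):
    """Ratio of recovered anomaly to imposed perturbation at each model point.

    recovered  : model obtained by inverting the synthetic checkerboard data
    background : final model from the real-data inversion (unperturbed reference)
    perturbed  : checkerboard model (background plus imposed anomalies)
    Values near 1 indicate well-resolved regions; values near 0, poorly resolved ones.
    """
    imposed = perturbed - background
    recovered_anomaly = recovered - background
    # avoid division by zero where no perturbation was imposed
    return np.divide(recovered_anomaly, imposed,
                     out=np.zeros_like(imposed),
                     where=np.abs(imposed) > 1e-12)
```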