Petr Vaníček - Academia.edu

Papers by Petr Vaníček

Geodesy, the concepts

Analyses nouvelles du mouvement du pôle terrestre

Effet de la densité des données sur la précision de la détermination du pied du talus continental par une surface de courbure maximum, en utilisant un algorithme de traçage automatique de la dorsale

Revue Hydrographique Internationale, 1996

Adjustment methods

Kluwer Academic Publishers eBooks, Feb 1, 2006

Padding of Terrestrial Gravity Data to Improve Stokes-Helmert Geoid Computation

Contemporary vertical crustal movements in southern Ontario

Selection of an appropriate height system for geomatics

Journal of Remote Sensing & GIS, 2018

Can Mean Values of Helmert’s Gravity Anomalies be Continued Downward Directly?

Geoinformatica, Jul 22, 2019

The computation of a precise gravimetric geoid based on the Stokes-Helmert approach requires the solution of the geodetic boundary value problem. For that, the mean Helmert's gravity anomaly on the earth's topographic surface must be reduced to the geoid, the surface that plays the role of the boundary. This reduction is a process known as downward continuation. This paper considers the downward continuation as a solution of the discrete inverse Poisson problem. It shows the derivation of a doubly-averaged upward continuation operator that relates mean Helmert's gravity anomaly from the boundary to the surface. Downward continuation is then carried out by the inversion of this operator. It is shown that this can be done rigorously if, and only if, the processes of averaging and downward continuation are commutative (mutually interchangeable).
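As a rough illustration of the inversion idea (not the authors' algorithm): a discretized Poisson upward-continuation operator B maps anomalies on the boundary (geoid) to anomalies on the topographic surface, and downward continuation amounts to solving the linear system B x = y. The sketch below is a much-simplified 1-D, single-height stand-in; the kernel discretization, grid, and numbers are assumptions, and the "doubly-averaged" character of the operator is reduced here to a simple row-normalization.

```python
# Hypothetical illustration: downward continuation of mean Helmert anomalies
# by inverting a discretized Poisson upward-continuation operator.
# The kernel, grid, and radii below are simplified assumptions, not the
# authors' actual discretization.
import numpy as np

R = 6371.0e3          # mean Earth radius (geoid), m
H = 2000.0            # assumed height of the topographic surface above the geoid, m
r = R + H

# 1-D toy grid of cell centres along a profile (spherical distance in rad)
n = 50
psi_nodes = np.linspace(0.0, 0.05, n)

def poisson_kernel(r, R, psi):
    """Poisson kernel for upward continuation of a harmonic field."""
    ell = np.sqrt(r**2 + R**2 - 2.0 * r * R * np.cos(psi))
    return R * (r**2 - R**2) / ell**3

# Discretized upward-continuation operator: the surface value in cell i is a
# weighted average of the boundary (geoid) values over cells j.
B = np.zeros((n, n))
for i in range(n):
    w = poisson_kernel(r, R, np.abs(psi_nodes[i] - psi_nodes))
    B[i, :] = w / w.sum()          # row-normalized -> averaging operator

# Synthetic "true" boundary anomalies and their upward continuation
dg_geoid_true = 30.0 * np.exp(-((psi_nodes - 0.025) / 0.01) ** 2)   # mGal
dg_surface = B @ dg_geoid_true

# Downward continuation = inversion of the operator
dg_geoid_est = np.linalg.solve(B, dg_surface)
print("max recovery error [mGal]:", np.abs(dg_geoid_est - dg_geoid_true).max())
```

In this toy setting the inversion is stable because the kernel is strongly peaked for a low topographic surface; the paper's point is the stricter condition under which the inversion of the doubly-averaged operator is rigorous, namely that averaging and downward continuation commute.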

Hiking and biking with GPS: The Canadian perspective

Springer eBooks, Jan 25, 2006

The paper attempts to review and put into perspective the findings of a recent by-invitation-only workshop on GPS held in Ottawa. The workshop brought together both GPS-information suppliers and consumers coming from all walks of life. Some unique exchanges of views took place which are worth bringing to the attention of the geodetic community.

A new look at the U.S. Geological Survey 1970-1980 horizontal crustal deformation data around Hollister, California

Journal of Geophysical Research, Dec 10, 1991

A compact mathematical formulation is presented for the estimation of crustal strain and fault displacement parameters from repeated geodetic observations. On the basis of the theory of Hilbert space optimization, complex linear forms are used to model spatially and temporally piecewise continuous relative horizontal displacement fields. A mathematical model for the simultaneous network adjustment and strain approximation is elaborated. In an application of the method to the Hollister network reobserved several times during the 1970s, several approximation models are evaluated. The final model, which incorporates a third degree complex algebraic polynomial with four block translation terms in space and a fifth degree algebraic polynomial with three episodic terms in time, is presented in detail. This approximation estimates coseismic fault slip and strain release associated with three moderate earthquakes which occurred in the Hollister area within the time interval spanned by the observations.
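Schematically, the spatial and temporal parts of such an approximation might be written as follows (the term counts come from the abstract; the symbols and the way the two parts are combined are assumptions, not taken from the paper):

$$
u_{\mathrm{space}}(z) \;=\; \sum_{k=0}^{3} c_k\, z^{k} \;+\; \sum_{b=1}^{4} d_b\, \chi_b(z), \qquad z = x + \mathrm{i}y,
$$

$$
u_{\mathrm{time}}(t) \;=\; \sum_{m=0}^{5} a_m\, t^{m} \;+\; \sum_{e=1}^{3} s_e\, H(t - t_e),
$$

where u is the complex horizontal displacement, c_k and a_m are polynomial coefficients, χ_b are indicator functions of the four fault blocks with translations d_b, H is the Heaviside step function, and t_e are the epochs of the three earthquakes whose episodic (coseismic) contributions s_e are estimated.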

Further analysis of the 1981 Southern California Field Test for leveling refraction

Journal of Geophysical Research, 1986

Application of least squares spectral analysis and multiple linear regression techniques to the 1981 southern California field test for leveling refraction has revealed that in addition to differential refraction, rod/instrument settlement and an effect attributed to differential rod miscalibration are also detectable. The object of our analysis was the discrepancy between the forward and backward runnings of a section, as this quantity properly reflects the direction of running, thereby allowing for the detection of direction-dependent effects. A multiple regression model using three arguments representing differential refraction, rod/instrument settlement, and differential rod miscalibration reduced the observed variation of the discrepancies by 61%, as opposed to 53% when only the National Geodetic Survey (NGS) computed refraction correction is applied. It was found that of the original 23-mm accumulated discrepancy, 14 mm was attributed to differential refraction, 20 mm to settlement, and −14 mm to differential rod miscalibration. Analyses with the NGS computed refraction corrections applied (based on Kukkamäki's single sight equation with observed temperatures) gave similar results. It is also shown that the settlement effect is always present in any discrepancy and accumulates in the discrepancies between the forward and backward runnings, while it cancels and is thus hidden in the accumulation of the NGS-derived discrepancies between the short and long sight length runnings.

INTRODUCTION: In May and June 1981 a joint U.S. Geological Survey and National Geodetic Survey field leveling experiment was carried out along a 50-km line from Saugus to Palmdale, California [cf. Adams, 1981]. The purposes of the experiment were [Whalen and Strange, 1983] (1) to measure the magnitude of the differences between heights determined using two different sight lengths along the same leveling line, (2) to determine if standard refraction models, in conjunction with measured vertical temperature gradients, would explain possible differences in observed heights, and (3) to determine how well the temperature model developed by Holdahl [1981] reproduces observed temperature differences. For this experiment a single line of double-run leveling over uniformly sloping terrain was observed. One running employed short sights (SSL) of an average length of 24.3 m and the other long sights (LSL) of an average length of 42.6 m. It was expected that the uniform slope and significant sight length difference would amplify the differential refraction effect on the discrepancies between the SSL and LSL runnings. It was also intended to frequently alter the direction of running of both the short and long sight levelings in order to minimize the rod settlement effect. Unfortunately, this procedure was not strictly followed, as 12 of the 60 section runnings (20%) were not properly "balanced." In fact, all of the imbalance occurs over the last 42 runnings (70% of the line), corresponding to a 29% imbalance over this part. A number of analyses have been performed on the collected data [e.g.,
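A minimal numerical sketch of the kind of three-argument regression described above (the regressor definitions and all numbers below are illustrative stand-ins, not the paper's actual arguments or data):

```python
# Hypothetical sketch of a three-argument multiple linear regression on
# forward-minus-backward section discrepancies.  The regressor definitions
# are simplified stand-ins, not the exact arguments used in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_sections = 60

# Stand-in regressors per section (units arbitrary for the illustration):
x_refraction = rng.normal(size=n_sections)   # differential refraction argument
x_settlement = rng.normal(size=n_sections)   # rod/instrument settlement argument
x_miscalib   = rng.normal(size=n_sections)   # differential rod miscalibration argument

A = np.column_stack([x_refraction, x_settlement, x_miscalib])
true_coeffs = np.array([0.7, 1.0, -0.7])
d = A @ true_coeffs + rng.normal(scale=0.3, size=n_sections)   # discrepancies, mm

coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
residuals = d - A @ coeffs
explained = 1.0 - residuals.var() / d.var()
print("estimated coefficients:", coeffs)
print(f"variance of discrepancies explained: {explained:.0%}")
```

The "variance explained" figure printed here is the analogue of the 61% reduction quoted in the abstract; the design of the real regressors (refraction, settlement, miscalibration arguments) is what the paper actually contributes.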

Positioning of Horizontal Geodetic Datums

The Canadian Surveyor, Dec 1, 1974

This paper treats the classical problem of positioning horizontal geodetic datums. By ‘classical problem’ we mean the problem we face when dealing with the usual horizontal control networks, as opposed to some of the more modern ideas, such as the use of geocentric coordinates, or the ideas of Hotine [1959, 1969]. Our basic assumption is that a coordinate system is a fixed framework (i.e., invariant with respect to network adjustment, readjustment, or expansion) for describing geodetic networks. Our view is that a coordinate system and the network it describes are two different things (this view is not universally accepted within the geodetic community).

Global displacements caused by point dislocations in a realistic Earth model

Journal of Geophysical Research, Apr 10, 1996

We define dislocation Love numbers (h_nm, l_nm, k_nm) and Green's functions to describe the elastic deformation of the Earth caused by a point dislocation and study the coseismic displacements caused in a radially heterogeneous spherical Earth model. We derive spherical harmonic expressions for the shear and tensile dislocations, which can be expressed by four independent solutions: a vertical strike slip, a vertical dip slip, a tensile opening in a horizontal plane, and a tensile opening in a vertical plane. We carry out calculations with a radially heterogeneous Earth model (1066A). The results indicate that the dominating deformations appear in the near field and attenuate rapidly as the epicentral distance increases. The shallower the point source, the larger the displacements. Both the Earth's curvature and vertical layering have considerable effects on the deformation fields; in particular, the vertical layering can cause a 10% difference at the epicentral distance of 0.1°. As an illustration, we calculate the theoretical displacements caused by the 1964 Alaska earthquake (Mw = 9.2) and compare the results with the observed vertical displacements at 10 stations. The results for the near field show that the vertical displacement can reach several meters. The far-field displacements are also significant. For example, the horizontal displacements can be as large as 1 cm at the epicentral distance of 30° and 0.5 cm at about 40°, magnitudes detectable by modern instruments such as satellite laser ranging (SLR), very long baseline interferometry (VLBI), or the Global Positioning System (GPS). Globally, the displacement caused by the earthquake is larger than 0.25 mm.
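The garbled symbols in the scanned abstract stand for the dislocation Love numbers. A schematic of how such Love numbers enter the surface displacement field (the normalization and notation here are assumed for illustration, not copied from the paper) is:

$$
u_r(\theta,\varphi) \;=\; \sum_{n,m} h_{nm}\, Y_{nm}(\theta,\varphi)\,\frac{U\,dS}{a^{2}},
$$

$$
u_\theta(\theta,\varphi) \;=\; \sum_{n,m} l_{nm}\,\frac{\partial Y_{nm}(\theta,\varphi)}{\partial\theta}\,\frac{U\,dS}{a^{2}},
\qquad
u_\varphi(\theta,\varphi) \;=\; \sum_{n,m} l_{nm}\,\frac{1}{\sin\theta}\,\frac{\partial Y_{nm}(\theta,\varphi)}{\partial\varphi}\,\frac{U\,dS}{a^{2}},
$$

where u_r, u_θ, u_φ are the coseismic surface displacement components, h_nm and l_nm are dislocation Love numbers of harmonic degree n and order m, Y_nm are spherical harmonics, a is the Earth radius, and U dS is the dislocation potency (slip times fault area). A k_nm number analogously describes the perturbation of the gravitational potential.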

Height Networks

Least-Squares Solution of Overdetermined Models

Determination of the Gravity Field from Observations to Satellites

Geoid-Quasigeoid Correction in Formulation of the Fundamental Formula of Physical Geodesy

Revista Brasileira de Cartografia, Feb 20, 2006

To formulate the fundamental formula of physical geodesy at the physical surface of the Earth, the gravity anomalies are used instead of the gravity disturbances, because the geodetic heights above the geocentric reference ellipsoid are not usually available. The relation between the gravity anomaly and the gravity disturbance is defined as a product of the normal gravity gradient referred to the telluroid and the height anomaly, according to Molodensky's theory of normal heights (Molodensky, 1945; Molodensky et al., 1960). Considering the normal gravity gradient referred to the surface of the geocentric reference ellipsoid, this relation is redefined as a function of the normal height (Vaníček et al., 1999). When orthometric heights are used in practice for the realization of the vertical datum, the geoid-quasigeoid correction is applied to the fundamental formula of physical geodesy to determine the precise geoid. Theoretical formulation of the geoid-quasigeoid correction to the fundamental formula of physical geodesy can be found in Martinec (1993) and Vaníček et al. (1999). In this paper, the numerical investigation of this correction over the territory of Canada is shown and the error analysis is introduced.
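In symbols, the relation described above can be sketched as follows (standard notation assumed; see the paper and Vaníček et al., 1999 for the exact formulation):

$$
\Delta g \;=\; g_P - \gamma_Q \;=\; \underbrace{(g_P - \gamma_P)}_{\delta g} \;+\; \left.\frac{\partial\gamma}{\partial h}\right|_{\mathrm{telluroid}}\zeta ,
$$

where g is the observed gravity at the surface point P, γ is normal gravity, Q is the corresponding point on the telluroid, and ζ is the height anomaly. When orthometric heights realize the vertical datum, ζ must be replaced through the geoid-quasigeoid separation N − ζ, and it is this substitution that introduces the geoid-quasigeoid correction into the fundamental formula.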

Foreword to the Second Edition

Numerical evaluation of mean values of topographical effects

Journal of Geodetic Science, 2011

The main problem treated in this paper is the determination of accurate mean values of the topographical effects from point values known on a regular geographical grid. Three kinds of topographical effects are studied: terrain correction, condensed terrain correction and direct topographical effect. The relation between the terrain roughness and the optimal density of the points to be used in the computations is investigated in five morphologically different areas of Canada. The error of the geoid caused by the inaccuracy of the mean values computed from a variable number of points in a cell is estimated. These errors are then compared against the one-centimetre target to determine the minimum number of points sufficient for the averaging. The mean terrain effects are computed from the point values as a simple average over a particular cell. Point values are assumed to be errorless, so that the accuracy of the mean values is a function of the density of the point values only. The one-centimetre criterion is applied in the sense of the Chebyshev norm. It has been observed that the relation between the number of points needed for the averaging and the terrain roughness, as quantified by the terrain RMS, is almost linear. After estimating the two parameters of this linear relation, seven minimally required grid densities are suggested for different intervals of terrain roughness. The results are applied to produce maps of the minimal density of points needed for sufficiently accurate determination of mean topographical effects for Canada.
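A toy sketch of the averaging experiment follows (synthetic point values and thresholds; the paper works with real terrain-effect grids and calibrates the linear relation from them):

```python
# Hypothetical sketch of the cell-averaging experiment: subsample point values
# of a topographical effect in a cell, compare the sample mean against the
# "full" mean under a maximum-error (Chebyshev-type) criterion, and relate the
# required number of points to terrain roughness.  All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def points_needed(point_values, full_mean, target=0.01, trials=200):
    """Smallest sample size whose worst-case mean error stays below `target` (m)."""
    n_total = point_values.size
    for n in range(1, n_total + 1):
        worst = max(
            abs(rng.choice(point_values, size=n, replace=False).mean() - full_mean)
            for _ in range(trials)
        )
        if worst <= target:          # Chebyshev-type (worst-case) criterion
            return n
    return n_total

# Simulate cells of increasing terrain roughness (RMS of the point values)
rms_values, n_required = [], []
for rms in [5, 10, 20, 40]:
    pts = rng.normal(scale=rms, size=400) * 1e-3     # effect on the geoid, m
    rms_values.append(rms)
    n_required.append(points_needed(pts, pts.mean()))

# Fit a linear relation n ~ a + b * RMS, as the paper suggests for real terrain
b, a = np.polyfit(rms_values, n_required, 1)
print("required points per cell:", dict(zip(rms_values, n_required)))
print(f"linear fit: n ~ {a:.1f} + {b:.2f} * RMS")
```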

Netan — a computer program for the interactive analysis of geodetic networks

CISM Journal, Apr 1, 1989

A network analysis program package using interactive graphics has been developed. It has the capability of operating in one of three modes: variance-covariance analysis, geometrical strength analysis and strain analysis. Transfer from one mode to another is possible. The program produces graphical displays of various strength characteristics, strain parameters as well as the usual confidence ellipses and ellipsoids. Strain analysis is used to quantify network deformation in response to: changes in observation values and their weights, changes in position and position difference values and their weights, addition/deletion of observations and network densification. Geometrical strength of a network is portrayed by a series of plots showing the different attributes of strength. Both two- and three-dimensional networks using different types of observations can be accommodated. The expressions used to sequentialize the analyses are also described.
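As one concrete example of the kind of output listed above, a station confidence ellipse can be derived from its 2x2 position covariance matrix. The sketch below is a generic illustration of that computation, not Netan's code:

```python
# Hypothetical illustration (not Netan itself): semi-axes and orientation of a
# 95% confidence ellipse from a station's 2x2 position covariance matrix.
import numpy as np

def confidence_ellipse(cov_xy, confidence=0.95):
    """Return (semi-major, semi-minor, orientation in degrees) of the error ellipse."""
    # chi-square quantile with 2 degrees of freedom: k^2 = -2 ln(1 - p)
    k2 = -2.0 * np.log(1.0 - confidence)
    eigvals, eigvecs = np.linalg.eigh(cov_xy)        # eigenvalues in ascending order
    a = np.sqrt(k2 * eigvals[1])                     # semi-major axis
    b = np.sqrt(k2 * eigvals[0])                     # semi-minor axis
    orientation = np.degrees(np.arctan2(eigvecs[0, 1], eigvecs[1, 1]))
    return a, b, orientation

# Example covariance in metres^2 (illustrative values only)
cov = np.array([[4.0e-4, 1.5e-4],
                [1.5e-4, 2.5e-4]])
print(confidence_ellipse(cov))
```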
