Earthquake prediction: the null hypothesis
Related papers
Hypothesis testing and earthquake prediction
Proceedings of the National Academy of Sciences, 1996
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior", that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two...
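The first and third of these tests lend themselves to a compact numerical illustration. The sketch below, which uses invented expected counts and rate-density values rather than any real forecast, shows a simple number test against a Poisson expectation and a log-likelihood-ratio comparison between a forecast and a null hypothesis.

```python
import numpy as np
from scipy.stats import poisson

# Number test ("N-test"): compare the observed event count with the count
# predicted by the hypothesis over the test volume. All values are assumptions.
expected = 12.0   # expected number of events under the forecast (assumed)
observed = 18     # number of events actually recorded (assumed)
p_low = poisson.cdf(observed, expected)       # probability of this few or fewer
p_high = poisson.sf(observed - 1, expected)   # probability of this many or more
print(f"N-test tails: P(N <= {observed}) = {p_low:.3f}, P(N >= {observed}) = {p_high:.3f}")

# Likelihood-ratio comparison against a null hypothesis: for a Poisson-process
# forecast, log L = sum of log rate densities at the observed events minus the
# total expected count. All rate values below are invented for illustration.
rates_h1 = np.array([0.8, 1.5, 0.6, 2.0])   # forecast rate density at each observed event
rates_h0 = np.array([1.0, 1.0, 1.0, 1.0])   # null-hypothesis rate density at the same events
total_h1, total_h0 = 12.0, 10.0             # expected counts under each hypothesis
log_lr = (np.log(rates_h1).sum() - total_h1) - (np.log(rates_h0).sum() - total_h0)
print(f"log likelihood ratio (forecast vs. null): {log_lr:.3f}")
```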
Testing earthquake predictions
Institute of Mathematical Statistics Collections, 2008
Statistical tests of earthquake predictions require a null hypothesis to model occasional chance successes. To define and quantify 'chance success' is knotty. Some null hypotheses ascribe chance to the Earth: seismicity is modeled as random. The null distribution of the number of successful predictions (or any other test statistic) is taken to be its distribution when the fixed set of predictions is applied to random seismicity. Such tests tacitly assume that the predictions do not depend on the observed seismicity. Conditioning on the predictions in this way sets a low hurdle for statistical significance. Consider this scheme: When an earthquake of magnitude 5.5 or greater occurs anywhere in the world, predict that an earthquake at least as large will occur within 21 days and within an epicentral distance of 50 km. We apply this rule to the Harvard centroid-moment-tensor (CMT) catalog for 2000-2004 to generate a set of predictions. The null hypothesis is that earthquake times are exchangeable conditional on their magnitudes and locations and on the predictions, a common "nonparametric" assumption in the literature. We generate random seismicity by permuting the times of events in the CMT catalog. We consider an event successfully predicted only if (i) it is predicted and (ii) there is no larger event within 50 km in the previous 21 days. The P-value for the observed success rate is < 0.001: The method successfully predicts about 5% of earthquakes, far better than 'chance,' because the predictor exploits the clustering of earthquakes-occasional foreshocks-which the null hypothesis lacks. Rather than condition on the predictions and use a stochastic model for seismicity, it is preferable to treat the observed seismicity as fixed, and to compare the success rate of the predictions to the success rate of simple-minded predictions like those just described. If the proffered predictions do no better than a simple scheme, they have little value.
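The permutation scheme described above can be made concrete with a short sketch. The code below applies the rule to a small synthetic catalog (random times, locations, and magnitudes, not the Harvard CMT data), with the 21-day and 50-km thresholds from the text; because the synthetic catalog has no foreshock clustering, the observed success count is not expected to beat the permutation null, unlike the real catalog.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a catalog: event times (days), positions (km), magnitudes.
n = 400
times = rng.uniform(0, 5 * 365.0, n)
xy = rng.uniform(0, 2000.0, (n, 2))
mags = 5.5 + rng.exponential(0.4, n)

def success_count(times, xy, mags, window=21.0, radius=50.0):
    """Count events 'successfully predicted' by the automatic rule: an earlier
    M >= 5.5 event within `radius` km and `window` days predicted an event at
    least as large, and no larger event occurred in that same neighbourhood."""
    hits = 0
    for j in range(len(times)):
        dist = np.hypot(*(xy - xy[j]).T)
        near = (dist <= radius) & (times < times[j]) & (times >= times[j] - window)
        predicted = (near & (mags >= 5.5) & (mags <= mags[j])).any()
        preceded_by_larger = (near & (mags > mags[j])).any()
        if predicted and not preceded_by_larger:
            hits += 1
    return hits

observed = success_count(times, xy, mags)

# Null hypothesis: times are exchangeable given locations and magnitudes, so
# shuffle the times relative to the (location, magnitude) pairs and recount.
null_counts = [success_count(rng.permutation(times), xy, mags) for _ in range(200)]
p_value = (np.sum(np.array(null_counts) >= observed) + 1) / (len(null_counts) + 1)
print(f"observed successes: {observed}, permutation P-value: {p_value:.3f}")
```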
2008
The authors, with Robert K. Vincent as advisor, successfully predicted 100 earthquakes in the Western Pacific Rim, including China, Japan, Taiwan, and the Philippines, using a temperature anomaly method. Their model is based on a predicted increase of ground temperatures in the lower atmosphere from 2 to 8 days before an earthquake with a Richter magnitude of 5 or greater. Mixed gases, such as CO2 and CH4, in different ratios under the action of a transient electric field, cause the temperature of the lower atmosphere to increase by up to 6 °C, whereas solar radiation increases it by only 3 °C. The authors detected the thermal anomalies using ground-based evidence and thermal infrared anomalies in METEOSAT thermal infrared image data. Despite their apparent success at predicting the earthquakes, they did not compare their predictions with the natural rate of occurrence in the area, which experiences an earthquake of Richter magnitude greater than 4 every week.
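The missing base-rate comparison is easy to quantify. The sketch below uses assumed numbers consistent with the rate quoted above (roughly one M > 4 event per week) and a 2-to-8-day alarm window to give the Poisson probability that such a window would contain a qualifying event purely by chance.

```python
import math

# Base-rate check with assumed numbers: if the region produces roughly one
# M > 4 earthquake per week, what is the chance that a 2-to-8-day alarm
# window (about 6 days long) contains a qualifying event anyway?
rate_per_day = 1.0 / 7.0   # assumed background rate: one event per week
window_days = 6.0          # assumed effective alarm window length
p_chance = 1.0 - math.exp(-rate_per_day * window_days)   # Poisson: P(at least one event)
print(f"chance of at least one event inside the window: {p_chance:.2f}")   # about 0.58
```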
Basic principles for evaluating an earthquake prediction method
Geophysical Research Letters, 1996
A three year continuous sample of earthquake predictions based on the observation of Seismic Electric Signals in Greece was published by Varotsos and Lazaridou [1991]. Four independent studies analyzed this sample and concluded that the success rate of the predictions is far beyond chance. On the other hand, Mulargia and Gasperini [1992] (hereafter cited as MG) claim that these predictions can be ascribed to chance. In the present paper we examine the origin of this disagreement. Several serious problems in the study of MG are pointed out, such as: 1. The probability of a prediction's being successful by chance should be approximately considered as the product of three probabilities, P_T, P_E and P_M, i.e., the probabilities with respect to time, epicenter and magnitude. In spite of their major importance, P_E and P_M were ignored by MG. The incorporation of P_E decreases the probability of chance success by more than a factor of 10 (when P_E is taken into account, it can be shown that the VAN predictions cannot be ascribed to chance). 2. MG grossly overestimated the number of earthquakes that should have been predicted, by taking different thresholds for earthquakes and predictions. With such an overestimation, MG's procedure can "reject" even an ideally perfect earthquake prediction method. 3. MG's procedure did not take into account that the predictions were based on three different types of electrical precursors with different lead times. 4. MG applied a Poisson distribution to the time series of earthquakes but included a large number of aftershocks. 5. The backward time correlation between predictions and earthquakes claimed by MG is due to misinterpretation of the text of some predictions and an incorrect use of aftershocks. Although the discussion of the first problem alone is enough to invalidate the claims of MG, we also discuss the other four problems because MG violated some basic principles even in the time domain alone. The results derived in this paper are of general use when examining whether a correlation between earthquakes and various geophysical phenomena is beyond chance or not.
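The product rule in point 1 is simple to work through numerically. The sketch below uses purely illustrative values for the three probabilities (not VAN statistics) to show how multiplying the time, epicenter, and magnitude factors shrinks the per-prediction chance probability, and then evaluates the binomial tail probability that a given number of successes arises by chance.

```python
from math import comb

# Illustrative values only (not VAN statistics): chance that a single
# prediction succeeds, approximated as the product of the probabilities of
# matching the time window, the epicentral area, and the magnitude range.
p_time, p_epicenter, p_magnitude = 0.30, 0.10, 0.30
p_chance = p_time * p_epicenter * p_magnitude   # 0.009 per prediction

# Binomial tail: probability of at least k chance successes in N predictions,
# the quantity any "beyond chance" claim has to beat.
N, k = 20, 6
p_tail = sum(comb(N, i) * p_chance**i * (1 - p_chance)**(N - i) for i in range(k, N + 1))
print(f"per-prediction chance probability: {p_chance:.4f}")
print(f"P(at least {k} of {N} successes by chance): {p_tail:.2e}")
```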
Evaluating the statistical validity beyond chance of ‘VAN’ earthquake precursors
Geophysical Journal International, 1992
…November 30, recently published by Varotsos & Lazaridou (1991), using any possible combination of the 'rules of the game' that they consider. We find that the apparent success of VAN predictions can be confidently ascribed to chance; conversely, we find that the occurrence of earthquakes with M ≥ 5.8 is followed by VAN predictions (with identical epicentre and magnitude) with a probability too large to be ascribed to chance.
Evaluation of Proposed Earthquake Precursors
1991
This review summarizes the results of the second round of nominations for the IASPEI Preliminary List of Significant Precursors. Currently this List contains five cases of precursors: (1) foreshocks, (2) preshocks, (3) seismic quiescence before major aftershocks, (4) radon decrease in ground water, and (5) ground water level increase. A list of four cases that could be neither accepted nor rejected by the panels reviewing them contains three on crustal deformations and one on seismic quiescence. In the second round 10 nominations were evaluated, nine new ones and one which had been considered previously. Two were accepted for the List, and two were placed in the category of undecided cases. To date, a total of 40 nominations have been evaluated by IASPEI. For 37 of these, the nominations, the mail reviews, the panel opinions, and, where supplied, the author's reply were published. This evaluation process remains active throughout the International Decade for Natural Hazards Reduction. Additional nominations are invited.
On the Use of Receiver Operating Characteristic Tests for Evaluating Spatial Earthquake Forecasts
Spatial forecasts of triggered earthquake distributions have been ranked using receiver operating characteristic (ROC) tests. The test is a binary comparison between regions of positive and negative forecast against positive and negative presence of earthquakes. Forecasts predicting only positive changes score higher than Coulomb methods, which predict positive and negative changes. I hypothesize that removing the possibility of failures in negative forecast realms yields better ROC scores. I create a "perfect" Coulomb forecast in which all earthquakes fall only in positive stress change areas and compare it with an informationless all-positive forecast. The "perfect" Coulomb forecast barely beats the informationless forecast, and adding as few as four earthquakes occurring in the negative stress regions makes the Coulomb forecast no better than an informationless forecast under a ROC test. ROC tests also suffer from data imbalance when applied to earthquake forecasts because there are many more negative cases than positive ones.

Plain Language Summary: Recent studies have evaluated the Coulomb stress change method, a popular technique for calculating where future earthquakes will occur, against alternative stress change representations. Spatial forecasts were compared with receiver operating characteristic tests, which rank methods based on the number of true and false positive and negative forecast cases. Coulomb stress changes, which predict areas of positive and negative stress change, fare poorly against methods that only produce positive forecast areas. Methods that forecast negative cases (earthquake suppression) have to be nearly perfect to score well in a receiver operating characteristic test against even an informationless all-positive forecast, because the all-positive forecast has no possible false negatives. There is also a general data imbalance problem with using ROC tests for earthquake forecasts, because there are almost always many more negative cases (places with no earthquakes).
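The ingredients of such a ROC comparison can be sketched in a few lines. The example below builds an artificial grid with far more empty cells than earthquake cells, places a handful of events in low-score ("negative") cells of a Coulomb-like score field, and computes the ROC area via the rank-sum identity; all numbers and the spatial layout are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)

# Artificial grid: 10,000 cells with earthquakes in only 25 of them, so
# negative (empty) cells vastly outnumber positives.
n_cells = 10_000
truth = np.zeros(n_cells, dtype=bool)
quake_idx = rng.choice(n_cells, 25, replace=False)
truth[quake_idx] = True

# Coulomb-like score field: most earthquakes sit in high-score cells, but
# four are pushed into strongly negative-score cells (assumed layout).
coulomb = rng.normal(0.0, 1.0, n_cells)
coulomb[quake_idx] += 1.5
coulomb[quake_idx[:4]] -= 4.0

# Informationless all-positive forecast: the same score everywhere.
all_positive = np.ones(n_cells)

def roc_auc(scores, truth):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    ranks = rankdata(scores)                  # average ranks handle tied scores
    n_pos, n_neg = truth.sum(), (~truth).sum()
    return (ranks[truth].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(f"Coulomb-like forecast AUC: {roc_auc(coulomb, truth):.3f}")
print(f"all-positive forecast AUC: {roc_auc(all_positive, truth):.3f}")   # 0.5 by construction
```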
Some observations on the probabilistic interpretation of short-term earthquake precursors
Earthquake Engineering & Structural Dynamics, 1984
This paper analyses the uncertainties in the probabilistic interpretation of short-term earthquake precursors, even when the statistical information commonly indicated in the literature as sufficient to define the characteristics of these precursors is assumed to be known. The wide margins for uncertainty in the interpretation of such data are pointed out. One of the principal causes of uncertainty, for example, lies in the physical origin of false alarms. Depending on this physical origin, the conditional probability of an earthquake, other conditions being equal, may vary in certain cases from values around 0.1 to as much as 0.7 or even higher.
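The quoted spread from about 0.1 to 0.7 can be reproduced with a simple Bayes-rule calculation. The sketch below uses assumed values for the prior event probability, the detection probability, and two alternative false-alarm rates (standing in for different physical origins of false alarms); none of the numbers come from the paper.

```python
# Bayes-rule sketch with assumed numbers (none are taken from the paper).
def prob_quake_given_alarm(p_quake, p_alarm_given_quake, p_alarm_given_no_quake):
    """P(earthquake | alarm) from Bayes' rule."""
    num = p_alarm_given_quake * p_quake
    den = num + p_alarm_given_no_quake * (1.0 - p_quake)
    return num / den

p_quake = 0.05    # assumed prior probability of an event during the alarm window
p_detect = 0.80   # assumed probability that the precursor appears before an event

# Two hypothetical false-alarm regimes, standing in for different physical
# origins of false alarms: frequent unrelated triggers versus rare ones.
for p_false in (0.30, 0.01):
    p = prob_quake_given_alarm(p_quake, p_detect, p_false)
    print(f"P(alarm | no quake) = {p_false:.2f}  ->  P(quake | alarm) = {p:.2f}")
```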