Estimation of circular statistics in the presence of measurement bias

Bayesian theory of systematic measurement deviations

2010

Concerning systematic effects, the recommendation given in the GUM is to correct for them, but unfortunately no detailed guidance is given on how to do this. This publication shows how systematic measurement deviations can be handled correctly on the basis of Bayesian probability theory. After a short overview of useful methods and tools, such as the product rule of probability theory, Bayes' theorem, the principle of maximum entropy, and the marginalisation equation, a method for handling systematic measurement deviations is outlined. Finally, some simple examples of practical interest are given in order to demonstrate the applicability of the suggested method.
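
The marginalisation step mentioned above can be illustrated numerically. The following is a minimal sketch, not taken from the publication: it assumes a simple additive model y_i = mu + delta + eps_i with a Gaussian prior on the unknown systematic deviation delta (one maximum-entropy choice given a zero mean and a known scale), and integrates delta out of the joint posterior on a grid. All data and symbols are illustrative.

    # Minimal sketch (assumed model, made-up data): marginalise a systematic
    # deviation delta out of the joint posterior p(mu, delta | data).
    # Model: y_i = mu + delta + eps_i, eps_i ~ N(0, sigma^2), delta ~ N(0, tau^2),
    # flat prior on mu.
    import numpy as np

    y = np.array([10.2, 9.8, 10.5, 10.1])    # repeated indications (illustrative)
    sigma, tau = 0.3, 0.2                     # statistical / systematic scales

    mu_grid = np.linspace(8.0, 12.0, 801)
    delta_grid = np.linspace(-1.0, 1.0, 401)
    M, D = np.meshgrid(mu_grid, delta_grid, indexing="ij")

    # log joint posterior: Gaussian likelihood terms plus the prior on delta
    log_joint = -0.5 * ((y[None, None, :] - (M + D)[..., None]) / sigma) ** 2
    log_joint = log_joint.sum(axis=-1) - 0.5 * (D / tau) ** 2

    joint = np.exp(log_joint - log_joint.max())
    posterior_mu = joint.sum(axis=1)                 # marginalisation over delta
    dmu = mu_grid[1] - mu_grid[0]
    posterior_mu /= posterior_mu.sum() * dmu         # normalise to a PDF
    print("posterior mean of mu:", (mu_grid * posterior_mu).sum() * dmu)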

Statistics and the Theory of Measurement

Journal of the Royal Statistical Society. Series A (Statistics in Society), 1996

Just as there are different interpretations of probability, leading to different kinds of inferential statements and different conclusions about statistical models and questions, so there are different theories of measurement, which in turn may lead to different kinds of statistical model and possibly different conclusions. This has led to much confusion and a long running debate about when different classes of statistical methods may legitimately be applied. This paper outlines the major theories of measurement and their relationships and describes the different kinds of models and hypotheses which may be formulated within each theory. One general conclusion is that the domains of applicability of the two major theories are typically different, and it is this which helps apparent contradictions to be avoided in most practical applications.

Disaggregating measurement uncertainty from population variability and Bayesian treatment of uncensored results

2012

In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results is negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty, a likelihood PDF for each individual's measurand is produced. Then using the same assumptions and all the data from the population of individuals, a prior PDF of measurands for the population is produced. The prior PDF is non-negative, and the average is equal to the average of the measurement results for the population. Using Bayes's theorem, posterior PDFs of each individual measurand are calculated. The uncertainty in these Bayesian posterior PDFs appears to be all Berkson with no remaining classical component. The method is applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements of 128 people, 137Cs in vivo measurements of 5337 people and 239Pu urinalysis measurements of 3270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are non-zero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero relative to the sensitivity of the measurement technique. The method is shown to give results similar to classical statistical inference when the measurements have relatively small uncertainty.
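
A minimal numerical sketch of the kind of per-individual calculation described above (not the paper's code: the exponential prior is merely one convenient non-negative choice whose mean matches the population average, the Gaussian likelihood stands in for the traditional uncertainty estimate, and the data are invented):

    # Sketch: combine a non-negative population prior with each measurement's
    # Gaussian likelihood to obtain a posterior PDF of the possibly true result.
    import numpy as np

    net_results = np.array([-0.4, 0.1, 0.9, 0.3, -0.1])   # made-up net results
    uncertainties = np.array([0.5, 0.4, 0.5, 0.3, 0.4])    # per-measurement sigmas

    pop_mean = max(net_results.mean(), 1e-6)     # prior mean = population average
    x = np.linspace(0.0, 5.0, 2001)              # grid of possibly true values
    dx = x[1] - x[0]
    prior = np.exp(-x / pop_mean) / pop_mean     # non-negative prior with that mean

    for y, s in zip(net_results, uncertainties):
        likelihood = np.exp(-0.5 * ((y - x) / s) ** 2)   # Gaussian measurement model
        posterior = prior * likelihood                   # Bayes' theorem (unnormalised)
        posterior /= posterior.sum() * dx                # normalise to a PDF
        print(f"y = {y:+.2f}: posterior mean = {(x * posterior).sum() * dx:.3f}")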

Estimation and testing based on data subject to measurement errors: from parametric to non-parametric likelihood methods

Statistics in Medicine, 2012

Measurement error (ME) problems can cause bias or inconsistency in statistical inferences. When investigators are unable to obtain correct measurements of biological assays, special techniques for quantifying MEs need to be applied. Sampling based on repeated measurements is a common strategy to allow for ME. This approach has been well addressed in the literature under parametric assumptions, but it may not be applicable when replication is complicated by cost and/or time concerns. Pooling designs have been proposed as cost-efficient sampling procedures that can help provide valid statistical inference based on data subject to ME. We demonstrate that a mixture of both pooled and unpooled data (a hybrid pooled-unpooled design) can support very efficient estimation and testing in the presence of ME. Nonparametric techniques have not been well investigated for analyzing repeated measures data or pooled data subject to ME. We propose and e...
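
Why a hybrid design helps can be seen from a simple moment calculation: under an additive, unbiased ME model, pooled and unpooled measurements have different variances, and the two together identify both the between-subject variance and the ME variance. The sketch below simulates this; the model, symbols and method-of-moments estimates are illustrative assumptions, not the authors' procedure.

    # Sketch: with Y = X + error, Var(unpooled) = sigma_x^2 + sigma_me^2 and
    # Var(pool of g specimens) = sigma_x^2 / g + sigma_me^2, so the two sample
    # variances identify both components. All values below are simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    g, n_pool, n_single = 4, 200, 200
    mu, sigma_x, sigma_me = 2.0, 1.0, 0.5

    singles = (mu + sigma_x * rng.standard_normal(n_single)
                  + sigma_me * rng.standard_normal(n_single))
    pools = (mu + sigma_x * rng.standard_normal((n_pool, g)).mean(axis=1)
                + sigma_me * rng.standard_normal(n_pool))

    v_single, v_pool = singles.var(ddof=1), pools.var(ddof=1)
    sigma_x2_hat = (v_single - v_pool) * g / (g - 1)     # method-of-moments estimate
    sigma_me2_hat = v_single - sigma_x2_hat
    print("estimated sigma_x^2:", sigma_x2_hat, " sigma_me^2:", sigma_me2_hat)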

Estimation of distribution functions in measurement error models

Journal of Statistical Planning and Inference, 2013

Many practical problems are related to the pointwise estimation of distribution functions when data contains measurement errors. Motivation for these problems comes from diverse fields such as astronomy, reliability, quality control, public health and survey data. Recently, Dattner, Goldenshluger and Juditsky (2011) showed that an estimator based on a direct inversion formula for distribution functions has nice properties when the tail of the characteristic function of the measurement error distribution decays polynomially. In this paper we derive theoretical properties for this estimator for the case where the error distribution is smoother and study its finite sample behavior for different error distributions. Our method is data-driven in the sense that we use only known information, namely, the error distribution and the data. Application of the estimator to estimating hypertension prevalence based on real data is also examined.
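
A rough sketch of a direct-inversion (Gil-Pelaez-type) estimate of a distribution function from error-contaminated data, under assumed details (Gaussian measurement error, i.e. a smooth error distribution, a fixed integration cut-off, and simulated data); it is illustrative only and not a reproduction of the estimator analysed in the paper.

    # Sketch: estimate F_X(x) from Y = X + eps using the inversion formula
    # F(x) = 1/2 - (1/pi) * int_0^T Im[ exp(-itx) * phi_Y(t) / phi_eps(t) ] / t dt,
    # with phi_Y replaced by the empirical characteristic function of the data.
    import numpy as np

    def cdf_deconv(x, y, sigma_eps, cutoff, n_t=2000):
        t = np.linspace(1e-6, cutoff, n_t)                 # avoid t = 0
        phi_y = np.exp(1j * np.outer(t, y)).mean(axis=1)   # empirical char. function of Y
        phi_eps = np.exp(-0.5 * (sigma_eps * t) ** 2)      # char. function of N(0, sigma_eps^2)
        integrand = np.imag(np.exp(-1j * t * x) * phi_y / phi_eps) / t
        return 0.5 - integrand.sum() * (t[1] - t[0]) / np.pi

    rng = np.random.default_rng(1)
    x_true = rng.exponential(1.0, size=500)                # latent variable of interest
    y_obs = x_true + 0.3 * rng.standard_normal(500)        # contaminated observations
    print("F_X(1) estimate:", cdf_deconv(1.0, y_obs, sigma_eps=0.3, cutoff=6.0))
    print("F_X(1) truth   :", 1 - np.exp(-1.0))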

The Role of Measurement Error in Familiar Statistics

Organizational Research Methods, 2006

Measurement error, or reliability, affects many common applications in statistics, such as correlation, partial correlation, analysis of variance, regression, factor analysis, and others. Despite its importance, the role of measurement error in these familiar statistical applications often receives little or no attention in textbooks and courses on statistics. The purpose of this article is to examine the role of reliability in familiar statistics and to show how ignoring the consequences of (less than perfect) reliability in common statistical techniques can lead to false conclusions and erroneous interpretation.
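
The classical result underlying much of this discussion is that measurement error attenuates an observed correlation by the square root of the product of the two reliabilities. The simulation below illustrates it; the reliabilities and data are invented, and the disattenuation step shown is the standard classical-test-theory correction rather than anything specific to this article.

    # Sketch: observed r ~ rho_true * sqrt(rel_x * rel_y); dividing by
    # sqrt(rel_x * rel_y) recovers the true-score correlation.
    import numpy as np

    rng = np.random.default_rng(2)
    n, rho_true = 100_000, 0.6
    tx = rng.standard_normal(n)                            # true scores
    ty = rho_true * tx + np.sqrt(1 - rho_true**2) * rng.standard_normal(n)

    rel_x, rel_y = 0.8, 0.7                                # reliabilities of the measures
    x = np.sqrt(rel_x) * tx + np.sqrt(1 - rel_x) * rng.standard_normal(n)
    y = np.sqrt(rel_y) * ty + np.sqrt(1 - rel_y) * rng.standard_normal(n)

    r_obs = np.corrcoef(x, y)[0, 1]
    print("observed r:", r_obs)                            # about 0.6 * sqrt(0.56) ~ 0.45
    print("disattenuated:", r_obs / np.sqrt(rel_x * rel_y))  # about 0.6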

Advances in Theoretical and Applied Statistics

2013

Confidence Distribution, the Frequentist Distribution Estimator of a Parameter: A Review

International Statistical Review, 2013

In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval/"interval estimator") to estimate a parameter of interest. A very simple question is: Can we also use a distribution function ("distribution estimator") to estimate a parameter of interest in frequentist inference in the style of a Bayesian posterior? The answer is affirmative, and confidence distribution is a natural choice of such a "distribution estimator". The concept of a confidence distribution has a long history, and its interpretation has long been fused with fiducial inference. Historically, it has been misconstrued as a fiducial concept, and has not been fully developed in the frequentist framework. In recent years, confidence distribution has attracted a surge of renewed attention, and several developments have highlighted its promising potential as an effective inferential tool. This article reviews recent developments of confidence distributions, along with a modern definition and interpretation of the concept. It includes distributional inference based on confidence distributions and its extensions, optimality issues and their applications. Based on the new developments, the concept of a confidence distribution subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p-value) functions, normalized likelihood functions, and, in some cases, Bayesian priors and posteriors. The discussion is entirely within the school of frequentist inference, with emphasis on applications providing useful statistical inference tools for problems where frequentist methods with good properties were previously unavailable or could not be easily obtained. Although it also draws attention to some of the differences and similarities among frequentist, fiducial and Bayesian approaches, the review is not intended to reopen the philosophical debate that has lasted more than two hundred years. On the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
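
A textbook illustration (not taken from the review): for a normal mean with unknown variance, the function H_n(mu) = F_{t, n-1}(sqrt(n) (mu - xbar) / s) is a confidence distribution on the parameter space, and the familiar point estimate and confidence intervals fall out of its median and quantiles. The data and numerical details below are illustrative.

    # Sketch: the confidence distribution for a normal mean, and the usual
    # summaries extracted from it.
    import numpy as np
    from scipy import stats

    data = np.array([4.9, 5.3, 5.1, 4.7, 5.4, 5.0])       # illustrative sample
    n, xbar, s = len(data), data.mean(), data.std(ddof=1)

    def cd(mu):
        """Confidence distribution H_n(mu) for the normal mean."""
        return stats.t.cdf(np.sqrt(n) * (mu - xbar) / s, df=n - 1)

    point_estimate = xbar                                  # the CD median equals xbar here
    ci_95 = (xbar + s / np.sqrt(n) * stats.t.ppf(0.025, n - 1),
             xbar + s / np.sqrt(n) * stats.t.ppf(0.975, n - 1))
    print("H_n(5.2) =", cd(5.2))
    print("point estimate:", point_estimate)
    print("95% interval from the CD quantiles:", ci_95)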