Introduction to Estimation Theory, Lecture Notes
CHAPTER 4: ESTIMATION OF PARAMETERS
In real life, we work with data that are affected by randomness, and we need to extract information and draw conclusions from those data. The randomness might come from a variety of sources, for example the way the sample was selected or errors made in measurement.
Before discussing estimation theory in detail, it is essential to recall the following definitions.

Statistic: a statistic is a number that describes a characteristic of a sample; in other words, it is a statistical constant associated with the sample. Examples are the sample mean, sample variance and sample standard deviation (Chukwu, 2007).

Parameter: a parameter is a number that describes a characteristic of the population; in other words, it is a statistical constant associated with the population. Examples are the population mean, population variance and population standard deviation.

A statistic is called an unbiased estimator of a population parameter if the mean of the statistic is equal to the parameter; the corresponding value of the statistic is then called an unbiased estimate of the parameter (Spiegel, 1987).

Estimator: any statistic θ̂ = θ̂(x₁, x₂, x₃, …, xₙ) used to estimate the value of a parameter θ of the population is called an estimator of θ, whereas any observed value of the statistic θ̂(x₁, x₂, x₃, …, xₙ) is known as an estimate of θ (Chukwu, 2007).
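To make the idea of unbiasedness concrete, the short simulation below is a minimal sketch (not part of the original notes; the population mean and standard deviation are chosen arbitrarily for illustration). It draws many samples from a known population and checks that the sample mean is an unbiased estimator of the population mean, while the sample variance with divisor n is biased and the version with divisor n − 1 is not.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 10, 200_000   # hypothetical population values

samples = rng.normal(mu, sigma, size=(reps, n))

sample_means = samples.mean(axis=1)          # statistic: sample mean
var_div_n    = samples.var(axis=1, ddof=0)   # sample variance with divisor n
var_div_n1   = samples.var(axis=1, ddof=1)   # sample variance with divisor n - 1

# An estimator is unbiased when the mean of the statistic equals the parameter.
print("mean of sample means        :", sample_means.mean(), "(target:", mu, ")")
print("mean of variances, div. n   :", var_div_n.mean(),    "(target:", sigma**2, ")")
print("mean of variances, div. n-1 :", var_div_n1.mean(),   "(target:", sigma**2, ")")
```

Increasing the number of replications shrinks the Monte Carlo error around the targets, but it does not remove the systematic gap that remains for the divisor-n variance.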
Related papers

On the estimation of a parameter with incomplete knowledge on a nuisance parameter
AIP Conference Proceedings, 2004
In this paper we consider the problem of estimating a parameter of a probability distribution when we have some prior information on a nuisance parameter. We start with the very simple case where the value of the nuisance parameter is known exactly; the complete likelihood is the classical tool in this case. We then progressively consider the case where we are given a prior probability distribution on this nuisance parameter, for which the marginal likelihood is the classical tool. Next, we consider the case where we only have a fixed number of its moments; here we may use the maximum entropy (ME) principle to assign a prior law and thus reduce the problem to the previous case. Finally, we consider the case where we know only its median. To our knowledge, there is no classical tool for this case, so we propose a new one based on a recently proposed alternative to the marginal probability distribution. This new criterion is obtained by first remarking that the marginal distribution can be viewed as the mean of the original distribution over the prior law of the nuisance parameter, and then using the median in place of the mean. We first summarize the classical tools used for the first three cases, then give the precise definition of the new criterion and its properties, and finally present a few examples to show the differences between these cases.
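A quick numerical sketch may help fix ideas about the mean-based and median-based criteria described above. The code below is only an illustration of the general idea, not the paper's method or notation: the data model (Gaussian with location θ and nuisance scale σ), the Gamma prior on σ, and all numerical values are assumptions chosen for the example. It approximates the marginal likelihood by averaging the likelihood over prior draws of the nuisance parameter, and also evaluates the median-based alternative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# "Observed" data: Gaussian with unknown location theta and nuisance scale sigma.
x = rng.normal(loc=2.0, scale=1.5, size=30)

# Prior information on the nuisance parameter sigma, represented by draws.
sigma_draws = rng.gamma(shape=3.0, scale=0.5, size=1000)

theta_grid = np.linspace(0.0, 4.0, 201)

L = np.empty((theta_grid.size, sigma_draws.size))
for i, t in enumerate(theta_grid):
    # Likelihood of the whole sample for theta = t and every prior draw of sigma.
    logp = stats.norm.logpdf(x[:, None], loc=t, scale=sigma_draws[None, :])
    L[i] = np.exp(logp.sum(axis=0))

marginal    = L.mean(axis=1)      # mean over the prior: marginal likelihood
median_crit = np.median(L, axis=1)  # median over the prior: the alternative criterion

print("theta maximising the marginal likelihood:", theta_grid[np.argmax(marginal)])
print("theta maximising the median criterion   :", theta_grid[np.argmax(median_crit)])
```

In this symmetric toy setting the two criteria usually pick similar values of θ; the point of the sketch is only to show how each criterion is formed from the likelihood and the prior on the nuisance parameter.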
Statistics with Estimated Parameters
2007
This paper studies a general problem of making inferences for functions of two sets of parameters where, when the first set is given, there exists a statistic with a known distribution. We study the distribution of this statistic when the first set of parameters is unknown and is replaced by an estimator. We show that under mild conditions the variance of the statistic is inflated when the unconstrained maximum likelihood estimator (MLE) is used, but deflated when the constrained MLE is used. The results are shown to be useful in hypothesis testing and confidence-interval construction, providing simpler and improved inference methods compared with the standard large-sample likelihood inference theories. We provide three applications of our theories, namely Box-Cox regression, dynamic regression, and spatial regression, to illustrate the generality and versatility of our results.
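The inflation effect mentioned in this abstract can be seen in the simplest possible setting with a short simulation. The sketch below is an illustration, not taken from the paper: it uses the statistic Z = √n(x̄ − μ₀)/σ, which is standard normal when σ is known, and shows that its variance grows once the nuisance parameter σ is replaced by its unconstrained maximum likelihood estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
mu0, sigma, n, reps = 0.0, 1.0, 10, 200_000   # illustrative values (assumed)

x = rng.normal(mu0, sigma, size=(reps, n))
xbar = x.mean(axis=1)
sigma_mle = x.std(axis=1, ddof=0)    # unconstrained MLE of sigma (divisor n)

z_known     = np.sqrt(n) * (xbar - mu0) / sigma      # nuisance parameter known
z_estimated = np.sqrt(n) * (xbar - mu0) / sigma_mle  # nuisance parameter estimated

print("variance with sigma known    :", z_known.var())      # close to 1
print("variance with sigma estimated:", z_estimated.var())  # clearly above 1 for small n
```

Plugging in the estimate makes the statistic heavier-tailed than the normal reference distribution, which is exactly the kind of variance inflation the paper addresses in much more general settings.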