Estimators for the errors-in-variables model
Related papers
Linear Algebra and its Applications, 1989
Given matrices A, B and vectors a, b, a necessary and sufficient condition is established for the Löwner partial ordering (Am + a)(Am + a)' ⪯ (Bm + b)(Bm + b)' to hold for all vectors m. This result is then applied to derive a complete characterization of estimators that are admissible for a given vector of parametric functions among the set of all linear estimators under the general Gauss-Markov model, in which Y is an observable random vector with expectation E(Y) = Xβ, when the mean square error matrix is adopted as the criterion for evaluating the estimators.
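As a quick illustration of the ordering used above, the following sketch (purely illustrative; the matrices, vectors, and tolerance are arbitrary assumptions, and the paper's actual necessary-and-sufficient condition is not reproduced) checks the Löwner ordering of the two rank-one matrices for a single vector m by testing positive semidefiniteness of their difference:

```python
import numpy as np

def loewner_leq(M1, M2, tol=1e-10):
    # M1 precedes M2 in the Loewner ordering iff M2 - M1 is positive
    # semidefinite, i.e. all eigenvalues of the symmetric difference are >= 0.
    return bool(np.linalg.eigvalsh(M2 - M1).min() >= -tol)

# Check for one particular vector m; the paper characterizes when the
# ordering holds for *all* m, which is not reproduced here.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2)); a = rng.standard_normal(3)
B = 2.0 * rng.standard_normal((3, 2)); b = rng.standard_normal(3)
m = rng.standard_normal(2)
u, v = A @ m + a, B @ m + b
print(loewner_leq(np.outer(u, u), np.outer(v, v)))
```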
On choosing estimators in a simple linear errors-in-variables model
Communications in Statistics - Theory and Methods, 1993
This paper focuses on studying the accuracy of two well-known estimators in a simple errors-in-variables model, the ordinary least squares and the corrected least squares estimator. As a measure of accuracy of the estimators, the mean squared error is adopted. While Ketellapper (1983) addressed this issue for the case where the error of measurement in the independent variable is known, the present article is concerned with this comparison for the case where the ratio of the error variances is known. Comparison of the mean squared errors of the above estimators leads to a simple rule involving quantities estimable from the data.
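For concreteness, here is a hedged simulation sketch (not taken from the paper; the data-generating values and the Deming-type form of the corrected slope are assumptions) comparing the ordinary least squares slope with a corrected slope that uses a known ratio lam of the error variances:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simple errors-in-variables model: y = b0 + b1*xi, with both xi and y
# observed with error; the ratio lam = var(err_y) / var(err_x) is known.
n, b0, b1 = 200, 1.0, 2.0
lam = 0.5
xi = rng.standard_normal(n)
x = xi + rng.standard_normal(n)                             # error in x, variance 1
y = b0 + b1 * xi + np.sqrt(lam) * rng.standard_normal(n)    # error in y, variance lam

sxx = np.var(x, ddof=1)
syy = np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]

b1_ols = sxy / sxx   # ordinary least squares slope (biased toward zero here)
# Corrected (Deming-type) slope using the known error-variance ratio lam:
b1_cls = ((syy - lam * sxx) +
          np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
print("OLS slope:", round(b1_ols, 3), " corrected slope:", round(b1_cls, 3))
```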
Theory of estimation of parameters in a linear regression scheme
Journal of Soviet Mathematics, 1979
UDC 519.281. Fairly simple asymptotically optimal equivariant polynomial estimators of any degree s are constructed for the parameters of a standard linear regression scheme whose design matrix satisfies a certain additional condition. These estimators depend on the error distribution function only through its first s moments. An explicit equation for an optimal equivariant quadratic estimator of the parameters is also presented. 1. Introduction. We consider parameter estimation in a standard scheme of linear regression (Gauss-Markov scheme) with small and large samples. In this scheme the observations are linear functions of the unknown parameter vector θ ∈ R^m plus errors, where the errors are assumed to be independent identically distributed random variables with a distribution function F, and the design matrix A has full rank m. By E_θ we denote the mean (or conditional mean) corresponding to the distribution of the observations.
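Since the model description above is only partially recoverable, the following minimal sketch (an illustration under stated assumptions, not the paper's construction) sets up the standard Gauss-Markov regression scheme and computes the familiar least-squares estimator as a linear benchmark; the asymptotically optimal polynomial equivariant estimators themselves are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(5)

# Standard Gauss-Markov scheme: x_t = a_t' theta + eps_t with i.i.d. errors
# and a full-rank design matrix A (all numerical values are illustrative).
n, m = 40, 3
A = rng.standard_normal((n, m))
theta = np.array([0.7, -1.2, 2.0])
x = A @ theta + rng.standard_normal(n)

# Ordinary least squares: the usual linear benchmark estimator of theta.
theta_hat = np.linalg.lstsq(A, x, rcond=None)[0]
print(theta_hat)
```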
On a class of parameters estimators in linear models dominating the least squares one
Digital Signal Processing
The estimation of parameters in a linear model is considered under the hypothesis that the noise, with finite second-order statistics, can be represented in a given deterministic basis by random coefficients. An extended underdetermined design matrix is then considered, and an estimator of the extended parameters with minimum ℓ1 norm is proposed. It is proved that if the noise variance is larger than a threshold, which depends on the unknown parameters and on the extended design matrix, then the proposed estimator of the original parameters dominates the least-squares estimator in the sense of the mean square error. A small simulation illustrates the behavior of the proposed estimator. Moreover, it is shown experimentally that the proposed estimator can be convenient even if the design matrix is not known and only an estimate of it can be used. Furthermore, the noise basis can be used to introduce prior information into the estimation process. These points are illustrated by simulations in which the proposed estimator is used to solve a difficult ill-posed inverse problem related to the complex moments of an atomic complex measure.
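The construction described above can be sketched as follows (a hedged illustration: the noise basis B, the basis-pursuit linear-programming formulation of the minimum ℓ1-norm fit, and all numerical values are assumptions, not the paper's exact algorithm):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Toy linear model y = X theta + noise, with the noise represented in a
# deterministic basis B by random coefficients (here B = I as an assumption).
n, p = 30, 3
X = rng.standard_normal((n, p))
theta = np.array([1.0, -2.0, 0.5])
B = np.eye(n)
y = X @ theta + 1.5 * rng.standard_normal(n)

# Extended underdetermined design matrix [X B]; estimate the extended
# parameter vector z with minimum l1 norm subject to [X B] z = y,
# solved as a linear program with the split z = u - v, u, v >= 0.
A = np.hstack([X, B])
q = A.shape[1]
c = np.ones(2 * q)                       # minimise sum(u) + sum(v) = ||z||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
z = res.x[:q] - res.x[q:]
theta_l1 = z[:p]                         # estimate of the original parameters

theta_ls = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares benchmark
print("l1-based estimate:", theta_l1)
print("least squares    :", theta_ls)
```

With B = I the program reduces to an ℓ1-penalized ℓ1 regression; a more informative noise basis would be substituted where one is available.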
Mean Squared Error Matrix Comparisons of Some Biased Estimators in Linear Regression
Communications in Statistics, 2003
Consider the linear regression model y = Xβ + u in the usual notation. In the presence of multicollinearity, certain biased estimators, such as the ordinary ridge regression estimator β̂_k = (X'X + kI)^{-1}X'y and the Liu estimator β̂_d = (X'X + I)^{-1}(X'y + dβ̂), where β̂ is the ordinary least squares estimator, introduced by Liu (Liu, Ke Jian. (1993). A new class of biased estimate in linear regression. Communications in Statistics - Theory and Methods 22(2):393-402), or improved ridge and Liu estimators, are used to outperform the ordinary least squares estimator in the linear regression model. In this article we compare the (almost unbiased) generalized ridge regression estimator with the (almost unbiased) generalized Liu estimator in the matrix mean square error sense.
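A small Monte Carlo sketch (illustrative only; the collinear design, the choices k = d = 0.5, and the use of scalar rather than matrix mean squared error are assumptions) comparing the ordinary least squares, ridge, and Liu estimators defined above:

```python
import numpy as np

rng = np.random.default_rng(2)

# Nearly collinear design to mimic the multicollinearity setting.
n, p = 50, 3
Z = rng.standard_normal((n, 1))
X = np.hstack([Z + 0.01 * rng.standard_normal((n, 1)) for _ in range(p)])
beta = np.array([1.0, 0.5, -0.3])

def estimators(y, k=0.5, d=0.5):
    XtX, Xty = X.T @ X, X.T @ y
    b_ols = np.linalg.solve(XtX, Xty)
    b_ridge = np.linalg.solve(XtX + k * np.eye(p), Xty)         # ridge, beta_k
    b_liu = np.linalg.solve(XtX + np.eye(p), Xty + d * b_ols)   # Liu, beta_d
    return b_ols, b_ridge, b_liu

# Monte Carlo estimate of the scalar mean squared error of each estimator
# (the paper compares MSE matrices; traces are reported here for brevity).
reps, mse = 2000, np.zeros(3)
for _ in range(reps):
    y = X @ beta + rng.standard_normal(n)
    for j, b in enumerate(estimators(y)):
        mse[j] += np.sum((b - beta) ** 2) / reps
print(dict(zip(["OLS", "ridge", "Liu"], mse.round(4))))
```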
An elementary development of the equation characterizing best linear unbiased estimators
Puntanen et al. [J. Statist. Plann. Inference 88 (2000) 173] provided two matrix-based proofs of the result stating that a linear estimator By represents the best linear unbiased estimator (BLUE) of the expectation vector Xβ under the general Gauss-Markov model M = {y, Xβ, σ²V} if and only if B(X : VX⊥) = (X : 0), where X⊥ is any matrix whose columns span the orthogonal complement of the column space of X. In this note, yet another development of such a characterization is proposed, with reference to the BLUE of any vector of estimable parametric functions Kβ. From the algebraic point of view, the present development appears to be the simplest among all those accessible in the literature to date.
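The characterizing equation can be verified numerically; a minimal sketch (assuming a nonsingular V and a full-column-rank X, in which case the BLUE of Xβ is given by the Aitken/generalized least squares formula) follows:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)

# General Gauss-Markov model {y, X beta, sigma^2 V} with V positive definite.
n, p = 6, 2
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n))
V = A @ A.T + np.eye(n)

# With V nonsingular and X of full column rank, the BLUE of X beta is B y
# with B = X (X' V^{-1} X)^{-1} X' V^{-1}  (generalized least squares).
Vinv = np.linalg.inv(V)
B = X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)

# Check the characterizing equation B (X : V X_perp) = (X : 0), where the
# columns of X_perp span the orthogonal complement of the column space of X.
X_perp = null_space(X.T)
print(np.allclose(B @ X, X))             # B X = X
print(np.allclose(B @ V @ X_perp, 0))    # B V X_perp = 0
```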