Rates of convergence in the asymptotic normality for some local maximum estimators
Related papers
A general theorem on approximate maximum likelihood estimation
Glasnik Matematički
In this paper a version of the general theorem on approximate maximum likelihood estimation is proved. We assume that there exists a log-likelihood function L(θ) and a sequence (Ln(θ)) of its estimates defined on some statistical structure parametrized by θ from an open set Θ ⊆ Rd and dominated by a probability P. It is proved that if L(θ) and Ln(θ) are random functions of class C2(Θ) such that there exists a unique point θ ∈ Θ of the global maximum of L(θ), and the first and second derivatives of Ln(θ) with respect to θ converge to the corresponding derivatives of L(θ) uniformly on compacts in Θ with the order OP(γn), limn γn = 0, then there exists a sequence of Θ-valued random variables θn which converges to θ with the order OP(γn) and such that θn is a stationary point of Ln(θ) in an asymptotic sense. Moreover, we prove that under two more assumptions on L and Ln, such estimators can be chosen to be measurable with respect to the σ-algebra generated by Ln(θ).
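As a toy illustration of this setting (not the paper's construction): take a hypothetical Gaussian location model in which Ln(θ) is the sample average log-likelihood, so its unique stationary point is the sample mean and the rate γn is n^(-1/2). All names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = 1.0  # unique global maximizer of the limiting L(theta)

def stationary_point_of_Ln(n):
    """For n i.i.d. N(theta0, 1) draws, L_n(theta) is the average Gaussian
    log-likelihood -(1/(2n)) * sum_i (x_i - theta)^2 (up to constants);
    its unique stationary point is the sample mean."""
    x = rng.normal(theta0, 1.0, size=n)
    return x.mean()

# |theta_n - theta0| shrinks at the order gamma_n = n^{-1/2}
errors = [abs(stationary_point_of_Ln(n) - theta0) for n in (100, 10_000, 1_000_000)]
```

Here the theorem's conclusion is visible directly: the stationary points of Ln form a sequence converging to the maximizer of L at the same rate at which the derivatives of Ln converge.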
On the errors committed by sequences of estimator functionals
Mathematical Methods of Statistics, 2011
Consider a sequence of estimators θ̂n which converges almost surely to θ0 as the sample size n tends to infinity. Under weak smoothness conditions, we identify the asymptotic limit of the last time θ̂n is further than ε away from θ0 as ε → 0+. These limits lead to the construction of sequentially fixed-width confidence regions for which we find analytic approximations. The smoothness condition we impose is that θ̂n be close to a Hadamard-differentiable functional of the empirical distribution, an assumption valid for a large class of widely used statistical estimators. Similar results were derived in Hjort and Fenstad (1992, Annals of Statistics) for the case of Euclidean parameter spaces; part of the present contribution is to lift these results to situations involving parameter functionals. The apparatus we develop is also used to derive appropriate limit distributions of other quantities related to the far tail of an almost surely convergent sequence of estimators, such as the number of times the estimator is more than ε away from its target. We illustrate our results by giving a new sequential simultaneous confidence set for the cumulative hazard function based on the Nelson-Aalen estimator and by investigating a problem in stochastic programming related to computational complexity.
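A minimal simulation of the central quantity, assuming the simplest possible estimator (the running sample mean of standard normal data); the function names and the cutoff n_max are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_max = 20_000
x = rng.normal(0.0, 1.0, size=n_max)            # i.i.d. N(theta0, 1), theta0 = 0
means = np.cumsum(x) / np.arange(1, n_max + 1)  # theta_hat_n = running sample mean

def last_exit_time(eps):
    """Largest n <= n_max with |theta_hat_n - theta0| > eps (0 if never)."""
    far = np.nonzero(np.abs(means) > eps)[0]
    return int(far[-1]) + 1 if far.size else 0

# The exceedance sets {n : |theta_hat_n| > eps} are nested in eps, so the
# last exit time is nonincreasing as eps grows.
t_wide, t_narrow = last_exit_time(0.2), last_exit_time(0.05)
```

The monotonicity in ε holds path by path; the paper's contribution is the distributional limit of such last exit times as ε → 0+.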
On Bahadur efficiency and maximum likelihood estimation in general parameter spaces
2001
The paper studies large deviations of maximum likelihood and related estimates in the case of i.i.d. observations with distribution determined by a parameter θ taking values in a general metric space. The main theorems provide sufficient conditions under which an approximate sieve maximum likelihood estimate is an asymptotically locally optimal estimate of g(θ) in the sense of Bahadur, for virtually all functions g of interest. These conditions are illustrated by application to several parametric, nonparametric, and semiparametric examples.
Minimax estimation of norms of a probability density: II. Rate-optimal estimation procedures
arXiv: Statistics Theory, 2020
In this paper we develop rate-optimal estimation procedures in the problem of estimating the Lp-norm, p ∈ (0, ∞), of a probability density from independent observations. The density is assumed to be defined on Rd, d ≥ 1, and to belong to a ball in the anisotropic Nikolskii space. We adopt the minimax approach and construct rate-optimal estimators in the case of integer p ≥ 2. We demonstrate that, depending on the parameters of Nikolskii's class and the norm index p, the risk asymptotics ranges from inconsistency to √n-estimation. The results in this paper complement the minimax lower bounds derived in the companion paper [gl20].
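For intuition only, here is the standard leave-one-out kernel U-statistic for the simplest instance p = 2, d = 1, i.e. estimating ||f||₂² = E[f(X)]; the paper's estimators for general integer p and anisotropic classes are more involved, and the bandwidth below is a hypothetical choice:

```python
import numpy as np

rng = np.random.default_rng(3)

def l2_norm_sq_estimate(x, h):
    """Leave-one-out kernel plug-in estimate of ||f||_2^2 = E[f(X)]:
    (1/(n(n-1))) * sum_{i != j} K_h(x_i - x_j), Gaussian kernel, bandwidth h."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d**2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(K, 0.0)    # drop i == j terms to remove the diagonal bias
    return K.sum() / (n * (n - 1) * h)

x = rng.normal(size=2000)
est = l2_norm_sq_estimate(x, h=0.3)
# for N(0,1) the true value is 1/(2*sqrt(pi)) ~ 0.282
```

At fixed h the estimate carries a smoothing bias of order h²; the rate questions the paper studies come from balancing this bias against the variance over the Nikolskii ball.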
On asymptotics of estimating functions
1998
The asymptotic theory of estimators obtained from estimating functions is reviewed and some new results on the multivariate parameter case are presented. Specifically, results about the existence of consistent estimators and about their asymptotic normality are given ...
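A minimal sketch of a Z-estimator, assuming the simplest estimating function ψ(x, θ) = x − θ (whose root is the sample mean) and solving Gn(θ) = Σi ψ(xi, θ) = 0 by bisection; all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def solve_estimating_eq(x, lo=-10.0, hi=10.0, tol=1e-10):
    """Root of G_n(theta) = sum_i psi(x_i, theta) with psi(x, theta) = x - theta,
    found by bisection (G_n is strictly decreasing in theta)."""
    G = lambda t: np.sum(x - t)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if G(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

x = rng.normal(2.0, 1.0, size=500)
theta_hat = solve_estimating_eq(x)   # coincides with x.mean() for this psi
```

The existence and asymptotic-normality results surveyed in the paper concern exactly such roots of estimating equations, in the general multivariate case.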
Convergence Rates and Asymptotic Normality for Series Estimators
1995
This paper gives general conditions for convergence rates and asymptotic normality of series estimators of conditional expectations, and specializes these conditions to polynomial regression and regression splines. Both mean-square and uniform convergence rates are derived. Asymptotic normality is shown for nonlinear functionals of series estimators, covering many cases not previously treated. Also, a simple condition for √n-consistency of a functional of a series estimator is given. The regularity conditions are straightforward to understand, and several examples are given to illustrate their application.
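A minimal sketch of the polynomial-regression special case, assuming a smooth regression function sin(2x); the truncation levels K and the sample size are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def series_fit(x, y, K):
    """Least-squares fit on the first K polynomial basis terms 1, x, ..., x^{K-1}."""
    B = np.vander(x, K, increasing=True)          # n x K design matrix
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return coef

n = 2000
x = rng.uniform(-1, 1, n)
y = np.sin(2 * x) + 0.1 * rng.normal(size=n)      # E[y|x] = sin(2x)

grid = np.linspace(-1, 1, 200)
errs = {}
for K in (2, 6):
    m_hat = np.vander(grid, K, increasing=True) @ series_fit(x, y, K)
    errs[K] = np.max(np.abs(m_hat - np.sin(2 * grid)))  # uniform error on a grid
```

The rate results in the paper quantify exactly this trade-off: a richer basis reduces approximation error but inflates variance, and the conditions balance K against n.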
Maximal Uniform Convergence Rates in Parametric Estimation Problems
Econometric Theory, 2010
This paper considers parametric estimation problems with independent, identically distributed data from possibly nonregular models. It focuses on rate efficiency, in the sense of maximal possible convergence rates of stochastically bounded estimators, as an optimality criterion, largely unexplored in parametric estimation. Under mild conditions, the Hellinger metric, defined on the space of parametric probability measures, is shown to be an essentially universally applicable tool for determining maximal possible convergence rates. These rates are shown to be attainable in general classes of parametric estimation problems.
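To see why the Hellinger metric distinguishes regular from nonregular rates, compare two closed forms (standard textbook formulas, not taken from the paper): for a Gaussian location family the squared Hellinger distance is quadratic in the parameter separation, while for the endpoint of a uniform family it is linear, which heuristically corresponds to maximal rates n^(-1/2) versus n^(-1).

```python
import math

def hellinger2_normal(t1, t2):
    """Squared Hellinger distance between N(t1, 1) and N(t2, 1):
    H^2 = 1 - exp(-(t1 - t2)^2 / 8), so H^2 ~ (t1 - t2)^2 / 8 locally."""
    return 1.0 - math.exp(-((t1 - t2) ** 2) / 8.0)

def hellinger2_uniform(t1, t2):
    """Squared Hellinger distance between U[0, t1] and U[0, t2]:
    H^2 = 1 - sqrt(min/max), so H^2 is linear in the separation locally."""
    a, b = sorted((t1, t2))
    return 1.0 - math.sqrt(a / b)

d = 1e-3
h2_regular = hellinger2_normal(0.0, d)     # ~ d^2 / 8  -> rate n^{-1/2}
h2_nonregular = hellinger2_uniform(1.0, 1.0 + d)  # ~ d / 2  -> rate n^{-1}
```

The local exponent of the Hellinger distance in the parameter separation is precisely the kind of quantity the paper's universality results turn into maximal convergence rates.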