VISX vs Summit: functional refractive results at 3–6 months
Related papers
arXiv (Cornell University), 2017
We introduce a modification of the index of increase that works in both deterministic and random environments, and thus allows us to assess monotonicity of functions that are prone to random measurement errors. We prove consistency of the empirical index and show how its rate of convergence is influenced by deterministic and random parts of the data. In particular, the obtained results allow us to determine the frequency at which observations should be taken in order to reach any pre-specified level of estimation precision. We illustrate the performance of the suggested estimator using simulated data arising from purely deterministic and error-contaminated monotonic and non-monotonic functions.
Estimating the Index of Increase via Balancing Deterministic and Random Data
Mathematical Methods of Statistics
We introduce and explore an empirical index of increase that works in both deterministic and random environments, thus allowing us to assess monotonicity of functions that are prone to random measurement errors. We prove consistency of the index and show how its rate of convergence is influenced by deterministic and random parts of the data. In particular, the obtained results suggest a frequency at which observations should be taken in order to reach any pre-specified level of estimation precision. We illustrate the index using data arising from purely deterministic and error-contaminated functions, which may or may not be monotonic.
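For a concrete feel for this kind of index, the sketch below computes a simple empirical monotonicity ratio for a sampled sequence: total upward movement divided by total movement. This is an illustrative definition in the spirit of the two abstracts above, not a transcription of the authors' exact index or its estimator.

```python
import numpy as np

def empirical_index_of_increase(y):
    """Ratio of total upward movement to total movement of a sequence.

    Returns 1.0 for a non-decreasing sequence, 0.0 for a non-increasing one,
    and values in between otherwise.  Illustrative definition only; the
    papers above give the precise index and its sampling theory.
    """
    increments = np.diff(np.asarray(y, dtype=float))
    total = np.abs(increments).sum()
    if total == 0.0:          # constant sequence: trivially monotone
        return 1.0
    return increments[increments > 0].sum() / total

# Noisy observations of a monotone trend vs. a non-monotone one
x = np.linspace(0, 1, 200)
rng = np.random.default_rng(0)
print(empirical_index_of_increase(x**2 + 0.001 * rng.normal(size=x.size)))         # typically close to 1
print(empirical_index_of_increase(np.sin(6 * x) + 0.001 * rng.normal(size=x.size)))  # noticeably below 1
```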
Measuring the lack of monotonicity in functions
Problems in econometrics, insurance, reliability engineering, and statistics quite often rely on the assumption that certain functions are non-decreasing. To satisfy this requirement, researchers frequently model the underlying phenomena using parametric and semi-parametric families of functions, thus effectively specifying the required shapes of the functions. To tackle these problems in a non-parametric way, in this paper we suggest indices for measuring the lack of monotonicity in functions. We investigate properties of the indices and also offer a convenient computational technique for practical use.
2015
Numerous problems in econometrics, insurance, reliability engineering, and statistics rely on the assumption that certain functions are monotonic, which may or may not be true in real-life scenarios. To satisfy this requirement, from the theoretical point of view, researchers frequently model the underlying phenomena using parametric and semi-parametric families of functions, thus effectively specifying the required shapes of the functions. To tackle these problems in a non-parametric way, when the shape cannot be specified explicitly but only estimated approximately, we suggest indices for measuring the lack of monotonicity in functions. We investigate properties of these indices and offer convenient computational techniques for practical use. To illustrate the new technique, we analyze a data set of student marks in mathematics, reading, and spelling. In particular, we apply our technique to determine whether the marks are co-monotonic and, if not, how much they deviate from co-monotonicity.
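As a toy illustration of the co-monotonicity question raised in this abstract, the sketch below computes the fraction of concordant student pairs for two hypothetical mark vectors. Both the pairwise index and the numbers are assumptions made purely for illustration; they are not the paper's index or its data set.

```python
import numpy as np

# Hypothetical marks for eight students (illustrative numbers only; not the
# data set analysed in the paper).
math_marks    = np.array([52, 61, 67, 70, 74, 81, 88, 93])
reading_marks = np.array([55, 58, 71, 66, 79, 77, 90, 95])

def comonotonicity_index(a, b):
    """Fraction of pairs ranked in the same order by both mark vectors.

    Equals 1 exactly when the two vectors are co-monotonic; smaller values
    quantify the departure.  A simple pairwise index for illustration, not
    the index defined in the paper.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    i, j = np.triu_indices(a.size, k=1)                 # all distinct pairs
    concordant = (a[i] - a[j]) * (b[i] - b[j]) >= 0
    return concordant.mean()

print(comonotonicity_index(math_marks, reading_marks))  # 1.0 only if co-monotonic
```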
Algorithms, 2020
Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, we assume that the observational data arise from optimisation problems with partially controllable one-sided errors, and this setting alters several characteristics of the problem and opens natural algorithmic possibilities. Our algorithm uses imperfect evaluations of the target function to direct further evaluations of the target function either at new sites in the function's domain or to improve the quality of evaluations at already-evaluated sites. We establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method both to synthetic test cases and to a real-world example of uncertainty quantification for aerodynamic design.
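Since the abstract positions its method against classical isotonic regression, a minimal pool-adjacent-violators (PAVA) fit is sketched below as that standard baseline. It does not implement the paper's adaptive algorithm or its one-sided error model.

```python
import numpy as np

def pava(y, w=None):
    """Least-squares isotonic (non-decreasing) fit via pool-adjacent-violators.

    Classical isotonic regression, i.e. the problem the abstract cites as the
    closest statistical relative of its reconstruction task.
    """
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    blocks = []                      # each block: [mean, weight, count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge adjacent blocks while they violate the non-decreasing order.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v1, w1, n1 = blocks.pop()
            v0, w0, n0 = blocks.pop()
            wt = w0 + w1
            blocks.append([(w0 * v0 + w1 * v1) / wt, wt, n0 + n1])
    return np.concatenate([np.full(n, v) for v, _, n in blocks])

# Noisy observations of an increasing function, fitted monotonically.
x = np.linspace(0, 1, 50)
y = x**3 + 0.1 * np.random.default_rng(1).normal(size=x.size)
print(pava(y)[:5])
```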
Some indices to measure departures from stochastic order
arXiv (Cornell University), 2018
An essential feature of stochastic order is its invariance against increasing maps. In this paper, we analyze a family of invariant indices of disagreement with respect to stochastic dominance. The indices in this family admit the representation θ(F, G) = P(X > Y), where (X, Y) is a random vector with marginal distribution functions F and G. This includes the case of independent marginals, but also other interesting indices related to a contamination model or to a joint quantile representation. For some choices of θ the condition θ(F, G) = 0 is equivalent to stochastic dominance of G over F. We show that the index associated with the contamination model achieves the minimal value within this family. The plug-in sample-based versions of these indices lead to the Mann-Whitney, the one-sided Kolmogorov-Smirnov, and the Galton statistics. For some of the most interesting indices this fact provides sufficient theoretical support for asymptotic inference. However, this is not the case for Galton's statistic, for which we provide additional theory for its resampling behaviour. We stress the complementary roles of some of these indices, which, beyond measuring disagreement with respect to stochastic order, allow one to describe the maximum possible difference in status of a value x ∈ R under F or G. We apply these indices to some real data sets.
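The plug-in statistics named in this abstract are straightforward to compute directly from two samples. The sketch below estimates θ(F, G) = P(X > Y) in Mann-Whitney fashion and evaluates a one-sided Kolmogorov-Smirnov distance; the tie handling and the direction convention for dominance are simplifying assumptions of the sketch, not choices taken from the paper.

```python
import numpy as np

def prob_x_greater_y(x, y):
    """Plug-in (Mann-Whitney-type) estimate of P(X > Y); ties count as zero."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.mean(x[:, None] > y[None, :])

def one_sided_ks(x, y):
    """One-sided Kolmogorov-Smirnov distance sup_t (F_x(t) - F_y(t)), floored at 0."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    Fx = np.searchsorted(x, grid, side="right") / x.size   # empirical CDF of x
    Fy = np.searchsorted(y, grid, side="right") / y.size   # empirical CDF of y
    return max(np.max(Fx - Fy), 0.0)

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=500)
y = rng.normal(0.3, 1.0, size=500)   # Y tends to be larger than X
print(prob_x_greater_y(x, y), one_sided_ks(x, y))
```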
A Simple Measure of the Efficiency of a Buehler Confidence Limit
Communications in Statistics - Theory and Methods, 2005
The Buehler 1 − α upper confidence limit for a scalar parameter is as small as possible, subject to the constraints that (a) its coverage probability never falls below 1 − α and (b) it is a non-decreasing function of a pre-specified designated statistic T. This confidence limit finds important applications in the analysis of discrete data arising in, among others, the fields of reliability, epidemiology, finance, dose-response analysis, and the pharmaceutical industry. The efficiency of the Buehler 1 − α limit depends greatly on T. We present an easy-to-compute, single-number measure of the inefficiency of this Buehler limit. We also derive the large sample properties of a relative inefficiency measure when T is an approximate 1 − α upper limit, where 0 < α ≤ 1/2. A numerical example, illustrating the application of these large sample properties, is also presented.
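To fix ideas, a minimal sketch of a Buehler-type upper limit is given below for a binomial proportion, taking the designated statistic to be the observed count itself, in which case the generic construction u(t) = sup{p : P_p(T ≤ t) > α} reduces to the exact one-sided upper bound. Both the formulation and the grid-search implementation are assumptions made for illustration; the paper's inefficiency measure itself is not reproduced here.

```python
import numpy as np
from scipy.stats import binom

def buehler_upper_limit(t, n, alpha=0.05, grid=200001):
    """Buehler-type 1 - alpha upper limit for a binomial proportion p,
    with the designated statistic T taken to be the observed count itself.

    Grid-search version of the textbook construction
        u(t) = sup{ p : P_p(T <= t) > alpha },
    which in this simple case reduces to the exact one-sided upper bound.
    Illustrative sketch only, not the paper's inefficiency measure.
    """
    ps = np.linspace(0.0, 1.0, grid)
    feasible = binom.cdf(t, n, ps) > alpha
    return ps[feasible].max()

print(buehler_upper_limit(t=2, n=20, alpha=0.05))  # roughly 0.28
```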