The accuracy of effect-size estimates under normals and contaminated normals in meta-analysis
Related papers
A Unified Approach to the Estimation of Effect Size in Meta-Analysis
1989
Parametric measures to estimate J. Cohen's effect size (1966) from a single experiment or for a single study in meta-analysis are investigated. The main objective was to examine the principal statistical properties of this effect size, delta, under variance homogeneity, variance heterogeneity with known variance ratios, and for the Behrens-Fisher problem. Derived estimators were compared according to the criteria of their magnitudes, unbiasedness, and mean-square errors. General properties of the derived estimators were examined by means of Monte Carlo results. Results tend to confirm the recommendation of theoretical analysis that two identified estimators, "h" and "d(sub T)," should be used in conducting a meta-analytic study. These estimators better ensure unbiasedness and minimum mean-square error than do others derived. Eight tables and four graphs illustrate study data.
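For readers who want to see the standard route to the unbiasedness this abstract discusses, here is a minimal Python sketch of the usual small-sample bias correction for the standardized mean difference (the multiplicative factor often written J). It illustrates the general technique only; it is not the paper's specific "h" or "d(sub T)" estimators, and the numeric inputs are invented.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference using the pooled SD (Cohen's d)."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

def bias_corrected_d(d, n1, n2):
    """Small-sample bias-corrected d (Hedges' g), using the common
    approximation J = 1 - 3/(4*df - 1) to the exact gamma-function factor."""
    df = n1 + n2 - 2
    return d * (1 - 3 / (4 * df - 1))

d = cohens_d(10.2, 9.1, 2.0, 2.2, 12, 12)   # toy summary statistics
print(d, bias_corrected_d(d, 12, 12))        # the correction shrinks d slightly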
A comparison of the effect size estimators in meta-analysis
2009
The objective of a meta-analysis is usually to estimate the overall treatment effect and make inferences about the difference between the effects of the two treatments. This article presents several forms of effect size estimators and compares these estimators and the variance of the overall treatment effect estimator within each group. As the outcome measure, the standardized difference is considered. Four types of effect size estimators are discussed: the Glass, Hedges, maximum likelihood, and shrunken estimators of effect size. Finally, with the help of software, the results of these four effect size estimators are discussed. The estimators are illustrated using a comparison of the effectiveness of amlodipine and placebo on work capacity.
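A hedged sketch of two of the estimator families named above: Glass's estimator, which standardizes by the control-group SD only, and the pooled-SD estimator, with an ML-style variant that uses the total sample size in the pooled-variance denominator. The shrunken estimator is omitted, and all values are illustrative rather than the amlodipine data.

```python
import math

def glass_delta(m_t, m_c, sd_c):
    """Glass's estimator: standardize by the control-group SD only."""
    return (m_t - m_c) / sd_c

def pooled_d(m_t, m_c, sd_t, sd_c, n_t, n_c, mle=False):
    """Pooled-SD standardized difference. With mle=True the pooled
    variance uses the total n as denominator (the ML variance estimate)
    instead of the unbiased n_t + n_c - 2."""
    ss = (n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2
    denom = (n_t + n_c) if mle else (n_t + n_c - 2)
    return (m_t - m_c) / math.sqrt(ss / denom)

print(glass_delta(10.4, 9.0, 2.1))
print(pooled_d(10.4, 9.0, 1.8, 2.1, 25, 25))
print(pooled_d(10.4, 9.0, 1.8, 2.1, 25, 25, mle=True))
```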
Behavior Research Methods, 2003
Although use of the standardized mean difference in meta-analysis is appealing for several reasons, there are some drawbacks. In this article, we focus on the following problem: that a precision-weighted mean of the observed effect sizes results in a biased estimate of the mean standardized mean difference. This bias is due to the fact that the weight given to an observed effect size depends on this observed effect size. In order to eliminate the bias, Hedges and Olkin (1985) proposed using the mean effect size estimate to calculate the weights. In this article, we propose a third alternative for calculating the weights: using empirical Bayes estimates of the effect sizes. In a simulation study, these three approaches are compared. The mean squared error (MSE) is used as the criterion by which to evaluate the resulting estimates of the mean effect size. For a meta-analytic dataset with a small number of studies, the MSE is usually smallest when the ordinary procedure is used, whereas for a moderate or large number of studies, the procedures yielding the best results are the empirical Bayes procedure and the procedure of Hedges and Olkin, respectively.
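The three weighting schemes compared in the abstract can be sketched as follows. This is a rough illustration, not the authors' code: the large-sample variance formula for d is the standard one (and it depends on d, which is the source of the bias), the tau-squared moment estimate is a crude stand-in, the empirical-Bayes shrinkage is simplified, and the Hedges-Olkin step is shown as a single pass rather than iterated.

```python
import numpy as np

def var_of_d(d, n1, n2):
    """Large-sample variance of a standardized mean difference; note it
    depends on d itself, which is what biases observed-d weights."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def weighted_mean(d, v):
    w = 1 / v
    return np.sum(w * d) / np.sum(w)

# toy data: observed effects and per-group sizes (illustrative values)
d = np.array([0.30, 0.55, 0.10, 0.42])
n1 = n2 = np.array([20, 35, 15, 50])

# (a) ordinary weights computed from the observed effect sizes
mu_a = weighted_mean(d, var_of_d(d, n1, n2))

# (b) Hedges & Olkin: plug the mean effect into every study's variance
mu_b = weighted_mean(d, var_of_d(mu_a, n1, n2))

# (c) empirical-Bayes flavour: shrink each d toward the mean first,
#     then use the shrunken values in the variance formula
v = var_of_d(d, n1, n2)
tau2 = max(0.0, np.var(d, ddof=1) - v.mean())   # crude moment estimate
d_eb = mu_a + (tau2 / (tau2 + v)) * (d - mu_a)
mu_c = weighted_mean(d, var_of_d(d_eb, n1, n2))
print(mu_a, mu_b, mu_c)
```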
The Impact of Effect Size Heterogeneity on Meta-Analysis: A Monte Carlo Experiment
SSRN Electronic Journal, 2000
In this paper we use Monte Carlo simulation to investigate the impact of effect size heterogeneity on the results of a meta-analysis. Specifically, we address the small sample behaviour of the OLS, the fixed effects regression and the mixed effects meta-estimators under three alternative scenarios of effect size heterogeneity. We distinguish heterogeneity in effect size variance, heterogeneity due to a varying true underlying effect across primary studies, and heterogeneity due to a non-systematic impact of omitted variable bias in primary studies. Our results show that the mixed effects estimator is to be preferred to the other two estimators in the first two situations. However, in the presence of random effect size variation due to a non-systematic impact of omitted variable bias, using the mixed effects estimator may be suboptimal. We also address the impact of sample size and show that meta-analysis sample size is far more effective in reducing meta-estimator variance and increasing the power of hypothesis testing than primary study sample size. JEL-codes: C12; C15; C40
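The three heterogeneity scenarios distinguished above can be mimicked in a few lines. A minimal Monte Carlo sketch follows; it compares only the unweighted (OLS) and precision-weighted means for brevity, omits the mixed-effects estimator, and all scenario parameters (effect of 0.5, SE range, 30% contaminated subset) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_meta(k, mode):
    """Draw k study estimates under one of three heterogeneity scenarios."""
    se = rng.uniform(0.05, 0.5, size=k)          # unequal study precision
    if mode == "heteroskedastic":                # common true effect
        theta = np.full(k, 0.5)
    elif mode == "random_effect":                # effect varies across all studies
        theta = rng.normal(0.5, 0.2, size=k)
    else:                                        # omitted-variable bias in a subset
        theta = 0.5 + np.where(rng.random(k) < 0.3, rng.normal(0, 0.3, size=k), 0.0)
    return rng.normal(theta, se), se

reps, k = 2000, 20
for mode in ("heteroskedastic", "random_effect", "omitted_subset"):
    ols, wls = [], []
    for _ in range(reps):
        y, se = simulate_meta(k, mode)
        w = 1 / se**2
        ols.append(y.mean())                     # unweighted (OLS) mean
        wls.append(np.sum(w * y) / np.sum(w))    # precision-weighted mean
    print(mode, np.var(ols), np.var(wls))
```

Comparing the variances of the two estimators across scenarios reproduces the qualitative point that which estimator is preferable depends on the type of heterogeneity.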
Consequences of effect size heterogeneity for meta-analysis: a Monte Carlo study
Statistical Methods and Applications, 2010
In this article we use Monte Carlo analysis to assess the small sample behaviour of the OLS, the weighted least squares (WLS) and the mixed effects meta-estimators under several types of effect size heterogeneity, using the bias, the mean squared error and the size and power of the statistical tests as performance indicators. Specifically, we analyse the consequences of heterogeneity in effect size precision (heteroskedasticity) and of two types of random effect size variation, one where the variation holds for the entire sample, and one where only a subset of the sample of studies is affected. Our results show that the mixed effects estimator is to be preferred to the other two estimators in the first two situations, but that WLS outperforms OLS and mixed effects in the third situation. Our findings therefore show that, under circumstances that are quite common in practice, using the mixed effects estimator may be suboptimal and that the use of WLS is preferable.
Communication Monographs, 2009
Meta-analysis involves cumulating effects across studies in order to quantitatively summarize existing literatures. A recent finding suggests that the effect sizes reported in meta-analyses may be negatively correlated with study sample sizes. This prediction was tested with a sample of 51 published meta-analyses summarizing the results of 3,602 individual studies. The correlation between effect size and sample size was negative in almost 80 percent of the meta-analyses examined, and the negative correlation was not limited to a particular type of research or substantive area. This result most likely stems from a bias against publishing findings that are not statistically significant. The primary implication is that meta-analyses may systematically overestimate population effect sizes. It is recommended that researchers routinely examine the n–r scatter plot and correlation, or some other indication of publication bias, and report this information in meta-analyses.
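The recommended n–r check is straightforward to run. A minimal sketch, assuming each study contributes a sample size n and an effect size r; the use of Spearman's rank correlation here is one reasonable choice, not necessarily the authors', and the numbers are invented to show the pattern of small studies reporting larger effects.

```python
import numpy as np
from scipy.stats import spearmanr

# per-study sample sizes and effect sizes (illustrative values)
n = np.array([24, 40, 58, 90, 150, 300])
r = np.array([0.45, 0.38, 0.30, 0.22, 0.15, 0.12])

rho, p = spearmanr(n, r)
print(f"n-r correlation: {rho:.2f} (p = {p:.3f})")
# a clearly negative rho is consistent with small-study/publication bias
```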
Quantifying Effect Sizes in Randomised and Controlled Trials: A Review
Journal of Health Science Research
Meta-analysis aggregates quantitative outcomes from multiple scientific studies to produce comparable effect sizes. The resulting integration of information leads to a statistical estimate with higher power and a more reliable point estimate than the measure derived from any individual study. Effect sizes are usually estimated using mean differences between the outcomes of treatment and control groups in experimental studies. Although different software exists for the calculations in meta-analysis, understanding how the calculations are done can be useful to many researchers, particularly where the values reported in the literature are not in a form the researcher's software can use. In this paper, a search was conducted online, primarily using Google and PubMed, to retrieve relevant articles; the different methods of calculating effect sizes and the associated confidence intervals, effect size correlations, p values, and I², together with how to evaluate heterogeneity and publication bias, are presented.
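To make the quantities in this review concrete, here is a minimal sketch of the standard fixed-effect calculations: the precision-weighted pooled estimate with its 95% confidence interval, Cochran's Q, and I². The study effects and variances are invented for illustration.

```python
import numpy as np
from scipy import stats

# per-study effects (e.g. standardized mean differences) and variances
y = np.array([0.20, 0.35, 0.10, 0.50, 0.28])
v = np.array([0.04, 0.02, 0.05, 0.03, 0.025])

w = 1 / v
mu = np.sum(w * y) / np.sum(w)                  # fixed-effect pooled estimate
se_mu = np.sqrt(1 / np.sum(w))
lo, hi = mu - 1.96 * se_mu, mu + 1.96 * se_mu   # 95% confidence interval

Q = np.sum(w * (y - mu)**2)                     # Cochran's Q statistic
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100               # I^2 heterogeneity (%), floored at 0
p_Q = 1 - stats.chi2.cdf(Q, df)                 # heterogeneity test p value
print(mu, (lo, hi), Q, I2, p_Q)
```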
Evaluation of the Normality Assumption in Meta-Analyses
American Journal of Epidemiology, 2019
Random-effects meta-analysis is one of the mainstream methods for research synthesis. The heterogeneity in meta-analyses is usually assumed to follow a normal distribution. This is actually a strong assumption, but one that often receives little attention and is used without justification. Although methods for assessing the normality assumption are readily available, they cannot be used directly because the included studies have different within-study standard errors. Here we present a standardization framework for evaluation of the normality assumption and examine its performance in random-effects meta-analyses with simulation studies and real examples. We use both a formal statistical test and a quantile-quantile plot for visualization. Simulation studies show that our normality test has well-controlled type I error rates and reasonable power. We also illustrate the real-world significance of examining the normality assumption with examples. Investigating the normality assumption ...
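The core idea, standardizing the study effects so that their residuals are comparable despite different within-study standard errors, can be sketched as below. This is a simplified illustration of the general approach, not the paper's exact framework: the pooled mean is a crude one-pass estimate, tau² uses the DerSimonian-Laird moment formula, and the data are invented.

```python
import numpy as np
from scipy import stats

# study effects y with within-study SEs s (illustrative values)
y = np.array([0.12, 0.40, 0.25, 0.55, 0.05, 0.33])
s = np.array([0.10, 0.15, 0.08, 0.20, 0.12, 0.09])

w = 1 / s**2
mu = np.sum(w * y) / np.sum(w)                  # crude pooled mean
Q = np.sum(w * (y - mu)**2)
tau2 = max(0.0, (Q - (len(y) - 1)) / (w.sum() - (w**2).sum() / w.sum()))

# standardize so residuals are comparable across studies despite unequal SEs
z = (y - mu) / np.sqrt(tau2 + s**2)
print(stats.shapiro(z))                          # formal normality test
print(stats.probplot(z, dist="norm")[1])         # Q-Q fit (slope, intercept, r)
```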
Psychological Bulletin, 1992
Combined significance tests (combined p values) and tests of the weighted mean effect size are both used to combine information across studies in meta-analysis. This article compares a combined significance test (the Stouffer test) with a test based on the weighted mean effect size as tests of the same null hypothesis. The tests are compared analytically in the case in which the within-group variances are known and compared through large-sample theory in the more usual case in which the variances are unknown. The generalizations suggested are then explored through a simulation study. This work demonstrates that the test based on the average effect size is usually more powerful than the Stouffer test unless there is a substantial negative correlation between within-study sample size and effect size. Thus the test based on the average effect size is generally preferable, and there is little reason to also calculate the Stouffer test.
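The two test statistics being compared can be written in a few lines. A minimal sketch, with invented one-sided p values and effect estimates; the unweighted form of the Stouffer statistic is shown here, though sample-size-weighted variants also exist.

```python
import numpy as np
from scipy import stats

# one-sided p values from k studies, plus their effects and variances
p = np.array([0.04, 0.20, 0.01, 0.12])
y = np.array([0.35, 0.15, 0.50, 0.22])
v = np.array([0.03, 0.04, 0.02, 0.05])

# Stouffer test: average the z scores implied by the p values
z_stouffer = stats.norm.ppf(1 - p).sum() / np.sqrt(len(p))

# weighted-mean test: z statistic of the precision-weighted mean effect
w = 1 / v
mu = np.sum(w * y) / np.sum(w)
z_mean = mu / np.sqrt(1 / np.sum(w))

print(z_stouffer, z_mean)   # both test H0: zero effect in every study
```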
Methods of estimating the pooled effect size under meta-analysis: A comparative appraisal
Clinical Epidemiology and Global Health, 2020
The present study compared methods of synthesizing the pooled effect estimate in meta-analysis, namely the fixed effect method (FEM), the random effects method (REM), and a recently proposed weighted least squares (WLS) method. Methods: The three methods of estimating pooled effect estimates under meta-analysis were compared on the basis of coverage probability and width of the confidence interval. These methods were compared for seven outcomes with varying heterogeneity and sample size, using real data from a systematic review comparing neo-adjuvant chemotherapy with adjuvant chemotherapy, with 'hazard ratio' and 'risk ratio' as effect sizes. Results: The WLS method was found to be superior to FEM, having higher coverage probability in the presence of heterogeneity. Further, with similar coverage probability, WLS was found to be superior to REM, with a more precise confidence interval. Conclusion: The unrestricted WLS method should be preferred unconditionally over the fixed effect method and the random effects method.
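The three pooled estimators under comparison can be sketched side by side. A hedged illustration on invented log hazard ratios, not the review's data: FEM is the inverse-variance mean, REM uses the DerSimonian-Laird tau², and the WLS interval follows the unrestricted weighted-least-squares convention as I understand it, where the fixed-effect standard error is rescaled by the residual variance phi (left untruncated, so it can also fall below the FEM width).

```python
import numpy as np

# log hazard ratios and their variances from component trials (illustrative)
y = np.array([-0.22, -0.05, -0.31, 0.02, -0.18])
v = np.array([0.010, 0.020, 0.015, 0.030, 0.012])
w = 1 / v
k = len(y)

mu_fe = np.sum(w * y) / np.sum(w)               # fixed effect (FEM)
se_fe = np.sqrt(1 / np.sum(w))

Q = np.sum(w * (y - mu_fe)**2)                  # DerSimonian-Laird tau^2
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))
w_re = 1 / (v + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)         # random effects (REM)
se_re = np.sqrt(1 / np.sum(w_re))

phi = Q / (k - 1)                               # WLS residual variance
se_wls = se_fe * np.sqrt(phi)                   # unrestricted: phi not floored at 1

for name, m, s in [("FEM", mu_fe, se_fe), ("REM", mu_re, se_re), ("WLS", mu_fe, se_wls)]:
    print(name, np.exp(m), np.exp(m - 1.96 * s), np.exp(m + 1.96 * s))  # HR and 95% CI
```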