A comparison of the effect size estimators in meta-analysis

A Unified Approach to the Estimation of Effect Size in Meta-Analysis

1989

Parametric measures to estimate J. Cohen's effect size (1966) from a single experiment or for a single study in meta-analysis are investigated. The main objective was to examine the principal statistical properties of this effect size, delta, under variance homogeneity, variance heterogeneity with known variance ratios, and the Behrens-Fisher problem. Derived estimators were compared according to the criteria of their magnitudes, unbiasedness, and mean-square errors. General properties of the derived estimators were examined by means of Monte Carlo results. Results tend to confirm the recommendation of theoretical analysis that two identified estimators, "h" and "d(sub T)," should be used in conducting a meta-analytic study. These estimators better ensure unbiasedness and minimum mean-square error than do the others derived. Eight tables and four graphs illustrate study data. (SLD)

The accuracy of effect-size estimates under normals and contaminated normals in meta-analysis

Heliyon

This article evaluates the accuracy of effect-size estimates for some estimation procedures in meta-analysis. The question of which effect-size estimate is suitable remains an open problem in meta-analysis. Monte Carlo simulations were used to generate random variables from a normal distribution or a contaminated normal distribution for the primary studies. The primary studies were hypothesised to have equal variances under different population effect sizes, and also to have unequal variances. Meta-analysis was then performed on the simulated primary studies. The effect sizes for the simulated primary-study designs were estimated using Cohen's d, Hedges' g, Glass' Δ, Cliff's delta, and the probability of superiority. Their corresponding standard errors and confidence intervals were computed, and the estimators were compared using statistical bias, percentage error, and confidence-interval width. These criteria pointed to the probability of superiority as an accurate effect-size estimate under the contaminated normal distribution, and to Hedges' g as the most accurate effect-size estimate compared with Cohen's d and Glass' Δ when the equal-variance assumption is violated. This study suggests that the accuracy of effect-size estimates depends on the details of the primary studies included in the meta-analysis.
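The five estimators compared in this abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' simulation code; the small-sample correction in Hedges' g uses the common approximation J = 1 − 3/(4·df − 1).

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                 / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / sp

def hedges_g(x, y):
    """Hedges' g: Cohen's d with the approximate small-sample bias correction J."""
    nx, ny = len(x), len(y)
    j = 1 - 3 / (4 * (nx + ny - 2) - 1)
    return j * cohens_d(x, y)

def glass_delta(x, y):
    """Glass' Delta: mean difference scaled by the control group's SD (y = control)."""
    return (np.mean(x) - np.mean(y)) / np.std(y, ddof=1)

def cliffs_delta(x, y):
    """Cliff's delta: P(x > y) - P(x < y) over all cross-group pairs."""
    diffs = np.subtract.outer(np.asarray(x), np.asarray(y))
    return (np.sum(diffs > 0) - np.sum(diffs < 0)) / diffs.size

def prob_superiority(x, y):
    """Probability of superiority: P(x > y), counting ties as one half."""
    diffs = np.subtract.outer(np.asarray(x), np.asarray(y))
    return (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / diffs.size
```

Note that Cliff's delta and the probability of superiority are rank-based, which is why they are less affected by contamination of the normal distribution than the mean-and-SD-based d, g, and Δ.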

Quantifying Effect Sizes in Randomised and Controlled Trials: A Review

Journal of Health Science Research

Meta-analysis aggregates quantitative outcomes from multiple scientific studies to produce comparable effect sizes. The resultant integration of useful information leads to a statistical estimate with higher power and a more reliable point estimate than the measure derived from any individual study. Effect sizes are usually estimated using mean differences between the outcomes of treatment and control groups in experimental studies. Although different software exists for the calculations in meta-analysis, understanding how the calculations are done can be useful to many researchers, particularly where the values reported in the literature are not in a form usable by the software available to the researcher. In this paper, a search was conducted online, primarily using Google and PubMed, to retrieve relevant articles; the different methods of calculating effect sizes and the associated confidence intervals, the effect-size correlation, p-values, and I², as well as how to evaluate heterogeneity and publication bias, are presented.
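The heterogeneity statistics mentioned here follow directly from the inverse-variance weights. A minimal sketch of Cochran's Q and the I² statistic derived from it (the standard formulas, not code from this review):

```python
import numpy as np

def cochran_q(effects, variances):
    """Cochran's Q: weighted sum of squared deviations from the
    fixed-effect (inverse-variance weighted) mean."""
    effects = np.asarray(effects, float)
    w = 1.0 / np.asarray(variances, float)
    mu = np.sum(w * effects) / np.sum(w)
    return np.sum(w * (effects - mu) ** 2)

def i_squared(effects, variances):
    """I^2: percentage of total variation across studies attributable
    to heterogeneity rather than chance, truncated at zero."""
    q = cochran_q(effects, variances)
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
```

For example, three studies with effects 0.2, 0.5, 0.8 and equal sampling variances 0.04 give Q = 4.5 on 2 degrees of freedom, i.e. I² ≈ 56%, suggesting moderate heterogeneity.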

Estimating the mean effect size in meta-analysis: Bias, precision, and mean squared error of different weighting methods

Behavior Research Methods, 2003

Although use of the standardized mean difference in meta-analysis is appealing for several reasons, there are some drawbacks. In this article, we focus on the following problem: a precision-weighted mean of the observed effect sizes results in a biased estimate of the mean standardized mean difference. This bias is due to the fact that the weight given to an observed effect size depends on that observed effect size. In order to eliminate the bias, Hedges and Olkin (1985) proposed using the mean effect size estimate to calculate the weights. In this article, we propose a third alternative for calculating the weights: using empirical Bayes estimates of the effect sizes. In a simulation study, these three approaches are compared. The mean squared error (MSE) is used as the criterion by which to evaluate the resulting estimates of the mean effect size. For a meta-analytic dataset with a small number of studies, the MSE is usually smallest when the ordinary procedure is used, whereas for a moderate or large number of studies, the procedures yielding the best results are the empirical Bayes procedure and the procedure of Hedges and Olkin, respectively.
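The ordinary and Hedges-Olkin weighting schemes contrasted in this abstract can be sketched as follows. This is an illustrative implementation, assuming the standard large-sample approximation for the sampling variance of a standardized mean difference, v = (n1+n2)/(n1·n2) + d²/(2(n1+n2)); the `use_mean_d` switch plugs the mean effect into the variance formula, as Hedges and Olkin proposed, to break the dependence of each weight on its own observed effect.

```python
import numpy as np

def weighted_mean_d(d, n1, n2, use_mean_d=False):
    """Inverse-variance weighted mean of standardized mean differences.
    use_mean_d=False: each weight uses the study's own observed d (ordinary).
    use_mean_d=True:  all weights use a common plug-in mean d (Hedges-Olkin)."""
    d = np.asarray(d, float)
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    d_for_var = np.full_like(d, d.mean()) if use_mean_d else d
    v = (n1 + n2) / (n1 * n2) + d_for_var ** 2 / (2 * (n1 + n2))
    w = 1.0 / v
    return np.sum(w * d) / np.sum(w)
```

With two equal-sized studies reporting d = 0.2 and d = 0.8, the ordinary weights favour the smaller effect (its variance formula contains a smaller d² term), pulling the pooled estimate below 0.5, while the Hedges-Olkin weights are equal and return 0.5 exactly; this is the small-sample bias the article aims to eliminate.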

Methods of estimating the pooled effect size under meta-analysis: A comparative appraisal

Clinical Epidemiology and Global Health, 2020

The present study compared methods of synthesizing the pooled effect estimate under meta-analysis, namely the Fixed Effect Method (FEM), the Random Effects Method (REM), and a recently proposed Weighted Least Squares (WLS) method. Methods: The three methods of estimating pooled effect estimates under meta-analysis were compared on the basis of coverage probability and width of the confidence interval. The methods were compared for seven outcomes with varying heterogeneity and sample size, using real data from a systematic review comparing neo-adjuvant chemotherapy with adjuvant chemotherapy and involving 'hazard ratio' and 'risk ratio' as effect sizes. Results: The WLS method was found to be superior to FEM, having higher coverage probability in the presence of heterogeneity. Further, WLS, with similar coverage probability, was found to be superior to REM, giving a more precise confidence interval. Conclusion: The unrestricted WLS method should be preferred unconditionally over the fixed effect and random effects methods.
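The three pooled estimators compared here can be sketched in Python. This is a hedged illustration of common formulations, not the study's own code: REM uses the DerSimonian-Laird moment estimator of the between-study variance τ², and the unrestricted WLS variant keeps the fixed-effect point estimate but rescales its variance by the weighted regression's multiplicative dispersion, which is why its interval widens under heterogeneity.

```python
import numpy as np

def fixed_effect(y, v):
    """Fixed-effect pooled estimate and its variance (inverse-variance weights)."""
    w = 1.0 / np.asarray(v, float)
    mu = np.sum(w * np.asarray(y, float)) / np.sum(w)
    return mu, 1.0 / np.sum(w)

def random_effects_dl(y, v):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)                 # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # moment estimator, >= 0
    w_star = 1.0 / (v + tau2)
    return np.sum(w_star * y) / np.sum(w_star), 1.0 / np.sum(w_star)

def wls_unrestricted(y, v):
    """Unrestricted WLS: fixed-effect point estimate with the variance
    rescaled by the estimated multiplicative dispersion Q/(k-1)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    s2 = np.sum(w * (y - mu) ** 2) / (len(y) - 1)    # dispersion estimate
    return mu, s2 / np.sum(w)
```

On homogeneous data all three coincide; under heterogeneity, FEM's variance stays at 1/Σw while REM and WLS both inflate it, which drives the coverage differences reported in the abstract.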

The Impact of Effect Size Heterogeneity on Meta-Analysis: A Monte Carlo Experiment

SSRN Electronic Journal, 2000

In this paper we use Monte Carlo simulation to investigate the impact of effect size heterogeneity on the results of a meta-analysis. Specifically, we address the small sample behaviour of the OLS, the fixed effects regression and the mixed effects meta-estimators under three alternative scenarios of effect size heterogeneity. We distinguish heterogeneity in effect size variance, heterogeneity due to a varying true underlying effect across primary studies, and heterogeneity due to a non-systematic impact of omitted variable bias in primary studies. Our results show that the mixed effects estimator is to be preferred to the other two estimators in the first two situations. However, in the presence of random effect size variation due to a non-systematic impact of omitted variable bias, using the mixed effects estimator may be suboptimal. We also address the impact of sample size and show that meta-analysis sample size is far more effective in reducing meta-estimator variance and increasing the power of hypothesis testing than primary study sample size. JEL-codes: C12; C15; C40

Consequences of effect size heterogeneity for meta-analysis: a Monte Carlo study

Statistical Methods and Applications, 2010

In this article we use Monte Carlo analysis to assess the small sample behaviour of the OLS, the weighted least squares (WLS) and the mixed effects meta-estimators under several types of effect size heterogeneity, using the bias, the mean squared error and the size and power of the statistical tests as performance indicators. Specifically, we analyse the consequences of heterogeneity in effect size precision (heteroskedasticity) and of two types of random effect size variation, one where the variation holds for the entire sample, and one where only a subset of the sample of studies is affected. Our results show that the mixed effects estimator is to be preferred to the other two estimators in the first two situations, but that WLS outperforms OLS and mixed effects in the third situation. Our findings therefore show that, under circumstances that are quite common in practice, using the mixed effects estimator may be suboptimal and that the use of WLS is preferable.
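The kind of Monte Carlo design described in these two abstracts can be sketched compactly. This is a hypothetical setup, not the authors' experiment: the parameters (k studies, between-study SD `tau`, true mean `mu`, sampling variances roughly 2/n for a standardized mean difference with equal group sizes) are illustrative, and only the unweighted (OLS) and inverse-variance (WLS) pooled means are compared.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_meta(k=20, tau=0.2, mu=0.5, reps=2000):
    """Monte Carlo sketch: bias and MSE of the unweighted (OLS) and
    inverse-variance weighted (WLS) pooled means under random
    effect-size variation across primary studies."""
    ols_est, wls_est = [], []
    for _ in range(reps):
        n = rng.integers(20, 200, size=k)            # primary-study sample sizes
        theta = mu + tau * rng.standard_normal(k)    # heterogeneous true effects
        v = 2.0 / n                                  # rough sampling variances
        y = theta + np.sqrt(v) * rng.standard_normal(k)
        w = 1.0 / v
        ols_est.append(y.mean())
        wls_est.append(np.sum(w * y) / np.sum(w))
    ols_est, wls_est = np.array(ols_est), np.array(wls_est)
    return {"bias_ols": ols_est.mean() - mu,
            "mse_ols": np.mean((ols_est - mu) ** 2),
            "bias_wls": wls_est.mean() - mu,
            "mse_wls": np.mean((wls_est - mu) ** 2)}
```

Varying `k` versus the range of `n` in this sketch also illustrates the earlier paper's point that meta-analysis sample size reduces meta-estimator variance far more effectively than primary-study sample size does.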

Sample Sizes and Effect Sizes are Negatively Correlated in Meta-Analyses: Evidence and Implications of a Publication Bias Against Nonsignificant Findings

Communication Monographs, 2009

Meta-analysis involves cumulating effects across studies in order to quantitatively summarize existing literatures. A recent finding suggests that the effect sizes reported in meta-analyses may be negatively correlated with study sample sizes. This prediction was tested with a sample of 51 published meta-analyses summarizing the results of 3,602 individual studies. The correlation between effect size and sample size was negative in almost 80 percent of the meta-analyses examined, and the negative correlation was not limited to a particular type of research or substantive area. This result most likely stems from a bias against publishing findings that are not statistically significant. The primary implication is that meta-analyses may systematically overestimate population effect sizes. It is recommended that researchers routinely examine the n-r scatter plot and correlation, or some other indication of publication bias, and report this information in meta-analyses.
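The recommended n-r diagnostic amounts to a single correlation between each study's sample size and its effect size. A minimal sketch (the function name is mine; the hypothetical input below mimics the small-study pattern the article describes, where smaller studies report larger effects):

```python
import numpy as np

def n_r_correlation(sample_sizes, effect_sizes):
    """Pearson correlation between study sample sizes (n) and effect sizes (r).
    A clearly negative value is one symptom of publication bias against
    nonsignificant findings: small studies need large effects to reach
    significance and get published."""
    n = np.asarray(sample_sizes, float)
    r = np.asarray(effect_sizes, float)
    return np.corrcoef(n, r)[0, 1]
```

A value near zero is consistent with little small-study bias; a strongly negative value, as found in almost 80 percent of the meta-analyses examined here, warrants a closer look (e.g. a funnel plot) before trusting the pooled estimate.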