Understanding the Nature of Psychokinesis
Related papers
Reexamining psychokinesis: Comment on Bösch, Steinkamp, and Boller (2006)
Psychological Bulletin, 2006
A review of the evidence for psychokinesis confirms many of the authors' earlier findings. The authors agree with Bösch et al. that existing studies provide statistical evidence for psychokinesis, that the evidence is generally of high methodological quality, and that effect sizes are distributed heterogeneously. Bösch et al. postulated that the heterogeneity is attributable to selective reporting and thus that psychokinesis is "not proven." However, Bösch et al. assumed that effect size is entirely independent of sample size. For these experiments, this assumption is incorrect; it also guarantees heterogeneity. The authors maintain that selective reporting is an implausible explanation for the observed data and hence that these studies provide evidence for a genuine psychokinetic effect.
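The statistical point about heterogeneity can be made concrete with a short simulation. The sketch below is a generic illustration, not the authors' analysis: if the per-trial deviation from chance shrinks as the number of trials grows, a fixed-effect homogeneity test (Cochran's Q) flags heterogeneity even without any selective reporting. All study counts and effect magnitudes here are assumed for demonstration.

```python
# Generic illustration (not the authors' analysis): when the per-trial effect
# shrinks with sample size, a fixed-effect homogeneity test flags heterogeneity
# even though no selective reporting is involved. All values are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k = 60                                          # number of simulated RNG studies
n = (10 ** rng.uniform(3, 6, k)).astype(int)    # trials per study, ~1e3 to 1e6
p_hit = 0.5 + 0.7 / np.sqrt(n)                  # per-trial effect tied to sample size

hits = rng.binomial(n, p_hit)
theta = hits / n - 0.5                          # per-study effect: deviation from chance
var = 0.25 / n                                  # variance of a proportion near 0.5
w = 1.0 / var                                   # fixed-effect (inverse-variance) weights

theta_bar = np.sum(w * theta) / np.sum(w)       # pooled fixed-effect estimate
Q = np.sum(w * (theta - theta_bar) ** 2)        # Cochran's Q statistic
p_het = stats.chi2.sf(Q, df=k - 1)
print(f"Q = {Q:.1f}, df = {k - 1}, heterogeneity p = {p_het:.2g}")
```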
Must the 'Magic' of Psychokinesis Hinder Precise Scientific Measurement?
Journal of Consciousness Studies, 2002
Although evidential reports of paranormal phenomena (psi for short) have been accumulating over the last 50 years, scepticism within the scientific community at large against the very existence of psi has not retreated in proportion. Strong criticism has been voiced, and it is worth taking it into serious consideration while attempting to understand psi. This article reviews the micro-psychokinesis phenomenon, aiming to reconcile evidence that favours it with other evidence that seems to refute it. To achieve this challenging task, some seemingly irrelevant observations will be invoked, such as the often-observed decline and differential effects, the ten-year-old statistical balancing effect, the long-standing reports of experimental evidence for PK, and the recent large-scale failure to replicate the conventional PK hypothesis, alongside the austere arguments against PK. This paper argues that the evidence can withstand the serious criticism.
We describe a method of quantifying the effect of Questionable Research Practices (QRPs) on the results of meta-analyses. As an example, we simulated a meta-analysis of a controversial telepathy protocol to assess the extent to which these experimental results could be explained by QRPs. Our simulations used the same numbers of studies and trials as the original meta-analysis, and the frequencies with which various QRPs were applied in the simulated experiments were based on surveys of experimental psychologists. Results of both the meta-analysis and the simulations were characterized by four metrics: two describing the trial-level and mean experiment-level hit rates (HR), which were around 31% where 25% is expected by chance; one the correlation between sample size and hit rate; and one the complete p-value distribution of the database. A genetic algorithm optimized the parameters describing the QRPs, and the fitness of the simulated meta-analysis was defined as the sum of the squares of Z-scores for the four metrics. Assuming no anomalous effect, a good fit to the empirical meta-analysis was found only by using QRPs with unrealistic parameter values. Restricting the parameter space to ranges observed in studies of QRP occurrence, under the untested assumption that parapsychologists use comparable QRPs, the fit to the published Ganzfeld meta-analysis with no anomalous effect was poor. We allowed for a real anomalous effect, be it unidentified QRPs or a paranormal effect, where the HR ranged from 25% (chance) to 31%. With an anomalous HR of 27%, the fitness became F = 1.8 (p = 0.47, where F = 0 is a perfect fit). We conclude that the very significant probability cited by the Ganzfeld meta-analysis is likely inflated by QRPs, though results are still significant (p = 0.003) with QRPs. Our study demonstrates that quantitative simulations of QRPs can assess their impact. Since meta-analyses in general might be polluted by QRPs, this method has wide applicability outside the domain of experimental parapsychology.
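As a reading aid, here is a simplified sketch of one QRP (optional stopping) in a 25%-chance forced-choice task, together with a toy version of the fitness idea: squared z-distances of simulated metrics from assumed empirical targets. The stopping rule, study counts, and targets are illustrative assumptions; the paper's genetic algorithm and its p-value-distribution metric are not reproduced.

```python
# Simplified sketch of one QRP (optional stopping) plus a toy fitness score:
# squared z-distances of simulated metrics from assumed empirical targets.
# All parameter values are illustrative, not the paper's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_study(max_trials=40, min_trials=20, peek_every=5, p_chance=0.25):
    """Optional stopping: stop early once the running hit rate reaches p < .05."""
    hits = rng.random(max_trials) < p_chance
    for n in range(min_trials, max_trials + 1, peek_every):
        test = stats.binomtest(int(hits[:n].sum()), n, p_chance, alternative="greater")
        if test.pvalue < 0.05:
            return n, hits[:n].mean()
    return max_trials, hits.mean()

sizes, rates = np.array([run_study() for _ in range(100)]).T

target_hr = 0.31                                     # assumed empirical mean hit rate
z_hr = (rates.mean() - target_hr) / (rates.std(ddof=1) / np.sqrt(len(rates)))
corr = np.corrcoef(sizes, rates)[0, 1]
z_corr = np.arctanh(corr) * np.sqrt(len(sizes) - 3)  # Fisher z vs. an assumed target of 0
fitness = z_hr ** 2 + z_corr ** 2                    # smaller = closer to the target data
print(f"mean HR = {rates.mean():.3f}, corr(n, HR) = {corr:.2f}, fitness = {fitness:.1f}")
```

A mild QRP of this sort typically lifts the hit rate only a few points above chance, so the fit to a 31% target stays poor, which parallels the abstract's conclusion that realistically parameterized QRPs alone do not reproduce the empirical metrics.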
Meta-analysis in parapsychology: II. Psi domains other than ganzfeld
Australian Journal of Parapsychology, 2006
The present article completes the two-part review of meta-analyses in parapsychology (for Part I, see L. Storm, 2006). The reviewed literature, other than ganzfeld/autoganzfeld studies, includes meta-analyses on: (i) biological systems (DMILS), (ii) forced-choice ESP, (iii) free-response ESP, (iv) dice-throwing, (v) micro-PK (RNG), and (vi) dream-psi. Meta-analyses by T. R. Lawrence (1993), E. Haraldsson (1993), and R. G. Stanford and A. G. Stein (1994) are also reviewed.
Journal of Scientific Exploration
Micro-psychokinesis (micro-PK) research studies the effects of observers’ conscious or unconscious intentions on random outcomes derived from true random sources such as quantum random number generators (QRNGs). The micro-PK study presented here was originally planned, preregistered, and conducted to exactly replicate a correlational finding between two within-subject experimental conditions found in an original micro-PK dataset (n = 12,254) using a QRNG. However, after data collection and analyses, a data error was detected in the original to-be-replicated dataset. A reanalysis of the original correlation effect after error correction revealed strong evidence for the absence of a correlation in the original data. This study’s primary goal was to test the existence of a correlational micro-PK effect in the present data as specified in the pre-registration. In addition to this replication attempt, the present study can also be considered an unsystematic case report or field study on ...
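A minimal sketch of the kind of correlational test involved follows, under loud assumptions: hypothetical variable names and sample size, and an ordinary pseudo-random generator standing in for a true QRNG. The study's evidence-of-absence claim would require a Bayesian or equivalence analysis, which this frequentist sketch does not provide.

```python
# Minimal sketch (hypothetical names and sample size; an ordinary pseudo-random
# generator stands in for a true quantum RNG) of testing the correlation
# between two within-subject micro-PK conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_participants = 200                                  # illustrative only
dev_condition_a = rng.normal(0, 1, n_participants)    # deviation score, condition A
dev_condition_b = rng.normal(0, 1, n_participants)    # deviation score, condition B

r, p = stats.pearsonr(dev_condition_a, dev_condition_b)
print(f"r = {r:.3f}, p = {p:.3f}")
# "Evidence for the absence" of a correlation needs a Bayesian or equivalence
# test rather than a plain non-significant p-value.
```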
[Replication and Meta-Analysis in Parapsychology]: Comment
Statistical Science, 1991
Parapsychology, the laboratory study of psychic phenomena, has had its history interwoven with that of statistics. Many of the controversies in parapsychology have focused on statistical issues, and statistical models have played an integral role in the experimental work. Recently, parapsychologists have been using meta-analysis as a tool for synthesizing large bodies of work. This paper presents an overview of the use of statistics in parapsychology and offers a summary of the meta-analyses that have been conducted. It begins with some anecdotal information about the involvement of statistics and statisticians with the early history of parapsychology. Next, it is argued that most nonstatisticians do not appreciate the connection between power and "successful" replication of experimental effects. Returning to parapsychology, a particular experimental regime is examined by summarizing an extended debate over the interpretation of the results. A new set of experiments designed to resolve the debate is then reviewed. Finally, meta-analyses from several areas of parapsychology are summarized. It is concluded that the overall evidence indicates that there is an anomalous effect in need of an explanation.
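The point about power and "successful" replication can be made concrete with a short calculation using assumed numbers (not taken from the paper): the probability that an exact replication of a significant result is itself significant equals the replication's power, which is often far lower than intuition suggests.

```python
# Illustrative calculation (numbers assumed, not taken from the paper): the
# probability that an exact replication reaches p < .05 is simply the study's
# power, so under-powered designs often fail to replicate genuine effects.
from scipy import stats

n, p0, p1, alpha = 50, 0.25, 0.33, 0.05   # trials, chance HR, assumed true HR, level

# Smallest hit count that is significant under the null (one-sided binomial test).
k_crit = next(k for k in range(n + 1) if stats.binom.sf(k - 1, n, p0) <= alpha)
power = stats.binom.sf(k_crit - 1, n, p1)  # P(reach significance | true HR = p1)

print(f"critical hits = {k_crit}, power = {power:.2f}, "
      f"P(original and replication both significant) = {power ** 2:.2f}")
```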
Psychological Bulletin, 2010
We report the results of meta-analyses on 3 types of free-response study: (a) ganzfeld (a technique that enhances a communication anomaly referred to as "psi"); (b) nonganzfeld noise reduction using alleged psi-enhancing techniques such as dream psi, meditation, relaxation, or hypnosis; and (c) standard free response (nonganzfeld, no noise reduction). For the period 1997-2008, a homogeneous data set of 29 ganzfeld studies yielded a mean effect size of 0.142 (Stouffer Z = 5.48, p = 2.13 × 10⁻⁸). A homogeneous nonganzfeld noise reduction data set of 16 studies yielded a mean effect size of 0.110 (Stouffer Z = 3.35, p = 2.08 × 10⁻⁴), and a homogeneous data set of 14 standard free-response studies produced a weak negative mean effect size of −0.029 (Stouffer Z = −2.29, p = .989). The mean effect size of the ganzfeld database was significantly higher than the mean effect sizes of the nonganzfeld noise reduction and the standard free-response databases. We also found that selected participants (believers in the paranormal, meditators, etc.) had a performance advantage over unselected participants, but only if they were in the ganzfeld condition.
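For readers unfamiliar with the method, a generic sketch of Stouffer's Z and the z/√n effect size used in such databases follows; the per-study values are hypothetical, not the authors' data.

```python
# Generic sketch of Stouffer's method with hypothetical per-study values
# (not the authors' data): combine per-study z scores and express each study's
# effect size as z / sqrt(n).
import numpy as np
from scipy import stats

z = np.array([1.2, 0.4, 2.1, -0.3, 1.6])    # per-study z scores (hypothetical)
n = np.array([40, 60, 50, 30, 80])          # trials per study (hypothetical)

stouffer_z = z.sum() / np.sqrt(len(z))      # Z = sum(z_i) / sqrt(k)
p_one_sided = stats.norm.sf(stouffer_z)
mean_es = np.mean(z / np.sqrt(n))           # mean effect size, ES_i = z_i / sqrt(n_i)

print(f"Stouffer Z = {stouffer_z:.2f}, p = {p_one_sided:.3g}, mean ES = {mean_es:.3f}")
```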
In recent decades psychological researchers have overemphasized exploratory research with little regard for well-powered confirmatory research. Cumming advocates that the resulting research problems be addressed with “new statistics” that emphasize estimation and meta-analyses and avoid hypothesis tests. He advocates that researchers avoid “dichotomous thinking” and associated conclusions about the validity of hypotheses. Unfortunately, this continues to overemphasize exploratory research and to underemphasize confirmation. Hypothesis tests and estimation both have a role in effective statistical methodology. Hypothesis tests are optimal for (a) research when human life is directly involved and answers are urgently needed, (b) controversial areas of research such as parapsychology, and (c) any case when researchers want to provide the most convincing evidence that they understand and can control an effect. Meta-analysis is post hoc analysis that involves correlational analysis of observational data. Like other types of post hoc analyses, meta-analyses have not been effective at resolving scientific controversies. A clear distinction between exploratory research and confirmatory research is needed, as has been established for clinical trials. A group of well-designed, adequately powered confirmatory studies using hypothesis tests provides the strongest scientific evidence that an effect is valid. The “new statistics” appear to place priority on generating academic publications rather than drawing strong inferences about the validity of effects.
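A minimal sketch of the a priori power analysis such a confirmatory design calls for, reusing the one-sample binomial logic from the replication sketch above and assuming a chance hit rate of 25%, an anticipated hit rate of 31%, a one-sided alpha of .05, and a target power of 90%:

```python
# Minimal a priori power-analysis sketch for a confirmatory study (all planning
# values assumed): find the smallest trial count giving 90% power to detect a
# 31% hit rate against a 25% chance rate with a one-sided binomial test.
from scipy import stats

def binomial_power(n, p0=0.25, p1=0.31, alpha=0.05):
    k_crit = next(k for k in range(n + 1) if stats.binom.sf(k - 1, n, p0) <= alpha)
    return stats.binom.sf(k_crit - 1, n, p1)

n = 50
while binomial_power(n) < 0.90:   # coarse search in steps of 10 trials
    n += 10
print(f"roughly {n} trials needed for 90% power")
```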
Psychonomic Bulletin & Review, 2017
The Psychonomic Society (PS) adopted New Statistical Guidelines for Journals of the Psychonomic Society in November 2012. To evaluate changes in statistical reporting within and outside PS journals, we examined all empirical papers published in PS journals and in the Experimental Psychology Society journal, The Quarterly Journal of Experimental Psychology (QJEP), in 2013 and 2015, to describe these populations before and after the Guidelines took effect. Comparisons of the 2013 and 2015 PS papers reveal differences associated with the Guidelines, and QJEP provides a baseline of papers reflecting changes in reporting that are not directly influenced by the Guidelines. A priori power analyses increased from 5% to 11% in PS papers, but not in QJEP papers (2%). The reporting of effect sizes in PS papers increased from 61% to 70%, similar to the increase for QJEP from 58% to 71%. Only 18% of papers reported confidence intervals (CIs) for means; only two PS papers in 2015 reported CIs for ...
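As a compact illustration (hypothetical data and planning values, not drawn from the surveyed papers) of the three reporting practices tracked here: an a priori power analysis, an effect size, and a 95% confidence interval for a mean.

```python
# Compact sketch (hypothetical data and planning values) of the reporting
# practices tracked: an a priori power analysis, an effect size, and a 95% CI.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# A priori power analysis: n per group for d = 0.5, alpha = .05, power = .80.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"planned n per group = {np.ceil(n_per_group):.0f}")

# Effect size (Cohen's d) and a 95% CI for one group's mean, hypothetical data.
rng = np.random.default_rng(7)
a, b = rng.normal(0.4, 1, 64), rng.normal(0.0, 1, 64)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (a.mean() - b.mean()) / pooled_sd
ci = stats.t.interval(0.95, len(a) - 1, loc=a.mean(), scale=stats.sem(a))
print(f"Cohen's d = {d:.2f}, 95% CI for group A mean = ({ci[0]:.2f}, {ci[1]:.2f})")
```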