Facts Are More Important Than Novelty: Replication in the Education Sciences
Educational Researcher, 2014
Despite increased attention to methodological rigor in education research, the field has focused heavily on experimental design and not on the merit of replicating important results. The present study analyzed the complete publication history of the current top 100 education journals ranked by 5-year impact factor and found that only 0.13% of education articles were replications. Contrary to previous findings in medicine, but similar to psychology, the majority of education replications successfully replicated the original studies. However, replications were significantly less likely to be successful when there was no overlap in authorship between the original and replicating articles. The results emphasize the importance of third-party, direct replications in helping education research improve its ability to shape education policy and practice.
Related papers
Replications in Psychology Research
Perspectives on Psychological Science, 2012
Recent controversies in psychology have spurred conversations about the nature and quality of psychological research. One topic receiving substantial attention is the role of replication in psychological science. Using the complete publication history of the 100 psychology journals with the highest 5-year impact factors, the current article provides an overview of replications in psychological research since 1900. This investigation revealed that roughly 1.6% of all psychology publications used the term replication in text. A more thorough analysis of 500 randomly selected articles revealed that only 68% of articles using the term replication were actual replications, resulting in an overall replication rate of 1.07%. Contrary to previous findings in other fields, this study found that the majority of replications in psychology journals reported similar findings to their original studies (i.e., they were successful replications). However, replications were significantly less likely to be successful when there was no overlap in authorship between the original and replicating articles.
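The overall rate reported above is simply the product of the two sampled proportions. A minimal back-of-the-envelope sketch of that arithmetic, using the rounded percentages quoted in the abstract (the small gap from the reported 1.07% presumably reflects rounding, since the original authors worked from raw article counts):

```python
# Back-of-the-envelope check on how the abstract's figures combine.
# NOTE: the values below are the rounded percentages quoted in the abstract;
# the published 1.07% was computed from raw article counts.
term_rate = 0.016        # ~1.6% of publications used the term "replication"
actual_fraction = 0.68   # ~68% of sampled "replication" articles were actual replications

overall_rate = term_rate * actual_fraction
print(f"Overall replication rate: {overall_rate:.2%}")  # -> 1.09%, near the reported 1.07%
```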
Rewarding Replications: A Sure and Simple Way to Improve Psychological Science
Perspectives on Psychological Science, 2012
Although replications are vital to scientific progress, psychologists rarely engage in systematic replication efforts. The present article considers psychologists’ narrative approach to scientific publications as an underlying reason for this neglect, and proposes an incentive structure for replications within psychology. First, researchers need accessible outlets for publishing replications. To accomplish this, psychology journals could publish replication reports in files that are electronically linked to reports of the original research. Second, replications should get cited. This can be achieved by co-citing replications along with original research reports. Third, replications should become a valued collaborative effort. This can be realized by incorporating replications in teaching programs and by stimulating adversarial collaborations. The proposed incentive structure for replications can be developed in a relatively simple and cost-effective manner. By promoting replications, this incentive structure may greatly enhance the dependability of psychology’s knowledge base.
Improving the Replicability of Psychological Science Through Pedagogy
Advances in Methods and Practices in Psychological Science, 2018
Replications are important to science, but who will do them? One proposal is that students can conduct replications as part of their training. As a proof-of-concept for this idea, here we report a series of 11 pre-registered replications of findings from the 2015 volume of Psychological Science, all conducted as part of a graduate-level course. Congruent with previous studies, replications typically yielded smaller effects than originals: The modal outcome was partial support for the original claim. This work documents the challenges facing motivated students in reproducing previously published results on a first attempt. We describe the workflow and pedagogical methods that were used in the class and discuss implications both for the adoption of this pedagogical model and for replication research more broadly.
PeerJ, 2020
What explanation is there when teams of researchers are unable to successfully replicate already established ‘canonical’ findings? One suggestion that has been put forward, but left largely untested, is that researchers who fail to replicate prior studies are of low ‘expertise and diligence’ and lack the skill necessary to recreate the conditions of the original experiment. Here we examine the replication success of 100 scientists of differing ‘expertise and diligence’ who attempted to replicate five different studies. Using a bibliometric tool (the h-index) as our indicator of researcher ‘expertise and diligence’, we examine whether it was predictive of replication success. Although there was substantial variability in replication success and in the h-index of the investigators, we find no relationship between these variables. The present results provide no evidence for the hypothesis that systematic replications fail because of low ‘expertise and diligence’ among replicators.