The ASA's Statement on p-Values: Context, Process, and Purpose

The American Statistician

Statistical significance and its critics: practicing damaging science, or damaging scientific practice?

Synthese

While the common procedure of statistical significance testing and its accompanying concept of p-values have long been surrounded by controversy, renewed concern has been triggered by the replication crisis in science. Many blame statistical significance tests themselves, and some regard them as sufficiently damaging to scientific practice as to warrant being abandoned. We take a contrary position, arguing that the central criticisms arise from misunderstanding and misusing the statistical tools, and that in fact the purported remedies themselves risk damaging science. We argue that banning the use of p-value thresholds in interpreting data does not diminish but rather exacerbates data-dredging and biasing selection effects. If an account cannot specify outcomes that will not be allowed to count as evidence for a claim—if all thresholds are abandoned—then there is no test of that claim. The contributions of this paper are: To explain the rival statistical philosophies underlying the...

The Quest for Statistical Significance: Ignorance, Bias and Malpractice of Research Practitioners

https://www.ijrrjournal.com/IJRR_Vol.5_Issue.3_March2018/Abstract_IJRR0014.html, 2018

There is a growing body of evidence on the prevalence of ignorance, biases and malpractice among researchers which calls into question the authenticity, validity and integrity of the knowledge being propagated in professional circles. The push for academic relevance and career advancement has driven some research practitioners into gross misconduct, ranging from innocent ignorance and sloppiness to malicious intent and outright fraud. These, among other concerns around research data handling and reporting, form the basis for this in-depth review. This discourse also draws attention to the recent official statement on the correct use of the p-value and the need for professional intervention in ensuring that the outcomes of research are neither erroneous nor misleading. The expositions in this review carry cogent implications for institutions, supervisors, mentors, and editors to promote high ethical standards and rigor in scientific investigations.

The (mis)reporting of statistical results in psychology journals

Behavior Research Methods

In order to study the prevalence, nature (direction), and causes of reporting errors in psychology, we checked the consistency of reported test statistics, degrees of freedom, and p values in a random sample of high- and low-impact psychology journals. In a second study, we established the generality of reporting errors in a random sample of recent psychological articles. Our results, on the basis of 281 articles, indicate that around 18% of statistical results in the psychological literature are incorrectly reported. Inconsistencies were more common in low-impact journals than in high-impact journals. Moreover, around 15% of the articles contained at least one statistical conclusion that proved, upon recalculation, to be incorrect; that is, recalculation rendered the previously significant result insignificant, or vice versa. These errors were often in line with researchers’ expectations. We classified the most common errors and contacted authors to shed light on the origins of the errors.
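The consistency check described in this study can be sketched in a few lines: given a reported test statistic, recompute the p-value it implies and compare it with the reported one. The sketch below uses a z statistic for simplicity, and the function names and the 0.005 rounding tolerance are illustrative assumptions, not the authors' actual procedure.

```python
import math

def two_tailed_p_from_z(z):
    """Two-tailed p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def flag_inconsistent(z, reported_p, tol=0.005):
    """Recompute the p-value implied by a reported z statistic and
    flag the result if it differs from the reported p by more than
    a rounding tolerance."""
    recomputed = two_tailed_p_from_z(z)
    return recomputed, abs(recomputed - reported_p) > tol

# A paper reports z = 1.96, p = .03 (two-tailed); recomputation
# gives p close to .05, so the reported value is flagged.
p, inconsistent = flag_inconsistent(1.96, 0.03)
```

Automated tools such as statcheck apply the same idea at scale, extracting reported statistics from published articles and recomputing the p-values they imply.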

The American Statistical Association (ASA) Statement of 2016 on Statistical Significance and P-value: A Critical Thought

A study on the American Statistical Association (ASA) policy statement on statistical significance testing and the p-value of 2016 was carried out in Tanzania. The purpose of the study was to explore the feelings and reactions of university statistics tutors towards the statement. A sample of 9 statistics tutors from different disciplines was selected from public and private universities via heterogeneous purposive sampling to participate in the study. Respondents had mixed feelings towards the ASA policy statement of 2016. The statement was criticized for being shallow, subjective and failing to answer the core problems raised against the use of Null Hypothesis Significance Testing (NHST) and the p-value; some dismissed it as a non-event with nothing new to offer. However, despite being shallow, the ASA policy on NHST and the p-value is likely to trigger a healthy debate on their shortfalls, and that debate may eventually lead to a breakthrough.

Correcting Corrupt Research: Recommendations for the Profession to Stop Misuse of p-Values

The American Statistician

p-Values and Null Hypothesis Significance Testing (NHST), combined with a large number of institutional factors, jointly define the Generally Accepted Soft Social Science Publishing Process (GASSSPP) that is now dominant in the social sciences and is increasingly used elsewhere. The case against NHST and the GASSSPP has been abundantly articulated over past decades, and yet it continues to spread, supported by a large number of self-reinforcing institutional processes. In this article, the author presents a number of steps, to be taken both individually and through collaborative efforts, that directly address the institutional forces and may counter the spread of this corruption. While individual efforts are indispensable to this undertaking, the author argues that these alone cannot succeed unless the institutional forces are also addressed. Supplementary materials for this article are available online.

P-values: misunderstood and misused

P-values are widely used in both the social and natural sciences to quantify the statistical significance of observed results. The recent surge of big data research has made the p-value an even more popular tool to test the significance of a study. However, substantial literature has been produced critiquing how p-values are used and understood. In this paper we review this recent critical literature, much of which is rooted in the life sciences, and consider its implications for social scientific research. We provide a coherent picture of what the main criticisms are, and draw together and disambiguate common themes. In particular, we explain how the False Discovery Rate is calculated, and how this differs from a p-value. We also make explicit the Bayesian nature of many recent criticisms, a dimension that is often underplayed or ignored. We conclude by identifying practical steps to help remediate some of the concerns identified. We recommend that (i) far lower significance levels are used, such as 0.01 or 0.001, and (ii) p-values are interpreted contextually, and situated within both the findings of the individual study and the broader field of inquiry (through, for example, meta-analyses).
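The p-value/FDR distinction this abstract highlights can be made concrete with a minimal sketch: under assumed values for the prior probability that a tested effect is real and for the power of the test, the expected fraction of false positives among "significant" results can far exceed the significance threshold. The numbers below are illustrative assumptions, not figures from the paper.

```python
def false_discovery_rate(prior_real, power, alpha):
    """Expected fraction of 'significant' results that are false
    positives. Unlike the p-value threshold alpha, this depends on
    how often tested effects are real and on the test's power."""
    false_positives = (1 - prior_real) * alpha   # true nulls crossing alpha
    true_positives = prior_real * power          # real effects detected
    return false_positives / (false_positives + true_positives)

# With a 10% prior of a real effect, 80% power, and alpha = .05,
# about 36% of significant findings are false discoveries, even
# though every one of them has p < .05.
fdr = false_discovery_rate(prior_real=0.10, power=0.80, alpha=0.05)
```

The example shows why the recommended lower thresholds (0.01 or 0.001) and contextual interpretation matter: the same p < .05 cutoff yields very different false discovery rates depending on the field's base rate of true effects.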

The arbitrary magic of p<0.05: Beyond statistics

Journal of B.U.ON. : official journal of the Balkan Union of Oncology, 2020

Modern research and scientific conclusions are widely regarded as valid when the study design and analysis are interpreted correctly. The p-value is considered the most commonly used method to draw a dichotomy between true and false findings in evidence-based medicine. However, many authors, reviewers and editors may be unfamiliar with the true definition and correct interpretation of this number. This article intends to point out how misunderstanding or misuse of this value can have an impact on both the scientific community and the society we live in. The foundation of the medical education system rewards the abundance of scientific papers rather than the careful search for the truth. Appropriate research ethics should be practised in all stages of the publication process.