Designing experiments and analyzing data
Related papers
The analysis of repeated measures designs: A review
British Journal of Mathematical and Statistical Psychology, 2001
Repeated measures ANOVA can refer to many different types of analysis. Specifically, this vague term can refer to conventional tests of significance, one of three univariate solutions with adjusted degrees of freedom, two different types of multivariate statistic, or approaches that combine univariate and multivariate tests. Accordingly, it is argued that, by only reporting probability values and referring to statistical analyses as repeated measures ANOVA, authors convey neither the type of analysis that was used nor the validity of the reported probability value, since each of these approaches has its own strengths and weaknesses. The various approaches are presented with a discussion of their strengths and weaknesses, and recommendations are made regarding the 'best' choice of analysis. Additional topics discussed include analyses for missing data and tests of linear contrasts.
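To make the distinction above concrete, here is a minimal sketch (not from the paper) that runs a one-way repeated measures ANOVA and requests a sphericity-corrected univariate solution, so the conventional and the adjusted-degrees-of-freedom p-values can be compared. It assumes the Python packages numpy, pandas, and pingouin are available; the data are simulated, and the variable names (subject, condition, score) are illustrative.

```python
# Sketch: conventional vs. sphericity-corrected repeated measures ANOVA.
# Assumes numpy, pandas, and pingouin are installed; data are simulated.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n_subjects, conditions = 20, ["A", "B", "C", "D"]

# One score per subject per within-subject condition, with a subject-specific
# shift so the repeated observations are correlated.
records = []
for s in range(n_subjects):
    base = rng.normal(0, 1)
    for j, c in enumerate(conditions):
        records.append({"subject": s,
                        "condition": c,
                        "score": base + 0.3 * j + rng.normal(0, 1)})
df = pd.DataFrame(records)

# correction=True requests the Greenhouse-Geisser adjustment; the returned
# table then reports the uncorrected p-value, the corrected p-value, and
# Mauchly's test of sphericity side by side.
aov = pg.rm_anova(data=df, dv="score", within="condition",
                  subject="subject", correction=True, detailed=True)
print(aov)
```

Comparing the uncorrected and the corrected p-values in the output illustrates the paper's point: reporting only "repeated measures ANOVA" with a probability value does not tell the reader which of these solutions was actually used.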
Research Fundamentals: Study Design, Population, and Sample Size
2018
This is the second article of a three-part series that continues the discussion on the fundamentals of writing research protocols for quantitative, clinical research studies. In this editorial, the author discusses some considerations for including information in a research protocol on the study design and approach of a research study. This series provides a guide for undergraduate researchers interested in publishing their protocol in the Undergraduate Research in Natural and Clinical Sciences and Technology (URNCST) Journal.
Key Topics in Clinical Research. Oxford: BIOS Scientific Publishers Ltd, 2003
In contrast to laboratory or animal research, clinical studies use patients or volunteers in an attempt to address questions of direct clinical significance. The aim of this chapter is to present principles and guidelines on how to design clinical research.
Study Design – the First Step for a Successful Research
Journal of Interdisciplinary Medicine
One of the most difficult steps in conducting a clinical study, and one of the utmost importance, is the preparation of the study protocol, which provides a succinct, comprehensive description of the future research. 1 There are five important phases in conducting medical research studies: planning, performance, documentation, analysis, and publication. The initial phase of any medical study consists of precise planning, which is materialized in drafting the study protocol. 2 The design of a clinical study is a critical element for its future development. A well-designed study will be easily conducted, as investigators will be guided by a well-defined set of rules in all phases of the trial. The study protocol sets out the procedural guidelines established before the research begins, as well as specific timelines, study population sampling, data collection and analysis, ethical considerations, methodology, and patient risks. 3 The role of a research protocol is to pose the study questions and establish their importance, to review the current knowledge on the topic, and to identify gaps in the existing literature, which in turn support the study rationale and shape the study hypothesis and objectives. At the same time, the protocol includes ethical considerations, describes the methodology used to answer the questions stated in the hypothesis, and discusses the prerequisites and obstacles that may be encountered in achieving the objectives. A well-written study design allows the research team to plan and review the phases of the project; it acts as a guide during the study and as a monitor of time and budget estimates. 1 A well-designed study includes unequivocal inclusion and exclusion criteria and a clear study protocol stating the procedures to be performed at baseline and during follow-up, together with their timelines. At the same time, outcome measures and study endpoints should be presented in the protocol from the beginning, so that investigators are aware of all aspects of the study. The structure of a properly written study design covers the following six aspects: the research hypothesis, the study population, the type of study, the unit of analysis, the measuring technique, and the calculation of the sample size.
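As an illustration of the last of those six aspects, the sample-size calculation, here is a minimal sketch of an a priori power calculation for a two-arm study comparing means. It is not taken from the article; it assumes the statsmodels package, and the chosen effect size, alpha, and power values are illustrative, not recommendations.

```python
# Sketch: a priori sample-size calculation for a two-group comparison of means.
# Assumes statsmodels is installed; the design parameters are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # anticipated standardized mean difference (Cohen's d)
    alpha=0.05,               # two-sided type I error rate
    power=0.80,               # desired probability of detecting the effect
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")  # round up in practice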
Research Methods in Psychology
Journal of Environmental Psychology, 1994
The first Canadian edition (published in 2013) was authored by Rajiv S. Jhangiani (Kwantlen Polytechnic University) and was licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. Revisions included the addition of a table of contents, changes to Chapter 3 (Research Ethics) to include a contemporary example of an ethical breach and to reflect Canadian ethical guidelines and privacy laws, additional information regarding online data collection in Chapter 9 (Survey Research), corrections of errors in the text and formulae, spelling changes from US to Canadian conventions, the addition of a cover page, and other necessary formatting adjustments. The present adaptation constitutes the second Canadian edition and was co-authored by Rajiv S. Jhangiani (Kwantlen Polytechnic University) and I-Chant A. Chiang (Quest University Canada) and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Revisions include the following:
• Chapter 1: Added a description of the "Many Labs Replication Project," added a reference to the Neurobonkers website, and embedded videos about open access publishing, driver distraction, two types of empirical studies, and the use of evidence to evaluate the world around us.
• Chapter 2: Updated the exemplar study in the chapter overview, added relevant examples and descriptions of contemporary studies, provided a link to an interactive visualization for correlations, added a description of double-blind peer review, added a figure to illustrate a spurious correlation, and embedded videos about how to develop a good research topic, searching the PsycINFO database, using Google Scholar, and how to read an academic paper.
• Chapter 3: Added the LaCour ethical violation. Revised chapter headings and order to reflect TCPS-2 moral principles.
• Chapter 4: Added the difference between laws and effects, and theoretical frameworks.
• Chapter 5: Added fuller descriptions of the levels of measurement, added a table to summarize the levels of measurement, added a fuller description of the MMPI, removed the discussion of the IAT, and added descriptions of concurrent, predictive, and convergent validity.
• Chapter 6: Added construct validity, statistical validity, mundane realism, psychological realism, and the Latin square design. Updated references.
• Chapter 7: Added mixed-design studies and a fuller discussion of the qualitative-quantitative debate.
• Chapter 8: Added an exercise to sketch the 8 possible results of a 2 x 2 factorial experiment.
• Chapter 9: Added information about the Canadian Election Studies, more references, specific guidelines about order and open-ended questions, and rating scales. Updated online survey creation sites.
• Chapter 10: No significant changes were made.
• Chapter 11: Updated examples and links to online resources.
• Chapter 12: No significant changes were made.
• Chapter 13: Added a discussion of the p-curve and the BASP announcement about banning p-values. Added a section that introduces the "replicability crisis" in psychology, along with discussions of questionable research practices, best practices in research design and data management, and the emergence of open science practices and Transparency and Openness Promotion guidelines.
• Glossary of key terms: Added.
In addition, throughout the textbook, we revised the language to be more precise and to improve flow, added links to other chapters, added images, updated hyperlinks, corrected spelling and formatting errors, and changed references to reflect the contemporary Canadian context.
Australian Journal of Psychology, 2012
Estimation based on effect sizes, confidence intervals, and meta-analysis usually provides a more informative analysis of empirical results than does statistical significance testing, which has long been the conventional choice in psychology. The sixth edition of the American Psychological Association Publication Manual now recommends that psychologists should, wherever possible, use estimation and base their interpretation of research results on point and interval estimates. We outline the Manual's recommendations and suggest how they can be put into practice: adopt an estimation framework, starting with the formulation of research aims as 'How much?' or 'To what extent?' questions. Calculate from your data effect size estimates and confidence intervals to answer those questions, then interpret. Wherever appropriate, use meta-analysis to integrate evidence over studies. The Manual's recommendations can help psychologists improve the way they do their statistics and build a more quantitative and cumulative discipline.
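The following is a minimal sketch (not from the article) of the estimation approach it describes: answering a "How much?" question with a point estimate, a standardized effect size, and a 95% confidence interval rather than only a p-value. It assumes numpy and scipy; the two groups are simulated for illustration.

```python
# Sketch: point estimate, Cohen's d, and a 95% CI for a two-group difference.
# Assumes numpy and scipy are installed; the data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment = rng.normal(loc=105, scale=15, size=40)
control = rng.normal(loc=100, scale=15, size=40)

# Point estimate of the difference and a standardized effect size.
diff = treatment.mean() - control.mean()
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = diff / pooled_sd

# 95% confidence interval for the mean difference (t critical value).
se = np.sqrt(treatment.var(ddof=1) / treatment.size
             + control.var(ddof=1) / control.size)
dof = treatment.size + control.size - 2
half_width = stats.t.ppf(0.975, dof) * se
ci_low, ci_high = diff - half_width, diff + half_width

print(f"Mean difference = {diff:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
print(f"Cohen's d = {cohens_d:.2f}")
```

Reporting the interval alongside the point estimate directly answers the "How much?" question and makes the precision of the estimate explicit, which is the Manual's central recommendation as summarized above.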
Review of Educational Research, 1998
Articles published in several prominent educational journals were examined to investigate the use of data analysis tools by researchers in four research paradigms: between-subjects univariate designs, between-subjects multivariate designs, repeated measures designs, and covariance designs. In addition to examining specific details pertaining to the research design (e.g., sample size, group size equality/inequality) and methods employed for data analysis, the authors also catalogued whether (a) validity assumptions were examined, (b) effect size indices were reported, (c) sample sizes were selected on the basis of power considerations, and (d) appropriate textbooks and/or articles were cited to communicate the nature of the analyses that were performed. The present analyses imply that researchers rarely verify that validity assumptions are satisfied and that, accordingly, they typically use analyses that are nonrobust to assumption violations. In addition, researchers rarely report e...
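As a small illustration of the kind of validity-assumption check the review finds is rarely reported, here is a sketch (not from the article) of Levene's test for homogeneity of variance before a between-subjects comparison. It assumes numpy and scipy; the data are simulated, with one group deliberately given a larger spread.

```python
# Sketch: checking the homogeneity-of-variance assumption with Levene's test.
# Assumes numpy and scipy are installed; the data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.normal(loc=50, scale=5, size=30)
group_b = rng.normal(loc=52, scale=12, size=30)   # deliberately larger spread

# The median-centred version of Levene's test is robust to non-normality.
# A small p-value signals unequal variances, in which case a classical
# pooled-variance analysis is not robust and a Welch-type procedure is safer.
stat, p = stats.levene(group_a, group_b, center="median")
print(f"Levene W = {stat:.2f}, p = {p:.4f}")
```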