Twelve tips for introducing the concept of validity argument in assessment to novice medical teachers in a workshop [Version 2]

MedEdPublish, 2021

Author Revision Notes

- General amendments: the authors simplified, clarified, and defined several terms that are used exclusively in health professions education, such as "construct" in Tip 4, "threats to validity" in Tip 7, and Downing's (2003) sources of validity in Tip 7.
- In Tip 1, we discussed how to mitigate the issues that would arise if presenters had to run the workshop for 25-30 participants when limiting the number to 15 was not possible. Additionally, we included a suggested workshop schedule to serve as a rough guide for conducting the workshop (attached in the appendix).
- We corrected the spelling error in Figure 1 in Tip 6.
- In Tip 7, we discussed Downing's (2003) concept of sources of validity.

Four tenets of modern validity theory for medical education assessment and evaluation

Advances in Medical Education and Practice

Validity is considered by many to be the most important criterion for evaluating a set of scores, yet few agree on what exactly the term means. Since the mid-1800s, scholars have been concerned with the notion of validity, but over time the term has developed a variety of meanings across academic disciplines and contexts. Accordingly, when scholars with different academic backgrounds, many of whom hold deeply entrenched perspectives about validity conceptualizations, converge in the field of medical education assessment, it is a recipe for confusion. Thus, it is important to work toward a consensus about validity in the context of medical education assessment. To that end, the purpose of this work was to present four fundamental tenets of modern validity theory in an effort to establish a framework for scholars in the field of medical education assessment to follow when conceptualizing validity, interpreting validity evidence, and reporting research findings.

Collecting Validity Evidence: A Hands-on Workshop for Medical Education Assessment Instruments

MedEdPORTAL

Introduction: There is an increasing call for developing validity evidence in medical education assessment. The literature lacks a practical resource regarding an actual development process. Our workshop teaches how to apply principles of validity evidence to existing assessment instruments and how to develop new instruments that will yield valid data. Methods: The literature, consensus findings of curricula and content experts, and principles of adult learning guided the content and methodology of the workshop. The workshop underwent stringent peer review prior to presentation at one international and three national academic conferences. In the interactive workshop, selected domains of validity evidence were taught with sequential cycles of didactics, demonstration, and deliberate practice with facilitated feedback. An exercise guide steered participants through a stepwise approach. Using Likert-scale items and open-response questions, an evaluation form rated the workshop's effectiveness, captured details of how learners reached the objectives, and determined participants' plans for future work. Results: The workshop demonstrated generalizability with successful implementation in diverse settings. Sixty-five learners, the majority being clinician-educators, completed evaluations. Learners rated the workshop favorably for each prompt. Qualitative comments corroborated the workshop's effectiveness. The active application and facilitated feedback components allowed learners to reflect in real time on how they were meeting a particular objective. Discussion: This feasible and practical educational intervention fills a literature gap by showing the medical educator how to apply validity evidence to both existing and in-development assessment instruments. Thus, it holds the potential to significantly impact learner and, subsequently, patient outcomes.

The Validity Concept in Medical Education: a Bibliometric Analysis

Revista Brasileira de Educação Médica, 2020

Introduction: Assessment is a critical part of learning, and validity is arguably its most important aspect. However, different views and beliefs have led to a fragmented conception of the meaning of validity, with an excessive focus on psychometric methods and scores that neglects the consequences and utility of the test. The last decades witnessed the creation of a significant number of tests to assess different aspects of the formation of medical professionals, but researchers frequently limit their conclusions to the consistency of their measurements, without any further analysis of the educational and social impacts of the test. The objective of this work is to determine the predominant concept of validity in medical education assessment studies. Method: The authors conducted a bibliometric review of the literature on the assessment of learning of medical students to determine the prevalent concept of validity. The review covered the period from January 2001 to August 2019. The studies were classified in two categories based on their approach to validity: (1) "fragmented validity concept" and (2) "unified validity concept". To help with validity arguments, the studies were also classified based on Miller's framework for clinical assessment. Results: From an initial search yielding 2,823 studies, 716 were selected based on the eligibility criteria, of which 693 (96.7%) were considered studies of the fragmented validity concept, which prioritized score results over an analysis of the test's utility, and only 23 (3.2%) were aligned with a unified view of validity, showing an explicit analysis of the consequences and utility of the test. Although the last decade witnessed a significant increase in the number of assessment studies, this increase was not accompanied by a significant change in the concept of validity.
Conclusions: This bibliometric analysis demonstrated that assessment studies in medical education still hold a fragmented concept of validity, restricted to psychometric methods and scores. The vast majority of studies do not analyze the utility and educational impact of an assessment policy. This restrictive view can lead to the waste of valuable time and resources on assessment methods without significant educational consequences.

Reconceptualizing Validity for Classroom Assessment

Educational Measurement: Issues and Practice, 2005

This article explores the shortcomings of conventional validity theory for guiding classroom assessment practice and suggests additional theoretical resources from sociocultural theory and hermeneutics to complement and challenge conventional theory. To illuminate these concerns and possibilities in a concrete context, the author uses her own classroom experience in teaching a qualitative research methods course. The importance of examining cases of assessment practice in context for developing, teaching, and evaluating validity theory is discussed.

Assessment in Medical Education

New England Journal of Medicine, 2007

As an attending physician working with a student for a week, you receive a form that asks you to evaluate the student's fund of knowledge, procedural skills, professionalism, interest in learning, and "systems-based practice." You wonder which of these attributes you can reliably assess and how the data you provide will be used to further the student's education. You also wonder whether other tests of knowledge and competence that students must undergo before they enter practice are equally problematic. In one way or another, most practicing physicians are involved in assessing the competence of trainees, peers, and other health professionals. As the example above suggests, however, they may not be as comfortable using educational assessment tools as they are using more clinically focused diagnostic tests. This article provides a conceptual framework for and a brief update on commonly used and emerging methods of assessment, discusses the strengths and limitations of each method, and identifies several challenges in the assessment of physicians' professional competence and performance.

Principles of Assessment: A Primer for Medical Educators in the Clinical Years

The Internet Journal of Medical Education, 2010

Whether practicing in a rural, community, or academic setting, physicians from all clinical specialties will participate in assessment. These assessments may be of trainees or peers and, more recently, may take the form of self-assessment. Regardless of the subject, assessors may be uncomfortable making judgments because they are unfamiliar with assessment principles. This editorial review, although a primer aimed at the novice, will also provide information for more experienced assessors when considering assessment purpose, design, and selection. Using concrete examples, it illustrates these fundamental principles so that physicians can be confident that their evaluations are accurate, insightful, and meaningful.

Assessment in Medical Education: Replies

Competence and Performance

Elsewhere, Hundert and I have defined competence in medicine as "the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individuals and communities being served."1 In the United States, the assessment of medical residents, and increasingly of medical students, is largely based on a model that was developed by the Accreditation Council for Graduate Medical Education (ACGME). This model uses six interrelated domains of competence: medical knowledge, patient care, professionalism, communication and interpersonal skills, practice-based learning and improvement, and systems-based practice.2

Competence is not an achievement but rather a habit of lifelong learning3; assessment plays an integral role in helping physicians identify and respond to their own learning needs. Ideally, the assessment of competence (what the student or physician is able to do) should provide insight into actual performance (what he or she does habitually when not observed), as well as the capacity to adapt to change, find and generate new knowledge, and improve overall performance.4 Competence is contextual, reflecting the relationship between a person's abilities and the tasks he or she is required to perform in a particular situation in the real world.5 Common contextual factors include the practice setting, the local prevalence of disease, the nature of the patient's presenting symptoms, the patient's educational level, and other demographic characteristics of the patient and of the physician. Many aspects of competence, such as history taking and clinical reasoning, are also content-specific and not necessarily generalizable to all situations. A student's