When enough is enough: a conceptual basis for fair and defensible practice performance assessment
Related papers
Procedures for establishing defensible programmes for assessing practice performance
Medical Education, 2002
The assessment of the performance of doctors in practice is becoming more widely accepted. While there are many potential purposes for such assessments, sometimes the consequences of the assessments will be 'high stakes'. In these circumstances, any of the many elements of the assessment programme may potentially be challenged. These assessment programmes therefore need to be robust, fair and defensible from the perspectives of consumer, assessee and assessor. In order to inform the design of defensible programmes for assessing practice performance, a group of education researchers at the 10th Cambridge Conference adopted a project management approach to designing practice performance assessment programmes. This paper describes issues to consider in articulating the purposes and outcomes of the assessment, planning the programme, and managing the administrative processes involved, including communication with and preparation of assessees.
Medical Teacher, 2012
Background: The definitions of performance, competence and competency are not very clear in the literature. The assessment of performance and the selection of tools for this purpose depend on a deep understanding of each of these terms and of the factors influencing performance. Aim: In this article, we distinguish between competence and competency and explain the relationship of competence and performance in the light of the Dreyfus model of skills acquisition. We briefly critique the application of the principles described by Miller to modern assessment tools and distinguish between assessment of actual performance in workplace settings and assessment of observed performance demonstrated by candidates in workplace or simulated settings. Results: We describe a modification of the Dreyfus model applicable to assessments in healthcare and propose a new model for the assessment of performance, together with a performance rating scale (PRS) based on this model. Conclusion: We propose that the use of adapted versions of this PRS will enable benchmarking of performance and allow candidates to track their progression of skills across various areas of clinical practice.
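The abstract does not publish the anchors of the proposed PRS, so the following is a minimal, hypothetical sketch of how a Dreyfus-style rating scale might be represented and used to track a candidate's progression; the level descriptors, skill areas and data are invented for illustration and are not the authors' instrument.

```python
# Hypothetical sketch only: levels mirror the classic Dreyfus stages for
# illustration, not the authors' published PRS anchors.
from dataclasses import dataclass

DREYFUS_LEVELS = {
    1: "Novice: rule-driven, needs close supervision",
    2: "Advanced beginner: recognises recurring situations, still supervised",
    3: "Competent: plans and prioritises, manages routine cases independently",
    4: "Proficient: sees situations holistically, adapts to deviations",
    5: "Expert: intuitive grasp, performs independently in complex cases",
}

@dataclass
class PerformanceRating:
    """One workplace-based observation scored on a Dreyfus-style scale."""
    skill_area: str      # e.g. "history taking", "handover"
    level: int           # 1-5, keyed into DREYFUS_LEVELS
    narrative: str = ""  # free-text justification from the assessor

def progression(ratings: list[PerformanceRating], skill_area: str) -> list[int]:
    """Sequence of levels awarded for one skill area, in the order observed,
    so a candidate can track progression over time."""
    return [r.level for r in ratings if r.skill_area == skill_area]

# Usage: two observations of the same skill show movement from level 2 to 3.
observations = [
    PerformanceRating("history taking", 2, "needed prompting on red flags"),
    PerformanceRating("history taking", 3, "complete and well prioritised"),
]
print(progression(observations, "history taking"))  # [2, 3]
```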
What should we assess in practice?
Journal of Nursing Management, 2009
Aim: This article reports on a PhD study and follow-up work undertaken to review and develop a tool for the assessment of practice. Background: The assessment of practice in nursing and midwifery education, and in other health professions, has been the source of concern, criticism and research for a number of years, with the conclusion that it might not be possible to develop an assessment tool that could encompass all aspects of professional practice. Methods: A qualitative evaluation study was undertaken using a naturalistic method of inquiry. A combination of tools was used to collect the data and to enable progressive focusing and cross-checking of the findings: documentary analysis, a questionnaire, and focus group and individual interviews. Results: The results showed that the assessment tool in use at the time did not encompass all the criteria assessors used, and six areas were identified for inclusion in any future tool. Conclusion: The six areas identified by subjects were further developed with specific statements so that they could be used within a tool. Implications for Nursing Management and Education: Within the changing nature of health care, there is a need to review whether the tool used for assessing the practice of pre-registration nursing and midwifery students is 'fit for purpose'.
Practice as research in performance: from epistemology to evaluation
Digital Creativity, 2004
Advances in Health Sciences Education, 2007
Context: In-training assessment (ITA), defined as multiple assessments of performance in the setting of day-to-day practice, is an invaluable tool in assessment programmes which aim to assess professional competence in a comprehensive and valid way. Research on clinical performance ratings, however, consistently shows weaknesses concerning accuracy, reliability and validity. Attempts to improve the psychometric characteristics of ITA by focusing on standardisation and objectivity of measurement have thus far resulted in limited improvement of ITA practices. Purpose: The aim of this paper is to demonstrate that the psychometric framework may limit more meaningful educational approaches to performance assessment, because it does not take into account key issues in the mechanics of the assessment process. Based on insights from other disciplines, we propose an approach to ITA that takes a constructivist, social-psychological perspective and integrates elements of theories of cognition, motivation and decision making. A central assumption in the proposed framework is that performance assessment is a judgment and decision-making process in which rating outcomes are influenced by interactions between individuals and the social context in which assessment occurs. Discussion: The issues raised in the article and the proposed assessment framework bring forward a number of implications for current performance assessment practice. It is argued that focusing on the context of performance assessment may be more effective in improving ITA practices than focusing strictly on raters and rating instruments. Furthermore, the constructivist approach to assessment has important implications for assessment procedures as well as for the evaluation of assessment quality. Finally, it is argued that further research into performance assessment should contribute to a better understanding of the factors that influence rating outcomes, such as rater motivation, assessment procedures and other contextual variables.
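The "accuracy, reliability and validity" weaknesses referred to here are the quantities the psychometric framework foregrounds, typically estimated with inter-rater agreement statistics. As a hedged illustration of that framework (not part of this paper, which argues for moving beyond it), the sketch below computes Cohen's kappa for two assessors rating the same trainees; the rating categories and data are invented.

```python
# Minimal sketch, assuming two assessors give categorical ITA judgements
# ("below", "meets", "exceeds") for the same set of trainees.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over paired ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["meets", "below", "meets", "exceeds", "meets", "below"]
b = ["meets", "meets", "meets", "exceeds", "below", "below"]
print(round(cohens_kappa(a, b), 2))  # 0.45: moderate agreement on these data
```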
Advances in Health Sciences Education, 2017
Workplace-Based Assessment (WBA) plays a pivotal role in present-day competency-based medical curricula. Validity in WBA mainly depends on how stakeholders (e.g. clinical supervisors and learners) use the assessments, rather than on the intrinsic qualities of instruments and methods. Current research on assessment in clinical contexts seems to imply that variable behaviours during performance assessment of both assessors and learners may well reflect their respective beliefs and perspectives towards WBA. We therefore performed a Q methodological study to explore perspectives underlying stakeholders' behaviours in WBA in a postgraduate medical training program. Five different perspectives on performance assessment were extracted: Agency, Mutuality, Objectivity, Adaptivity and Accountability. These perspectives reflect both differences and similarities in stakeholder perceptions and preferences regarding the utility of WBA. In comparing and contrasting the various perspectives, we identified two key areas of disagreement, specifically 'the locus of regulation of learning' (i.e., self-regulated versus externally regulated learning) and 'the extent to which assessment should be standardised' (i.e., tailored versus standardised assessment). Differing perspectives may variously affect stakeholders' acceptance and use of assessment programmes and, consequently, their effectiveness. Continuous interaction between all stakeholders is essential to monitor, adapt and improve assessment practices and to stimulate the development of a shared mental model. Better understanding of underlying stakeholder perspectives could be an important step in bridging the gap between psychometric and socio-constructivist approaches in WBA.
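For readers unfamiliar with Q methodology, perspectives like the five named above are typically extracted by correlating participants' whole Q-sorts with one another and factor-analysing that by-person correlation matrix. The sketch below is a minimal illustration of that idea using unrotated principal components on invented data; it is not the study's actual analysis, which would use dedicated Q software with factor rotation and flagging of defining sorts.

```python
# Minimal Q-methodology sketch: columns are participants' forced rankings of
# statements about WBA (values -4..+4); shared "perspectives" appear as
# factors of the person-by-person correlation matrix. Data are random here.
import numpy as np

rng = np.random.default_rng(0)
n_statements, n_participants = 30, 12
sorts = rng.integers(-4, 5, size=(n_statements, n_participants)).astype(float)

# Correlate participants with each other (by-person correlation matrix).
corr = np.corrcoef(sorts, rowvar=False)

# Unrotated principal components of the person correlation matrix.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings of each participant on the leading factors ("perspectives").
loadings = eigvecs * np.sqrt(np.clip(eigvals, 0, None))

print("variance explained by first 5 factors:",
      np.round(eigvals[:5] / eigvals.sum(), 2))
print("participant loadings on factor 1:", np.round(loadings[:, 0], 2))
```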
Selecting performance assessment methods for experienced physicians
Medical Education, 2002
Background While much is now known about how to assess the competence of medical practitioners in a controlled environment, less is known about how to measure the performance in practice of experienced doctors working in their own environments. The performance of doctors depends increasingly on how well they function in teams and how well the health care system around them functions.