Assessing clinical communication skills in physicians: are the skills context specific or generalizable

Simultaneous Evaluation of Communication Skills by Standardized Patients and Medical Evaluators

Medical University

Introduction: The present study analyzes the evaluation of communication skills by standardized patients (SPs) and medical evaluators (Es) in an OSCE setting. Methods: The OSCE involved 189 sixth-year medical students, as well as 34 SPs and 63 Es. Communication skills were evaluated in 8 stations, simultaneously by SPs and Es. The SPs were actors who had been trained in the clinical case and who acted in accordance with a standardized script in a simulated clinical situation. The evaluators, also standardized, were resident or staff doctors from the hospital services involved. Results: The global scores awarded to students for communication skills were very similar in both groups, although the scores awarded by Es were significantly higher, and a direct relationship was also observed between the mean scores awarded by the two groups. Evaluators awarded significantly higher scores than SPs on 7 of the 10 items on the checklist. Female medical students also scored significantl...
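
As a hedged illustration of the kind of paired comparison this abstract reports (evaluator scores significantly higher than SP scores, yet directly related to them), the Python sketch below runs a paired t-test and a Pearson correlation on invented SP and evaluator scores. None of the numbers come from the study; the score scale, means, and noise levels are assumptions.

```python
# Hypothetical sketch: paired comparison of communication scores awarded
# by standardized patients (SPs) and medical evaluators (Es) to the same
# students. All data below are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 189                                         # matches the cohort size only
sp_scores = rng.normal(7.2, 0.8, n_students)             # SP global scores (assumed 0-10 scale)
e_scores = sp_scores + rng.normal(0.3, 0.5, n_students)  # Es assumed to rate slightly higher

# Paired t-test: are evaluator scores significantly higher than SP scores?
t, p = stats.ttest_rel(e_scores, sp_scores)

# Pearson correlation: is there a direct relationship between the two sets?
r, p_r = stats.pearsonr(sp_scores, e_scores)

print(f"paired t = {t:.2f}, p = {p:.4f}")
print(f"Pearson r = {r:.2f}, p = {p_r:.4f}")
```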

The Development and Partial Assessment of the Medical Communication Competence Scale

Health Communication, 1998

The purpose of this research was to develop and partially assess a self-report scale for measuring doctors' and patients' perceptions of self-communication and other-communication competence during a medical interview. Previous research into the components of communication competence and medical discourse was used to develop the Medical Communication Competence Scale (MCCS). It was hypothesized that the items of the MCCS would form four clusters: information giving, information seeking, information verifying, and socioemotional communication. The cluster analysis results provided support for the hypothesis. Results of several other analyses provided additional support for the validity of the MCCS. Although considerable attention has been given to doctor-patient communication over the last three decades (Ong, DeHaes, Hoos, & Lammes, 1995; Thompson, 1994), several researchers have observed that little is actually known about exactly how doctor-patient communication impacts health outcomes (e.g.
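
The four hypothesized clusters were checked by cluster analysis; the abstract does not specify the method, so the sketch below is only one common way such an item-level analysis can be done: hierarchical clustering of items on a distance of 1 - inter-item correlation, using simulated data rather than the MCCS items themselves. Treat every number and design choice here as an assumption.

```python
# Hedged sketch of an item-level cluster analysis: cluster scale items by
# the similarity of their inter-item correlations and check whether four
# clusters emerge. The data are simulated, not the MCCS responses.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(5)
n, items_per_cluster = 300, 4
# Simulate 16 items driven by four latent factors (stand-ins for giving,
# seeking, verifying, and socioemotional communication), four items each.
factors = rng.normal(0, 1, (n, 4))
items = np.hstack([
    factors[:, [k]] + rng.normal(0, 0.8, (n, items_per_cluster))
    for k in range(4)
])

# Distance between items = 1 - correlation; then average-linkage clustering.
corr = np.corrcoef(items, rowvar=False)
dist = squareform(1 - corr, checks=False)
labels = fcluster(linkage(dist, method="average"), t=4, criterion="maxclust")
print("item cluster labels:", labels)  # ideally four blocks of four items
```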

Communication skills: An essential component of medical curricula. Part I: Assessment of clinical communication: AMEE Guide No. 51

Medical Teacher, 2011

This AMEE Guide in Medical Education is Part 1 of a two-part Guide covering communication. It has been written to provide guidance for those involved in planning the assessment of clinical communication, and it offers information on the assessment of various aspects of clinical communication: the underlying theory, the practical ability of an assessment to show that an individual is competent, and its relationship to students' daily performance. The advantages and disadvantages of assessing specific aspects of communication are also discussed. The Guide draws attention to the complexity of assessing the ability to communicate with patients and healthcare professionals, highlighting issues of reliability and validity for each aspect. Current debates within the area of clinical communication teaching are raised: when should the assessment of clinical communication occur in undergraduate medical education? Should clinical communication assessment be integrated with clinical skills assessment, or should the two be separate? How important should the assessment of clinical communication be, and should students be failed if they are judged not competent in communication skills? The aim of the authors is not only to provide a useful reference for those starting to develop their assessment processes, but also to offer an opportunity for review and debate among those who already assess clinical communication within their curricula, and a resource for those with a general interest in medical education who wish to learn more about communication skills assessment.

Validation of the 5-item doctor-patient communication competency instrument for medical students (DPCC-MS) using two years of assessment data

BMC Medical Education, 2017

Background: Medical students on clinical rotations have to be assessed on several competencies at the end of each clinical rotation, pointing to the need for short, reliable, and valid assessment instruments for each competency. Doctor-patient communication is a central competency targeted by medical schools; however, there are no published short (i.e., fewer than 10 items), reliable, and valid instruments to assess doctor-patient communication competency. The Faculty of Medicine of Laval University recently developed a 5-item Doctor-Patient Communication Competency instrument for Medical Students (DPCC-MS), based on the Patient-Centered Clinical Method conceptual framework, which provides a global summative end-of-rotation assessment of doctor-patient communication. We conducted a psychometric validation of this instrument and present validity evidence based on the response process, internal structure, and relation to other variables, using two years of assessment data. Methods: We conducted the study in two phases. In phase 1, we drew on 4991 student DPCC-MS assessments (two years). We computed descriptive statistics, conducted a confirmatory factor analysis (CFA), and tested the correlation between the DPCC-MS and Multiple Mini Interview (MMI) scores. In phase 2, eleven clinical teachers assessed the performance of 35 medical students in an objective structured clinical examination station using the DPCC-MS, a 15-item instrument developed by Côté et al. (published in 2001), and a 2-item global assessment. We compared the DPCC-MS to the longer Côté et al. instrument on internal consistency, coefficient of variation, convergent validity, and inter-rater reliability. Results: In phase 1, Cronbach's alpha was acceptable (.75 and .83). Inter-item correlations were positive and the discrimination index was above .30 for all items. The CFA supported a unidimensional structure. DPCC-MS and MMI scores were correlated. In phase 2, the DPCC-MS and the Côté et al. instrument had similar internal consistency and convergent validity, but the DPCC-MS had better inter-rater reliability (mean ICC = .61). Conclusions: The DPCC-MS provides an internally consistent and valid assessment of medical students' communication with patients.
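
For readers who want to see how two of the reported statistics are computed, the sketch below implements Cronbach's alpha and a corrected item-total discrimination index on an invented 5-item ratings matrix. It reproduces the machinery, not the study's values; the simulated response model is an assumption.

```python
# Minimal sketch of two statistics reported for the DPCC-MS: Cronbach's
# alpha and a per-item discrimination index (corrected item-total
# correlation). The 5-item ratings matrix below is invented.
import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(0, 1, (500, 1))                      # latent student ability
ratings = np.clip(np.round(3 + ability + rng.normal(0, 0.7, (500, 5))), 1, 5)

def cronbach_alpha(x):
    """x: (n_respondents, k_items). Classical alpha from item variances."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def discrimination(x):
    """Corrected item-total correlation for each item (item vs. rest-score)."""
    return np.array([
        np.corrcoef(x[:, i], np.delete(x, i, axis=1).sum(axis=1))[0, 1]
        for i in range(x.shape[1])
    ])

print(f"alpha = {cronbach_alpha(ratings):.2f}")
print("discrimination indices:", np.round(discrimination(ratings), 2))
```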

Measuring patient views of physician communication skills: Development and testing of the Communication Assessment Tool

Patient Education and Counseling, 2007

Objective: Interpersonal and communication skills have been identified as a core competency that must be demonstrated by physicians. We developed and tested a tool that can be used by patients to assess the interpersonal and communication skills of physicians-in-training and physicians-in-practice. Methods: We began by engaging in a systematic scale development process to obtain a psychometrically sound Communication Assessment Tool (CAT). This process yielded a 15-item instrument that is written at the fourth-grade reading level and employs a five-point response scale, with 5 = excellent. Fourteen items focus on the physician and one targets the staff. Pilot testing established that the CAT differentiates between physicians rated high or low on a separate satisfaction scale. We conducted a field test with physicians and patients from a variety of specialties and regions within the US to assess the feasibility of using the CAT in everyday practice. Results: Thirty-eight physicians and 950 patients (25 patients per physician) participated in the field test. The average patient-reported mean score per physician was 4.68 across all CAT items (S.D. = 0.54, range 3.97-4.95). The average proportion of excellent scores was 76.3% (S.D. = 11.1, range 45.7-95.1%). Overall scale reliability was high (Cronbach's alpha = 0.96); alpha coefficients were uniformly high when reliability was examined per doctor. Conclusion: The CAT is a reliable and valid instrument for measuring patient perceptions of physician performance in the area of interpersonal and communication skills. The field test demonstrated that the CAT can be successfully completed by both physicians and patients across clinical specialties. Reporting the proportion of "excellent" ratings given by patients is more useful than summarizing scores via means, which are highly skewed. Practice implications: Specialty boards, residency programs, medical schools, and practice plans may find the CAT valuable for both collecting information and providing feedback about interpersonal and communication skills.
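
The conclusion's point about skewed ratings can be made concrete. The sketch below compares the per-physician mean with the proportion of top ("excellent") ratings on invented, ceiling-heavy data, mirroring only the 38-physician, 25-patients-per-physician design; the rating distribution is an assumption.

```python
# Sketch of the CAT reporting point: with highly skewed ratings, the
# proportion of "excellent" (5) responses is more informative per
# physician than the mean. The ratings below are invented.
import numpy as np

rng = np.random.default_rng(2)
# 38 physicians x 25 patients, 5-point scale assumed heavily skewed toward 5
ratings = rng.choice([3, 4, 5], size=(38, 25), p=[0.05, 0.2, 0.75])

mean_per_md = ratings.mean(axis=1)            # summary by mean score
topbox_per_md = (ratings == 5).mean(axis=1)   # proportion of "excellent" ratings

print(f"mean score: {mean_per_md.mean():.2f} "
      f"(range {mean_per_md.min():.2f}-{mean_per_md.max():.2f})")
print(f"% excellent: {topbox_per_md.mean():.1%} "
      f"(range {topbox_per_md.min():.1%}-{topbox_per_md.max():.1%})")
```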

National survey of clinical communication assessment in medical education in the United Kingdom (UK)

BMC Medical Education, 2014

Background: All medical schools in the UK are required to be able to provide evidence of competence in clinical communication in their graduates. This is usually provided by summative assessment of clinical communication, but there is considerable variation in how this is carried out. This study aimed to gain insight into the current assessment of clinical communication in UK medical schools.

Comparison of two instruments for assessing communication skills in a general practice objective structured clinical examination

Medical Education, 2007

OBJECTIVE In recent decades, there has been increased interest in tools for assessing and improving the communication skills of general practice trainees. Recently, experts in the field rated the older Maas Global (MG) and the newer Common Ground (CG) instruments among the better communication skills assessment tools. This report seeks to establish their cross-validity. METHODS Eighty trainees were observed by 2 raters for each instrument in 2 standardised patient stations from the final-year objective structured clinical examination for Belgian trainee general practitioners. Each instrument was assigned 6 raters. RESULTS Trainees showed the lowest mean scores for evaluating the consultation (MG7), summarising (MG11), addressing emotions (MG9) and addressing feelings (CG5). Inter-rater κ statistics revealed fair-to-moderate agreement for the MG and slight-to-fair agreement for the CG. Cronbach's α was 0.78 for the MG and 0.89 for the CG. A generalisability study was only feasible for the MG: it was more helpful to increase the number of cases than the number of raters. Agreement between the instruments was examined using κ statistics, Bland-Altman plots and multi-level analysis. Ranking the trainees with each instrument revealed similar results for the least competent trainees. Variances between and within trainees differed between instruments, whereas case specificity was comparable. CONCLUSIONS The 2 instruments have convergent validity, but the drawbacks of the CG, which has fewer items to be scored, include lower inter-rater reliability and score variance within trainees. Multi-level analysis also revealed a rater-item interaction effect.
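
Two of the agreement analyses named here, Cohen's κ and Bland-Altman limits of agreement, are easy to sketch. The Python below computes both on invented rater and instrument-total data; it is illustrative only and does not reproduce the study's item set, weighting, or multi-level analysis.

```python
# Sketch of two agreement analyses named in the abstract: unweighted
# Cohen's kappa between two raters' categorical item scores, and
# Bland-Altman limits of agreement between instrument totals. Data invented.
import numpy as np

rng = np.random.default_rng(3)

def cohen_kappa(a, b):
    """Unweighted Cohen's kappa for two raters' categorical codes."""
    cats = np.unique(np.concatenate([a, b]))
    po = np.mean(a == b)                                          # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)     # chance agreement
    return (po - pe) / (1 - pe)

rater1 = rng.integers(0, 3, 80)                  # e.g. an item scored 0/1/2
rater2 = np.where(rng.random(80) < 0.7, rater1, rng.integers(0, 3, 80))
print(f"kappa = {cohen_kappa(rater1, rater2):.2f}")

# Bland-Altman: mean difference +/- 1.96 SD between instrument totals
mg = rng.normal(60, 8, 80)                       # Maas Global totals (invented)
cg = mg + rng.normal(2, 4, 80)                   # Common Ground totals (invented)
diff = cg - mg
bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f}, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")
```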

The Evaluation of Physicians' Communication Skills From Multiple Perspectives

Annals of Family Medicine, 2018

We examined how family physicians', patients', and trained clinical raters' assessments of physician-patient communication compare, by analyzing individual appointments. We analyzed survey data from patients attending face-to-face appointments with 45 family physicians at 13 practices in England. Immediately post-appointment, patients and physicians independently completed a questionnaire including 7 items assessing communication quality. A sample of videotaped appointments was assessed by trained clinical raters using the same 7 communication items. Patient, physician, and rater communication scores were compared using correlation coefficients. Included were 503 physician-patient pairs; of those, 55 appointments were also evaluated by trained clinical raters. Physicians scored themselves, on average, lower than patients did (mean physician score 74.5; mean patient score 94.4); 63.4% (319) of patient-reported scores were the maximum of 100. The mean of rater scores from 55 ...
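
A minimal sketch of the core analysis, correlating scores from different perspectives on the same appointments, is given below. The latent-quality model, effect sizes, and ceiling behaviour are invented; only the headline means (74.5 and 94.4) are borrowed from the abstract for flavour.

```python
# Sketch of the multi-perspective comparison: correlating physician
# self-scores with patient scores for the same appointments, including a
# simulated ceiling effect in patient scores. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 503
quality = rng.normal(0, 1, n)                   # assumed latent communication quality
physician = np.clip(74.5 + 8 * quality + rng.normal(0, 6, n), 0, 100)
patient = np.clip(94.4 + 3 * quality + rng.normal(0, 5, n), 0, 100)  # ceiling near 100

r_pp, p_pp = stats.pearsonr(physician, patient)
print(f"physician-patient r = {r_pp:.2f} (p = {p_pp:.3f})")
print(f"share of patient scores at the 100 ceiling: {(patient == 100).mean():.1%}")
```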