Assessing communication competence: a review of current tools

Assessing Family Medicine Residents' Communication Skills From the Patient's Perspective: Evaluating the Communication Assessment Tool

Journal of Graduate Medical Education, 2021

Background The Communication Assessment Tool (CAT), a paper-based patient survey, is 1 method to assess residents' interpersonal and communication skills. To further enhance the interpretation of the CAT, benchmark data are needed. Objective We sought to expand upon initial benchmarking data for the use of the CAT as an evaluation tool in family medicine residency programs. Methods Data were collected on 120 residents from 7 family medicine residency programs. Following an appointment with a resident, 1703 patients completed the CAT. Results The overall mean percentage of items rated as excellent was 73%. Significant differences were found in the overall percentage of items rated as “excellent” based on location of training (78% US graduate versus 71% international medical graduate) and native language of the resident (76% English speaking versus 69% non-English speaking). There were no significant differences found in the overall percentage of items rated as excellent based on ...
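
The abstract does not say which statistical test produced these group differences. Purely to illustrate the kind of comparison described (per-resident percent-excellent scores split by location of training), here is a minimal sketch assuming invented data and a Welch's t-test; the column names and test choice are assumptions, not details from the study.

```python
# Hypothetical sketch: comparing per-resident percent-"excellent" CAT scores
# between US graduates and international medical graduates (IMGs).
# The data and the choice of Welch's t-test are illustrative assumptions.
import pandas as pd
from scipy import stats

# One row per resident: share of CAT items rated 5 ("excellent") and training location.
residents = pd.DataFrame({
    "pct_excellent": [0.81, 0.74, 0.69, 0.77, 0.66, 0.72],
    "training":      ["US", "US", "IMG", "US", "IMG", "IMG"],
})

us = residents.loc[residents["training"] == "US", "pct_excellent"]
img = residents.loc[residents["training"] == "IMG", "pct_excellent"]

t, p = stats.ttest_ind(us, img, equal_var=False)  # Welch's t-test
print(f"US mean = {us.mean():.2f}, IMG mean = {img.mean():.2f}, t = {t:.2f}, p = {p:.3f}")
```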

The Evaluation of Physicians' Communication Skills From Multiple Perspectives

Annals of Family Medicine, 2018

To examine how family physicians', patients', and trained clinical raters' assessments of physician-patient communication compare by analysis of individual appointments. Analysis of survey data from patients attending face-to-face appointments with 45 family physicians at 13 practices in England. Immediately post-appointment, patients and physicians independently completed a questionnaire including 7 items assessing communication quality. A sample of videotaped appointments was assessed by trained clinical raters, using the same 7 communication items. Patient, physician, and rater communication scores were compared using correlation coefficients. Included were 503 physician-patient pairs; of those, 55 appointments were also evaluated by trained clinical raters. Physicians scored themselves, on average, lower than patients (mean physician score 74.5; mean patient score 94.4); 63.4% (319) of patient-reported scores were the maximum of 100. The mean of rater scores from 55 ...
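
As a rough illustration of the score comparison described above, the sketch below correlates physician self-ratings, patient ratings, and trained-rater ratings for the same appointments. The data are invented, and the use of Spearman's rho (less sensitive to the ceiling effect visible in the patient scores) is an assumption, not necessarily the study's method.

```python
# Illustrative sketch: correlating three perspectives on the same appointments.
# All numbers are invented; scores are on a 0-100 scale as in the abstract.
import numpy as np
from scipy import stats

physician = np.array([70, 75, 80, 65, 72, 78, 74])      # self-ratings
patient = np.array([100, 95, 100, 90, 100, 100, 85])    # patient ratings (ceiling-heavy)
rater = np.array([68, 72, 79, 60, 70, 75, 71])          # trained clinical raters

for name, scores in [("patient", patient), ("rater", rater)]:
    rho, p = stats.spearmanr(physician, scores)
    print(f"physician vs {name}: rho = {rho:.2f}, p = {p:.3f}")
```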

Measuring patient views of physician communication skills: Development and testing of the Communication Assessment Tool

Patient Education and Counseling, 2007

Objective: Interpersonal and communication skills have been identified as a core competency that must be demonstrated by physicians. We developed and tested a tool that can be used by patients to assess the interpersonal and communication skills of physicians-in-training and physicians-in-practice. Methods: We began by engaging in a systematic scale development process to obtain a psychometrically sound Communication Assessment Tool (CAT). This process yielded a 15-item instrument that is written at the fourth grade reading level and employs a five-point response scale, with 5 = excellent. Fourteen items focus on the physician and one targets the staff. Pilot testing established that the CAT differentiates between physicians who rated high or low on a separate satisfaction scale. We conducted a field test with physicians and patients from a variety of specialties and regions within the US to assess the feasibility of using the CAT in everyday practice. Results: Thirty-eight physicians and 950 patients (25 patients per physician) participated in the field test. The average patient-reported mean score per physician was 4.68 across all CAT items (S.D. = 0.54, range 3.97-4.95). The average proportion of excellent scores was 76.3% (S.D. = 11.1, range 45.7-95.1%). Overall scale reliability was high (Cronbach's alpha = 0.96); alpha coefficients were uniformly high when reliability was examined per doctor. Conclusion: The CAT is a reliable and valid instrument for measuring patient perceptions of physician performance in the area of interpersonal and communication skills. The field test demonstrated that the CAT can be successfully completed by both physicians and patients across clinical specialties. Reporting the proportion of “excellent” ratings given by patients is more useful than summarizing scores via means, which are highly skewed. Practice implications: Specialty boards, residency programs, medical schools, and practice plans may find the CAT valuable for both collecting information and providing feedback about interpersonal and communication skills.
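
The recommendation to report the proportion of “excellent” ratings rather than mean scores, because means are highly skewed toward the top of the scale, is easy to see in code. The sketch below uses invented ratings for one physician and computes both summaries; it is an illustration of the two statistics, not the authors' analysis.

```python
# Sketch of the two CAT summary statistics: per-physician mean item score
# versus proportion of items rated 5 ("excellent"). Ratings are invented.
import numpy as np

# rows = completed CAT forms for one physician, columns = the 14 physician items
ratings = np.array([
    [5, 5, 4, 5, 5, 5, 4, 5, 5, 5, 5, 4, 5, 5],
    [5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
    [4, 4, 5, 4, 3, 4, 4, 5, 4, 4, 4, 4, 4, 5],
])

mean_score = ratings.mean()                  # compressed near the top of the 1-5 scale
pct_excellent = (ratings == 5).mean() * 100  # spreads physicians out more
print(f"mean item score = {mean_score:.2f}, % excellent = {pct_excellent:.1f}%")
```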

The Development and Partial Assessment of the Medical Communication Competence Scale

Health Communication, 1998

The purpose of this research was to develop and partially assess a self-report scale for measuring doctors' and patients' perceptions of self-communication and other-communication competence during a medical interview. Previous research into the components of communication competence and medical discourse was used to develop the Medical Communication Competence Scale (MCCS). It was hypothesized that the items of the MCCS would form four clusters: information giving, information seeking, information verifying, and socioemotional communication. The cluster analysis results provided support for the hypothesis. Results of several other analyses provided additional support for the validity of the MCCS. Although considerable attention has been given to doctor-patient communication over the last three decades (Ong, de Haes, Hoos, & Lammes, 1995; Thompson, 1994), several researchers have observed that little is actually known about exactly how doctor-patient communication impacts health outcomes (e.g.
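
As a generic illustration of the item-level cluster analysis described above (not the authors' actual procedure or data), the following sketch groups simulated questionnaire items by the similarity of respondents' answers, using hierarchical clustering on a correlation-based distance.

```python
# Illustrative item clustering: items driven by the same latent dimension
# should end up in the same cluster. Data, linkage method, and cluster count
# are assumptions made for the sake of the example.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 8
latent = rng.normal(size=(n_respondents, 2))  # two latent dimensions
items = np.hstack([
    latent[:, [0]] + 0.5 * rng.normal(size=(n_respondents, 4)),  # items 0-3
    latent[:, [1]] + 0.5 * rng.normal(size=(n_respondents, 4)),  # items 4-7
])

# Distance between items = 1 - correlation; average-linkage hierarchical clustering.
corr = np.corrcoef(items, rowvar=False)
condensed = 1 - corr[np.triu_indices(n_items, k=1)]
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print(labels)  # items 0-3 and items 4-7 should fall into two separate clusters
```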

Validation of the 5-item doctor-patient communication competency instrument for medical students (DPCC-MS) using two years of assessment data

BMC Medical Education, 2017

Background: Medical students on clinical rotations have to be assessed on several competencies at the end of each clinical rotation, pointing to the need for short, reliable, and valid assessment instruments for each competency. Doctor-patient communication is a central competency targeted by medical schools; however, there are no published short (i.e., fewer than 10 items), reliable, and valid instruments to assess doctor-patient communication competency. The Faculty of Medicine of Laval University recently developed a 5-item Doctor-Patient Communication Competency instrument for Medical Students (DPCC-MS), based on the Patient-Centered Clinical Method conceptual framework, which provides a global summative end-of-rotation assessment of doctor-patient communication. We conducted a psychometric validation of this instrument and present validity evidence based on the response process, internal structure, and relation to other variables using two years of assessment data. Methods: We conducted the study in two phases. In phase 1, we drew on 4991 student DPCC-MS assessments (two years). We computed descriptive statistics, conducted a confirmatory factor analysis (CFA), and tested the correlation between the DPCC-MS and the Multiple Mini Interviews (MMI) scores. In phase 2, eleven clinical teachers assessed the performance of 35 medical students in an objective structured clinical examination station using the DPCC-MS, a 15-item instrument developed by Côté et al. (published in 2001), and a 2-item global assessment. We compared the DPCC-MS to the longer Côté et al. instrument based on internal consistency, coefficient of variation, convergent validity, and inter-rater reliability. Results: Phase 1: Cronbach's alpha was acceptable (.75 and .83). Inter-item correlations were positive and the discrimination index was above .30 for all items. CFA supported a unidimensional structure. DPCC-MS and MMI scores were correlated. Phase 2: The DPCC-MS and the Côté et al. instrument had similar internal consistency and convergent validity, but the DPCC-MS had better inter-rater reliability (mean ICC = .61). Conclusions: The DPCC-MS provides an internally consistent and valid assessment of medical students' communication with patients.
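
For readers unfamiliar with the statistics cited in the results, the sketch below computes Cronbach's alpha and a corrected item-total correlation (one common form of discrimination index) on invented 5-item ratings. It is a generic illustration of those formulas, not the study's analysis code.

```python
# Generic internal-consistency sketch: Cronbach's alpha and a corrected
# item-total correlation for a 5-item instrument. All data are simulated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=300)  # one latent trait per simulated student
scores = np.clip(np.round(3 + ability[:, None] + rng.normal(scale=0.8, size=(300, 5))), 1, 5)

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
for j in range(scores.shape[1]):
    rest = scores.sum(axis=1) - scores[:, j]   # total score excluding item j
    r = np.corrcoef(scores[:, j], rest)[0, 1]  # corrected item-total correlation
    print(f"item {j + 1}: discrimination index = {r:.2f}")
```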

Existing instruments for assessing physician communication skills: Are they valid in a computerized setting?

Patient Education and Counseling, 2013

Objectives: This study aims to highlight the differences in physicians' scores on two communication assessment tools: the SEGUE and an EMR-specific communication skills checklist. The first tool ignores the presence of the EMR in the exam room, while the second, though not formally validated, focuses on it. Methods: We used the Wilcoxon signed-rank test to compare physicians' scores on each of the tools during 16 simulated medical encounters that were rated by two different raters. Results: There was a significant difference between physicians' scores on the two tools (z = −3.519, p < 0.05 for the first rater, and z = −3.521, p < 0.05 for the second rater), with scores on the EMR-specific communication skills checklist significantly and consistently lower. Conclusion: These results imply that current communication assessment tools that do not incorporate items relevant to communication tasks during EMR use may produce inaccurate results. Practice implications: We therefore suggest that a new instrument, possibly an extension of existing ones, should be developed and empirically validated.
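
A minimal sketch of the paired comparison described above, using SciPy's Wilcoxon signed-rank test on invented scores for 16 encounters rated with both instruments; the numbers are illustrative only.

```python
# Paired comparison of two instruments scoring the same 16 encounters.
# Scores are invented for illustration.
from scipy import stats

segue_scores = [0.82, 0.75, 0.90, 0.68, 0.77, 0.85, 0.73, 0.80,
                0.79, 0.88, 0.70, 0.76, 0.83, 0.72, 0.81, 0.78]
emr_checklist = [0.60, 0.55, 0.72, 0.50, 0.58, 0.66, 0.54, 0.61,
                 0.59, 0.70, 0.48, 0.57, 0.65, 0.52, 0.62, 0.60]

stat, p = stats.wilcoxon(segue_scores, emr_checklist)
print(f"W = {stat:.1f}, p = {p:.4f}")  # consistently lower EMR-checklist scores
```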

Designing and Psychometric Assessing of Physician-Patient Communication Skills Tool

Journal of Research and Health

Background: Assessment of physicians’ communication skills with patients is essential to ensure effective treatment. Achieving such a goal requires a valid, culturally grounded tool developed for the local context. This study aimed to design a physician-patient communication skills assessment tool and evaluate its validity and reliability among the medical students of Guilan University of Medical Sciences, Rasht City, Iran. Methods: In this cross-sectional, descriptive, and analytical study, out of 400 medical students (interns), 300 were selected by a stratified random sampling method. The initial 30-item tool was evaluated by calculating the item impact index in the target group. Its content validity ratio and content validity index were also assessed based on the views of 10 experts and through factor analysis. The reliability of the tool was confirmed by assessing internal consistency with Cronbach's alpha. Results: Out of the 30 initial items, after calculating the item impact score in...
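
The abstract is truncated before the numerical results, but the indices it names have standard definitions in scale development. Assuming the usual conventions (the item impact score, Lawshe's content validity ratio, and the item-level content validity index), which this study may or may not have followed exactly, they are computed as

\[
\text{Item impact} = \text{Frequency}(\%) \times \text{Mean importance rating}, \qquad
\mathrm{CVR} = \frac{n_e - N/2}{N/2}, \qquad
\mathrm{CVI} = \frac{n_{\text{relevant}}}{N},
\]

where \(N\) is the number of experts (10 here), \(n_e\) is the number rating an item "essential," and \(n_{\text{relevant}}\) is the number rating it relevant.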

Assessing clinical communication skills in physicians: are the skills context specific or generalizable

BMC Medical Education, 2009

Background: Communication skills are essential for physicians to practice medicine. Evidence for the validity and domain specificity of communication skills in physicians is equivocal and requires further research. This research was conducted to adduce evidence for content and context specificity of communication skills and to assess the usefulness of a generic instrument for assessing communication skills in International Medical Graduates (IMGs).

Assessing Competence in Communication and Interpersonal Skills: The Kalamazoo II Report

Academic Medicine, 2004

Accreditation of residency programs and certification of physicians requires assessment of competence in communication and interpersonal skills. Residency and continuing medical education program directors seek ways to teach and evaluate these competencies. This report summarizes the methods and tools used by educators, evaluators, and researchers in the field of physician-patient communication as determined by the participants in the "Kalamazoo II" conference held in April 2002. Communication and interpersonal skills form an integrated competence with two distinct parts. Communication skills are the performance of specific tasks and behaviors such as obtaining a medical history, explaining a diagnosis and prognosis, giving therapeutic instructions, and counseling. Interpersonal skills are inherently relational and process oriented; they are the effect communication has on another person such as relieving anxiety or establishing a trusting relationship. This report reviews three methods for assessment of communication and interpersonal skills: (1) checklists of observed behaviors during interactions with real or simulated patients; (2) surveys of patients' experience in clinical interactions; and (3) examinations using oral, essay, or multiple-choice response questions. These methods are incorporated into educational programs to assess learning needs, create learning opportunities, or guide feedback for learning. The same assessment tools, when administered in a standardized way, rated by an evaluator other than the teacher, and using a predetermined passing score, become a summative evaluation. The report summarizes the experience of using these methods in a variety of educational and evaluation programs and presents an extensive bibliography of literature on the topic. Professional conversation between patients and doctors shapes diagnosis, initiates therapy, and establishes a caring relationship. The degree to which these activities are successful depends, in large part, on the communication and interpersonal skills of the physician. This report focuses on how the physician's competence in professional conversation with patients might be measured. Valid, reliable, and practical measures can guide professional formation, determine readiness for independent practice, and deepen understanding of the communication itself.