Evaluation of high fidelity patient simulator in assessment of performance of anaesthetists
Related papers
Anesthesiology Research and Practice, 2016
Introduction. Medical simulators are used for assessing clinical skills and increasingly for testing hypotheses. We developed and tested an approach for assessing performance in anesthesia residents using screen-based simulation that ensures expert raters remain blinded to subject identity and experimental condition. Methods. Twenty anesthesia residents managed emergencies in an operating room simulator by logging actions through a custom graphical user interface. Two expert raters rated performance based on these entries using custom Global Rating Scale (GRS) and Crisis Management Checklist (CMC) instruments. Interrater reliability was measured by calculating intraclass correlation coefficients (ICC), and internal consistency of the instruments was assessed with Cronbach's alpha. Agreement between GRS and CMC was measured using Spearman rank correlation (SRC). Results. Interrater agreement (GRS: ICC = 0.825, CMC: ICC = 0.878) and internal consistency (GRS: alpha = 0.838, CMC: a...
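The reliability statistics this abstract reports (interrater ICC, internal consistency via Cronbach's alpha) follow standard formulas. As an illustration, here is a minimal sketch of Cronbach's alpha for a subjects-by-items rating matrix; the data are invented for demonstration and are not from the study:

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(ratings):
    """Cronbach's alpha for a subjects-by-items rating matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(ratings[0])                      # number of items
    item_vars = [variance(col) for col in zip(*ratings)]
    totals = [sum(row) for row in ratings]   # per-subject total score
    return k / (k - 1) * (1 - sum(item_vars) / variance(totals))

# Hypothetical 4 subjects x 3 rating-scale items (illustrative values only)
scores = [
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 4],
]
print(round(cronbach_alpha(scores), 3))
```

Values near 1 indicate that the items of an instrument move together across subjects; alphas above 0.8, as reported here, are conventionally read as good internal consistency.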
Anesthesiology, 2017
We sought to determine whether mannequin-based simulation can reliably characterize how board-certified anesthesiologists manage simulated medical emergencies. Our primary focus was to identify gaps in performance and to establish psychometric properties of the assessment methods. A total of 263 consenting board-certified anesthesiologists participating in existing simulation-based maintenance of certification courses at one of eight simulation centers were video recorded performing simulated emergency scenarios. Each participated in two 20-min, standardized, high-fidelity simulated medical crisis scenarios, once each as primary anesthesiologist and first responder. Via a Delphi technique, an independent panel of expert anesthesiologists identified critical performance elements for each scenario. Trained, blinded anesthesiologists rated video recordings using standardized rating tools. Measures included the percentage of critical performance elements observed and holistic (one to ni...
Anaesthesia and Intensive Care, 2017
We sought to evaluate the validity of two non-technical skills evaluation instruments, the Anaesthetists' Non-Technical Skills (ANTS) behavioural marker system and the Ottawa Global Rating Scale (GRS), as applied to anaesthesia training. Validity evidence was described in terms of content, response process, internal structure, relations with other variables, and consequences. Simulated crisis management sessions were conducted, during which two trained raters evaluated the performance of postgraduate first-, second- and third-year (PGY-1, PGY-2 and PGY-3) anaesthesia residents. The study included 70 participants: 24 PGY-1, 24 PGY-2 and 22 PGY-3 residents. Both instruments differentiated the non-technical skills of PGY-1 from PGY-3 residents (P < 0.05). Inter-rater agreement was measured using the intraclass correlation coefficient. For the ANTS instrument, the intraclass correlation coefficients for task management, team-working, situation awareness and decision-making were 0.79, 0.34, 0.81 and 0.70, respectively. For the Ottawa GRS, the intraclass correlation coefficients for overall performance, leadership, problem-solving, situation awareness, resource utilisation and communication skills were 0.86, 0.83, 0.84, 0.87, 0.80 and 0.86, respectively. Cronbach's alpha for internal consistency was 0.93 for the ANTS instrument and 0.96 for the Ottawa GRS. There was a high correlation between the ANTS and Ottawa GRS. Raters reported that the Ottawa GRS was easier to use than the ANTS. We found sufficient evidence of validity for both the ANTS instrument and the Ottawa GRS for evaluating non-technical skills in a simulated anaesthesia setting, but the Ottawa GRS was more practical and had higher reliability.
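The "high correlation between the ANTS and Ottawa GRS" reported above would typically be computed as a Spearman rank correlation, which is robust to the two instruments using different score ranges. A self-contained sketch with invented resident totals (not the study's data):

```python
def ranks(xs):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation applied to the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical ANTS vs. Ottawa GRS totals for six residents (made up)
ants = [41, 55, 48, 60, 38, 52]
grs  = [28, 39, 33, 42, 25, 37]
print(round(spearman(ants, grs), 3))
```

Because only rank order matters, rho = 1 whenever the two instruments order the residents identically, regardless of scale.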
BMC Emergency Medicine, 2016
Background: Critical illness is a time-sensitive process that requires practitioners to process vast quantities of data and make decisions rapidly. We have developed a tool, the Checklist for Early Recognition and Treatment of Acute Illness (CERTAIN), aimed at enhancing care delivery in such situations. To determine the efficacy of CERTAIN and similar cognitive aids, we developed a rubric for evaluating provider performance in simulated medical resuscitation environments. Methods: We recruited 18 clinicians with current valid ACLS certification for evaluation in three simulated medical scenarios designed to mimic typical decompensation events routinely encountered in clinical care. Subjects were stratified as experienced or novice based on prior critical care training. A checklist of critical actions was designed using face validity for each scenario to evaluate task completion and performance. Simulation sessions were video recorded and scored by two independent raters. Construct validity was assessed under the assumption that experienced clinicians should perform better than novice clinicians on each task. Reliability was assessed with percentage agreement, kappa statistics and Bland-Altman plots as appropriate. Results: Eleven experts and seven novices completed the evaluation. Overall agreement on common checklist item completion was 84.8%. The overall model achieved face validity and was consistent with our construct, with experienced clinicians trending towards better performance than novices in accuracy and speed of task completion. Conclusions: A standardized video assessment tool has the potential to provide a valid and reliable method for assessing the performance of clinicians facing simulated medical emergencies.
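The percentage agreement and kappa statistics used above for two-rater checklist scoring can be sketched as follows; kappa discounts the agreement two raters would reach by chance alone. The checklist marks are invented, not from the study:

```python
def percent_agreement(r1, r2):
    """Fraction of checklist items on which two raters give the same mark."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters scoring binary checklist items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance, computed from each
    rater's marginal rate of marking items "completed".
    """
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    p1, p2 = sum(r1) / n, sum(r2) / n
    p_e = p1 * p2 + (1 - p1) * (1 - p2)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical completed(1)/missed(0) marks from two raters (made up)
rater1 = [1, 1, 1, 0, 0, 0]
rater2 = [1, 1, 0, 0, 0, 1]
print(round(percent_agreement(rater1, rater2), 3))
print(round(cohens_kappa(rater1, rater2), 3))
```

Note how raw agreement can look respectable while kappa stays modest: chance agreement is high whenever both raters mark most items the same way.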
Anesthesia & Analgesia, 2009
BACKGROUND: Anesthesiologists and certified registered nurse anesthetists (CRNAs) must acquire the skills to recognize and manage a variety of acute intraoperative emergencies. A simulation-based assessment provides a useful and efficient means to evaluate these skills. In this study, we evaluated and compared the performance of board-certified anesthesiologists and CRNAs managing a set of simulated intraoperative emergencies. METHODS: We enrolled 26 CRNAs and 35 board-certified anesthesiologists in a prospective, randomized, single-blinded study. These 61 specialists each managed 8 of 12 randomly selected, scripted, intraoperative simulation exercises. Participants were expected to recognize and initiate appropriate therapy for intraoperative events during a 5-min period. Two primary raters scored 488 simulation exercises (61 participants × 8 encounters). RESULTS: Anesthesiologists achieved a modestly higher mean overall score than CRNAs (66.6% ± 11.7 [range = 41.7%–86.7%] vs 59.9% ± 10.2 [range = 38.3%–80.4%], P < 0.01). There were no significant differences in performance between groups on individual encounters. The raters were consistent in their identification of key actions. The reliability of the eight-scenario assessment, with two raters for each scenario, was 0.80. CONCLUSION: Although anesthesiologists, on average, achieved a modestly higher overall score, there was marked and similar variability in both groups. This wide range suggests that certification in either discipline may not yield uniform acumen in management of simulated intraoperative emergencies. In both groups, there were practitioners who failed to diagnose and treat simulated emergencies. If this is reflective of clinical practice, it represents a patient safety concern. Simulation-based assessment provides a tool to determine the ability of practitioners to respond appropriately to clinical emergencies.
If all practitioners could effectively manage these critical events, the standard of patient care and ultimately patient safety could be improved.
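The reported reliability of 0.80 for an eight-scenario assessment invites a standard question: how would reliability change with more or fewer scenarios? The classical Spearman-Brown prophecy formula (a textbook result, not part of this paper's analysis) gives a quick estimate, assuming added scenarios are comparable to the existing ones:

```python
def spearman_brown(r, factor):
    """Spearman-Brown prophecy: predicted reliability of a test whose
    length is multiplied by `factor`.

    r_new = factor * r / (1 + (factor - 1) * r)
    """
    return factor * r / (1 + (factor - 1) * r)

# Eight scenarios yield reliability 0.80 (as reported). Doubling to
# sixteen comparable scenarios would be predicted to yield:
print(round(spearman_brown(0.80, 2), 3))
# while halving to four scenarios would be predicted to yield:
print(round(spearman_brown(0.80, 0.5), 3))
</imports>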
BMC Medical Education, 2021
Background Medical simulation trainings lead to an improvement in patient care by increasing technical and non-technical skills, procedural confidence and medical knowledge. For structured simulation-based trainings, objective assessment tools are needed to evaluate the performance during simulation and the learning progress. In surgical education, objective structured assessment of technical skills (OSATS) are widely used and validated. However, in emergency medicine and anesthesia there is a lack of validated assessment tools for technical skills. Thus, the aim of the present study was to develop and validate a novel Global Rating Scale (GRS) for emergency medical simulation trainings. Methods Following the development of the GRS, 12 teams of different experience in emergency medicine (4th year medical students, paramedics, emergency physicians) were involved in a pre-hospital emergency medicine simulation scenario and assessed by four independent raters. Subsequently, interrater ...
Performance of Residents and Anesthesiologists in a Simulation-based Skill Assessment
Anesthesiology, 2007
Background: Anesthesiologists and anesthesia residents are expected to acquire and maintain skills to manage a wide range of acute intraoperative anesthetic events. The purpose of this study was to determine whether an inventory of simulated intraoperative scenarios provided a reliable and valid measure of anesthesia residents' and anesthesiologists' skill.
Inefficacy of simulator-based training on anaesthesiologists' non-technical skills
Acta Anaesthesiologica Scandinavica, 2009
Background: Analysis of critical incidents in anaesthesia practice emphasizes the important role of non-technical skills (NTS) in improving patient safety. Therefore, debriefing with regard to NTS has been added to medical management (MM) simulator training to improve acute crisis competencies of anaesthesiologists. The purpose of this study was to compare the NTS and MM performance of two groups of anaesthesiologists in a simulated anaesthesia crisis after undergoing different types of training. Methods: Forty-two anaesthesiologists were randomly assigned to two different training groups, with each group participating in two simulation training sessions. One group's (NTS1MMG) training included extensive debriefing of NTS (resource management, planning, leadership and communication) and MM, while the other group (MMG) received a simpler debriefing that focused solely on MM. The quality and quantity of NTS and quality of MM performance were rated by reviewing the videotapes of the scenarios. Results: NTS1MMG did not prove superior to MMG with regard to behavioural and MM markers. The quality of NTS performances correlated significantly (Po0.01) with the quantity of NTS demonstration, and also correlated significantly with the MM actions (Po0.01) in both groups. Conclusion: A single session of training including debriefing of NTS and MM did not improve the NTS performances of anaesthesiologists when compared with anaesthesiologists who only received MM training. This might indicate that a more frequent or individual training is needed to improve participants' NTS performance.
Simulation-based Assessment in Anesthesiology
Anesthesiology, 2010
Simulations have taken a central role in the education and assessment of medical students, residents, and practicing physicians. The introduction of simulation-based assessments in anesthesiology, especially those used to establish various competencies, has demanded fairly rigorous studies concerning the psychometric properties of the scores. Most important, major efforts have been directed at identifying, and addressing, potential threats to the validity of simulation-based assessment scores. As a result, organizations that wish to incorporate simulation-based assessments into their evaluation practices can access information regarding effective test development practices, the selection of appropriate metrics, the minimization of measurement errors, and test score validation processes. The purpose of this article is to provide a broad overview of the use of simulation for measuring physician skills and competencies. For simulations used in anesthesiology, studies that describe adva...