An Assessment System for the Pediatrics Milestone Project

Using cloud-based mobile technology for assessment of competencies among medical students

PeerJ, 2013; DOI 10.7717/peerj.164

Valid, direct observation of medical student competency in clinical settings remains challenging and limits the opportunity to promote performance-based student advancement. The rationale for direct observation is to ascertain that students have acquired the core clinical competencies needed to care for patients. Too often, student observation results in highly variable evaluations that are skewed by factors other than the student's actual performance. Barriers to effective direct observation and assessment include the lack of effective tools and strategies for assuring that transparent standards are used for judging clinical competency in authentic clinical settings. We developed a web-based content management system, Just in Time Medicine (JIT), to address many of these issues. The goals of JIT were fourfold: first, to create a self-service interface allowing faculty with average computing skills to author customizable content and criterion-based assessment tools displayable on internet-enabled devices, including mobile devices; second, to create an assessment and feedback tool capable of capturing learner progress related to hundreds of clinical skills; third, to enable easy access to and utilization of these tools by faculty for learner assessment in authentic clinical settings as a means of just-in-time faculty development; and fourth, to create a permanent record of the trainees' observed skills useful for both learner and program evaluation. From July 2010 through October 2012, we implemented a JIT-enabled clinical evaluation exercise (CEX) among 367 third-year internal medicine students. Observers (attending physicians and residents) performed CEX assessments using JIT to guide and document their observations, record the time they spent observing and giving feedback to the students, and rate their overall satisfaction. Inter-rater reliability and validity were assessed with 17 observers who viewed six videotaped student-patient encounters, and by measuring the correlation between student CEX scores and their scores on subsequent standardized-patient OSCE exams. A total of 3567 CEXs were completed by 516 observers. The average number of evaluations per student was 9.7 (±1.8 SD) and the average number of CEXs completed per observer was 6.9 (±15.8 SD). Observers spent less than 10 minutes on 43-50% of the CEX observations and on 68.6% of the feedback sessions. A majority of observers (92%) reported satisfaction with the CEX. Inter-rater reliability was measured at 0.69 among all observers viewing the videotapes, and these ratings adequately discriminated competent from non-competent performance. The measured CEX grades correlated with subsequent student performance on an end-of-year OSCE. We conclude that the use of JIT is feasible in capturing discrete clinical performance data with a high degree of user satisfaction. Our embedded checklists had adequate inter-rater reliability and concurrent and predictive validity.
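
The validity analysis described above (aggregating each student's CEX grades and correlating them with subsequent OSCE performance) can be illustrated with a small script. The sketch below is not taken from the JIT system itself; the record fields, the use of a plain Pearson correlation, and the aggregation by simple mean are assumptions made for illustration only.

```python
# Minimal sketch (not the JIT implementation): aggregate CEX scores per student
# and correlate them with end-of-year OSCE scores. Field names and the choice of
# Pearson correlation are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class CexRecord:
    student_id: str
    observer_id: str
    skill: str             # e.g. "focused abdominal exam"
    score: float           # observer's rating on the local scale
    observation_minutes: float
    feedback_minutes: float

def mean_cex_per_student(records: list[CexRecord]) -> dict[str, float]:
    """Average each student's CEX scores across all observers."""
    by_student: dict[str, list[float]] = {}
    for r in records:
        by_student.setdefault(r.student_id, []).append(r.score)
    return {sid: mean(scores) for sid, scores in by_student.items()}

def pearson(xs: list[float], ys: list[float]) -> float:
    """Plain Pearson correlation (stdlib only, no SciPy dependency)."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

def cex_osce_correlation(records: list[CexRecord],
                         osce_scores: dict[str, float]) -> float:
    """Correlate mean CEX score with OSCE score for students present in both."""
    cex_means = mean_cex_per_student(records)
    shared = sorted(set(cex_means) & set(osce_scores))
    return pearson([cex_means[s] for s in shared],
                   [osce_scores[s] for s in shared])
```

The abstract reports the CEX-OSCE correlation and an inter-rater reliability of 0.69 without naming the exact statistics used, so the Pearson choice here is only one reasonable reading, not the authors' stated method.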

Evaluation of a best practice approach to assess undergraduate clinical skills in Paediatrics

BMC Medical Education, 2020

Background: The Objective Structured Clinical Examination (OSCE) has been used in pediatrics since the 1980s. Its main drawback is that large numbers of children are needed to make up for the fatigue factor inherent in prolonged testing periods. Also, examinations mainly include children between 7 and 16 years old. We describe the summative examination used in our institution to evaluate medical students' clinical competencies in pediatrics with realistic available resources and for a wider age range. We also evaluated different factors known to influence medical students' performances. Methods: This retrospective, descriptive, observational study evaluated the 740 distinct pediatric examination results of fourth-year medical students over 5 years. Their summative examination combined two different assessment methods: a structured real-patient examination (SRPE) using standardized assessment grids for the most frequent pediatric diagnoses, and a computer-based written examination (CBWE)...

Assessment in Medical Education

New England Journal of Medicine, 2007

As an attending physician working with a student for a week, you receive a form that asks you to evaluate the student's fund of knowledge, procedural skills, professionalism, interest in learning, and "systems-based practice." You wonder which of these attributes you can reliably assess and how the data you provide will be used to further the student's education. You also wonder whether other tests of knowledge and competence that students must undergo before they enter practice are equally problematic. In one way or another, most practicing physicians are involved in assessing the competence of trainees, peers, and other health professionals. As the example above suggests, however, they may not be as comfortable using educational assessment tools as they are using more clinically focused diagnostic tests. This article provides a conceptual framework for and a brief update on commonly used and emerging methods of assessment, discusses the strengths and limitations of each method, and identifies several challenges in the assessment of physicians' professional competence and performance.

The effect of a grading incentive and a problem-specific mobile electronic clinical evaluation tool (eCEX) in the direct observation of medical student’s clinical competencies: A pilot study

The Internet Journal of Medical Education, 2010

Background: Direct observation of medical students' clinical skills by faculty is uncommon. However, such observation enhances the validity of medical students' clinical performance evaluation. Objectives: We developed and tested the effect of a grading incentive along with a mobile, competency-specific assessment tool (eCEX) involving eleven faculty members and twelve students during their internal medicine clerkship. Methods: Six students were required to use the eCEX for ten discrete clinical observations (e.g., performing an abdominal exam in a patient with abdominal pain), while the other six did not use the eCEX and were required to complete a single observed comprehensive history and physical examination. Results: The average number of direct observations per student, per 8-week clerkship, increased from 14.7 ± 27 to 26.2 ± 16 (p = 0.032). Observing a focused physical examination accounted for the majority of the directly observed encounters. Students and faculty both agreed that the eCEX helped them understand which problem-specific competencies were targeted for the assessment. Both faculty and students rated the program as "easy to use". Faculty generally agreed that the eCEX improved their ability to provide feedback to the students. Conclusions: The eCEX holds promise for facilitating direct observation and evaluation of medical students' clinical skills.
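
The abstract reports a between-group increase in observation counts (p = 0.032) but does not say which statistical test produced it. As a hedged illustration only, the sketch below compares per-student observation counts for an eCEX group and a control group with Welch's t-test; the counts shown are invented and are not the pilot's data.

```python
# Illustrative only: compare direct-observation counts between an eCEX group and
# a control group with Welch's t-test. The numbers below are fabricated demo data;
# the pilot's actual per-student counts are not published in the abstract.
from statistics import mean, stdev
from scipy.stats import ttest_ind

ecex_group    = [31, 18, 25, 40, 22, 21]   # hypothetical counts, 6 students
control_group = [12, 9, 20, 15, 14, 18]    # hypothetical counts, 6 students

print(f"eCEX:    {mean(ecex_group):.1f} ± {stdev(ecex_group):.1f}")
print(f"control: {mean(control_group):.1f} ± {stdev(control_group):.1f}")

# equal_var=False selects Welch's test, which does not assume equal variances
t_stat, p_value = ttest_ind(ecex_group, control_group, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")
```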

Assessment in Medical Education: Replies

Competence and Performance: Elsewhere, Hundert and I have defined competence in medicine as "the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individuals and communities being served."1 In the United States, the assessment of medical residents, and increasingly of medical students, is largely based on a model that was developed by the Accreditation Council for Graduate Medical Education (ACGME). This model uses six interrelated domains of competence: medical knowledge, patient care, professionalism, communication and interpersonal skills, practice-based learning and improvement, and systems-based practice.2 Competence is not an achievement but rather a habit of lifelong learning3; assessment plays an integral role in helping physicians identify and respond to their own learning needs. Ideally, the assessment of competence (what the student or physician is able to do) should provide insight into actual performance (what he or she does habitually when not observed), as well as the capacity to adapt to change, find and generate new knowledge, and improve overall performance.4 Competence is contextual, reflecting the relationship between a person's abilities and the tasks he or she is required to perform in a particular situation in the real world.5 Common contextual factors include the practice setting, the local prevalence of disease, the nature of the patient's presenting symptoms, the patient's educational level, and other demographic characteristics of the patient and of the physician. Many aspects of competence, such as history taking and clinical reasoning, are also content-specific and not necessarily generalizable to all situations. A student's...

Using mobile technology in assessment of entrustable professional activities in undergraduate medical education

Perspectives on Medical Education

Background: The adoption of competency-based medical education requires objective assessments of a learner's capability to carry out clinical tasks within workplace-based learning settings. This study involved an evaluation of the use of mobile technology to record entrustable professional activity assessments in an undergraduate clerkship curriculum. Approach: A paper-based form was adapted to a mobile platform called eClinic Card. Students documented workplace-based assessments throughout core clerkship and preceptors confirmed accuracy via mobile phones. Assessment scores for the 2017–2018 academic year were collated and analyzed for all core rotations, and preceptors and students were surveyed regarding the mobile assessment experience. Evaluation: The mobile system enabled 80 students and 624 preceptors to document 6850 assessment submissions across 47 clinical sites over a 48-week core clerkship curriculum. Students' scores demonstrated progressive improvement across all entrusta...
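
The evaluation step described above (collating 6850 submissions and checking that scores improve over the clerkship) can be sketched as a simple grouping-and-averaging pass. The field names, the 1-5 entrustment scale, and the grouping into rotation blocks below are assumptions for illustration; they are not taken from the eClinic Card system.

```python
# Sketch (not the eClinic Card implementation): group EPA assessment submissions
# by rotation block and report the mean entrustment score per block, to see whether
# scores rise over the clerkship year. Field names and the 1-5 scale are assumed.
from collections import defaultdict
from statistics import mean
from typing import NamedTuple

class EpaSubmission(NamedTuple):
    student_id: str
    preceptor_id: str
    epa: str              # e.g. "EPA 2: prioritize a differential diagnosis"
    rotation_block: int   # 1..6 across the 48-week core clerkship (assumed split)
    entrustment: int      # 1 (observe only) .. 5 (supervise others), assumed scale

def mean_score_by_block(submissions: list[EpaSubmission]) -> dict[int, float]:
    by_block: dict[int, list[int]] = defaultdict(list)
    for s in submissions:
        by_block[s.rotation_block].append(s.entrustment)
    return {block: round(mean(scores), 2)
            for block, scores in sorted(by_block.items())}

# Example usage with two fabricated records:
demo = [
    EpaSubmission("stu01", "prec07", "EPA 1", 1, 2),
    EpaSubmission("stu01", "prec12", "EPA 1", 4, 4),
]
print(mean_score_by_block(demo))   # {1: 2.0, 4: 4.0}
```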

Development of an Office-based Curriculum of Common Pediatric Primary Care Skills for Residents

Academic Medicine, 2002

Objective: Dutch higher education is freely accessible for those who have proper high school qualifications. However, admission to medical schools has been limited by the government to regulate manpower planning. Selection has been carried out by a national lottery approach since 1972, but in 2000, the Dutch government asked medical schools to experiment with qualitative selection procedures at their own institutions. The University Medical Center Utrecht School of Medical Sciences has used a technique derived from assessment-center approaches to assist in the medical school admission process. Dutch assessment centers use observation procedures in which candidates act in simulated activities that are characteristic of the vacant position. Description: In April 2001, 61 candidates for 23 places were invited for selection days. After a selection interview, candidates were asked to perform activities that are characteristic of course requirements: (1) studying a three- to five-page text about diagnostic and therapeutic procedures of disease A for one hour; (2) explaining the studied procedures to another candidate and receiving information about disease B, studied by this other candidate, for one hour; (3) answering the questions of a standardized patient about disease A in 15 minutes; and (4) answering the questions of a standardized patient about disease B in 15 minutes. A three-person selection committee behind a one-way screen observed the two 15-minute interviews with the standardized patients. The selection committee independently scored the content quality of the information that was given to the standardized patients as well as the quality of attitude towards and communication with both patients. The average scores for these three criteria were weighted equally to arrive at a total score. In addition, each candidate received a score resulting from the interview with the other candidate who explained disease B. This score was combined with the other three into a final score. Discussion: The Utrecht medical curriculum may be viewed as a hybrid PBL program. Integration of basic and clinical sciences, patient contacts from the start, training of skills in communication with standardized patients, physical examination, extensive small-group teaching, structured independent studying, and collaboration to prepare for short presentations to peers were all characteristic of the medical school curriculum. Thus, the assessment-center technique reflected the characteristics of the medical school curriculum. First analyses showed satisfactory reliabilities of the three scores (0.79 to 0.92); the average agreement between raters was 0.60. Correlation analysis between scores supported the internal convergent and discriminant validity of the assessment activities. The predictive validity remains to be studied.
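
The scoring rule described above (three equally weighted standardized-patient criteria combined into a total, which is then combined with the peer-explanation score into a final score) can be written out as simple arithmetic. The abstract does not state how the fourth score is weighted against the first three, so the sketch below assumes all four components count equally; that assumption is mine, not the authors'.

```python
# Illustration of the equally weighted composite described in the abstract.
# How the peer-explanation score is weighted against the other three is not
# specified, so equal weighting of all four components is assumed here.
def total_score(content: float, attitude: float, communication: float) -> float:
    """Equally weighted total over the three standardized-patient criteria."""
    return (content + attitude + communication) / 3

def final_score(content: float, attitude: float, communication: float,
                peer_explanation: float) -> float:
    """Combine the standardized-patient criteria with the peer-explanation score.
    Equal weighting of all four components is an assumption for illustration."""
    return (content + attitude + communication + peer_explanation) / 4

# Hypothetical candidate rated on a 1-10 scale (the actual scale is not given):
print(total_score(7.5, 8.0, 6.5))        # 7.33...
print(final_score(7.5, 8.0, 6.5, 7.0))   # 7.25
```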

e-Assessment in Medical Education

2016

In education, assessment is a continuous process which may perform numerous functions, such as diagnosing, supporting, motivating, controlling or summarising. As a continuous process, assessment takes time, and therefore, where possible, it is replaced by e-assessment. In medical sciences, theoretical knowledge is often assessed using electronic tests based on multiple choice questions. However, the method of preparing for this type of exam and the mere act of filling in the test form may be problematic for students. Given the above, the authors have decided to monitor the students' approach to the online tests in order to eliminate or correct any arising problems. This document presents the results of surveys taken in the academic years 2014/15 and 2015/16. In addition, the article shows the role of formative and summative evaluation in e-assessment as well as the need to apply standards for the development of assessment tools and systems. Even with many advantages, the e-assessment ...

An internet-based tool for evaluating third-year medical student performance

The American Journal of Surgery, 2003

Background: Shortened non-primary care medical school clerkships have increased time pressures for accurate assessment of student knowledge, skills, and attitudes. Paper-based student evaluations suffer from low response rates, inefficient data acquisition and analysis, and difficulty obtaining input from multiple evaluators. This project describes the development of a Web-based model for evaluating third-year medical student performance, improving evaluation response rates, and including multiple evaluators' input. Methods: A secure Web-based system was designed to maintain evaluation data (11-item competency-based evaluations, oral examinations, National Board of Medical Examiners surgery test, and objective structured clinical examination) for the third-year surgery clerkship. Historical response rate, completion time, and administrative effort data were compared with data obtained using the Web-based model. Results: Faculty response rates improved from 71.3% to 89.9%, with response times decreased from 28.0 ± 3.0 to 9.0 ± 0.7 days using the Web-based model. Administrative time requirements decreased from 5 days to 2 hours per rotation, and manual data entry, analysis, and reporting were eliminated through e-mail evaluator notification, direct data entry, and real-time analysis. Evaluator satisfaction was subjectively higher using the Web-based model.
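
The headline metrics in this abstract (faculty response rate and days from evaluation request to completion) are straightforward to compute once evaluations are stored as records with request and submission timestamps. The sketch below is a generic illustration under assumed field names, not the system described in the paper.

```python
# Generic sketch (assumed field names, not the paper's system): compute the faculty
# response rate and the mean turnaround time from evaluation request to submission.
from dataclasses import dataclass
from datetime import date
from statistics import mean
from typing import Optional

@dataclass
class EvaluationRequest:
    evaluator_id: str
    student_id: str
    requested_on: date
    submitted_on: Optional[date] = None   # None means the evaluator never responded

def response_rate(requests: list[EvaluationRequest]) -> float:
    """Fraction of evaluation requests that were actually completed."""
    done = sum(1 for r in requests if r.submitted_on is not None)
    return done / len(requests)

def mean_turnaround_days(requests: list[EvaluationRequest]) -> float:
    """Average days from request to submission, over completed evaluations only."""
    days = [(r.submitted_on - r.requested_on).days
            for r in requests if r.submitted_on is not None]
    return mean(days)

# Example with two fabricated requests:
demo = [
    EvaluationRequest("fac01", "stu01", date(2003, 3, 1), date(2003, 3, 8)),
    EvaluationRequest("fac02", "stu01", date(2003, 3, 1), None),
]
print(response_rate(demo))         # 0.5
print(mean_turnaround_days(demo))  # 7.0
```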