Problem Doctors: Is There a System-Level Solution?
Related papers
Physicians in health care management: 2. Managing performance: who, what, how and when?
Canadian Medical Association Journal
Physicians are becoming more involved in performance management as hospitals restructure to increase effectiveness. Although physicians are not hospital employees, they are subject to performance appraisals because the hospitals are accountable to patients and the community for the quality of hospital services. The performance of a health care professional may be appraised by the appropriate departmental manager, by other professionals in a team or program or by peers, based on prior agreement on expectations. Appraisal approaches vary.
Doctor performance and public accountability
The Lancet, 2004
Public concern about the quality of health care has motivated governments, health-care funders, and clinicians to expand efforts to improve professional performance. In this paper, we illustrate such efforts from the perspective of three countries: the UK, the USA, and the Netherlands. The earliest strategies, which included continuing professional education, clinical audits, and peer review, were aimed at the individual doctor and produced only modest effects. Other efforts, such as national implementation of practice guidelines, effective use of information technologies, and intensive involvement by doctors in continuous quality-improvement activities, are aimed more broadly at health-care systems. Much is still unknown about whether these or other strategies, such as centralised supervision or regulation of quality improvement, or the use of financial incentives, are effective. As demands for greater public accountability rise, the continuing performance-improvement efforts of each of our three countries offer opportunities to learn from one another.
Redefining Accountability in Quality and Safety at Academic Medical Centers
Quality management in health care, 2016
With potentially up to 400 000 preventable deaths occurring each year,1 large gaps in the quality of the patient experience,2 and up to a third of every health care dollar spent on therapies that do not benefit patients,3 there is significant room to improve quality and safety. Academic medical centers and health systems are poised to take a leadership role in this effort because of their large integrated footprint and ability to innovate. While the causes of these shortcomings are multifaceted, including the lack of a valid national measurement system, the focus on an apprenticeship rather than a systems-engineering approach to improving quality, and the failure to invest in the science of health care delivery, one specific and immediate opportunity for improvement lies in redefining accountability for quality and safety. Accountability has historically resided at the individual physician level. However, as medicine and health care systems have become increasingly complex, this framework must also expand. We describe the model of shared leadership accountability for quality and safety within Johns Hopkins Medicine (JHM), the academic health system that encompasses the Johns Hopkins Health System and the Johns Hopkins University School of Medicine.
Focusing Measures for Performance-Based Privileging of Physicians on Improvement
The Joint Commission Journal on Quality and Patient Safety, 2008
The Joint Commission has required that accredited hospitals perform ongoing professional practice evaluation,1 or what we at Cincinnati Children's Hospital Medical Center (CCHMC) have termed performance-based privileging (PBP), for their medical staff reappointment and reprivileging process.* PBP is a process by which evidence regarding the acceptable performance of providers in their clinical specialty informs their reappointment. Data must be provider specific, time trended, internally aggregated for group comparison, and, where possible, externally benchmarked.1† At the same time, professional organizations have begun to outline specific measures for assessing providers, creating a situation in which health care organizations must navigate multiple, sometimes conflicting, mandates to monitor provider performance. CCHMC has previously reported on the development of tests of knowledge related to evidence-based guidelines2 and on the development and revision of PBP measures for the radiology department.3-5 A literature review conducted in early 2007 and updated in 2008 identified a recent article describing North Mississippi Medical Center's PBP process,6 as well as a 10-year-old description of provider assessment in military hospitals,7 but little else was found in the literature regarding PBP per se. PBP arose against a backdrop of national efforts to collect and disseminate health care quality information, which include the Healthcare Effectiveness Data and Information Set (HEDIS), sponsored by the National Committee for Quality Assurance (NCQA)8; the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey, sponsored by NCQA and the Agency for Healthcare Research and Quality (AHRQ)9; and several sets of quality and safety indicators also sponsored by AHRQ.
Article-at-a-Glance
Background: The Joint Commission requires ongoing professional practice evaluation, or what Cincinnati Children's Hospital Medical Center (CCHMC) has termed performance-based privileging (PBP), for the medical staff reappointment and reprivileging process.
Building a System: CCHMC is a 475-bed academic medical center affiliated with the University of Cincinnati College of Medicine. Medical staff members are reappointed every two years, with divisions having staggered reappointment dates throughout the two-year cycle. In 2004, CCHMC devised a model in which the 38 divisions retained responsibility for the development of measures; the collection, maintenance, display, and monitoring of individual provider performance data; and the sharing of data with providers, while medical staff services retained responsibility for ensuring compliance with timelines, providing technical assistance related to measure development, and collecting and displaying data. Each clinical division developed a preliminary list of measures. The original PBP process was tested in 2005 and has been revised several times in response to division feedback.
Discussion: Members of all 38 clinical divisions have now been reappointed to the medical staff at least twice using measures that have become more robust, meaningful, and outcome oriented. Many measures support organizational or divisional quality improvement aims, are evidence based, or build on initiatives sponsored by external bodies and specialty societies. Examples of measures are shared via the PBP intranet, personal consultations, and an annual provider performance improvement conference. Yet challenges remain, such as the absence of real-time, provider-specific, risk-adjusted data and the difficulty of attributing provider-specific outcomes when most complex and high-risk care is managed by a team.
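To make the data requirements quoted above concrete, here is a minimal sketch of a provider-specific, time-trended report with internal group comparison and an external benchmark. The column names, example numbers, and the 90% benchmark are hypothetical illustrations, not taken from the CCHMC program.

```python
# Sketch of the reporting the standard implies: provider-specific,
# time-trended, internally aggregated for group comparison, and
# externally benchmarked. All data and the benchmark are invented.
import pandas as pd

records = pd.DataFrame({
    "provider": ["A", "A", "B", "B", "C", "C"],
    "quarter":  ["2007Q1", "2007Q2", "2007Q1", "2007Q2", "2007Q1", "2007Q2"],
    "numerator":   [18, 20, 15, 14, 19, 21],   # e.g., guideline-concordant cases
    "denominator": [20, 22, 20, 20, 20, 22],   # eligible cases (quality events)
})
records["rate"] = records["numerator"] / records["denominator"]

# Time-trended, provider-specific rates: one row per provider, one column per quarter.
trend = records.pivot(index="provider", columns="quarter", values="rate")

# Internal aggregation: the group rate per quarter, for comparison.
totals = records.groupby("quarter")[["numerator", "denominator"]].sum()
group_rate = totals["numerator"] / totals["denominator"]

EXTERNAL_BENCHMARK = 0.90  # hypothetical externally published target
print(trend.round(2))
print("group rate per quarter:", group_rate.round(2).to_dict())
print("providers below benchmark in 2007Q2:",
      list(trend["2007Q2"][trend["2007Q2"] < EXTERNAL_BENCHMARK].index))
```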
CJEM, 2000
Evaluation of physician practice is necessary, both to provide feedback for self-improvement and to guide department heads during yearly evaluations. Objective: To develop and implement a peer-based performance evaluation tool and to measure reliability and physician satisfaction. Methods: Each emergency physician in an urban emergency department evaluated their peers by completing a survey consisting of 21 questions on effectiveness in 4 categories: clinical practice, interaction with coworkers and the public, nonclinical departmental responsibilities, and academic activities. A sample of emergency nurses evaluated each emergency physician on a subset of 5 of the questions. Factor analysis was used to assess the reliability of the questions and categories. Intra-class correlation coefficients were calculated to determine inter-rater reliability. After receiving their peer evaluations, each physician rated the process’s usefulness to the individual and the department. Results: 225 s...
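The abstract reports intra-class correlation coefficients for inter-rater reliability but does not say which ICC form was used. As a sketch, the following computes ICC(2,1) (two-way random effects, absolute agreement, single rater), one common choice for peer-rating data; the toy ratings are invented.

```python
import numpy as np

def icc2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: n_targets x k_raters matrix of ratings (no missing values).
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per rated physician
    col_means = scores.mean(axis=0)   # per rater

    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_error = ((scores - grand) ** 2).sum() - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Toy example: 4 physicians (rows) each rated by 3 peers (columns) on one item.
ratings = np.array([[4, 5, 4],
                    [3, 3, 2],
                    [5, 5, 5],
                    [2, 3, 2]])
print(round(icc2_1(ratings), 2))  # ~0.87: high agreement among these raters
```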
Availability of data for measuring physician quality performance
The American journal of managed care, 2009
Objective: To evaluate measurement of physician quality performance, which is increasingly used by health plans as the basis of quality improvement, network design, and financial incentives, despite concerns about data and methodological challenges. Study design: Evaluation of health plan administrative claims and enrollment data. Methods: Using administrative data from 9 health plans, we analyzed results for 27 well-accepted quality measures and evaluated how many quality events (patients eligible for a measure) were available per primary care physician and how different approaches to attributing patients to physicians affect the number of quality events per physician. Results: Fifty-seven percent of primary care physicians had at least 1 patient who was eligible for at least 1 of the selected quality measures. Most physicians had few quality events for any single measure. As an example, for a measure evaluating appropriate treatment for children with upper respiratory tract infections, physicians on average had 14 quality...
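The attribution question the authors raise can be illustrated with a small sketch: the same visit history yields different quality-event counts per physician depending on the attribution rule. The two rules below (plurality of visits vs. most recent visit) are common examples, not necessarily the ones used in the study, and the visit data are invented.

```python
# Illustrates how the choice of attribution rule changes each physician's
# number of quality events. Data and rules are hypothetical examples.
from collections import Counter

# (patient_id, physician_id, visit_date) for measure-eligible patients.
visits = [
    ("p1", "dr_a", "2008-01-10"), ("p1", "dr_a", "2008-03-02"), ("p1", "dr_b", "2008-06-20"),
    ("p2", "dr_b", "2008-02-14"), ("p2", "dr_b", "2008-05-01"),
    ("p3", "dr_a", "2008-04-09"),
]

def attribute_plurality(visits):
    """Assign each patient to the physician with the most visits."""
    per_patient = {}
    for pid, doc, _ in visits:
        per_patient.setdefault(pid, Counter())[doc] += 1
    return {pid: counts.most_common(1)[0][0] for pid, counts in per_patient.items()}

def attribute_most_recent(visits):
    """Assign each patient to the physician seen at the latest visit."""
    latest = {}
    for pid, doc, date in visits:
        if pid not in latest or date > latest[pid][0]:  # ISO dates sort as strings
            latest[pid] = (date, doc)
    return {pid: doc for pid, (_, doc) in latest.items()}

for rule in (attribute_plurality, attribute_most_recent):
    events = Counter(rule(visits).values())  # quality events per physician
    print(rule.__name__, dict(events))
# attribute_plurality   {'dr_a': 2, 'dr_b': 1}
# attribute_most_recent {'dr_b': 2, 'dr_a': 1}
```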
Failure in Medical Practice: Human Error, System Failure, or Case Severity?
Healthcare
The success rate in medical practice will probably never reach 100%. Success rates depend on many factors. Defining the success rate is both a technical and a philosophical issue. In opposition to the concept of success, medical failure should also be discussed. Its causality is multifactorial and extremely complex. Its actual rate and its real impact are unknown. In medical practice, failure depends not only on the human factor but also on the medical system, and it has at its center a very important variable: the patient. To combat errors, it is important to capture, track, and analyze them at an institutional level. Barriers such as fear of consequences or a specific work climate or culture can impede this process. Although important data regarding medical errors and their consequences can be extracted by analyzing patient outcomes or using quality indicators, patient stories (clinical cases) seem to have the greatest impact on our subconscious as medical doctors and nurses and ...
A Multifaceted Organizational Physician Assessment Program
Mayo Clinic Proceedings: Innovations, Quality & Outcomes, 2017
Objective: To provide validity evidence for a multifaceted organizational program for assessing physician performance and evaluate the practical and psychometric consequences of 2 approaches to scoring (mean vs top box scores). Participants and Methods: Participants included physicians with a predominantly outpatient practice in general internal medicine (n=95), neurology (n=99), and psychiatry (n=39) at Mayo Clinic from January 1, 2013, through December 31, 2014. Study measures included hire year, patient complaint and compliment rates, note-signing timeliness, cost per episode of care, and Likert-scaled surveys from patients, learners, and colleagues (scored using mean ratings and top box percentages). Results: Physicians had a mean ± SD of 0.32±1.78 complaints and 0.12±0.76 compliments per 100 outpatient visits. Most notes were signed on time (mean ± SD, 96%±6.6%). Mean ± SD cost was 0.56±0.59 SDs above the institutional average. Mean ± SD scores were 3.77±0.25 on 4-point and 4.06±0.31 to 4.94±0.08 on 5-point Likert-scaled surveys. Mean ± SD top box scores ranged from 18.6%±16.8% to 90.7%±10.5%. Learner survey scores were positively associated with patient survey scores (r=0.26; P=.003) and negatively associated with years in practice (r=−0.20; P=.02). Conclusion: This study provides validity evidence for 7 assessments commonly used by medical centers to measure physician performance and reports that top box scores amplify differences among high-performing physicians. These findings inform the most appropriate uses of physician performance data and provide practical guidance to organizations seeking to implement similar assessment programs or use existing performance data in more meaningful ways.
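The two scoring approaches compared here are easy to state precisely: a mean score averages the Likert ratings, while a top box score is the percentage of responses at the scale maximum. The sketch below, with invented ratings, shows why top box scores can separate physicians whose mean scores are identical, which is the amplification effect the authors report.

```python
# Mean vs. top-box scoring of Likert survey responses. Ratings are invented.
def mean_score(ratings):
    """Average of the raw Likert ratings."""
    return sum(ratings) / len(ratings)

def top_box(ratings, scale_max=5):
    """Percentage of responses at the top of the scale."""
    return 100.0 * sum(r == scale_max for r in ratings) / len(ratings)

# Two hypothetical physicians with identical means but different top-box scores.
phys_1 = [5, 5, 5, 3, 3, 3]   # polarized ratings
phys_2 = [4, 4, 4, 4, 4, 4]   # uniformly "very good" ratings

for name, ratings in [("phys_1", phys_1), ("phys_2", phys_2)]:
    print(name, "mean:", round(mean_score(ratings), 2),
          "top box: %.0f%%" % top_box(ratings))
# phys_1 mean: 4.0  top box: 50%
# phys_2 mean: 4.0  top box: 0%
```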