Evaluating Rater Competency for CNS Clinical Trials : Journal of Clinical Psychopharmacology
Brief Reports
UBC-PharmaStar, Wayne, PA and Department of Psychiatry, Massachusetts General Hospital, Boston, MA.
Presented at the 158th Annual Meeting of the American Psychiatric Association; May 25, 2005; Atlanta, GA.
Received October 5, 2005; accepted after revision March 14, 2006.
Address correspondence and reprint requests to Steven D. Targum, MD, c/o Oxford BioScience, 222 Berkeley Street, Suite 1650, Boston, MA, 02116. E-mail: [email protected].
Abstract
Clinical trials rely on rating accuracy to document a beneficial drug effect. This study examined rater competency with central nervous system (CNS) clinical rating instruments relative to previous clinical experience and participation in specific rater training programs. One thousand two hundred forty-one raters scored videotaped patient interviews using the Hamilton Anxiety Scale (HAM-A), Hamilton Depression Scale (HAM-D), and Young Mania Rating Scale (YMRS) during rater training programs conducted at 9 different investigator meetings. Scoring deviations relative to established acceptable scores were used to evaluate individual rater competency. Rater competency was not achieved by clinical experience alone: previous clinical experience with mood-disordered patients ranged from none at all (18%) to 40 years in 1 rater, yet raters attending their first-ever training session (n = 485) were not differentiated on the basis of clinical experience on the HAM-A (P = 0.054), HAM-D (P = 0.06), or YMRS (P = 0.66). In contrast, participation in repeated rater training sessions significantly improved rater competency on the HAM-A (P = 0.002), HAM-D (P < 0.001), and YMRS (P < 0.001). Furthermore, raters with clinical experience still improved with rater training: using 5 years of clinical experience as a minimum cutoff (n = 795), raters who had participated in 5 or more training sessions significantly outperformed comparably experienced raters attending their first-ever training session on the HAM-A (P = 0.003), HAM-D (P < 0.001), and YMRS (P < 0.001). These findings show that rater training improves rater competency at all levels of clinical experience, and that more stringent criteria for rater eligibility, combined with comprehensive rater training programs, can improve ratings competency.
© 2006 Lippincott Williams & Wilkins, Inc.