Models to predict relapse in psychosis: A systematic review

Implementing Precision Psychiatry: A Systematic Review of Individualized Prediction Models for Clinical Practice

Schizophrenia Bulletin, 2020

Background: The impact of precision psychiatry on clinical practice has not been systematically appraised. This study aims to provide a comprehensive review of validated prediction models that estimate the individual risk of being affected by a condition (diagnostic), developing outcomes (prognostic), or responding to treatments (predictive) in mental disorders. Methods: PRISMA/RIGHT/CHARMS-compliant systematic review of the Web of Science, Cochrane Central Register of Reviews, and Ovid/PsycINFO databases from inception until July 21, 2019 (PROSPERO CRD42019155713) to identify diagnostic/prognostic/predictive prediction studies that reported individualized estimates in psychiatry and that were internally or externally validated or implemented. Random-effects meta-regression analyses addressed the impact of several factors on the accuracy of prediction models. Findings: The literature search identified 584 prediction modeling studies, of which 89 were included. 10.4% of the total studies in...

PROBAST: A Tool to Assess the Risk of Bias and Applicability of Prediction Model Studies

Annals of Internal Medicine, 2019

Clinical prediction models combine multiple predictors to estimate the risk of whether a particular condition is present (diagnostic) or whether a certain event will occur in the future (prognostic). PROBAST, a tool for assessing the risk of bias (ROB) and applicability of diagnostic and prognostic prediction model studies, considered existing ROB tools as well as reporting guidelines and was developed by a steering group, informed by a Delphi procedure involving 38 experts and refinement through piloting. PROBAST is grouped into four domains: participants, predictors, outcomes, and analysis. These domains contain a total of twenty signalling questions to facilitate structured judgement of ROB. We define ROB to occur when shortcomings in study design, conduct or analysis lead to systematically distorted estimates of model predictive performance. PROBAST enables a focussed and transparent approach to assessing the ROB and applicability of studies developing, validating or updating prediction models for individualised predictions. Although PROBAST was designed for use in systematic reviews, it can be used more generally in critical appraisal of prediction model studies. Potential users include organisations supporting decision making, researchers and clinicians with an interest in evidence-based medicine or involved in guideline development as well as journal editors and manuscript reviewers.
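PROBAST's four-domain structure lends itself to a simple representation. Below is a minimal Python sketch, not an official implementation: the per-domain signalling-question counts are those of the published tool (they sum to the twenty questions mentioned above), the aggregation follows the usual "worst domain wins" convention, and the example judgements are purely illustrative.

```python
# Sketch of PROBAST's four domains; the question counts per domain follow
# the published tool (2 + 3 + 6 + 9 = 20 signalling questions in total).
PROBAST_DOMAINS = {
    "participants": 2,
    "predictors": 3,
    "outcome": 6,
    "analysis": 9,
}

def overall_rob(domain_judgments):
    """Aggregate per-domain risk-of-bias judgements ('low'/'high'/'unclear').

    Convention: any high-risk domain makes the overall judgement high;
    overall low risk requires every domain to be judged low.
    """
    values = set(domain_judgments.values())
    if "high" in values:
        return "high"
    if values == {"low"}:
        return "low"
    return "unclear"

# Illustrative appraisal of a hypothetical prediction model study.
example = {"participants": "low", "predictors": "low",
           "outcome": "unclear", "analysis": "high"}
print(sum(PROBAST_DOMAINS.values()))  # total signalling questions: 20
print(overall_rob(example))           # one high-risk domain -> "high"
```

The "worst domain wins" rule mirrors how PROBAST discourages averaging away a serious flaw: a single high-risk domain (most often analysis) is enough to classify the whole study as high risk of bias.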

Empirical evidence of the impact of study characteristics on the performance of prediction models: a meta-epidemiological study

BMJ Open

Objectives: To empirically assess the relation between study characteristics and prognostic model performance in external validation studies of multivariable prognostic models. Design: Meta-epidemiological study. Data sources and study selection: On 16 October 2018, we searched electronic databases for systematic reviews of prognostic models. Reviews from non-overlapping clinical fields were selected if they reported common performance measures (either the concordance (c)-statistic or the ratio of observed over expected number of events (OE ratio)) from 10 or more validations of the same prognostic model. Data extraction and analyses: Study design features, population characteristics, methods of predictor and outcome assessment, and the aforementioned performance measures were extracted from the included external validation studies. Random-effects meta-regression was used to quantify the association between the study characteristics and model performance. Results: We included 10 systematic review...
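The two performance measures pooled in this meta-epidemiological study can be computed directly. A plain-Python sketch for a binary outcome (the toy data are illustrative, not from any of the reviewed studies):

```python
def c_statistic(y_true, y_prob):
    """Concordance statistic: probability that a randomly chosen event case
    receives a higher predicted risk than a randomly chosen non-event case
    (ties count as half)."""
    events = [p for y, p in zip(y_true, y_prob) if y == 1]
    nonevents = [p for y, p in zip(y_true, y_prob) if y == 0]
    pairs = concordant = ties = 0
    for e in events:
        for n in nonevents:
            pairs += 1
            if e > n:
                concordant += 1
            elif e == n:
                ties += 1
    return (concordant + 0.5 * ties) / pairs

def oe_ratio(y_true, y_prob):
    """Observed over expected number of events; 1.0 = good mean calibration."""
    return sum(y_true) / sum(y_prob)

y = [1, 0, 1, 0, 0, 1]
p = [0.8, 0.3, 0.35, 0.4, 0.2, 0.9]
print(round(c_statistic(y, p), 3))  # 8 of 9 event/non-event pairs concordant
print(round(oe_ratio(y, p), 3))     # 3 observed events vs 2.95 expected
```

The c-statistic captures discrimination only; the OE ratio captures calibration-in-the-large only. This is why the review pools both: a model can rank patients well (high c-statistic) while systematically over- or under-predicting absolute risk (OE ratio far from 1).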

Individualized Prediction of Transition to Psychosis in 1,676 Individuals at Clinical High Risk: Development and Validation of a Multivariable Prediction Model Based on Individual Patient Data Meta-Analysis

Frontiers in Psychiatry, 2019

Background: The Clinical High Risk state for Psychosis (CHR-P) has become the cornerstone of modern preventive psychiatry. The next stage of clinical advancements rests on the ability to formulate a more accurate prognostic estimate at the individual subject level. Individual Participant Data Meta-Analyses (IPD-MA) are robust evidence synthesis methods that can also offer powerful approaches to the development and validation of personalized prognostic models. The aim of the study was to develop and validate an individualized, clinically based prognostic model for forecasting transition to psychosis from a CHR-P stage. Methods: A literature search was performed between January 30, 2016, and February 6, 2016, consulting PubMed, PsycINFO, Picarta, Embase, and ISI Web of Science, using search terms ("ultra high risk" OR "clinical high risk" OR "at risk mental state") AND [(conver* OR transition* OR onset OR emerg* OR develop*) AND psychosis] for both longitudinal and intervention CHR-P studies. Clinical knowledge was used to a priori select predictors: age, gender, CHR-P subgroup, the severity of attenuated positive psychotic symptoms, the severity of attenuated negative psychotic symptoms, and level of functioning at baseline. The model thus developed was validated with an extended form of internal validation. Results: Fifteen of the 43 studies identified agreed to share IPD, for a total sample size of 1,676. There was a high level of heterogeneity between the CHR-P studies with regard to inclusion criteria, type of assessment instruments, transition criteria, and preventive treatment offered. The internally validated prognostic performance of the model was higher than chance but only moderate (Harrell's C-statistic 0.655, 95% confidence interval 0.627-0.682). Conclusion: This is the first IPD-MA in this field, conducted in the largest CHR-P sample collected to date. An individualized prognostic model based on clinical predictors available in clinical routine was developed and internally validated, reaching only moderate prognostic performance. Although personalized risk prediction is of great value in clinical practice, future developments are essential, including the refinement of the prognostic model and its external validation. However, because of the current high diagnostic, prognostic, and therapeutic heterogeneity of CHR-P studies, IPD-MAs in this population may have a limited intrinsic power to deliver robust prognostic models.

Improving Prognostic Accuracy in Subjects at Clinical High Risk for Psychosis: Systematic Review of Predictive Models and Meta-analytical Sequential Testing Simulation

Schizophrenia Bulletin, 2016

Discriminating subjects at clinical high risk (CHR) for psychosis who will develop psychosis from those who will not is a prerequisite for preventive treatments. However, it is not yet possible to make any personalized prediction of psychosis onset relying only on the initial clinical baseline assessment. Here, we first present a systematic review of prognostic accuracy parameters of predictive modeling studies using clinical, biological, neurocognitive, environmental, and combinations of predictors. In a second step, we performed statistical simulations to test different probabilistic sequential 3-stage testing strategies aimed at improving prognostic accuracy on top of the clinical baseline assessment. The systematic review revealed that the best environmental predictive model yielded a modest positive predictive value (PPV) (63%). Conversely, the best predictive models in other domains (clinical, biological, neurocognitive, and combined models) yielded PPVs of above 82%. Using on...
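The sequential testing logic simulated in this paper rests on Bayesian updating: each stage's post-test probability becomes the next stage's pre-test probability. A hedged sketch of that mechanism (the baseline risk and the test sensitivities/specificities below are made-up illustration values, not the paper's figures):

```python
def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Update disease probability after one test result via Bayes' rule,
    working on the odds scale with likelihood ratios."""
    if positive:
        lr = sensitivity / (1 - specificity)   # positive likelihood ratio
    else:
        lr = (1 - sensitivity) / specificity   # negative likelihood ratio
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# Illustrative sequential strategy: a hypothetical ~20% baseline transition
# risk after clinical CHR assessment, followed by two hypothetical add-on
# tests (e.g. neurocognitive, then biological) that both return positive.
p = 0.20
for sens, spec in [(0.80, 0.70), (0.75, 0.85)]:
    p = post_test_probability(p, sens, spec)
print(round(p, 3))
```

Under these assumed numbers the risk estimate climbs from 20% to roughly 77% after two positive stages, which illustrates the paper's core point: stacking tests sequentially can raise the PPV well beyond what the clinical baseline assessment achieves alone, because each stage is applied to an already enriched population.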

External validation of multivariable prediction models: a systematic review of methodological conduct and reporting

BMC Medical Research Methodology, 2014

Background: Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods: We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models, and predictive performance measures. Results: 11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration, whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions: The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported, with key details frequently not presented.
The validation studies were characterised by poor design, inappropriate handling and acknowledgement of missing data, and frequent omission of calibration, one of the key performance measures of prediction models, from the publication. It may therefore not be surprising that an overwhelming majority of developed prediction models are not used in practice, when there is a dearth of well-conducted and clearly reported (external validation) studies describing their performance on independent participant data.
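Calibration, the measure the review found most often omitted, compares predicted risks with observed event rates across risk groups. A minimal decile-style check can be sketched as follows (toy data and three groups for brevity; real validations typically use ten groups or a smoothed calibration curve):

```python
def calibration_table(y_true, y_prob, n_bins=3):
    """Mean predicted risk vs observed event rate per risk-sorted group.

    For a well-calibrated model the two columns track each other;
    systematic gaps indicate over- or under-prediction.
    """
    pairs = sorted(zip(y_prob, y_true))  # sort patients by predicted risk
    size = len(pairs) // n_bins
    table = []
    for i in range(n_bins):
        # last bin absorbs any remainder so every patient is counted once
        chunk = pairs[i * size:] if i == n_bins - 1 else pairs[i * size:(i + 1) * size]
        mean_pred = sum(p for p, _ in chunk) / len(chunk)
        obs_rate = sum(y for _, y in chunk) / len(chunk)
        table.append((round(mean_pred, 2), round(obs_rate, 2)))
    return table

# Hypothetical validation data: 9 patients, binary outcome.
y_true = [0, 0, 0, 1, 0, 1, 1, 1, 1]
y_prob = [0.1, 0.2, 0.15, 0.5, 0.45, 0.55, 0.8, 0.9, 0.85]
print(calibration_table(y_true, y_prob))
```

A table like this is cheap to produce from exactly the data a validation study already has, which underlines the review's point: omitting calibration is a reporting failure, not a data limitation.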

Development and validation of multivariable prediction models of remission, recovery, and quality of life outcomes in people with first episode psychosis: a machine learning approach

The Lancet Digital Health, 2019

Background Outcomes for people with first-episode psychosis are highly heterogeneous. Few reliable validated methods are available to predict the outcome for individual patients in the first clinical contact. In this study, we aimed to build multivariable prediction models of 1-year remission and recovery outcomes using baseline clinical variables in people with first-episode psychosis. Methods In this machine learning approach, we applied supervised machine learning, using regularised regression and nested leave-one-site-out cross-validation, to baseline clinical data from the English Evaluating the Development and Impact of Early Intervention Services (EDEN) study (n=1027), to develop and internally validate prediction models at 1-year follow-up. We assessed four binary outcomes that were recorded at 1 year: symptom remission, social recovery, vocational recovery, and quality of life (QoL). We externally validated the prediction models by selecting, from the top predictor variables identified in the internal validation models, the variables shared with the external validation datasets, comprising two Scottish longitudinal cohort studies (n=162) and the OPUS trial, a randomised controlled trial of specialised assertive intervention versus standard treatment (n=578). Findings The performance of prediction models was robust for the four 1-year outcomes of symptom remission (area under the receiver operating characteristic curve [AUC]
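The leave-one-site-out scheme used here is stricter than ordinary cross-validation: every fold holds out all patients from one recruitment site, so performance reflects transport to an unseen site rather than to unseen patients from familiar sites. A minimal sketch of the splitting logic only (not the paper's full pipeline; the site labels are hypothetical):

```python
def leave_one_site_out(sites):
    """Yield (held_out_site, train_indices, test_indices), holding out one
    whole site per fold so no site contributes to both train and test."""
    for held_out in sorted(set(sites)):
        test = [i for i, s in enumerate(sites) if s == held_out]
        train = [i for i, s in enumerate(sites) if s != held_out]
        yield held_out, train, test

# Hypothetical cohort: 6 patients recruited across 3 sites.
sites = ["A", "A", "B", "B", "B", "C"]
for site, train, test in leave_one_site_out(sites):
    print(site, train, test)
```

In the nested form used by the paper, hyperparameters of the regularised regression are tuned by an inner cross-validation loop restricted to the training sites of each outer fold, so the held-out site never influences model selection.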

Models Predicting Psychosis in Patients With High Clinical Risk: A Systematic Review

Frontiers in Psychiatry, 2020

The present study reviews predictive models used to improve prediction of psychosis onset in individuals at clinical high risk for psychosis (CHR), using clinical, biological, neurocognitive, environmental, and combinations of predictors. Methods: A systematic literature search on PubMed was carried out (from 1998 through 2019) to find all studies that developed or validated a model predicting the transition to psychosis in CHR subjects. Results: We found 1,406 records. Thirty-eight of them met the inclusion criteria: 11 studies used clinical predictive models, seven used biological models, five used neurocognitive models, five used environmental models, and 18 used combinations of predictive models across different domains. While the highest positive predictive values (PPVs) in clinical, biological, neurocognitive, and combined predictive models were relatively high (all above 83%), the highest PPV across environmental predictive models was modest (63%). Moreover, none of the combined models showed superiority when compared with more parsimonious models (using only neurocognitive, clinical, biological, or environmental factors). Conclusions: The use of predictive models may allow high prognostic accuracy for psychosis prediction in CHR individuals. However, only ten studies had performed an internal validation of their models. Among the models with the highest PPVs, only the biological and neurocognitive but not the combined models underwent validation. Further validation of predictive models is needed to ensure external validity. Keywords: clinical high risk for psychosis (CHR), attenuated psychotic symptoms (APS), brief and limited intermittent psychotic symptoms (BLIPS), genetic risk and deterioration syndrome (GRD), predictive model

Carrión RE, Cornblatt BA, Burton CZ, Tso IF, Auther A, Adelsheim S, Carter CS, Calkins R, Taylor SF, McFarlane WR. (2016). Personalized prediction of psychosis: External validation of the NAPLS 2 Psychosis Risk Calculator with the EDIPPP project. American Journal of Psychiatry, 173(10), 989-996.

Objective: In the current issue, Cannon and colleagues, as part of the second phase of the North American Prodrome Longitudinal Study (NAPLS2), report on a risk calculator for the individualized prediction of developing a psychotic disorder in a 2-year period. The present study represents an external validation of the NAPLS2 psychosis risk calculator using an independent sample of subjects at clinical high risk for psychosis collected as part of the Early Detection, Intervention, and Prevention of Psychosis Program (EDIPPP).