Generating high-fidelity synthetic patient data for assessing machine learning healthcare software

Allan Tucker et al. NPJ Digit Med. 2020.

Abstract

There is a growing demand for the uptake of modern artificial intelligence technologies within healthcare systems. Many of these technologies exploit historical patient health data to build powerful predictive models that can be used to improve diagnosis and understanding of disease. However, many patient-privacy issues need to be addressed before such data can be harnessed more widely. One approach that could circumvent these privacy issues is the creation of realistic synthetic data sets that capture as many of the complexities of the original data set (distributions, non-linear relationships, and noise) as possible but do not include any real patient data. While previous research has explored models for generating synthetic data sets, here we explore the integration of resampling, probabilistic graphical modelling, latent variable identification, and outlier analysis to produce realistic synthetic data based on UK primary care patient data. In particular, we focus on handling missingness, complex interactions between variables, and the resulting sensitivity analysis statistics from machine learning classifiers, while quantifying the risk of patient re-identification from synthetic datapoints. We show that, through our approach of integrating outlier analysis with graphical modelling and resampling, we can achieve synthetic data sets that are not significantly different from the original ground truth data in terms of feature distributions, feature dependencies, or sensitivity analysis statistics when inferring machine learning classifiers. Moreover, the risk of generating synthetic datapoints that are identical or very similar to real patients is shown to be low.
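
The paper itself does not publish code, but the central step, learning a Bayesian network (BN) from ground truth records and forward-sampling a synthetic cohort, can be sketched with the pgmpy library. This is an illustrative sketch only, not the authors' implementation; `gt` is a hypothetical pandas DataFrame of discretised primary-care features, and `syn` is reused by later sketches.

```python
# Minimal sketch (not the authors' code): learn a BN from ground truth
# records and forward-sample a synthetic cohort of the same size.
import pandas as pd
from pgmpy.estimators import BicScore, HillClimbSearch, MaximumLikelihoodEstimator
from pgmpy.models import BayesianNetwork
from pgmpy.sampling import BayesianModelSampling

gt = pd.read_csv("ground_truth.csv")  # hypothetical discretised feature table

# Structure learning: hill-climb search scored by BIC.
dag = HillClimbSearch(gt).estimate(scoring_method=BicScore(gt))

# Parameter learning on the selected structure.
model = BayesianNetwork(dag.edges())
model.fit(gt, estimator=MaximumLikelihoodEstimator)

# Ancestral (forward) sampling yields the synthetic cohort.
syn = BayesianModelSampling(model).forward_sample(size=len(gt))
```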

Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1. Resultant graph structure for BNs learnt from samples of ground truth data.

Arc confidences of 100% are represented by black arcs, while confidences below 100% are represented by grey arcs of varying width.
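
Arc confidences of this kind are typically obtained by bootstrap resampling of the structure-learning step. A minimal sketch, reusing the imports and the `gt` frame from the previous example (an assumption, not the paper's code):

```python
# Sketch: estimate arc confidence by learning structures on bootstrap
# resamples and counting how often each directed edge appears.
from collections import Counter

n_boot = 100
edge_counts = Counter()
for _ in range(n_boot):
    boot = gt.sample(n=len(gt), replace=True)  # resample with replacement
    dag = HillClimbSearch(boot).estimate(scoring_method=BicScore(boot))
    edge_counts.update(dag.edges())

# Arc confidence = fraction of bootstrap structures containing the edge.
confidence = {edge: n / n_boot for edge, n in edge_counts.items()}
```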

Fig. 2. Plots of sample distributions and statistics of the original ground truth data when all missing data are deleted, along with plots, distributions, and statistics from the synthetic data generated using a BN inferred from the ground truth.
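
One way to make such distribution comparisons concrete is a per-feature two-sample Kolmogorov-Smirnov test. A sketch, assuming the `gt` and `syn` frames from the first example contain numeric (or numerically coded) columns:

```python
# Sketch: compare ground truth and synthetic marginal distributions with
# two-sample Kolmogorov-Smirnov tests, one feature at a time.
from scipy.stats import ks_2samp

for col in gt.columns:
    stat, p = ks_2samp(gt[col], syn[col])
    print(f"{col}: KS statistic = {stat:.3f}, p = {p:.3f}")
```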

Fig. 3. Plots of sample distributions and statistics of the original ground truth data including missing data, as well as plots for the synthetic data that model missing data with “Miss Nodes/States” and with latent variables.
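
The “Miss State” idea can be illustrated by recoding missing values as an explicit category before learning, so the network models missingness jointly with the observed values rather than dropping incomplete rows. A sketch, under the same assumptions as the examples above:

```python
# Sketch of the "Miss State" approach: missingness becomes an explicit
# category that the BN learns and reproduces when sampling.
gt_miss = gt.copy()
for col in gt_miss.columns:
    gt_miss[col] = gt_miss[col].astype(object).fillna("MISSING")

# gt_miss can now replace gt in the structure/parameter learning above,
# so sampled synthetic records carry realistic missingness patterns.
```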

Fig. 4. Five-sample sensitivity analyses for a Bayesian generalised linear classifier on GT and SYN data (latent model) for a fixed sample size of 100,000, including ROC and PR curves, and AUC and Granger statistics.
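
This kind of sensitivity analysis can be approximated by training the same classifier separately on ground truth and synthetic data and scoring both on a held-out ground truth test set. The sketch below substitutes scikit-learn's LogisticRegression for the paper's Bayesian generalised linear classifier and assumes a hypothetical binary label column named "outcome":

```python
# Sketch: compare ROC/PR behaviour of classifiers trained on GT vs SYN data,
# both evaluated on the same held-out ground truth test set.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

target = "outcome"  # hypothetical binary label column
X_train, X_test, y_train, y_test = train_test_split(
    gt.drop(columns=target), gt[target], test_size=0.3, random_state=0)

for name, (X, y) in {
    "GT": (X_train, y_train),
    "SYN": (syn.drop(columns=target), syn[target]),
}.items():
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    scores = clf.predict_proba(X_test)[:, 1]
    print(f"{name}: ROC AUC = {roc_auc_score(y_test, scores):.3f}, "
          f"PR AUC = {average_precision_score(y_test, scores):.3f}")
```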

Fig. 5. Bayesian network architectures.

a A Bayesian network with four nodes. b A Bayesian network classifier with class node C. c A dynamic Bayesian network with two time slices, t and t−1. d A hidden Markov model with latent variable H.
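
For illustration, the architectures in panels b and c can be written as pgmpy edge lists. The node names X1, X2, X3 are placeholders, not variables from the paper, and the dynamic network is shown unrolled over two slices rather than via pgmpy's dedicated DynamicBayesianNetwork class:

```python
# Panel b: a Bayesian network classifier, class node C pointing to features.
bn_classifier = BayesianNetwork([("C", "X1"), ("C", "X2"), ("C", "X3")])

# Panel c: a dynamic Bayesian network unrolled over slices t-1 and t.
dbn = BayesianNetwork([
    ("X1[t-1]", "X1[t]"),
    ("X2[t-1]", "X2[t]"),
    ("X1[t]", "X2[t]"),
])
```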

Fig. 6. Methods to capture missing data and unmeasured effects.

a A binary “Miss Node” pointing to all continuous nodes in a Bayesian network. b A “Miss State” for discrete nodes. c A latent variable with m states to capture Missing Not at Random data and other unmeasured effects (in both discrete and continuous nodes).
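
The latent-variable construction in panel c can be sketched in pgmpy by declaring an unobserved node H and fitting its parameters with expectation-maximisation. The cardinality m = 3 below is an arbitrary assumption, and the EM call reflects pgmpy's estimator API as I understand it:

```python
# Sketch: a latent node H with m states feeding every observed column;
# H never appears in the data, so its parameters are learnt by EM.
from pgmpy.estimators import ExpectationMaximization

latent_model = BayesianNetwork(
    [("H", col) for col in gt.columns], latents={"H"})

em = ExpectationMaximization(latent_model, gt)
cpds = em.get_parameters(latent_card={"H": 3})  # m = 3 is an assumption
latent_model.add_cpds(*cpds)
```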
