Explainability as fig leaf? An exploration of experts’ ethical expectations towards machine learning in psychiatry

The Challenge of Ethics in the Use of Artificial Intelligence in Mental Health Research

2nd RISEUP-PPD International Conference "Knowledge and Implementation Gaps in Peripartum Depression: Innovation and Future Directions", Sofia, September 21-22, 2023

Artificial Intelligence (AI) is present in many areas of society, such as medicine, education, and science. Its capacity to store and process large amounts of data without interruption gives it an advantage over human learning capacity. The large volumes of data collected by everyday applications, which include details of private lives, also contain personal health data that can now be more easily inferred from sources outside the medical context. This new ability to derive health data from online content, outside the medical context, which has been referred to as "emerging medical data", often occurs without the knowledge or consent of users, which has generated concern and proved to be a challenge. This communication aims to contribute to this area by presenting the results of work on the ethics of research using AI, particularly in mental health research. For this purpose, a literature review was conducted covering the period between 2016 and 2023. Overall, a widespread concern with ethical issues was identified, one that can hardly keep up with the pace of technological change. However, the OECD and the European Union have already defined some guidelines and suggested further developing the management of large amounts of data, for example through the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) and closer attention to ethical issues at the various stages of the data cycle.

Keywords: artificial intelligence, mental health, ethics in research, FAIR principles

From machine learning to student learning: pedagogical challenges for psychiatry

Psychological Medicine, 2020

We agree with Georg Starke and colleagues (Starke et al. 2020): despite the current lack of direct clinical applications, artificial intelligence (AI) will undeniably transform the future of psychiatry. AI has led to algorithms that can perform increasingly complex tasks by interpreting and learning from data (Dobrev 2012). AI applications in psychiatry are receiving more attention, with a 3-fold increase in the number of PubMed/MEDLINE articles on AI in psychiatry over the past three years (N=567 results). The impact of AI on the entire psychiatric profession is likely to be significant (Torous et al. 2015; Huys, Maia, and Frank 2016; Grisanzio et al. 2018; Brown et al. 2019). These effects will be felt not only through the advent of advanced applications in brain imaging (Starke et al. 2020) but also through the stratification and refinement of our clinical categories, a more profound challenge for a discipline whose difficulty “lies in its long-embattled nosology” (Kendler 2016). These technical challenges are subsumed by ethical ones. In particular, the risk of non-transparency and reductionism in psychiatric practice is a burning issue. Clinical medicine has already developed the overarching ethical principles of respect for autonomy, non-maleficence, beneficence, and justice (Beauchamp and Childress 2001). The principle of Explainability should be added to this list to address the issues specifically raised by AI (Floridi et al. 2018). Explainability concerns understanding how a given algorithm works (Intelligibility) and who is responsible for the way it works (Accountability). We fully agree with Starke et al. (2020) that Explainability is essential and constitutes a real challenge for future developments in AI. In addition, however, we think that this ethical issue requires dedicated pedagogical training underpinned by a solid epistemological framework.

Opportunities, applications, challenges and ethical implications of artificial intelligence in psychiatry: a narrative review

The Egyptian Journal of Neurology, Psychiatry and Neurosurgery

Background: Artificial intelligence (AI) has made significant advances in recent years, and its applications in psychiatry have gained increasing attention. The use of AI in psychiatry offers the potential to improve patient outcomes and provide valuable insights for healthcare workers. However, the potential benefits of AI in psychiatry are accompanied by several challenges and ethical implications that require consideration. In this review, we explore the use of AI in psychiatry and its applications in monitoring mental illness, treatment, prediction, diagnosis, and deep learning. We discuss the potential benefits of AI in terms of improved patient outcomes, efficiency, and cost-effectiveness. However, we also address the challenges and ethical implications associated with the use of AI in psychiatry, including issues of accuracy, privacy, and the risk of perpetuating existing biases in the field. Results: This is a review article, thus not applicable. Conclusion: Despite the challenges...

Machine learning applications in healthcare and the role of informed consent: Ethical and practical considerations

Clinical Ethics

Informed consent is at the core of the clinical relationship. With the introduction of machine learning (ML) into healthcare, the role of informed consent is challenged. This paper addresses the question of whether patients must be informed about medical ML applications and asked for consent. It aims to expose the discrepancy between ethical and practical considerations, while arguing that this polarization is a false dichotomy: in reality, ethics is applied to specific contexts and situations. Bridging this gap and considering the whole picture is essential for advancing the debate. In light of possible future developments in both the clinical situation and the technologies, as well as the benefits that informed consent for ML can bring to shared decision-making, the present analysis concludes that it is necessary to prepare the ground for a possible future requirement of informed consent for medical ML.

Ethical Machine Learning in Health

arXiv (Cornell University), 2020

The use of machine learning (ML) in health care raises numerous ethical concerns, especially as models can amplify existing health inequities. Here, we outline ethical considerations for equitable ML in the advancement of health care. Specifically, we frame ethics of ML in health care through the lens of social justice. We describe ongoing efforts and outline challenges in a proposed pipeline of ethical ML in health, ranging from problem selection to post-deployment considerations. We close by summarizing recommendations to address these challenges.

Artificial intelligence and the future of psychiatry: Qualitative findings from a global physician survey

DIGITAL HEALTH

Background: The potential for machine learning to disrupt the medical profession is the subject of ongoing debate within biomedical informatics. Objective: This study aimed to explore psychiatrists’ opinions about the potential impact of innovations in artificial intelligence and machine learning on psychiatric practice. Methods: In Spring 2019, we conducted a web-based survey of 791 psychiatrists from 22 countries worldwide. The survey measured opinions about the likelihood that future technology would fully replace physicians in performing ten key psychiatric tasks. This study involved qualitative descriptive analysis of written responses (“comments”) to three open-ended questions in the survey. Results: Comments were classified into four major categories in relation to the impact of future technology on: (1) patient-psychiatrist interactions; (2) the quality of patient medical care; (3) the profession of psychiatry; and (4) health systems. Overwhelmingly, psychiatrists were skeptical that technology...

Explainable, Trustworthy, and Ethical Machine Learning for Healthcare: A Survey

2021

With the advent of machine learning (ML) applications in daily life, questions about the liability, trust, and interpretability of their outputs are arising, especially for healthcare applications. The black-box nature of ML models is a roadblock to clinical utilization. Therefore, to gain the trust of clinicians and patients, researchers need to provide explanations of how and why a model is making a specific decision. With the promise of enhancing the trust and transparency of black-box models, researchers are working to mature the field of eXplainable ML (XML). In this paper, we provide a comprehensive review of explainable and interpretable ML techniques implemented to provide the reasons behind model decisions in various healthcare applications. Along with highlighting various security, safety, and robustness challenges that hinder the trustworthiness of ML, we also discuss the ethical issues of healthcare ML and describe how explainable and trustworthy ML can re...
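To make concrete what "providing the reasons behind a decision" can look like in practice, here is a minimal sketch of one common post-hoc explanation technique, permutation importance, using scikit-learn. It is an illustration only, not drawn from any of the papers above; the dataset is synthetic and the clinical feature names are hypothetical placeholders.

```python
# Illustrative sketch only: permutation importance as a simple post-hoc
# explanation of a black-box classifier. Synthetic data; the "clinical"
# feature names below are hypothetical, not from any surveyed paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular clinical data (no real patient data).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["sleep_hours", "phq9_score", "age", "med_adherence", "activity"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# larger drops indicate features the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_drop in sorted(zip(feature_names, result.importances_mean),
                              key=lambda t: -t[1]):
    print(f"{name}: {mean_drop:.3f}")
```

Because permutation importance only perturbs inputs and re-scores the model, it applies to any black-box classifier; in a clinical setting, a ranking like this is one simple way to let clinicians sanity-check whether a model's decisions rest on plausible signals.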