Introducing ELLIPS: An Ethics-Centered Approach to Research on LLM-Based Inference of Psychiatric Conditions (original) (raw)
Related papers
JMIR Mental Health
Recent developments in artificial intelligence technologies have come to a point where machine learning algorithms can infer mental status based on someone’s photos and texts posted on social media. More than that, these algorithms are able to predict, with a reasonable degree of accuracy, future mental illness. They potentially represent an important advance in mental health care for preventive and early diagnosis initiatives, and for aiding professionals in the follow-up and prognosis of their patients. However, important issues call for major caution in the use of such technologies, namely, privacy and the stigma related to mental disorders. In this paper, we discuss the bioethical implications of using such technologies to diagnose and predict future mental illness, given the current scenario of swiftly growing technologies that analyze human language and the online availability of personal information given by social media. We also suggest future directions to be taken to minim...
Artificial Intelligence in mental health and the biases of language based models
PLOS ONE, 2020
BackgroundThe rapid integration of Artificial Intelligence (AI) into the healthcare field has occurred with little communication between computer scientists and doctors. The impact of AI on health outcomes and inequalities calls for health professionals and data scientists to make a collaborative effort to ensure historic health disparities are not encoded into the future. We present a study that evaluates bias in existing Natural Language Processing (NLP) models used in psychiatry and discuss how these biases may widen health inequalities. Our approach systematically evaluates each stage of model development to explore how biases arise from a clinical, data science and linguistic perspective.Design/MethodsA literature review of the uses of NLP in mental health was carried out across multiple disciplinary databases with defined Mesh terms and keywords. Our primary analysis evaluated biases within ‘GloVe’ and ‘Word2Vec’ word embeddings. Euclidean distances were measured to assess rel...
Research Ethics, 2024
The integration of artificial intelligence (AI), particularly large language models (LLMs) like OpenAI’s ChatGPT, into clinical research could significantly enhance the informed consent process. This paper critically examines the ethical implications of employing LLMs to facilitate consent in clinical research. LLMs could offer considerable benefits, such as improving participant understanding and engagement, broadening participants’ access to the relevant information for informed consent, and increasing the efficiency of consent procedures. However, these theoretical advantages are accompanied by ethical risks, including the potential for misinformation, coercion, and challenges in accountability. Given the complex nature of consent in clinical research, which involves both written documentation (in the form of participant information sheets and informed consent forms) and in-person conversations with a researcher, the use of LLMs raises significant concerns about the adequacy of existing regulatory frameworks. Institutional Review Boards (IRBs) will need to consider substantial reforms to accommodate the integration of LLM-based consent processes. We explore five potential models for LLM implementation, ranging from supplementary roles to complete replacements of current consent processes, and offer recommendations for researchers and IRBs to navigate the ethical landscape. Thus, we aim to provide practical recommendations to facilitate the ethical introduction of LLM-based consent in research settings by considering factors such as participant understanding, information accuracy, human oversight and types of LLM applications in clinical research consent.
The Challenge of Ethics in the Use of Artificial Intelligence in Mental Health Research
2nd RISEUP-PPD International Conference "Knowledge and Implementation Gaps in Peripartum Depression: Innovation and Future Directions", Sofia, September 21-22, 2023
Artificial Intelligence (AI) is present in various areas of society such as medicine, education, or science. Being able to store large amounts of data uninterruptedly thus proves to be more advantageous when compared to the learning capacity of the human being. The large amount of data collected, which includes details of private lives in common applications, includes personal health data that can now be more easily inferred from points outside the medical context. This new ability to create health data from online content, outside the medical context, which has been referred to as "emerging medical data", often occurs without the knowledge or consent of users, which has generated concern and proved to be a challenge. This communication aims to contribute to this area, making known the results of the work developed on the issues of ethics in research using AI, particularly around mental health research. For this purpose, a literature review was conducted covering the period between 2016 and 2023. In general, a generalized concern with ethical issues was identified, which can hardly keep up with the technological pace. However, the OECD and the European Union have already defined some guidelines and suggested further development of the management of large amounts of data, for example through the FAIR principles – Findability, Accessibility, Interoperability, and Reusability – and a more careful observation of ethical issues at the various stages of the data cycle. Keywords: artificial intelligence, mental health, ethics in research, FAIR principles
Safety of Large Language Models in Addressing Depression
Cureus, 2023
Background Generative artificial intelligence (AI) models, exemplified by systems such as ChatGPT, Bard, and Anthropic, are currently under intense investigation for their potential to address existing gaps in mental health support. One implementation of these large language models involves the development of mental healthfocused conversational agents, which utilize pre-structured prompts to facilitate user interaction without requiring specialized knowledge in prompt engineering. However, uncertainties persist regarding the safety and efficacy of these agents in recognizing severe depression and suicidal tendencies. Given the wellestablished correlation between the severity of depression and the risk of suicide, improperly calibrated conversational agents may inadequately identify and respond to crises. Consequently, it is crucial to investigate whether publicly accessible repositories of mental health-focused conversational agents can consistently and safely address crisis scenarios before considering their adoption in clinical settings. This study assesses the safety of publicly available ChatGPT-3.5 conversational agents by evaluating their responses to a patient simulation indicating worsening depression and suicidality. Conclusions Current generative AI-based conversational agents are slow to escalate mental health risk scenarios, postponing referral to a human to potentially dangerous levels. More rigorous testing and oversight of conversational agents are needed before deployment in mental healthcare settings. Additionally, further investigation should explore if sustained engagement worsens outcomes and whether enhanced accessibility outweighs the risks of improper escalation. Advancing AI safety in mental health remains imperative as these technologies continue rapidly advancing.
AI and Ethics
The increasing implementation of programs supported by machine learning in medical contexts will affect psychiatry. It is crucial to accompany this development with careful ethical considerations informed by empirical research involving experts from the field, to identify existing problems, and to address them with fine-grained ethical reflection. We conducted semi-structured qualitative interviews with 15 experts from Germany and Switzerland with training in medicine and neuroscience on the assistive use of machine learning in psychiatry. We used reflexive thematic analysis to identify key ethical expectations and attitudes towards machine learning systems. Experts’ ethical expectations towards machine learning in psychiatry partially challenge orthodoxies from the field. We relate these challenges to three themes, namely (1) ethical challenges of machine learning research, (2) the role of explainability in research and clinical application, and (3) the relation of patients, physic...
Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides an ethical perspective on utilizing Generative Artificial Intelligence (GenAI) for the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and most recently, GenAI technologies such as Large Language Models (LLMs). The paper highlights why GenAI technologies represent a new phase in the democratization movement, offering unparalleled access to highly advanced technology as well as information. In the realm of mental health, this requires a delicate and nuanced ethical deliberation. Including GenAI in mental health may allow, among other things, improved accessibility to mental health care, personalized responses, conceptual flexibility, and could facilitate a flattening of traditional hierarchies between health care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing AI based mental health applications. This tool evaluates both the benefits and the risks, emphasizing the need for a balanced and ethical approach for GenAI integration in mental health. The paper calls for a cautious yet positive approach to GenAI in mental health, advocating for the active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that GenAI advancements are not only technologically sound but also ethically grounded and patient centered.
Cureus
The rapid progress in artificial intelligence (AI) and the emergence of large language models (LLMs), like GPT-4, create a unique opportunity to transform nursing care planning. In this editorial, we explore the potential applications of AI in the nursing process, with a focus on patient data assessment and interpretation, communication with patients and families, identifying gaps in care plans, and ongoing professional development. We also examine the ethical concerns and challenges associated with AI integration in healthcare, such as data privacy and security, fairness and bias, accountability and responsibility, and the delicate balance between human-AI collaboration. To implement LLMs responsibly and effectively in nursing care planning, we recommend prioritizing robust data security measures, transparent and unbiased algorithms, clear accountability guidelines, and human-AI collaboration. By addressing these issues, we can improve nursing care planning and ensure the best possible care for patients.
Generative AI and medical ethics: The state of play
Journal of Medical Ethics, 2023
Since their public launch a little over a year ago, Large Language Models (LLMs) have inspired a flurry of analysis about what their implications might be for medical ethics, and for society more broadly. (1) Much of the recent debate has moved beyond categorical evaluations of the permissibility or impermissibility of LLM use in different general contexts (for example, at work or school), to more fine-grained discussions of the criteria that should govern their appropriate use in specific domains or toward certain ends (2). With each passing week, it seems more and more inevitable that LLMs will be a pervasive feature of many, if not most, of our lives. It won’t be possible—and wouldn’t be desirable—to prohibit them across the board. We need to learn how to live with LLMs; to identify and mitigate the risks they pose to us, to our fellow creatures, and the environment; and to harness and guide their powers to better ends. This will require thoughtful regulation, sustained cooperation across nations, cultures, and fields of inquiry; and all of this must be grounded in good ethics.
2023
Large language models (LLMs) such as Open AI's GPT-3 and-4 (which power ChatGPT) and Google's PaLM, built on artificial intelligence, hold immense potential to support, augment, or even eventually fully automate psychotherapy. Enthusiasm about such applications is mounting in the field as well as industry. These developments promise to address insufficient mental healthcare system capacity and scale individual access to personalized treatments. However, clinical psychology is an uncommonly high stakes application domain for AI systems, as responsible and evidence-based therapy requires nuanced expertise. This paper provides a roadmap for the ambitious yet responsible application of clinical LLMs in psychotherapy. First, a technical overview of clinical LLMs is presented. Second, the stages of integration of LLMs into psychotherapy are discussed while highlighting parallels to the development of autonomous vehicle technology. Third, potential applications of LLMs in clinical care, training, and research are discussed, highlighting areas of risk given the complex nature of psychotherapy. Fourth, recommendations for the responsible development and evaluation of clinical LLMs are provided, which include centering clinical science, involving robust interdisciplinary collaboration, and attending to issues like assessment, risk detection, transparency, and bias. Lastly, a vision is outlined for how LLMs might enable a new generation of studies of evidence-based interventions at scale, and how these studies may challenge assumptions about psychotherapy.