Jessica Morley | University of Oxford

Drafts by Jessica Morley

How to design a governable digital health ecosystem

It has been suggested that to overcome the challenges facing the UK’s National Health Service (NHS) of an ageing population and reduced available funding, the NHS should be transformed into a more informationally mature and heterogeneous organisation, reliant on data-based and algorithmically-driven interactions between human, artificial, and hybrid (semi-artificial) agents. This transformation process would offer significant benefit to patients, clinicians, and the overall system, but it would also rely on a fundamental transformation of the healthcare system in a way that poses significant governance challenges. In this article, we argue that a fruitful way to overcome these challenges is by adopting a pro-ethical approach to design that analyses the system as a whole, keeps society-in-the-loop throughout the process, and distributes responsibility evenly across all nodes in the system.

Empowerment or Engagement? Digital Health Technologies for Mental Healthcare

We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts that are used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key ethical principles for AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.

From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices

The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Wiener, 1960; Samuel, 1960). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles, the 'what' of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability), rather than on practices, the 'how'. Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Therefore, our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers 'apply ethics' at each stage of the pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.
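The typology itself is conceptual, but a minimal sketch may help show the kind of mapping it proposes: a grid of ethical principles against ML pipeline stages in which publicly available tools and methods can be recorded, with empty cells signalling where further work is needed. The stage names and example entries below are hypothetical placeholders, not the typology published in the paper.

```python
# Illustrative sketch only: a minimal way to record a principles-by-pipeline-stage
# typology of publicly available AI ethics tools. Stage names, principle labels,
# and example entries are hypothetical placeholders, not the paper's typology.
from collections import defaultdict

PRINCIPLES = ["beneficence", "non-maleficence", "autonomy", "justice", "explicability"]
PIPELINE_STAGES = [
    "use-case development",
    "design",
    "data procurement",
    "building",
    "testing",
    "deployment",
    "monitoring",
]

# typology[stage][principle] -> list of tools or methods claimed to help at that point
typology = defaultdict(lambda: defaultdict(list))

def register_tool(name: str, stage: str, principle: str) -> None:
    """Record a tool or method against one cell of the typology."""
    if stage not in PIPELINE_STAGES or principle not in PRINCIPLES:
        raise ValueError(f"Unknown stage or principle: {stage!r}, {principle!r}")
    typology[stage][principle].append(name)

def gaps():
    """Yield (stage, principle) cells with nothing registered - where further work is needed."""
    for stage in PIPELINE_STAGES:
        for principle in PRINCIPLES:
            if not typology[stage][principle]:
                yield stage, principle

# Hypothetical entries, for illustration only.
register_tool("documentation template for model reporting", "deployment", "explicability")
register_tool("dataset provenance checklist", "data procurement", "justice")

for stage, principle in gaps():
    print(f"No tools recorded for {principle} at stage: {stage}")
```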

Papers by Jessica Morley

The ethical debate about the gig economy: a review and critical analysis

The gig economy is a phenomenon that is rapidly expanding, redefining the nature of work and contributing to a significant change in how contemporary economies are organised. Its expansion is not unproblematic. This article provides a clear and systematic analysis of the main ethical challenges caused by the gig economy. Following a brief overview of the gig economy, its scope and scale, we map the key ethical problems that it gives rise to, as they are discussed in the relevant literature. We map them onto three categories: the new organisation of work (what is done), the new nature of work (how it is done), and the new status of workers (who does it). We then evaluate a recent initiative from the EU that seeks to address the challenges of the gig economy. The 2019 report of the European High-Level Expert Group on the Impact of the Digital Transformation on EU Labour Markets is a positive step in the right direction. However, we argue that ethical concerns relating to algorithmic systems as mechanisms of control, and the discrimination, exclusion and disconnectedness faced by gig workers require further deliberation and policy response. A brief conclusion completes the analysis. The appendix presents the methodology underpinning our literature review.

Ethical Guidelines for SARS-CoV-2 Digital Tracking and Tracing Systems

Pre-Print, 2020

The World Health Organisation declared COVID-19 a global pandemic on 11th March 2020, recognising that the underlying SARS-CoV-2 has caused the greatest global crisis since World War II. In this article, we present a framework to evaluate whether and to what extent the use of digital systems that track and/or trace potentially infected individuals is not only legal but also ethical. Digital tracking and tracing (DTT) systems may severely limit fundamental rights and freedoms, and so they ought not to be deployed in a vacuum of guidance. To be ethically justifiable, i.e. coherent with society’s expectations and values, interventions must be necessary to achieve a specific public health objective, proportional to the seriousness of the public health threat, scientifically sound to support their effectiveness, and time-bounded (1,2). However, this is insufficient, which is why in this article we present a more inclusive framework also comprising twelve enabling factors to guide the design and development of ethical DTT systems.
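As an illustration only, the four high-level justification conditions quoted above can be treated as a checklist that a DTT proposal must satisfy in full, with the enabling factors then guiding design. The sketch below encodes that structure in Python; the placeholder factor names are hypothetical, since the paper's twelve enabling factors are not reproduced in this abstract.

```python
# Illustrative sketch only: the four high-level justification conditions from the
# abstract treated as a checklist, with hypothetical placeholders standing in for
# the paper's twelve enabling factors (which are not reproduced here).
from dataclasses import dataclass, field

HIGH_LEVEL_CONDITIONS = [
    "necessary to achieve a specific public health objective",
    "proportional to the seriousness of the public health threat",
    "scientifically sound, with evidence supporting effectiveness",
    "time-bounded, with a defined end point",
]

# Placeholder names only; see the paper for the actual enabling factors.
ENABLING_FACTORS = [f"enabling factor {i}" for i in range(1, 13)]

@dataclass
class DTTAssessment:
    system_name: str
    conditions_met: dict = field(default_factory=dict)  # condition text -> bool
    factors_met: dict = field(default_factory=dict)     # factor name -> bool

    def meets_high_level_conditions(self) -> bool:
        """All four conditions are required before a DTT system can be justified."""
        return all(self.conditions_met.get(c, False) for c in HIGH_LEVEL_CONDITIONS)

    def outstanding_items(self) -> list:
        """Conditions and factors not yet satisfied, for review by an oversight body."""
        missing = [c for c in HIGH_LEVEL_CONDITIONS if not self.conditions_met.get(c, False)]
        missing += [f for f in ENABLING_FACTORS if not self.factors_met.get(f, False)]
        return missing

# Usage: mark each condition or factor as it is evidenced, then review what is outstanding.
assessment = DTTAssessment("hypothetical contact-tracing app")
assessment.conditions_met[HIGH_LEVEL_CONDITIONS[0]] = True
print(assessment.meets_high_level_conditions(), len(assessment.outstanding_items()))
```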

Online information of vaccines: information quality is an ethical responsibility of search engines

arXiv:1912.00898, 2019

The fact that Internet companies may record our personal data and track our online behaviour for commercial or political purposes has emphasized aspects related to online privacy. This has also led to the development of search engines that promise no tracking and privacy. Search engines also have a major role in spreading low-quality health information, such as that of anti-vaccine websites. This study investigates the relationship between search engines’ approach to privacy and the scientific quality of the information they return. We analyzed the first 30 webpages returned when searching “vaccines autism” in English, Spanish, Italian and French. The results show that “alternative” search engines (DuckDuckGo, Ecosia, Qwant, Swisscows and Mojeek) may return more anti-vaccine pages (10-53%) than Google.com (0%). Some localized versions of Google, however, returned more anti-vaccine webpages (up to 10%) than Google.com. Our study suggests that designing a search engine that is privacy-savvy and avoids the filter bubbles that can result from user tracking is necessary but insufficient; instead, mechanisms should be developed to test search engines from the perspective of information quality (particularly for health-related webpages) before they can be deemed trustworthy providers of public health information.
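The study's headline figures are simple proportions: the share of anti-vaccine pages among the first 30 results returned by each engine. A minimal sketch of that tally is given below; the result labels are hypothetical, and the study's actual data collection and manual quality assessment are not reproduced here.

```python
# Illustrative sketch only: the kind of tally behind the study's headline figures -
# the share of anti-vaccine pages among the first 30 results returned by an engine.
# Result labels below are hypothetical; the study's data collection and manual
# quality assessment are not reproduced.

def anti_vaccine_share(labelled_results, top_n=30):
    """labelled_results[i] is True if result i was judged anti-vaccine.
    Returns the percentage of anti-vaccine pages among the first top_n results."""
    window = list(labelled_results)[:top_n]
    if not window:
        return 0.0
    return 100.0 * sum(window) / len(window)

# Hypothetical labelled result lists for two engines (True = anti-vaccine page).
example_labels = {
    "google.com": [False] * 30,
    "hypothetical_alternative_engine": [True] * 5 + [False] * 25,
}

for engine, labels in example_labels.items():
    print(f"{engine}: {anti_vaccine_share(labels):.0f}% anti-vaccine in top 30")
```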

The Debate on the Ethics of AI in Health Care: a Reconstruction and Critical Review

SSRN, 2019

Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs, and computer and data scientists argue that a key part of the solution will be 'Artificial Intelligence' (AI), particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by "robot doctors." Instead, it is an argument that rests on the classic counterfactual definition of AI as an umbrella term for a range of techniques that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. Automation of this nature could offer great opportunities for the improvement of healthcare services and ultimately patients' health by significantly improving human clinical capabilities in diagnosis, drug discovery, epidemiology, personalised medicine, and operational efficiency. However, if these AI solutions are to be embedded in clinical practice, then at least three issues need to be considered: the technical possibilities and limitations; the ethical, regulatory and legal framework; and the governance framework. In this article, we report on the results of a systematic analysis designed to provide a clear overview of the second of these elements: the ethical, regulatory and legal framework. We find that ethical issues arise at six levels of abstraction (individual, interpersonal, group, institutional, sectoral, and societal) and can be categorised as epistemic, normative, or overarching. We conclude by stressing how important it is that the ethical challenges raised by implementing AI in healthcare settings are tackled proactively rather than reactively, and we map the key considerations for policymakers to each of the ethical concerns highlighted.
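For readers who want to reuse the paper's organising scheme, a small sketch of the six-levels-by-three-categories grid is given below. The example concern is a hypothetical placeholder; the paper's actual findings are not reproduced.

```python
# Illustrative sketch only: the abstract's organising scheme - six levels of
# abstraction crossed with three categories of ethical issue - as a simple grid.
# The example concern is a hypothetical placeholder, not a finding from the paper.
LEVELS = ["individual", "interpersonal", "group", "institutional", "sectoral", "societal"]
CATEGORIES = ["epistemic", "normative", "overarching"]

# grid[(level, category)] -> list of concern descriptions gathered from a review
grid = {(level, category): [] for level in LEVELS for category in CATEGORIES}

def add_concern(level, category, description):
    """File a concern in one cell of the grid; unknown cells raise KeyError."""
    grid[(level, category)].append(description)

def counts_per_cell():
    """Number of concerns per cell - a starting point for mapping policy responses."""
    return {cell: len(items) for cell, items in grid.items()}

# Hypothetical example entry.
add_concern("individual", "epistemic", "opacity of model outputs used to inform diagnosis")
print(counts_per_cell()[("individual", "epistemic")])
```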

The Debate on the Ethics of AI in Health Care: a Reconstruction and Critical Review

Pre-Print, 2019

Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs, and computer and data scientists argue that a key part of the solution will be 'Artificial Intelligence' (AI), particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by "robot doctors." Instead, it is an argument that rests on the classic counterfactual definition of AI as an umbrella term for a range of techniques that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. Automation of this nature could offer great opportunities for the improvement of healthcare services and ultimately patients' health by significantly improving human clinical capabilities in diagnosis, drug discovery, epidemiology, personalised medicine, and operational efficiency. However, if these AI solutions are to be embedded in clinical practice, then at least three issues need to be considered: the technical possibilities and limitations; the ethical, regulatory and legal framework; and the governance framework. In this article, we report on the results of a systematic analysis designed to provide a clear overview of the second of these elements: the ethical, regulatory and legal framework. We find that ethical issues arise at six levels of abstraction (individual, interpersonal, group, institutional, sectoral, and societal) and can be categorised as epistemic, normative, or overarching. We conclude by stressing how important it is that the ethical challenges raised by implementing AI in healthcare settings are tackled proactively rather than reactively, and we map the key considerations for policymakers to each of the ethical concerns highlighted.

The Chinese Approach to Artificial Intelligence: an Analysis of Policy and Regulation

SSRN, 2019

In July 2017, China's State Council released the country's strategy for developing artificial intelligence (AI), entitled 'New Generation Artificial Intelligence Development Plan' (新一代人工智能发展规划). This strategy outlined China's aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan ($150 billion) industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China's AI policies or have assessed the country's technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China's AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive understanding of China's AI policy by bringing together debates and analyses of a wide array of policy documents.

Digital Psychiatry: Ethical Risks and Opportunities for Public Health and Well-Being

Common mental health disorders are rising globally, creating a strain on public healthcare systems. This has led to a renewed interest in the role that digital technologies may have for improving mental health outcomes. One result of this interest is the development and use of artificial intelligence for assessing, diagnosing, and treating mental health issues, which we refer to as 'digital psychiatry'. This article focuses on the increasing use of digital psychiatry outside of clinical settings, in the following sectors: education, employment, financial services, social media, and the digital well-being industry. We analyse the ethical risks of deploying digital psychiatry in these sectors, emphasising key problems and opportunities for public health, and offer recommendations for protecting and promoting public health and well-being in information societies.

NHS AI Lab: why we need to be ethically mindful about AI for healthcare

On 8 August 2019, the Secretary of State for Health and Social Care, Matt Hancock, announced the creation of a £250 million NHS AI Lab. This significant investment is justified by the belief that transforming the UK's National Health Service (NHS) into a more informationally mature and heterogeneous organisation, reliant on data-based and algorithmically-driven interactions, will offer significant benefit to patients, clinicians, and the overall system. These opportunities are realistic and should not be wasted. However, they may be missed (one may recall the troubled Care.data programme) if the ethical challenges posed by this transformation are not carefully considered from the start, and then addressed thoroughly, systematically, and in a socially participatory way. To deal with this serious risk, the NHS AI Lab should create an Ethics Advisory Board and monitor, analyse, and address the normative and overarching ethical issues that arise at the individual, interpersonal, group, institutional and societal levels in AI for healthcare.

Developing effective policy to support Artificial Intelligence in health and care

EuroHealth, 2019

The increased availability of data has enabled the development of Artificially Intelligent Systems (AIS) for health, but implementing these systems and capitalising on the associated opportunities is not straightforward and is not without risk. To mitigate these risks, outdated governance mechanisms need to be updated and key questions answered. To achieve this, whilst still supporting innovation, a new joint organisation for digital, data and technology in the English NHS (NHSX) is developing a 'principled proportionate governance' model that focuses on proactively and objectively evaluating current AIS technology and on regularly involving all those who rely on and serve the health and care system.

How to design a governable digital health ecosystem

Statement of Contribution: JM is the main author of this article, to which LF has contributed. Abstract: It has been suggested that to overcome the challenges facing the UK's National Health Service (NHS) of an ageing population and reduced available funding, the NHS should be transformed into a more informationally mature and heterogeneous organisation, reliant on data-based and algorithmically-driven interactions between human, artificial, and hybrid (semi-artificial) agents. This transformation process would offer significant benefit to patients, clinicians, and the overall system, but it would also rely on a fundamental transformation of the healthcare system in a way that poses significant governance challenges. In this article, we argue that a fruitful way to overcome these challenges is by adopting a pro-ethical approach to design that analyses the system as a whole, keeps society-in-the-loop throughout the process, and distributes responsibility evenly across all nodes in the system.

Enabling digital health companionship is better than empowerment

The Lancet Digital Health, 2019

In this paper we argue that international policies that encourage the adoption of digital health tools (DHTs) through the rhetoric of individual 'empowerment' fail to detail how such tools empower citizens or patients, and so governments risk using this rhetoric in a potentially deceptive manner. Instead, we argue that policies should focus on how data derived from DHTs can enable better care at the system, population, group, or individual level.

The Limits of Empowerment: How to Reframe the Role of mHealth Tools in the Healthcare Ecosystem

Science and Engineering Ethics, 2019

This article highlights the limitations of the tendency to frame health- and wellbeing-related digital tools (mHealth technologies) as empowering devices, especially as they play an increasingly important role in the National Health Service (NHS) in the UK. It argues that mHealth technologies should instead be framed as digital companions. This shift from empowerment to companionship is advocated by showing the conceptual, ethical, and methodological issues challenging the narrative of empowerment, and by arguing that such challenges, as well as the risk of medical paternalism, can be overcome by focusing on the potential for mHealth tools to mediate the relationship between recipients of clinical advice and givers of clinical advice, in ways that allow for contextual flexibility in the balance between patiency and agency. The article concludes by stressing that reframing the narrative cannot be the only means for avoiding harm caused to the NHS as a healthcare system by the introduction of mHealth tools. Future discussion will be needed on the overarching role of responsible design.
