Luciano Floridi | Yale University
Papers by Luciano Floridi
Artificial intelligence's impact on healthcare is undeniable. What is less clear is whether it will be ethically justifiable. Just as we know that AI can be used to diagnose disease, predict risk, develop personalized treatment plans, monitor patients remotely, or automate triage, we also know that it can pose significant threats to patient safety and the reliability (or trustworthiness) of the healthcare sector as a whole. These ethical risks arise from (a) flaws in the evidence base of healthcare AI (epistemic concerns); (b) the potential of AI to transform fundamentally the meaning of health, the nature of healthcare, and the practice of medicine (normative concerns); and (c) the 'black box' nature of the AI development pipeline, which undermines the effectiveness of existing accountability mechanisms (traceability concerns). In this chapter, we systematically map (a)-(c) to six different levels of abstraction: individual, interpersonal, group, institutional, sectoral, and societal. The aim is to help policymakers, regulators, and other high-level stakeholders delineate the scope of regulation and other 'softer' governing measures for AI in healthcare. We hope that by doing so, we may enable global healthcare systems to capitalize safely and reliably on the many life-saving and life-improving benefits of healthcare AI.
Wikipedia is an essential source of information online, so efforts to combat misinformation on this platform are critical to the health of the information ecosystem. However, few studies have comprehensively examined misinformation dynamics within Wikipedia. We address this gap by investigating Wikipedia editing communities during the 2024 US Presidential Elections, focusing on the dynamics of misinformation. We assess the effectiveness of Wikipedia's existing measures against misinformation dissemination over time, using a combination of quantitative and qualitative methods to study edits posted on politicians' pages. We find that the volume of Wikipedia edits and the risk of misinformation increase significantly during politically charged moments. We also find that a significant portion of misinformation is detected by existing editing mechanisms, particularly overt cases such as factual inaccuracies and vandalism. Based on this assessment, we conclude by offering some recommendations for addressing misinformation within Wikipedia's editing ecosystem.
Philosophy & Technology
This article argues that the current hype surrounding artificial intelligence (AI) exhibits characteristics of a tech bubble, based on parallels with five previous technological bubbles: the Dot-Com Bubble, the Telecom Bubble, the Chinese Tech Bubble, the Cryptocurrency Boom, and the Tech Stock Bubble. The AI hype cycle shares with them some essential features, including the presence of potentially disruptive technology, speculation outpacing reality, the emergence of new valuation paradigms, significant retail investor participation, and a lack of adequate regulation. The article also highlights other specific similarities, such as the proliferation of AI startups, inflated valuations, and the ethical concerns associated with the technology. While acknowledging AI's transformative potential, the article calls for pragmatic caution, evidence-based planning, and critical thinking in approaching the current hype. It concludes by offering some recommendations to minimise the negative impact of the impending bubble burst, emphasising the importance of focusing on sustainable business models and real-world applications, maintaining a balanced perspective on AI's potential and limitations, and supporting the development of effective regulatory frameworks to guide the technology's design, development, and deployment.
The recent success of Generative AI (GenAI) has heralded a new era in content creation, dissemination, and consumption. This technological revolution is reshaping our understanding of content, challenging traditional notions of authorship, and transforming the relationship between content producers and consumers. As we approach an increasingly AI-integrated world, examining the implications of this paradigm shift is crucial. This article explores the future of content in the age of GenAI, analysing the evolving definition of content, the transformations brought about by GenAI systems, and emerging models of content production and dissemination. By examining these aspects, we can gain valuable insights into the challenges and opportunities that lie ahead in the realm of content creation and consumption and, hopefully, manage them more successfully.
The US Government has stated its desire for the US to be the home of the world's most advanced Artificial Intelligence (AI). Arguably, it currently is. However, a limitation looms large on the horizon as the energy demands of advanced AI look set to outstrip both current energy production and transmission capacity. Although algorithmic and hardware efficiency will improve, such progress is unlikely to keep up with the exponential growth in the compute needed by modern AI systems. Furthermore, even with sufficient gains in energy efficiency, overall use is still expected to increase, in a contemporary instance of the Jevons paradox. All these factors set the US AI ambition, alongside broader electrification, on a crash course with the US government's ambitious clean energy targets. Something will likely have to give. For now, it seems that the dilemma is leading to a de-prioritization of AI compute allocated to safety-related projects alongside a slowing of the pace of transition to renewable energy sources. Worryingly, the dilemma does not appear to be considered a risk of AI, and its resolution does not have clear ownership in the US Government.
Background: There are more than 350,000 health apps available in public app stores. The extolled benefits of health apps are numerous and well documented. However, there are also concerns that poor-quality apps, marketed directly to consumers, threaten the tenets of evidence-based medicine and expose individuals to the risk of harm. This study addresses this issue by assessing the overall quality of evidence publicly available to support the effectiveness claims of health apps marketed directly to consumers.
Methodology: To assess the quality of evidence available to the public to support the effectiveness claims of health apps marketed directly to consumers, an audit was conducted of a purposive sample of apps available on the Apple App Store.
Results: We find the quality of evidence available to support the effectiveness claims of health apps marketed directly to consumers to be poor. Less than half of the 220 apps (44%) we audited state that they have evidence to support their claims of effectiveness and, of these allegedly evidence-based apps, more than 70% rely on either very low or low-quality evidence. For the minority of app developers that do publish studies, significant methodological limitations are commonplace. Finally, there is a pronounced tendency for apps – particularly mental health and diagnostic apps – to either borrow evidence generated in other (typically offline) contexts or to rely exclusively on unsubstantiated, unpublished user metrics as evidence to support their effectiveness claims.
Conclusions: Health apps represent a significant opportunity for individual consumers and healthcare systems. Nevertheless, this opportunity will be missed if the health apps market continues to be flooded by poor-quality, poorly evidenced, and potentially unsafe apps. It must be accepted that a continuing lag in generating high-quality evidence of app effectiveness and safety is not inevitable: it is a choice. Just because it will be challenging to raise the quality of the evidence base available to support the claims of health apps, this does not mean that the bar for evidence quality should be lowered. Innovation for innovation’s sake must not be prioritized over public health and safety.
Artificial Intelligence and computer games have been closely related since the first single-player games were made. From AI-powered companions and foes to procedurally generated environments, the history of digital games runs parallel to the history of AI. However, recent advances in language models have made possible the creation of conversational AI agents that can converse with human players in natural language, interact with a game's world in their own right, and integrate these capabilities by adjusting their actions in response to communications and vice versa. This creates the potential for a significant shift in games' ability to simulate a highly complex environment, inhabited by a variety of AI agents with which human players can interact just as they would interact with the digital avatar of another person. This article begins by introducing the concept of conversational AI agents and justifying their technical feasibility. We build on this by introducing a taxonomy of conversational AI agents in multiplayer games, describing their potential uses and, for each use category, discussing the associated opportunities and risks. We then explore the implications of the increased flexibility and autonomy that such agents introduce to games, covering how they will change the nature of games and in-game advertising, as well as their interoperability across games and other platforms. Finally, we suggest that game worlds filled with human and conversational AI agents can serve as a microcosm of the real world.
Over the last decade, the figure of the AI Ethicist has seen significant growth in the ICT market. However, only a few studies have taken an interest in this professional profile, and they have yet to provide a normative discussion of its expertise and skills. The goal of this article is to initiate such a discussion. We argue that AI Ethicists should be experts and use a heuristic to identify them. Then, we focus on their specific kind of moral expertise, drawing on a parallel with the expertise of Ethics Consultants in clinical settings and on the bioethics literature on the topic. Finally, we highlight the differences between Health Care Ethics Consultants and AI Ethicists and derive the expertise and skills of the latter from the roles that AI Ethicists should have in an organisation.
U.S.-China AI competition has created a 'race to the bottom', where each nation's attempts to cut the other off from artificial intelligence (AI) computing resources through protectionist policies come at a cost: greater energy consumption. This article shows that heightened energy consumption stems from six key areas: 1) Limited access to the latest and most energy-efficient hardware; 2) Unintended spillover effects in the consumer space due to the dual-use nature of AI technology and processes; 3) Duplication in manufacturing processes, particularly in areas lacking comparative advantage; 4) The loosening of environmental standards to onshore manufacturing; 5) The potential for weaponizing the renewable energy supply chain, which supports AI infrastructure, hindering the pace of the renewable energy transition; 6) The loss of synergy in AI advancement, including the development of more energy-efficient algorithms and hardware, due to the transition towards more autarkic information and trade systems. By investigating the unintended consequences of the U.S.-China AI competition policies, the article highlights the need to redesign AI competition to reduce unintended consequences on the environment, consumers, and other countries.
This article addresses the question of how ‘Country of Origin Information’ (COI) reports — that is, research developed and used to support decision-making in the asylum process — can be published in an ethical manner. The article focuses on the risk that published COI reports could be misused and thereby harm the subjects of the reports and/or those involved in their development. It supports a situational approach to assessing data ethics when publishing COI reports, whereby COI service providers must weigh up the benefits and harms of publication based, inter alia, on the foreseeability and probability of harm due to potential misuse of the research, the public good nature of the research, and the need to balance the rights and duties of the various actors in the asylum process, including asylum seekers themselves. Although this article focuses on the specific question of ‘how to publish COI reports in an ethical manner’, it also intends to promote further research on data ethics in the asylum process, particularly in relation to refugees, where more foundational issues should be considered.
As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the 'what' and the 'how' of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can be potentially overcome by providing theoretical grounding for a concept that has been termed 'Ethics as a Service'.
In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI's greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combatting climate change, while reducing its impact on the environment.
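The trade-off between AI's emissions and its efficiency gains can be made concrete with a back-of-the-envelope estimate. The sketch below is illustrative only and is not taken from the article: it follows the common approach of multiplying hardware power draw, training time, datacentre overhead (PUE), and grid carbon intensity; all the numbers used are hypothetical placeholders.

```python
def training_emissions_kg(gpu_count: int,
                          gpu_power_kw: float,
                          hours: float,
                          pue: float,
                          grid_kg_co2_per_kwh: float) -> float:
    """Rough CO2e estimate for one training run: energy drawn by
    the accelerators, scaled up by datacentre overhead (PUE),
    multiplied by the grid's carbon intensity."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical run: 64 GPUs at 0.3 kW each for 240 hours,
# PUE of 1.5, grid intensity of 0.4 kg CO2e/kWh.
print(round(training_emissions_kg(64, 0.3, 240, 1.5, 0.4), 1))
```

Even this toy calculation shows why grid carbon intensity matters as much as efficiency: halving `grid_kg_co2_per_kwh` halves emissions regardless of any algorithmic gains.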
In this article, we compare the artificial intelligence strategies of China and the European Union, assessing the key similarities and differences regarding what the high-level aims of each governance strategy are, how the development and use of AI is promoted in the public and private sectors, and whom these policies are meant to benefit. We characterise China’s strategy by its primary focus on fostering innovation and a more recent emphasis on “common prosperity”, and the EU’s on promoting ethical outcomes through protecting fundamental rights. Building on this comparative analysis, we consider the areas where the EU and China could learn from and improve upon each other’s approaches to AI governance to promote more ethical outcomes. We outline policy recommendations for both European and Chinese policymakers that would support them in achieving this aim.
Digital sovereignty seems to be something very important, given the popularity of the topic these days. True. But it also sounds like a technical issue, which concerns only specialists. False. Digital sovereignty, and the fight for it, touch everyone, even those who do not have a mobile phone or have never used an online service. To understand why, let me start with four episodes. I shall add a fifth shortly. June 18, 2020: The British government, after having failed to develop a centralised coronavirus app not based on the API provided by Google-Apple, gave up, ditched the whole project (Burgess 2020), and agreed to start developing a new app in the future that would be fully compatible with the decentralised solution supported by the two American companies. This U-turn was not the first: Italy (Longo 2020) and Germany (Busvine and Rinke 2020; Lomas 2020) had done the same, only much earlier. Note that, in the context of an online webinar on COVID-19 contact tracing applications, organised by RENEW EUROPE (a liberal, pro-European political group of the European Parliament), Gary Davis, Global Director of Privacy & Law Enforcement Requests at Apple (and previously Deputy Commissioner at the Irish Data Protection Commissioner's Office), stated that
Technologies to rapidly alert people when they have been in contact with someone carrying the coronavirus SARS-CoV-2 are part of a strategy to bring the pandemic under control. Currently, at least 47 contact-tracing apps are available globally. They are already in use in Australia, South Korea and Singapore, for instance. And many other governments are testing or considering them. Here we set out 16 questions to assess whether — and to what extent — a contact-tracing app is ethically justifiable.
Health-care systems worldwide face increasing demand, a rise in chronic disease, and resource constraints. At the same time, the use of digital health technologies in all care settings has led to an expansion of data. For this reason, policy makers, politicians, clinical entrepreneurs, and computer and data scientists argue that a key part of health-care solutions will be artificial intelligence (AI), particularly machine learning. AI forms a key part of the National Health Service (NHS) Long-Term Plan (2019) in England, the US National Institutes of Health Strategic Plan for Data Science (2018), and China’s Healthy China 2030 strategy (2016). The willingness to embrace the potential future of medical care, expressed in these national strategies, is a positive development. Health-care providers should, however, be mindful of the risks that arise from AI’s ability to change the intrinsic nature of how health care is delivered. This paper outlines and discusses these potential risks.
Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defense strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but it can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.
Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.
The article develops a correctness theory of truth (CTT) for semantic information. After the introduction, in section two, semantic information is shown to be translatable into propositional semantic information (i). In section three, i is polarised into a query (Q) and a result (R), qualified by a specific context, a level of abstraction and a purpose. This polarisation is normalised in section four, where [Q + R] is transformed into a Boolean question and its relative yes/no answer [Q + A]. This completes the reduction of the truth of i to the correctness of A. In sections five and six, it is argued that (1) A is the correct answer to Q if and only if (2) A correctly saturates (in a Fregean sense) Q by verifying and validating it (in the computer science’s sense of “verification” and “validation”); that (2) is the case if and only if (3) [Q + A] generates an adequate model (m) of the relevant system (s) identified by Q; that (3) is the case if and only if (4) m is a proxy of s (in the computer science’s sense of “proxy”) and (5) proximal access to m commutes with the distal access to s (in the category theory’s sense of “commutation”); and that (5) is the case if and only if (6) reading/writing (accessing, in the computer science’s technical sense of the term) m enables one to read/write (access) s. The last section draws a general conclusion about the nature of CTT as a theory for systems designers, not just systems users.
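The chain of reductions summarised above can be caricatured in a few lines of code. The sketch below is a deliberately crude toy, not the article's formalism: it treats the 'system' s as a key-value store, the 'model' m as a proxy copy of it, and counts an answer A to a Boolean question Q as correct when reading the model agrees with the answer and commutes with reading the system. All names and example data are hypothetical.

```python
# Toy system s and its proxy model m (hypothetical data).
system = {"the sky is blue": True, "2 + 2 = 5": False}
model = dict(system)  # an adequate model: a proxy of s

def correct(question: str, answer: bool) -> bool:
    """A saturates Q correctly iff the answer matches what is
    read from the model, and that reading commutes with reading
    the system itself (a crude stand-in for commutation)."""
    return model.get(question) == answer == system.get(question)

print(correct("the sky is blue", True))   # a correct saturation
print(correct("2 + 2 = 5", True))         # an incorrect one
```

The point of the toy is only to make the dependency order visible: correctness of A is evaluated against the model, and the model earns its authority from commuting with the system, mirroring steps (3)-(6) of the reduction.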
The paper introduces a new model of telepresence. First, it criticises the standard model of presence as epistemic failure, showing it to be inadequate. It then replaces it with a new model of presence as successful observability. It further provides reasons to distinguish between two types of presence, backward and forward. The new model is then tested against two ethical issues whose nature has been modified by the development of digital information and communication technologies, namely pornography and privacy, and shown to be effective.
Artificial intelligence's impact on healthcare is undeniable. What is less clear is whether it wi... more Artificial intelligence's impact on healthcare is undeniable. What is less clear is whether it will be ethically justifiable. Just as we know that AI can be used to diagnose disease, predict risk, develop personalized treatment plans, monitor patients remotely, or automate triage, we also know that it can pose significant threats to patient safety and the reliability (or trustworthiness) of the healthcare sector as a whole. These ethical risks arise from (a) flaws in the evidence base of healthcare AI (epistemic concerns); (b) the potential of AI to transform fundamentally the meaning of health, the nature of healthcare, and the practice of medicine (normative concerns); and (c) the 'black box' nature of the AI development pipeline, which undermines the effectiveness of existing accountability mechanisms (traceability concerns). In this chapter, we systematically map (a)-(c) to six different levels of abstraction: individual, interpersonal, group, institutional, sectoral, and societal. The aim is to help policymakers, regulators, and other high-level stakeholders delineate the scope of regulation and other 'softer' Governing measures for AI in healthcare. We hope that by doing so, we may enable global healthcare systems to capitalize safely and reliably on the many life-saving and improving benefits of healthcare AI.
Wikipedia is an essential source of information online, so efforts to combat misinformation on th... more Wikipedia is an essential source of information online, so efforts to combat misinformation on this platform are critical to the health of the information ecosystem. However, few studies have comprehensively examined misinformation dynamics within Wikipedia. We address this gap by investigating Wikipedia editing communities during the 2024 US Presidential Elections, focusing on the dynamics of misinformation. We assess the effectiveness of Wikipedia's existing measures against misinformation dissemination over time, using a combination of quantitative and qualitative methods to study edits posted on politicians' pages. We find that the volume of Wikipedia edits and the risk of misinformation increase significantly during politically charged moments. We also find that a significant portion of misinformation is detected by existing editing mechanisms, particularly overt cases such as factual inaccuracies and vandalism. Based on this assessment, we conclude by offering some recommendations for addressing misinformation within Wikipedia's editing ecosystem.
Philosophy & Technology
This article argues that the current hype surrounding artificial intelligence (AI) exhibits chara... more This article argues that the current hype surrounding artificial intelligence (AI) exhibits characteristics of a tech bubble, based on parallels with five previous technological bubbles: the Dot-Com Bubble, the Telecom Bubble, the Chinese Tech Bubble, the Cryptocurrency Boom, and the Tech Stock Bubble. The AI hype cycle shares with them some essential features, including the presence of potentially disruptive technology, speculation outpacing reality, the emergence of new valuation paradigms, significant retail investor participation, and a lack of adequate regulation. The article also highlights other specific similarities, such as the proliferation of AI startups, inflated valuations, and the ethical concerns associated with the technology. While acknowledging AI's transformative potential, the article calls for pragmatic caution, evidence-based planning, and critical thinking in approaching the current hype. It concludes by offering some recommendations to minimise the negative impact of the impending bubble burst, emphasising the importance of focusing on sustainable business models and real-world applications, maintaining a balanced perspective on AI's potential and limitations, and supporting the development of effective regulatory frameworks to guide the technology's design, development, and deployment.
The recent success of Generative AI (GenAI) has heralded a new era in content creation, dissemina... more The recent success of Generative AI (GenAI) has heralded a new era in content creation, dissemination, and consumption. This technological revolution is reshaping our understanding of content, challenging traditional notions of authorship, and transforming the relationship between content producers and consumers. As we approach an increasingly AI-integrated world, examining the implications of this paradigm shift is crucial. This article explores the future of content in the age of GenAI, analysing the evolving definition of content, the transformations brought about by GenAI systems, and emerging models of content production and dissemination. By examining these aspects, we can gain valuable insights into the challenges and opportunities that lie ahead in the realm of content creation and consumption and, hopefully, manage them more successfully.
The US Government has stated its desire for the US to be the home of the world's most advanced Ar... more The US Government has stated its desire for the US to be the home of the world's most advanced Artificial Intelligence (AI). Arguably, it currently is. However, a limitation looms large on the horizon as the energy demands of advanced AI look set to outstrip both current energy production and transmission capacity. Although algorithmic and hardware efficiency will improve, such progress is unlikely to keep up with the exponential growth in compute power needed in modern AI systems. Furthermore, even with sufficient gains in energy efficiency, overall use is still expected to increase in a contemporary Jevons paradox. All these factors set the US AI ambition, alongside broader electrification, on a crash course with the US government's ambitious clean energy targets. Something will likely have to give. For now, it seems that the dilemma is leading to a de-prioritization of AI compute allocated to safety-related projects alongside a slowing of the pace of transition to renewable energy sources. Worryingly, the dilemma does not appear to be considered a risk of AI, and its resolution does not have clear ownership in the US Government.
Background: There are more than 350,000 health apps available in public app stores. The extolled benefits of health apps are numerous and well documented. However, there are also concerns that poor-quality apps, marketed directly to consumers, threaten the tenets of evidence-based medicine and expose individuals to the risk of harm. This study addresses this issue by assessing the overall quality of evidence publicly available to support the effectiveness claims of health apps marketed directly to consumers.
Methodology: To assess the quality of evidence available to the public to support the effectiveness claims of health apps marketed directly to consumers, an audit was conducted of a purposive sample of apps available on the Apple App Store.
Results: We find the quality of evidence available to support the effectiveness claims of health apps marketed directly to consumers to be poor. Less than half of the 220 apps (44%) we audited state that they have evidence to support their claims of effectiveness and, of these allegedly evidence-based apps, more than 70% rely on either very low or low-quality evidence. For the minority of app developers that do publish studies, significant methodological limitations are commonplace. Finally, there is a pronounced tendency for apps – particularly mental health and diagnostic apps – to either borrow evidence generated in other (typically offline) contexts or to rely exclusively on unsubstantiated, unpublished user metrics as evidence to support their effectiveness claims.
Conclusions: Health apps represent a significant opportunity for individual consumers and healthcare systems. Nevertheless, this opportunity will be missed if the health apps market continues to be flooded by poor quality, poorly evidenced, and potentially unsafe apps. It must be accepted that a continuing lag in generating high-quality evidence of app effectiveness and safety is not inevitable: it is a choice. Just because it will be challenging to raise the quality of the evidence base available to support the claims of health apps, this does not mean that the bar for evidence quality should be lowered. Innovation for innovation’s sake must not be prioritized over public health and safety.
Artificial Intelligence and computer games have been closely related since the first single-player games were made. From AI-powered companions and foes to procedurally generated environments, the history of digital games runs parallel to the history of AI. However, recent advances in language models have made possible the creation of conversational AI agents that can converse with human players in natural language, interact with a game's world in their own right and integrate these capabilities by adjusting their actions according to communications and vice versa. This creates the potential for a significant shift in games' ability to simulate a highly complex environment, inhabited by a variety of AI agents with which human players can interact just as they would interact with the digital avatar of another person. This article begins by introducing the concept of conversational AI agents and justifying their technical feasibility. We build on this by introducing a taxonomy of conversational AI agents in multiplayer games, describing their potential uses and, for each use category, discussing the associated opportunities and risks. We then explore the implications of the increased flexibility and autonomy that such agents introduce to games, covering how they will change the nature of games and in-game advertising, as well as their interoperability across games and other platforms. Finally, we suggest game worlds filled with human and conversational AI agents can serve as a microcosm of the real world.
Over the last decade, the figure of the AI Ethicist has seen significant growth in the ICT market. However, only a few studies have taken an interest in this professional profile, and they have yet to provide a normative discussion of its expertise and skills. The goal of this article is to initiate such a discussion. We argue that AI Ethicists should be experts and use a heuristic to identify them. Then, we focus on their specific kind of moral expertise, drawing on a parallel with the expertise of Ethics Consultants in clinical settings and on the bioethics literature on the topic. Finally, we highlight the differences between Health Care Ethics Consultants and AI Ethicists and derive the expertise and skills of the latter from the roles that AI Ethicists should have in an organisation.
U.S.-China AI competition has created a 'race to the bottom', where each nation's attempts to cut the other off from artificial intelligence (AI) computing resources through protectionist policies come at a cost: greater energy consumption. This article shows that heightened energy consumption stems from six key areas: 1) Limited access to the latest and most energy-efficient hardware; 2) Unintended spillover effects in the consumer space due to the dual-use nature of AI technology and processes; 3) Duplication in manufacturing processes, particularly in areas lacking comparative advantage; 4) The loosening of environmental standards to onshore manufacturing; 5) The potential for weaponizing the renewable energy supply chain, which supports AI infrastructure, hindering the pace of the renewable energy transition; 6) The loss of synergy in AI advancement, including the development of more energy-efficient algorithms and hardware, due to the transition towards a more autarkic information system and trade. By investigating the unintended consequences of the U.S.-China AI competition policies, the article highlights the need to redesign AI competition to reduce unintended consequences on the environment, consumers, and other countries.
This article addresses the question of how ‘Country of Origin Information’ (COI) reports — that is, research developed and used to support decision-making in the asylum process — can be published in an ethical manner. The article focuses on the risk that published COI reports could be misused and thereby harm the subjects of the reports and/or those involved in their development. It supports a situational approach to assessing data ethics when publishing COI reports, whereby COI service providers must weigh up the benefits and harms of publication based, inter alia, on the foreseeability and probability of harm due to potential misuse of the research, the public good nature of the research, and the need to balance the rights and duties of the various actors in the asylum process, including asylum seekers themselves. Although this article focuses on the specific question of ‘how to publish COI reports in an ethical manner’, it also intends to promote further research on data ethics in the asylum process, particularly in relation to refugees, where more foundational issues should be considered.
As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the 'what' and the 'how' of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can be potentially overcome by providing theoretical grounding of a concept that has been termed 'Ethics as a Service'.
In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI's greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combatting climate change, while reducing its impact on the environment.
In this article, we compare the artificial intelligence strategies of China and the European Union, assessing the key similarities and differences regarding what the high-level aims of each governance strategy are, how the development and use of AI is promoted in the public and private sectors, and whom these policies are meant to benefit. We characterise China’s strategy by its primary focus on fostering innovation and a more recent emphasis on “common prosperity”, and the EU’s on promoting ethical outcomes through protecting fundamental rights. Building on this comparative analysis, we consider the areas where the EU and China could learn from and improve upon each other’s approaches to AI governance to promote more ethical outcomes. We outline policy recommendations for both European and Chinese policymakers that would support them in achieving this aim.
Digital sovereignty seems to be something very important, given the popularity of the topic these days. True. But it also sounds like a technical issue, which concerns only specialists. False. Digital sovereignty, and the fight for it, touch everyone, even those who do not have a mobile phone or have never used an online service. To understand why, let me start with four episodes. I shall add a fifth shortly. June 18, 2020: The British government, after having failed to develop a centralised coronavirus app not based on the API provided by Google-Apple, 1 gave up, ditched the whole project (Burgess 2020) and accepted to start developing a new app in the future that would be fully compatible with the decentralised solution supported by the two American companies. This U-turn was not the first: Italy (Longo 2020) and Germany (Busvine and Rinke 2020; Lomas 2020) had done the same, only much earlier. Note that, in the context of an online webinar on COVID-19 contact tracing applications, organised by RENEW EUROPE (a liberal, pro-European political group of the European Parliament), Gary Davis, Global Director of Privacy & Law Enforcement Requests at Apple (and previously Deputy Commissioner at the Irish Data Protection Commissioner's Office), stated that
Technologies to rapidly alert people when they have been in contact with someone carrying the coronavirus SARS-CoV-2 are part of a strategy to bring the pandemic under control. Currently, at least 47 contact-tracing apps are available globally. They are already in use in Australia, South Korea and Singapore, for instance. And many other governments are testing or considering them. Here we set out 16 questions to assess whether — and to what extent — a contact-tracing app is ethically justifiable.
Health-care systems worldwide face increasing demand, a rise in chronic disease, and resource constraints. At the same time, the use of digital health technologies in all care settings has led to an expansion of data. For this reason, policy makers, politicians, clinical entrepreneurs, and computer and data scientists argue that a key part of health-care solutions will be artificial intelligence (AI), particularly machine learning. AI forms a key part of the National Health Service (NHS) Long-Term Plan (2019) in England, the US National Institutes of Health Strategic Plan for Data Science (2018), and China’s Healthy China 2030 strategy (2016). The willingness to embrace the potential future of medical care, expressed in these national strategies, is a positive development. Health-care providers should, however, be mindful of the risks that arise from AI’s ability to change the intrinsic nature of how health care is delivered. This paper outlines and discusses these potential risks.
Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defense strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but it can also facilitate new forms of attack on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.
Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.
The article develops a correctness theory of truth (CTT) for semantic information. After the introduction, in section two, semantic information is shown to be translatable into propositional semantic information (i). In section three, i is polarised into a query (Q) and a result (R), qualified by a specific context, a level of abstraction and a purpose. This polarisation is normalised in section four, where [Q + R] is transformed into a Boolean question and its relative yes/no answer [Q + A]. This completes the reduction of the truth of i to the correctness of A. In sections five and six, it is argued that (1) A is the correct answer to Q if and only if (2) A correctly saturates (in a Fregean sense) Q by verifying and validating it (in the computer science’s sense of “verification” and “validation”); that (2) is the case if and only if (3) [Q + A] generates an adequate model (m) of the relevant system (s) identified by Q; that (3) is the case if and only if (4) m is a proxy of s (in the computer science’s sense of “proxy”) and (5) proximal access to m commutes with the distal access to s (in the category theory’s sense of “commutation”); and that (5) is the case if and only if (6) reading/writing (accessing, in the computer science’s technical sense of the term) m enables one to read/write (access) s. The last section draws a general conclusion about the nature of CTT as a theory for systems designers, not just systems users.
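Purely as an illustration, and not part of the article itself, the core reduction can be sketched computationally: the truth of an item of semantic information becomes the correctness of a yes/no answer A to a Boolean question Q, where A is checked against a model m that serves as a proxy of the target system s, and correctness requires that posing Q to m agrees with posing Q to s. All names below are hypothetical.

```python
# Illustrative sketch (hypothetical names): reducing the truth of the
# semantic information "the beer is in the fridge" to the correctness
# of a yes/no answer A to a Boolean question Q.

def is_correct(answer, query, model, system):
    """A is the correct answer to Q iff answering Q against the proxy m
    agrees with A, and that proximal access to m commutes with distal
    access to the system s itself (the commutation condition)."""
    return answer == query(model) == query(system)

# A toy system s and an adequate model m that proxies it.
s = {"fridge": ["beer", "milk"]}
m = {"fridge": ["beer", "milk"]}

# Q: "Is the beer in the fridge?", polarised into a Boolean question.
Q = lambda world: "beer" in world["fridge"]

print(is_correct(True, Q, m, s))   # True: the answer "yes" is correct
print(is_correct(False, Q, m, s))  # False: the answer "no" is not
```

The commutation check is what distinguishes a merely internally consistent model from an adequate one: the same question must receive the same answer whether it is posed to m or to s.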
The paper introduces a new model of telepresence. First, it criticises the standard model of presence as epistemic failure, showing it to be inadequate. It then replaces it with a new model of presence as successful observability. It further provides reasons to distinguish between two types of presence, backward and forward. The new model is then tested against two ethical issues whose nature has been modified by the development of digital information and communication technologies, namely pornography and privacy, and shown to be effective.
Zan Boag: Technology in various forms has been a part of human life for some time now, but, as philosophers such as Heidegger argue, recently there has been a profound change in the nature of technology itself. What's so different about current technologies? Luciano Floridi: What is different is that it is no longer just a matter of interacting with the world by other means: a wheel rather than pushing stuff, or an engine rather than a horse. We have this new environment where we are spending more and more time – a digital environment, where agency is most successful because the technologies that we have are meant to interact successfully in a digital environment. Think of a fish in a swimming pool or in a lake. Well, we are kind of scuba diving now in the infosphere, whereas the artificial agents that we have, those are the fish – they live within an environment that is their environment. The digital interacting with the digital – software, databases, big data, algorithms, you name it – they are the natives, they are the locals. We are being pushed into an environment where we are scuba diving. You can't start imagining what it means for an artificial agent to interact with something that is made of its own same stuff.
Interview for Mercedes Benz Magazin
Interview with Marzia Apice for ANSA NEWS
Interview with Fabio Chiusi for L'Espresso, 26 February 2017
Interview with Antonio Dini
Interview with Antonio Dini for L'Impresa - Sole24 Ore - Part One
Interview with Luciano Floridi, Professor of Philosophy and Ethics of Information, Oxford University
by Agnese Bertello
For more than 100 videos of lectures, seminars, talks, interviews and debates covering topics discussed in the papers uploaded to academia.edu, please visit the YouTube channel:
The goal of the book is to present the latest research on the new challenges of data technologies. It will offer an overview of the social, ethical and legal problems posed by group profiling, big data and predictive analysis and of the different approaches and methods that can be used to address them. In doing so, it will help the reader to gain a better grasp of the ethical and legal conundrums posed by group profiling. The volume first maps the current and emerging uses of new data technologies and clarifies the promises and dangers of group profiling in real life situations. It then balances this with an analysis of how far the current legal paradigm grants group rights to privacy and data protection, and discusses possible routes to addressing these problems. Finally, an afterword gathers the conclusions reached by the different authors and discusses future perspectives on regulating new data technologies.
Springer
Online Service Providers (OSPs)—such as AOL, Facebook, Google, Microsoft, and Twitter—are increasingly expected to act as good citizens, by aligning their goals with the needs of societies, supporting the rights of their users (Madelin 2011; Taddeo and Floridi 2015), and performing their tasks according to “principles of efficiency, justice, fairness, and respect of current social and cultural values” (McQuail 1992, 47). These expectations raise questions as to what kind of responsibilities OSPs should bear, and which ethical principles should guide their actions. Addressing these questions is a crucial step to understand and shape the role of OSPs in mature information societies (Floridi 2016). Without a clear understanding of their responsibilities, we risk ascribing to OSPs a role that is either too powerful or not independent enough. The FBI vs. Apple case,1 Google’s and Yahoo!’s experiences in China,2 or the involvement of OSPs within the NSA’s PRISM program3 offer good examples of the case in point. However, defining OSPs’ responsibilities is challenging. Three aspects are particularly problematic: disentangling the implications of OSPs’ gatekeeping role in information societies; defining fundamental principles to guide OSPs’ conduct; and contextualising OSPs’ role within the broader changes brought about by the information revolution.
This is the Introduction to The Routledge Handbook of Philosophy of Information (Routledge Handbooks in Philosophy) Hardcover, 2016
Our computers are becoming ever faster, smaller, and cheaper; we produce enough data every day to fill all the libraries in the USA; and on average every person today carries at least one object connected to the Internet. We are currently experiencing an explosive development of information and communication technologies. Luciano Floridi, one of the world's leading information theorists, shows in this masterful book that, after the revolutions of physics (Copernicus), biology (Darwin), and psychology (Freud), we now find ourselves in the midst of a fourth revolution that is changing our entire lives. The separation between online and offline is fading, as we increasingly interact with smart, responsive objects to manage our everyday lives or to communicate with one another. Humans are creating a new environment for themselves, an "infosphere". The personality profiles we generate online are beginning to feed back into our everyday lives, so that we increasingly live an "onlife". Information and communication technologies shape how we shop, work, look after our health, maintain relationships, spend our leisure time, engage in politics, and even how we wage war. But are these developments really to our advantage? What are their risks? Floridi points the way to a new ethical and ecological way of thinking that can meet the challenges of the digital revolution and the information society. A book of great topicality and theoretical brilliance.
This book presents the latest research on the challenges and solutions affecting the equilibrium between freedom of speech, freedom of information, information security, and the right to informational privacy. Given the complexity of the topics addressed, the book shows how old legal and ethical frameworks may need to be not only updated, but also supplemented and complemented by new conceptual solutions. Neither a conservative attitude (“more of the same”) nor a revolutionary zeal (“never seen before”) is likely to lead to satisfactory solutions. Instead, more reflection and better conceptual design are needed, not least to harmonise different perspectives and legal frameworks internationally. The focus of the book is on how we may reconcile high levels of information security with robust degrees of informational privacy, also in connection with recent challenges presented by phenomena such as “big data” and security scandals, as well as new legislation initiatives, such as those concerning “the right to be forgotten” and the use of personal data in biomedical research. The book seeks to offer analyses and solutions of the new tensions, in order to build a fair, shareable, and sustainable balance in this vital area of human interactions.
This book offers an overview of the ethical problems posed by Information Warfare, and of the different approaches and methods used to solve them, in order to provide the reader with a better grasp of the ethical conundrums posed by this new form of warfare.
The volume is divided into three parts, each comprising four chapters. The first part focuses on issues pertaining to the concept of Information Warfare and the clarifications that need to be made in order to address its ethical implications. The second part collects contributions focusing on Just War Theory and its application to the case of Information Warfare. The third part adopts alternative approaches to Just War Theory for analysing the ethical implications of this phenomenon. Finally, an afterword by Neelie Kroes - Vice President of the European Commission and European Digital Agenda Commissioner - concludes the volume. Her contribution describes the interests and commitments of the European Digital Agenda with respect to research for the development and deployment of robots in various circumstances, including warfare.
Luciano Floridi develops an original ethical framework for dealing with the new challenges posed by Information and Communication Technologies (ICTs). ICTs have profoundly changed many aspects of life, including the nature of entertainment, work, communication, education, health care, industrial production and business, social relations, and conflicts. They have had a radical and widespread impact on our moral lives and on contemporary ethical debates. Privacy, ownership, freedom of speech, responsibility, technological determinism, the digital divide, and pornography online are only some of the pressing issues that characterise the ethical discourse in the information society. They are the subject of Information Ethics (IE), the new philosophical area of research that investigates the ethical impact of ICTs on human life and society.
Since the seventies, IE has been a standard topic in many curricula. In recent years, there has been a flourishing of new university courses, international conferences, workshops, professional organizations, specialized periodicals and research centres. However, investigations have so far been largely influenced by professional and technical approaches, addressing mainly legal, social, cultural and technological problems. This book is the first philosophical monograph entirely and exclusively dedicated to it.
Floridi lays down, for the first time, the conceptual foundations for IE. He does so systematically, by pursuing three goals:
a) a metatheoretical goal: it describes what IE is, its problems, approaches and methods;
b) an introductory goal: it helps the reader to gain a better grasp of the complex and multifarious nature of the various concepts and phenomena related to computer ethics;
c) an analytic goal: it answers several key theoretical questions of great philosophical interest, arising from the investigation of the ethical implications of ICTs.
Although entirely independent of Floridi's previous book, The Philosophy of Information (OUP, 2011), The Ethics of Information complements it as new work on the foundations of the philosophy of information.
Who are we, and how do we relate to each other? Luciano Floridi, one of the leading figures in contemporary philosophy, argues that the explosive developments in Information and Communication Technologies (ICTs) are changing the answer to these fundamental human questions.
As the boundaries between life online and offline break down, and we become seamlessly connected to each other and surrounded by smart, responsive objects, we are all becoming integrated into an "infosphere". Personas we adopt in social media, for example, feed into our 'real' lives so that we begin to live, as Floridi puts it, "onlife". Following those led by Copernicus, Darwin, and Freud, this metaphysical shift represents nothing less than a fourth revolution.
"Onlife" defines more and more of our daily activity - the way we shop, work, learn, care for our health, entertain ourselves, conduct our relationships; the way we interact with the worlds of law, finance, and politics; even the way we conduct war. In every department of life, ICTs have become environmental forces which are creating and transforming our realities. How can we ensure that we shall reap their benefits? What are the implicit risks? Are our technologies going to enable and empower us, or constrain us? Floridi argues that we must expand our ecological and ethical approach to cover both natural and man-made realities, putting the 'e' in an environmentalism that can deal successfully with the new challenges posed by our digital technologies and information society.
- Result of “the Onlife Initiative,” a one-year project funded by the European Commission to study the deployment of ICTs and its effects on the human condition
- Inspires reflection on the ways in which a hyperconnected world forces the rethinking of the conceptual frameworks on which policies are built
- Draws upon the work of a group of scholars from a wide range of disciplines, including anthropology, cognitive science, computer science, law, philosophy, and political science
What is the impact of information and communication technologies (ICTs) on the human condition? In order to address this question, in 2012 the European Commission organized a research project entitled The Onlife Initiative: concept reengineering for rethinking societal concerns in the digital transition. This volume collects the work of the Onlife Initiative. It explores how the development and widespread use of ICTs have a radical impact on the human condition.
ICTs are not mere tools but rather social forces that are increasingly affecting our self-conception (who we are); our mutual interactions (how we socialise); our conception of reality (our metaphysics); and our interactions with reality (our agency). In each case, ICTs have a huge ethical, legal, and political significance, yet one with which we have begun to come to terms only recently.
The impact exercised by ICTs is due to at least four major transformations: the blurring of the distinction between reality and virtuality; the blurring of the distinction between human, machine and nature; the reversal from information scarcity to information abundance; and the shift from the primacy of stand-alone things, properties, and binary relations, to the primacy of interactions, processes and networks.
Such transformations are testing the foundations of our conceptual frameworks. Our current conceptual toolbox is no longer fit to address new ICT-related challenges. This is not only a problem in itself. It is also a risk, because the lack of a clear understanding of our present time may easily lead to negative projections about the future. The goal of The Manifesto, and of the whole book that contextualises it, is therefore to contribute to updating our philosophy. It is a constructive goal. The book is meant to be a positive contribution to rethinking the philosophy on which policies are built in a hyperconnected world, so that we may have a better chance of understanding our ICT-related problems and solving them satisfactorily.
The Manifesto launches an open debate on the impacts of ICTs on public spaces, politics and societal expectations toward policymaking in the Digital Agenda for Europe’s remit. More broadly, it helps start a reflection on the way in which a hyperconnected world calls for rethinking the referential frameworks on which policies are built.
Luciano Floridi presents a book that will set the agenda for the philosophy of information. PI is the philosophical field concerned with (1) the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation, and sciences, and (2) the elaboration and application of information-theoretic and computational methodologies to philosophical problems. This book lays down, for the first time, the conceptual foundations for this new area of research. It does so systematically, by pursuing three goals. Its metatheoretical goal is to describe what the philosophy of information is, its problems, approaches, and methods. Its introductory goal is to help the reader to gain a better grasp of the complex and multifarious nature of the various concepts and phenomena related to information. Its analytic goal is to answer several key theoretical questions of great philosophical interest, arising from the investigation of semantic information.
We live an information-soaked existence - information pours into our lives through television, radio, books, and of course, the Internet. Some say we suffer from 'infoglut'. But what is information? The concept of 'information' is a profound one, rooted in mathematics, central to whole branches of science, yet with implications on every aspect of our everyday lives: DNA provides the information to create us; we learn through the information fed to us; we relate to each other through information transfer - gossip, lectures, reading. Information is not only a mathematically powerful concept, but its critical role in society raises wider ethical issues: who owns information? Who controls its dissemination? Who has access to information? Luciano Floridi, a philosopher of information, cuts across many subjects, from a brief look at the mathematical roots of information - its definition and measurement in 'bits'- to its role in genetics (we are information), and its social meaning and value. He ends by considering the ethics of information, including issues of ownership, privacy, and accessibility; copyright and open source. For those unfamiliar with its precise meaning and wide applicability as a philosophical concept, 'information' may seem a bland or mundane topic. Those who have studied some science or philosophy or sociology will already be aware of its centrality and richness. But for all readers, whether from the humanities or sciences, Floridi gives a fascinating and inspirational introduction to this most fundamental of ideas.
Information and Communication Technologies (ICTs) have profoundly changed many aspects of life, including the nature of entertainment, work, communication, education, healthcare, industrial production and business, social relations and conflicts. They have had a radical and widespread impact on our moral lives and hence on contemporary ethical debates. The Cambridge Handbook of Information and Computer Ethics provides an ambitious and authoritative introduction to the field, with discussions of a range of topics including privacy, ownership, freedom of speech, responsibility, technological determinism, the digital divide, cyber warfare, and online pornography. It offers an accessible and thoughtful survey of the transformations brought about by ICTs and their implications for the future of human life and society, for the evaluation of behaviour, and for the evolution of moral values and rights. It will be a valuable book for all who are interested in the ethical aspects of the information society in which we live.
The Cambridge Handbook of Information and Computer Ethics provides an ambitious and authoritative introduction to the field, with discussions of a range of topics including privacy, ownership, freedom of speech, responsibility, technological determinism, the digital divide, and online pornography.
Review
'Philosophy and Computing is a stimulating and ambitious book that helps lay a foundation for the new and vitally important field of Philosophy of Information. This is a worthy addition to the brand new and rapidly developing field of Philosophy of Information, a field that will revolutionise philosophy in the Information Age.' - Terrell Ward Bynum, Southern Connecticut State University
'What are the philosophical implications of computers and the internet? A pessimist might see these new technologies as leading to the creation of vast encyclopaedic databases far exceeding the capacities of any individual. Yet Luciano Floridi takes a different view, arguing ingeniously for the optimistic conclusion that the computer revolution will lead instead to a reversal of the trend towards specialisation and a return to the Renaissance mind.' - Donald Gillies, King's College London
'In his seminal book, Philosophy and Computing, Luciano Floridi provides a rich combination of technical information and philosophical insights necessary for the emerging field of philosophy and computing.' - James Moor, Dartmouth College
'Luciano Floridi's book discusses the most important and the latest branches of research in information technology. He approaches the subject from a novel philosophical viewpoint, while demonstrating a strong command of the relevant technicalities of the subject.' - Hava T. Siegelman, Technion
Product Description
Philosophy and Computing is the first accessible and comprehensive philosophical introduction to Information and Communication Technology.
Review
"The Blackwell Guide to the Philosophy of Computing and Information is a rich resource for an important, emerging field within philosophy. This excellent volume covers the basic topics in depth, yet is written in a style that is accessible to non–philosophers. There is no other book that assembles and explains systematically so much information about the diverse aspects of philosophy of computing and information. I believe this book will serve both as an authoritative introduction to the field for students and as a standard reference for professionals for years to come. I highly recommend it." James Moor, Dartmouth College
"There are contributions from a range of respected academics, many of them authorities in their field, and this certainly anchors the work in a sound scholarly foundation. The scope of the content, given the youthfulness of the computing era, is significant. The variety of the content too is remarkable. In summary this is a wonderfully fresh look at the world of computing and information, which requires its own philosophy in testimony that there are some real issues that can exercise the mind." Reference Reviews
"The judicious choice of topics, as well as the degree of detail in the various chapters, are just what it takes neither to deter the average reader requiring this Guide, nor to make it unfeasible placing this volume in the hands of students. Floridi′s book is clearly a valuable addition to a worthy series." Pragmatics & Cognition
Product Description
This Guide provides an ambitious state–of–the–art survey of the fundamental themes, problems, arguments and theories constituting the philosophy of computing.
* A complete guide to the philosophy of computing and information.
* Comprises 26 newly–written chapters by leading international experts.
* Provides a complete, critical introduction to the field.
* Each chapter combines careful scholarship with an engaging writing style.
* Includes an exhaustive glossary of technical terms.
* Ideal as a course text, but also of interest to researchers and general readers.
Synopsis
Computing and information, and their philosophy in the broad sense, play a most important scientific, technological and conceptual role in our world. This book collects together, for the first time, the views and experiences of some of the visionary pioneers and most influential thinkers in such a fundamental area of our intellectual development. This is yet another gem in the 5 Questions Series by Automatic Press / VIP.
Review
Floridi's complete and rigorous book constitutes a major contribution to our knowledge of the transmission and influence of Sextus' writings, which makes it an essential work of reference for any study in this field. (The British Journal for the History of Philosophy)
A fascinating read for anyone interested in the history of Scepticism. (Greece & Rome)
Can knowledge provide its own justification? This sceptical challenge - known as the problem of the criterion - is one of the major issues in the history of epistemology, and this volume provides its first comprehensive study, over a span of time that goes from Sextus Empiricus to Quine. After an essential introduction to the notions of knowledge and of philosophy of knowledge, the book provides a detailed reconstruction of the history of the problem. There follows a conceptual analysis of its logical features, and a comparative examination of the solutions that have been suggested in the course of the history of philosophy in order to overcome it, from Descartes to Popper. In this context, an indirect approach to the problem of the criterion is defended as the most successful strategy against the sceptical challenge.
A chapter in Archives in Liquid Times, edited by Frans Smit, Arnoud Glaudemans, and Rienk Jonker
We increasingly rely on AI-related applications (smart technologies) to perform tasks that would be simply impossible for unaided or unaugmented human intelligence. This is possible because the world is becoming an infosphere increasingly well adapted to AI’s limited capacities. Being able to imagine what adaptive demands this process will place on humanity may help to devise technological solutions that can lower their anthropological costs.
British Journal for the History of Philosophy, 1995
... See now That nothing is known, ed. by Elaine Limbrick, Eng. trans. by Douglas FS Thomson (Cambridge: Cambridge UP, 1988). ... Cartesianism is among the sources of Villemandy's epistemological optimism and 'lay' faith in the intelligibility of the universe. ...
IEEE Transactions on Visualization and Computer Graphics, 2000
The Onlife Manifesto, 2014
Rivista elettronica di filosofia-Registrazione n. …, 1996
... Bibliotec@SWIF. Linee di Ricerca – SWIF. Editorial coordination: Gian Maria Greco. Technical supervision: Fabrizio Martina. Supervision: Luciano Floridi. Editorial staff: Eva Franchino, Federica Scali. LdR is an e-book, intended as a special issue of the journal SWIF. ...
Journal of the History of Philosophy, 2001
The Moral Status of Technical Artefacts, 2014
Synthese, 2009
Abstract: Various conceptual approaches to the notion of information can currently be traced in the literature in logic and formal epistemology. A main issue of disagreement is the attribution of truthfulness to informational data, the so-called Veridicality Thesis (Floridi 2005). The ...
Protection of Information and the Right to Privacy - A New Equilibrium?, 2014
Law, Governance and Technology Series, 2014
New Challenges to Philosophy of Science, 2013
Citeseer
... analyses, and account for most of the literature in CyberEthics (see for example Spinello and Tavani [2001] and other chapters in the present volume). ... concerning the self through personal homepages (Chandler [1998], see also Adamic and Adar [online]). ...
The Electronic Library, 1996
In 1963 Arthur C. Clarke published a story called Dial F for Frankenstein, in which he imagined the following scenario. On 31 January 1974, the last communications satellite is launched in order to achieve, at last, full interconnection of the whole, international telephone system. ...
Close Engagements with Artificial Companions, 2010
The Cambridge Handbook of Information and Computer Ethics, 2010
InCID: Rev. Ci. Inf. Doc., 2010
Thinking Machines and the Philosophy of Computer Science
Horizons philosophiques
37th Conference on Uncertainty in Artificial Intelligence, 2021
Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We provide a sound and complete algorithm for computing explanatory factors with respect to a given context, and demonstrate its flexibility and competitive performance against state-of-the-art alternatives on various tasks.
La Filosofia dell’Informazione: una sfida etica ed epistemologica (The Philosophy of Information: An Ethical and Epistemological Challenge)
Workshop with Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford, Director of Research at the Oxford Internet Institute, and Copernicus Visiting Professor, IUSS Ferrara 1391
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
Pre-Print, 2019
Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be 'Artificial Intelligence' (AI)-particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by "robot doctors." Instead, it is an argument that rests on the classic counterfactual definition of AI as an umbrella term for a range of techniques that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. Automation of this nature could offer great opportunities for the improvement of healthcare services and ultimately patients' health by significantly improving human clinical capabilities in diagnosis, drug discovery, epidemiology, personalised medicine, and operational efficiency. However, if these AI solutions are to be embedded in clinical practice, then at least three issues need to be considered: the technical possibilities and limitations; the ethical, regulatory and legal framework; and the governance framework. In this article, we report on the results of a systematic analysis designed to provide a clear overview of the second of these elements: the ethical, regulatory and legal framework. We find that ethical issues arise at six levels of abstraction (individual, interpersonal, group, institutional, sectoral, and societal) and can be categorised as epistemic, normative, or overarching.
We conclude by stressing how important it is that the ethical challenges raised by implementing AI in healthcare settings are tackled proactively rather than reactively and map the key considerations for policymakers to each of the ethical concerns highlighted.
It has been suggested that to overcome the challenges facing the UK’s National Health Service (NHS) of an ageing population and reduced available funding, the NHS should be transformed into a more informationally mature and heterogeneous organisation, reliant on data-based and algorithmically-driven interactions between human, artificial, and hybrid (semi-artificial) agents. This transformation process would offer significant benefit to patients, clinicians, and the overall system, but it would also rely on a fundamental transformation of the healthcare system in a way that poses significant governance challenges. In this article, we argue that a fruitful way to overcome these challenges is by adopting a pro-ethical approach to design that analyses the system as a whole, keeps society-in-the-loop throughout the process, and distributes responsibility evenly across all nodes in the system.
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Wiener, 1960; Samuel, 1960). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles-the 'what' of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability)-rather than on practices, the 'how.' Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Therefore, our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers 'apply ethics' at each stage of the pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.
Antioxidants and Redox Signaling, 2017
Significance. The environment can elicit biological responses such as oxidative stress (OS) and inflammation as a consequence of chemical, physical or psychological changes. As population studies are essential for establishing these environment-organism interactions, biomarkers of oxidative stress or inflammation are critical in formulating mechanistic hypotheses. Recent advances. By using examples of stress induced by various mechanisms, we focus on the biomarkers that have been used to assess oxidative stress and inflammation in these conditions. We discuss the difference between biomarkers that are the result of a chemical reaction (such as lipid peroxides or oxidized proteins that are a result of the reaction of molecules with reactive oxygen species, ROS) and those that represent the biological response to stress, such as the transcription factor NRF2 or inflammation and inflammatory cytokines. Critical issues. The high-throughput and holistic approaches to biomarker discovery used extensively in large-scale molecular epidemiological exposome studies are also discussed in the context of human exposure to environmental stressors. Future directions. We propose to consider the role of biomarkers as signs and distinguish between signs that are just indicators of biological processes and proxies that one can interact with and modify the disease process.
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of AI. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a 'good AI society'. To do so, we examine how each report addresses the following three topics: (a) the development of a 'good AI society'; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports address adequately various ethical, social, and economic topics, but fall short of providing an overarching political vision and long-term strategy for the development of a 'good AI society'. In order to contribute to filling this gap, in the conclusion we suggest a two-pronged approach.
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
Abstracts are invited for the workshop “The Ethics of Data Science: The Landscape for the Alan Turing Institute”. This event is being organised as part of a series of activities promoted by the Alan Turing Institute (ATI) in order to define the national and international landscape around data science and to support the ATI’s scientific programme.
In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents are interacting with each other is rather poor. In this article, we analyze collaborative bots by studying the interactions between bots that edit articles on Wikipedia. We find that, although
In our information societies, we increasingly delegate tasks and decisions to automated systems, devices and agents that mediate human relationships, by taking decisions and acting on the basis of algorithms. Their increased intelligence, autonomous behavior and connectivity are crucially changing the living conditions of human beings, as well as altering traditional concepts and ways of understanding reality. Algorithms are directed at solving problems whose relevance and timeliness are not always detectable. They are also meant to solve those problems through procedures that are not always visible and assessable on their own. In addition, technologies based on algorithmic procedures increasingly infer personal information from aggregated data, thus profiling human beings and anticipating their expectations, views and behaviors. This may have normative, if not discriminatory, consequences. While algorithmic procedures and applications are meant to serve human needs, they risk creating an environment in which human beings tend to develop adaptive strategies by conforming their behaviour to the expected output of the procedures, with serious distortive effects. Against this backdrop, little room is often left for a process of rational argumentation able to challenge the results of algorithmic procedures by putting into question some of their hidden assumptions or by taking into account some neglected aspects of the problems under consideration. At the same time, it is widely recognized that scientific and social advances crucially depend on such an open and free critical discussion.
Recommendations to myself
This is a unique opportunity for early career researchers to join The Alan Turing Institute. The Alan Turing Institute (ATI) is the UK’s new national institute for data science, established to bring together world-leading expertise to provide leadership in the emerging field of data science. The Institute has been founded by the universities of Cambridge, Edinburgh, Oxford, UCL and Warwick and the EPSRC.
This is a targeted call, by which we intend to recruit researchers in subjects currently underrepresented by our fellowship cohort. Fellowships are available for 3 years with the potential for an additional 2 years of support following interim review. Fellows will pursue research based at the Institute hub in the British Library, London. Fellowships will be awarded to individual candidates and fellows will be employed by a joint venture partner university (Cambridge, Edinburgh, Oxford, UCL or Warwick).
Workshop with Luciano Floridi
Professor of Philosophy and Ethics of Information at the University of Oxford, Director of Research at the Oxford Internet Institute
Copernicus Visiting Professor, IUSS Ferrara 1391
La Filosofia dell’Informazione: una sfida etica ed epistemologica (The Philosophy of Information: An Ethical and Epistemological Challenge), Ferrara, 24–26 March and 28–30 April 2016
Luciano Floridi, Oxford University
This theme issue has the founding ambition of landscaping Data Ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data Ethics builds on the foundation provided by Computer and Information Ethics but, at the same time, refines the approach endorsed so far in this research field by shifting the Level of Abstraction of ethical enquiries from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even data that never translate directly into information but can be used, for example, to support actions or generate behaviours. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations (the interactions among hardware, software, and data) rather than on the variety of digital technologies that enable them. And it emphasises the complexity of the ethical challenges posed by Data Science. Because of such complexity, Data Ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of Data Science and its applications within a consistent, holistic, and inclusive framework. Only as a macroethics will Data Ethics provide the solutions that can maximise the value of Data Science for our societies, for all of us, and for our environments.
The debate on whether and how the Internet can protect and foster human rights has become a defining issue of our time. This debate often focuses on Internet governance from a regulatory perspective, underestimating the influence and power of the governance of the Internet's architecture. The technical decisions made by the Internet Standard Developing Organisations (SDOs) that build and maintain the Internet's technical infrastructure influence how information flows. They rearrange the shape of the technically mediated public sphere, including which rights it protects and which practices it enables. In this article, we contribute to the debate on SDOs' ethical responsibility to bring their work in line with human rights. We defend three theses. First, SDOs' work is inherently political. Second, the Internet Engineering Task Force (IETF), one of the most influential SDOs, has a moral obligation to ensure that its work is coherent with, and fosters, human rights. Third, the IETF should enable the actualisation of human rights through the protocols and standards it designs, by implementing a responsibility-by-design approach to engineering. We conclude by presenting some initial recommendations on how to ensure that the work carried out by the IETF may enable human rights.