Introduction to the Topical Collection on AI and Responsibility
Related papers
Governance of Responsible AI: From Ethical Guidelines to Cooperative Policies
2022
The increasingly pervasive role of Artificial Intelligence (AI) in our societies is radically changing the way social interaction takes place within all fields of knowledge. The obvious opportunities in terms of accuracy, speed and originality of research are accompanied by questions about the possible risks, and the consequent responsibilities, involved in such a disruptive technology. In recent years, this twofold aspect has led to an increase in analyses of the ethical and political implications of AI. As a result, there has been a proliferation of documents that seek to define the strategic objectives of AI together with the ethical precautions required for its acceptable development and deployment. Although the number of documents is certainly significant, doubts remain as to whether they can effectively safeguard democratic decision-making processes. Indeed, a common feature of the national strategies and ethical guidelines published in recent years is that they only timidly address how to integrate civil society into the selection of AI objectives. Although scholars increasingly advocate the inclusion of civil society, it remains unclear which modalities should be selected. If both national strategies and ethics guidelines appear to neglect the necessary role of democratic scrutiny in identifying the challenges, objectives, strategies and appropriate regulatory measures that such a disruptive technology should undergo, the question is then: what measures can we advocate that are able to overcome such limitations? Considering the necessity of treating AI holistically as a social object, what theoretical framework can we adopt in order to implement a model of governance? What conceptual methodology can we develop that offers fruitful insights into the governance of AI?
Drawing on the insights of classical pragmatist scholars, we propose a framework of democratic experimentation based on the method of social inquiry. In this article, we first summarize some of the main points of discussion around the potential societal, ethical and political issues of AI systems. We then identify the main answers and solutions by analyzing current national strategies and ethics guidelines. After showing the theoretical and practical limits of these approaches, we outline an alternative proposal that can help strengthen the active role of society in the discussion about the role and extent of AI systems.
Progressing Towards Responsible AI
ArXiv, 2020
The field of Artificial Intelligence (AI), and the Machine Learning area in particular, relies on a wide range of performance metrics and benchmark data sets to assess the problem-solving effectiveness of its solutions. However, the emergence of research centres, projects and institutions addressing AI solutions from a multidisciplinary and multi-stakeholder perspective suggests a new approach to assessment, comprising ethical guidelines, reports, tools and frameworks that help both academia and business move towards a responsible conceptualisation of AI. They all highlight the relevance of three key aspects: (i) enhancing cooperation among the different stakeholders involved in the design, deployment and use of AI; (ii) promoting multidisciplinary dialogue by including different domains of expertise in this process; and (iii) fostering public engagement to maximise a trusted relationship with new technologies and practitioners. In this paper, we introduce the Observatory on Society an...
Ethics, Artificial Intelligence and Responsibility: Contemporary Challenges
The purpose of this paper is to approach, from a philosophical perspective, the relationship between responsibility and artificial intelligence, emphasizing some contemporary challenges of this relationship. To this end, we start with a brief exposition of the notion of responsibility from a philosophical perspective, highlighting its social aspect. We then analyze responsibility in relation to objects developed with artificial intelligence (AI), pointing out that intelligent machines are programmed to think, and to imitate or replace human intelligence, in order to optimize problem solving and replace humans more efficiently in actions that are difficult or demand great effort. Finally, we address some contemporary challenges concerning the ethics of artificial intelligence and responsibility. From the reflections presented, we consider that responsibility for the actions of autonomous objects and their consequences lies with their manufacturers, programmers, sellers, or users, since it is not possible to attribute personality to an automaton.
AI Ethics: Chosen Challenges for Contemporary Societies and Technological Policymaking
2023
Artificial Intelligence (AI) is a rapidly advancing technology that permeates human life at various levels. It evokes hopes for a better, easier, and more exciting life, while also instilling fears about a future without humans. AI has become part of our daily lives, supporting fields such as medicine, customer service, finance, and justice systems; providing entertainment; and driving innovation across diverse fields of knowledge. Some even argue that we have entered the “AI era.” However, AI is not solely a matter of technological progress: we already witness its positive and negative impact on individuals and societies. Hence, it is crucial to examine the primary challenges posed by AI, which are the subject of AI ethics. In this paper, I present the key challenges that have emerged in the literature and require ethical reflection. These include the issues of data privacy and security, the problem of AI biases resulting from social, technical, or socio-technical factors, and the challenges associated with using AI to predict human behavior (particularly in the context of the justice system). I also discuss existing approaches to AI ethics within the framework of technological regulation and policymaking, presenting concrete ways in which ethics can be implemented in practice. Drawing on the functioning of other scientific and technological fields, such as gene editing and the automobile and aviation industries, I highlight the lessons we can learn from them and apply to how AI is introduced in societies. In the final part of the paper, I analyze two case studies to illustrate the ethical challenges related to recruitment algorithms and risk assessment tools in the criminal justice system. The objective of this work is to contribute to the sustainable development of AI by promoting human-centered, societal, and ethical approaches to its advancement. Such an approach seeks to maximize the benefits derived from AI while simultaneously mitigating its diverse negative consequences.
Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them
Philosophy & Technology
The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems (gaps in culpability, moral accountability, public accountability, and active responsibility), caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also arise with non-learning systems. The paper clarifies which aspects of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It proposes a critical review of partial and non-satisfactory attempts to address t...
Artificial Intelligence and Morality: A Social Responsibility
Journal of Intelligence Studies in Business
Both the world and its technology are changing more quickly than ever. As the deployment of artificial intelligence becomes more widespread, its design and algorithms are being called into question, raising moral and ethical issues. Artificial intelligence is used in a variety of industries to improve skill, service, and performance, and it accordingly has both proponents and opponents. AI derives actions or knowledge from a given collection of data, so there is always a chance that the data contain some inaccurate information. Since artificial intelligence is created by scientists and engineers, it will always present issues of accountability, responsibility, and system reliability. At the same time, artificial intelligence holds great potential for economic development, societal advancement, and improved human security and safety.
Ethical Implications of AI: Balancing Innovation and Responsibility
The proliferation of Artificial Intelligence (AI) technologies in various sectors has raised profound ethical concerns regarding bias, privacy, and accountability. This paper examines these ethical implications, explores current regulatory and ethical frameworks, and proposes strategies to foster responsible AI development. Through a comprehensive literature review and analysis of case studies, the study identifies critical ethical challenges and emphasizes the importance of proactive ethical guidelines to mitigate risks. The findings underscore the need for interdisciplinary collaboration and regulatory oversight to ensure AI innovations are ethically sound and beneficial to society.
Towards a framework for understanding societal and ethical implications of Artificial Intelligence
2020
Artificial Intelligence (AI) is one of the most discussed technologies today. There are many innovative applications, such as the diagnosis and treatment of cancer, customer experience, new business, education, modelling the propagation of contagious diseases, and optimizing the management of humanitarian catastrophes. However, with all those opportunities also comes great responsibility to ensure the good and fair practice of AI. The objective of this paper is to identify the main societal and ethical challenges implied by a massive uptake of AI. We have surveyed the literature for the most common challenges and classified them into seven groups: 1) Non-desired effects, 2) Liability, 3) Unknown consequences, 4) Relations between people and robots, 5) Concentration of power and wealth, 6) Intentional bad uses, and 7) AI for weapons and warfare. The challenges should be dealt with in different ways depending on their origin; some have technological solutions, while others require ethical, societal, or political answ...
Artificial intelligence (AI) is rapidly reshaping our world. As AI systems become increasingly autonomous and integrated into various sectors, fundamental ethical issues such as accountability, transparency, bias, and privacy are exacerbated or morph into new forms. This introduction provides an overview of the current ethical landscape of AI. It explores the pressing need to address biases in AI systems, protect individual privacy, ensure transparency and accountability, and manage the broader societal impacts of AI on labour markets, education, and social interactions. It also highlights the global nature of AI's challenges, such as its environmental impact and security risks, stressing the importance of international collaboration and culturally sensitive ethical guidelines. It then outlines three unprecedented challenges AI poses to copyright and intellectual property rights; individual autonomy through AI's "hypersuasion"; and our understanding of authenticity, originality, and creativity through the transformative impact of AI-generated content. The conclusion emphasises the importance of ongoing critical vigilance, imaginative conceptual design, and collaborative efforts between diverse stakeholders to deal with the ethical complexities of AI and shape a sustainable and socially preferable future. It underscores the crucial role of philosophy in identifying and analysing the most significant problems and designing convincing and feasible solutions, calling for a new, engaged, and constructive approach to philosophical inquiry in the digital age.