Wendell Wallach - Academia.edu
Papers by Wendell Wallach
Proceedings of the IEEE, 2019
Interaction Studies, 2010
The current debate over technological unemployment sacrifices significant analytic value because it is one-sided, limited in scope, and sequential. We show that analyzing technological innovations in parallel with apparently independent socioeconomic innovations and trends offers important analytical benefits. Our focus is on socioeconomic innovations and trends that standardize education, workplace requirements, and culture. A highly standardized workplace is not only more suitable for international outsourcing; it is also more suitable for machine labor. In this context, we identify five specific research questions that would benefit from parallel analysis and scenarios. We also introduce the concepts "functional equivalency" and "functional singularity" (in juxtaposition to technological singularity) to provide semantic tools that emphasize the importance of an integrated approach, capable of tracking and analyzing two interacting and potentially converging tr...
Purpose – In spite of highly publicized competitions where computers have prevailed over humans, the intelligence of computer systems still remains quite limited in comparison to that of humans. Present-day computers provide plenty of information but lack wisdom. The purpose of this paper is to investigate whether reliance on computers with limited intelligence might undermine the quality of the education students receive. Design/methodology/approach – Using a conceptual approach, the authors take the performance of IBM's Watson computer against human quiz competitors as a starting point to explore how society, and especially education, might change in the future when everyone has access to desktop technology to access information. They explore the issue of placing excessive trust in such machines without the capacity to evaluate the quality and reliability of the information provided. Findings – The authors find that the day when computing machines surpass human intelligence is muc...
The challenge of designing computer systems and robots with the ability to make moral judgments has stepped out of science fiction and moved into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, ethical ALife, or computational morality. We will describe the challenges facing the development of ethical agents and discuss the contributions that ALife can make to meeting these challenges.
A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing artificial morality and the differing criteria for success that are appropriate to different strategies.
The accelerating pace of emerging technologies such as AI has revealed a total mismatch between existing governmental approaches and what is needed for effective ethical/legal oversight. To address this “pacing gap,” the authors proposed governance coordinating committees (GCCs) in 2015 as a new, more agile approach to the coordinated oversight of emerging technologies. In this paper, we first briefly reintroduce the reasons why AI and robotics require more agile governance, and the potential role of the GCC model in meeting that need. Second, we flesh out the roles of government, engineering, and ethics in forging a comprehensive approach to the oversight of AI/robotics mediated by a GCC. We argue for an international GCC with complementary regional bodies in light of the transnational nature of AI concerns and risks. We also propose a series of new mechanisms for enforcing (directly or indirectly) “soft law” approaches for AI through coordinated institutional controls by insurers, jo...
What roles or functions does consciousness fulfill in the making of moral decisions? Will artificial agents capable of making appropriate decisions in morally charged situations require machine consciousness? Should the capacity to make moral decisions be considered an attribute essential for being designated a fully conscious agent? Research on the prospects for developing machines capable of making moral decisions and research on machine consciousness have developed as independent fields of inquiry. Yet there is significant overlap. Both fields are likely to progress through the instantiation of systems with artificial general intelligence (AGI). Certainly special classes of moral decision making will require attributes of consciousness such as being able to empathize with the pain and suffering of others. But in this article we will propose that consciousness also plays a functional role in making most if not all moral decisions. Work by the authors of this article with LIDA, a c...
The implementation of moral decision-making abilities in AI is a natural and necessary extension to the social mechanisms of autonomous software agents and androids. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems. The architectures for morally intelligent agents fall within two broad approaches: the top-down imposition of ethical theories, and the bottom-up building of systems that aim at specified goals or standards which may or may not be specified in explicitly theoretical terms. In this paper we wish to provide some direction for continued research by outlining the value and limitations inherent in each of these approaches.
Ethics of Artificial Intelligence
Implementing sensitivity to norms, laws, and human values in computational systems has transitioned from philosophical reflection to an actual engineering challenge. The “value alignment” approach to dealing with superintelligent AIs tends to employ computationally friendly concepts such as utility functions, system goals, agent preferences, and value optimizers, which, this chapter argues, do not have intrinsic ethical significance. This chapter considers what may be lost in the excision of intrinsically ethical concepts from the project of engineering moral machines. It argues that human-level AI and superintelligent systems can be assured to be safe and beneficial only if they embody something like virtue or moral character and that virtue embodiment is a more appropriate long-term goal for AI safety research than value alignment.
Communications of the ACM
Ethics and Emerging Technologies, 2014