Kevin Baum | Saarland University
Papers by Kevin Baum
Designing trustworthy systems and enabling external parties to accurately assess the trustworthiness of these systems are crucial objectives. Only if trustors assess system trustworthiness accurately can they base their trust on adequate expectations about the system and reasonably rely on or reject its outputs. However, the process by which trustors assess a system’s actual trustworthiness to arrive at their perceived trustworthiness remains underexplored. In this paper, we conceptually distinguish between trust propensity, trustworthiness, trust, and trusting behavior. Drawing on psychological models of assessing other people’s characteristics, we present the two-level Trustworthiness Assessment Model (TrAM). At the micro level, we propose that trustors assess system trustworthiness based on cues associated with the system. The accuracy of this assessment depends on cue relevance and availability on the system’s side, and on cue detection and utilization on the human’s side. At t...
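To make the micro level of the TrAM concrete, here is a minimal sketch, entirely our own construction rather than the paper's formalization: perceived trustworthiness is modeled as an aggregate over cues, which only contribute if they are available on the system's side and detected on the human's side. All names and the linear weighting are illustrative assumptions.

```python
# Toy sketch (not the authors' formalization) of cue-based trustworthiness
# assessment in the spirit of the TrAM's micro level.
from dataclasses import dataclass

@dataclass
class Cue:
    relevance: float     # how diagnostic the cue is of actual trustworthiness (0..1)
    available: bool      # whether the system exposes the cue at all
    detected: bool       # whether the trustor notices the cue
    utilization: float   # how strongly the trustor weighs the cue (0..1)
    value: float         # impression the cue conveys (0 = untrustworthy, 1 = trustworthy)

def perceived_trustworthiness(cues: list[Cue]) -> float:
    """Aggregate only cues that are both available and detected."""
    used = [c for c in cues if c.available and c.detected]
    total = sum(c.utilization for c in used)
    if total == 0:
        return 0.5  # no usable cues: fall back to a neutral prior
    return sum(c.utilization * c.value for c in used) / total

def actual_trustworthiness(cues: list[Cue]) -> float:
    """Ground truth here is the relevance-weighted signal of ALL cues,
    regardless of whether the trustor can see them."""
    total = sum(c.relevance for c in cues)
    return sum(c.relevance * c.value for c in cues) / total if total else 0.5

# Assessment accuracy suffers when a highly relevant cue is unavailable:
cues = [Cue(relevance=0.9, available=False, detected=False, utilization=0.0, value=0.2),
        Cue(relevance=0.1, available=True, detected=True, utilization=0.8, value=0.9)]
print(perceived_trustworthiness(cues), actual_trustworthiness(cues))  # 0.9 vs 0.27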
The International Review of Information Ethics
Informatics is the innovation driver of our time. From social media and artificial intelligence to autonomous cyber-physical systems: informatics-driven, digital products and services permeate our society in significant ways. Computer scientists, whether researchers or software developers, are shaping tomorrow's society. As a consequence, ethical, societal, and practical reasons demand that students of computer science and related subjects should receive at least a basic ethical education to be able to do justice to their ever-growing responsibilities and duties. Ethics for Nerds is an award-winning lecture that has been taught annually at Saarland University since 2016. The course has been continually updated and progressively improved over the years. In this paper, we share our experiences with and best practices for teaching the basics of ethics to students of computer science and offer advice on how to design a successful ethics course as part of a computer science study program.
2021 IEEE 29th International Requirements Engineering Conference Workshops (REW), 2021
System quality attributes like explainability, transparency, traceability, explicability, interpretability, understandability, and the like are given increasing weight, both in research and in industry. All of these attributes can be subsumed under the term “perspicuity”. We argue in this vision paper that perspicuity is to be regarded as a meaningful and distinct class of quality attributes from which new requirements along with new challenges arise, and that perspicuity as a requirement is needed for legal, societal, and moral reasons, as well as for reasons of consistency within requirements engineering.
Computers in Human Behavior, 2021
Advances in artificial intelligence contribute to the increasing automation of decisions. In a healthcare-scheduling context, this study compares the effects of decision agents and of explanations for decisions on decision recipients’ perceptions of justice. In a 2 (decision agent: automated vs. human) × 3 (explanation: no explanation vs. equality-explanation vs. equity-explanation) between-subjects online study, 209 healthcare professionals were asked to put themselves in a situation where their vacation request was denied by either a human or an automated agent. Participants either received no explanation or an explanation based on equality or equity norms. Perceptions of interpersonal justice were stronger for the human agent. Additionally, participants perceived human agents as offering more voice and automated agents as being more consistent in decision-making. When given no explanation, perceptions of informational justice were impaired only for the human decision agent. In the study’s second part, participants took the perspective of a decision-maker and were given the choice to delegate decision-making to an automated system. Participants who delegated an unpleasant decision to the system frequently externalized responsibility and showed different response patterns when confronted by a decision recipient who asked for a rationale for the decision.
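As a hedged illustration of how a 2 × 3 between-subjects design of this kind might be analyzed: the sketch below simulates data and runs a two-way ANOVA. The data and effect sizes are fabricated for illustration, and the variable names are our assumptions, not the study's materials.

```python
# Illustrative analysis sketch for a 2 (agent) x 3 (explanation) design;
# simulated data only, NOT the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for agent in ["human", "automated"]:
    for expl in ["none", "equality", "equity"]:
        base = 4.0 + (0.4 if agent == "human" else 0.0)  # toy main effect of agent
        rows += [{"agent": agent, "explanation": expl,
                  "interpersonal_justice": rng.normal(base, 1.0)}
                 for _ in range(35)]  # ~209 participants across six cells

df = pd.DataFrame(rows)
model = ols("interpersonal_justice ~ C(agent) * C(explanation)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```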
2019 IEEE 27th International Requirements Engineering Conference (RE), 2019
Recent research efforts strive to aid in designing explainable systems. Nevertheless, a systematic and overarching approach to ensure explainability by design is still missing. Often it is not even clear what precisely is meant when explainability is demanded. To address this challenge, we investigate the elicitation, specification, and verification of explainability as a Non-Functional Requirement (NFR), with the long-term vision of establishing a standardized certification process for the explainability of software-driven systems in tandem with appropriate development techniques. In this work, we carve out different notions of explainability and the high-level requirements people have in mind when demanding explainability, and sketch how explainability concerns may be approached in a hypothetical hiring scenario. We provide a conceptual analysis which unifies the different notions of explainability and the corresponding explainability demands.
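The following hypothetical sketch shows what a machine-readable specification of an explainability NFR for the hiring scenario might look like. The template fields and the verification stub are our assumptions, not a standard proposed by the paper.

```python
# Hypothetical NFR template for explainability; field names are assumptions.
EXPLAINABILITY_REQUIREMENT = {
    "id": "NFR-EXPL-01",
    "addressee": "applicant",               # who must be able to understand
    "target": "rejection decisions",         # which outputs must be explainable
    "kind": "reason explanation",            # which notion of explainability applies
    "context": "hiring scenario",
    "acceptance": "sampled applicants can restate the decisive reasons",
}

def covers(requirement: dict, decision_log: dict) -> bool:
    """Toy verification stub: did the system attach an explanation of the
    required kind to a logged decision?"""
    return requirement["kind"] in decision_log.get("explanations", [])

print(covers(EXPLAINABILITY_REQUIREMENT,
             {"decision": "reject", "explanations": ["reason explanation"]}))  # True
```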
Philosophy & Technology
We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), a human in the loop is often needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to be responsible for her decision, the human in the loop has to have an explanation of the system’s recommendation available. Reason explanations are especially well-suited to this end, and we examine whether—and how—it might be possible to make such explanations fit with AI systems. We support our claims by focusing on a case of disagreement between a human in the loop and an AI system.
Television series have long since become an indispensable part of our everyday lives. Internet services in particular, with their affordable subscription models, have drastically increased the consumption of serial formats. The now almost unrestricted availability of television and internet series, and the increased temporal and spatial flexibility of their reception, act as catalysts for this development. Owing to its topicality and ubiquity, the television series also enjoys great popularity in current academic discourse, both as a subject of practice-oriented teaching and of research. Television series studies (sometimes, in a perhaps oversimplifying manner, also called television studies) is currently, viewed from the perspective of the philosophy of science, in an exciting phase in which its standing as an independent field of inquiry is being negotiated. The present contribution identifies reference points for answering the question of how justifiably one may speak of ‚television series studies' (in the singular) as a university discipline. To answer this question, it is necessary to name a set of criteria by which, in an intersubjectively comprehensible way, the position of series studies can be determined in terms of the sociology of science. [From a footnote: The very variety of terms designating the plurality of communicative means employed in a television series is already an expression of research proceeding side by side, in this case in semiotics, literary studies, cultural studies, and linguistics.]
We find ourselves surrounded by a rapidly increasing number of autonomous and semi-autonomous systems. Two grand challenges arise from this development: Machine Ethics and Machine Explainability. Machine Ethics, on the one hand, is concerned with behavioral constraints for systems, set up in a formal, unambiguous, algorithmizable, and implementable way, so that morally acceptable, restricted behavior results; Machine Explainability, on the other hand, enables systems to explain their actions and argue for their decisions, so that human users can understand and justifiably trust them. In this paper, we stress the need to link and cross-fertilize these two areas. We point out how Machine Ethics calls for Machine Explainability, and how Machine Explainability involves Machine Ethics. We develop both these facets based on a toy example from the context of medical care robots. In this context, we argue that moral behavior, even if it were verifiable and verified, is not enough to establis...
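As a rough sketch of what "formal, unambiguous, algorithmizable, and implementable" behavioral constraints could look like in a care-robot toy example (our construction, not the paper's formalism), consider an action filter that both enforces constraints and reports which constraint excluded an action, linking Machine Ethics to Machine Explainability:

```python
# Sketch only: explicit, checkable moral constraints for a care-robot toy
# example; the predicates and action encoding are illustrative assumptions.
CONSTRAINTS = [
    ("never leave a patient unattended right after medication",
     lambda a: not (a["type"] == "leave" and a["patient_medicated"])),
    ("respect an explicit patient refusal",
     lambda a: not (a["type"] == "administer" and a["patient_refused"])),
]

def filter_actions(candidates):
    """Split candidate actions into permitted and excluded ones, attaching
    the violated constraint as a ready-made explanation."""
    permitted, excluded = [], []
    for action in candidates:
        violated = [name for name, holds in CONSTRAINTS if not holds(action)]
        if violated:
            excluded.append((action, "violates: " + violated[0]))
        else:
            permitted.append(action)
    return permitted, excluded

actions = [
    {"type": "administer",  "patient_refused": True,  "patient_medicated": False},
    {"type": "fetch_water", "patient_refused": False, "patient_medicated": False},
]
print(filter_actions(actions))  # fetch_water permitted; administer excluded with reason
```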
In the course of digitalization, the everyday world of primary school children is increasingly shaped by computing systems. Computer science education in primary school can contribute substantially to demystifying such systems and thus lay the foundation for a deep understanding. So far, however, little is known about pupils' prior knowledge of computing systems or about the effectiveness of the teaching materials used with regard to the acquisition of computing competencies. In the study described here, involving a total of 137 children, the prior knowledge of fourth-graders was assessed by means of a guided interview. A five-hour teaching unit on basic algorithmic building blocks and the Calliope mini microcontroller was then carried out. The teaching unit ended with a task for which the children had to grasp the input-processing-output principle (EVA principle) and implement it with the microcontroller. The work on th...
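For readers unfamiliar with the EVA (input-processing-output) principle, here is a minimal illustration in plain Python; it is deliberately not actual Calliope mini code, whose APIs differ.

```python
# Minimal sketch of the input-processing-output (EVA) principle.
def read_input() -> int:
    # stands in for, e.g., reading a button or temperature sensor (Eingabe)
    return 23

def process(reading: int) -> str:
    # decision logic (Verarbeitung)
    return "warm" if reading > 20 else "cold"

def write_output(label: str) -> None:
    # stands in for, e.g., the LED display (Ausgabe)
    print("Display shows:", label)

write_output(process(read_input()))
```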
Nowadays, users interact with applications in constantly changing environments. The plethora of I/O modalities is beneficial for a wide range of application areas such as virtual reality, cloud-based software, or scientific visualization. These areas require interfaces based not only on the traditional mouse and keyboard but also on gestures, speech, or highly specialized and environment-dependent equipment. We introduce a hypergraph-based interaction model and its implementation as a distributed system, called MorphableUI. Its primary focus is to deliver a user- and developer-friendly way to establish dynamic connections between applications and interaction devices. We present an easy-to-use API for developers and a mobile frontend for users to set up their preferred interfaces. During runtime, MorphableUI transports interaction data between devices and applications. As one of the novelties, the system supports I/O transfer functions by automatically splitting, merging, and casting...
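Purely for illustration, here is a sketch of what hypergraph-style routing between devices and applications might look like; the class and method names below are invented for this sketch and are not MorphableUI's actual API.

```python
# Hypothetical sketch of hypergraph-style I/O routing; NOT MorphableUI's API.
class Hyperedge:
    """Connects n device outputs to m application inputs via a transfer function."""
    def __init__(self, sources, sinks, transfer):
        self.sources, self.sinks, self.transfer = sources, sinks, transfer

    def route(self, payloads):
        values = self.transfer(payloads)  # splitting/merging/casting happens here
        for sink, value in zip(self.sinks, values):
            sink(value)

# Merge a 2D touch position and a pressure value into one 3D input event.
edge = Hyperedge(
    sources=["touch.xy", "stylus.pressure"],
    sinks=[lambda v: print("app received:", v)],
    transfer=lambda p: [(*p["touch.xy"], p["stylus.pressure"])],
)
edge.route({"touch.xy": (0.4, 0.7), "stylus.pressure": 0.9})
```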
2021 IEEE 29th International Requirements Engineering Conference Workshops (REW), 2021
National and international guidelines for trustworthy artificial intelligence (AI) consider explainability to be a central facet of trustworthy systems. This paper outlines a multidisciplinary rationale for explainability auditing. Specifically, we propose that explainability auditing can ensure the quality of explainability of systems in applied contexts and can be the basis for certification as a means to communicate whether systems meet certain explainability standards and requirements. Moreover, we emphasize that explainability auditing needs to take a multidisciplinary perspective, and we provide an overview of four perspectives (technical, psychological, ethical, legal) and their respective benefits with respect to explainability auditing.
International Journal of Selection and Assessment, 2021
Leveraging Applications of Formal Methods, Verification and Validation: Discussion, Dissemination, Applications, 2016
Today we often deal with hybrid products, i.e., physical devices containing embedded software. Sometimes, as in the VW emission scandal, such hybrid systems serve the interests of the manufacturers rather than those of the customers. This often happens hidden from the owners or users of these devices, and especially from supervisory authorities. While examples of such software doping are easy to find, the phenomenon itself is not yet well understood. Not only do we lack a proper definition of the term “software doping”; its moral status also seems vague and unclear. In this paper, I try, in the tradition of computer ethics, to first understand what software doping is and then to examine its moral status. I argue that software doping is at least pro tanto morally wrong. I locate problematic features of software doping that are in conflict with moral rights that come with device ownership. Furthermore, I argue for the stronger claim that, in general, software doping is also morally wrong all things considered, at least from the point of view of some normative theories. Explicitly, the VW emission scandal is adduced as a significant specimen of software doping that unquestionably is morally wrong all things considered. Finally, I conclude that we ought to develop software doping detection, if only for moral reasons, and point towards the implications my work might have for the development of future software doping detection methods.
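As a toy illustration of why software doping detection is technically plausible (our sketch, not a method from the paper): a defeat device of the VW kind behaves differently under test conditions than on the road, so differential probing can expose it.

```python
# Toy sketch of differential detection of software doping; numbers and the
# threshold are illustrative assumptions, not calibrated values.
def doping_suspected(device, threshold: float = 2.0) -> bool:
    """Flag large behavioral divergence between test-like and road-like
    conditions as a doping indicator."""
    on_test = device("test_cycle")
    on_road = device("road")
    return on_road > threshold * on_test

# Caricature of a defeat device: full exhaust treatment only on the test bench.
defeat_device = lambda condition: 40.0 if condition == "test_cycle" else 400.0
print(doping_suspected(defeat_device))  # True
```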
Electronic Proceedings in Theoretical Computer Science, 2019
We find ourselves surrounded by a rapidly increasing number of autonomous and semi-autonomous systems. Two grand challenges arise from this development: Machine Ethics and Machine Explainability. Machine Ethics, on the one hand, is concerned with behavioral constraints for systems, so that morally acceptable, restricted behavior results; Machine Explainability, on the other hand, enables systems to explain their actions and argue for their decisions in a way that human users can understand and justifiably trust. In this paper, we try to motivate and work towards a framework combining Machine Ethics and Machine Explainability. Starting from a toy example, we identify various desiderata for such a framework and argue why they should, and how they could, be incorporated in autonomous systems. Our main idea is to apply a framework of formal argumentation theory both for decision-making under ethically motivated constraints and for the task of generating useful explanations based on these constraints, given only limited knowledge of the world. The result of our deliberations can be described as a first version of an ethically motivated, principle-governed framework combining Machine Ethics and Machine Explainability.
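To illustrate the kind of machinery formal argumentation theory provides, here is a minimal Dung-style sketch computing the grounded extension of an abstract argumentation framework; the care-robot arguments are our own toy instantiation, not an example from the paper.

```python
# Minimal Dung-style abstract argumentation sketch: the grounded extension is
# the least fixpoint of the defense function.
def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, target) pairs."""
    def defended(a, S):
        attackers = {x for (x, y) in attacks if y == a}
        return all(any((s, x) in attacks for s in S) for x in attackers)

    S = set()
    while True:
        new = {a for a in arguments if defended(a, S)}
        if new == S:
            return S
        S = new

# Toy care-robot example: 'administer' is attacked by 'refused', which is
# in turn attacked by 'consent_given'.
args = {"administer", "refused", "consent_given"}
atts = {("refused", "administer"), ("consent_given", "refused")}
print(grounded_extension(args, atts))  # {'consent_given', 'administer'}
```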
Proceedings of the 1st Workshop on Explainable Computational Intelligence (XCI 2017), 2017
We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and has to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism which could be used to generate sufficiently accurate as well as graspable rationalizing explanations for CI behavior.