Robots and Transparency: The Multiple Dimensions of Transparency in the Context of Robot Technologies
Related papers
Journal of Ambient Intelligence and Smart Environments, 2019
This paper discusses the difficulties of obtaining valid consent for data processing activities executed by Artificial Intelligence (AI) systems. Although the European Union’s General Data Protection Regulation (GDPR) is one of the most up-to-date and comprehensive legal instruments ensuring the right to data protection, the so-called consent obligation is challenged by several technical and practical issues in the case of AI systems. Data controllers’ obligation to provide transparent information and to ensure the right to be forgotten is being challenged by the technical capabilities of AI, taken together with some opaque statements in the GDPR. More detailed explanations should be delivered by the EU institutions on the implementation of the GDPR for data controllers offering AI systems.
In a future smart and data-driven environment, intelligent machines will take more and more automated decisions for us. In a way, this means human beings give away part of their responsibility. Within current regulations, machines like these are seen as objects, with the designers and manufacturers responsible for their outcomes and actions. The current privacy-by-design focus within EU regulations and policies aims to safeguard the personal data of users amid these technological developments. This is important, since privacy can be seen as a precondition of autonomy. A privacy-by-design focus alone, however, will not be enough in the future, which is why EU regulations should now incorporate a next step: choice architectures for data subjects as well as data controllers. Incorporating human choice within design, and regulating for choice architectures by design, might prove useful in the sense that they safeguard human responsibility in a highly autonomous environment. This way, human autonomy as well as privacy are protected by design.
Transparency of Automated Decisions in the GDPR: An Attempt for Systemisation
SSRN Electronic Journal
The study provides a conceptual framework of the transparency requirements arising from the opaque and biased nature of automated decisions, and further explores the compatibility of this framework with the affordances of the transparency-related provisions of the GDPR. In line with this, the section following this introduction starts with the question: what type of automated decisions are within the scope of the GDPR? Accordingly, section 2 explores what amounts to an automated decision under the Regulation, and how the requirement of “solely automated processing” should be understood. In search of an answer, the section introduces a “regulatory perspective” to serve as the common denominator to systemise legal and other similarly significant effects of automated decisions as expressed in Article 22/1. In sum: in order to address the issue at the necessary level of generality, automated data-driven systems are approached as decisional processes with certain regulatory impact. As the paper ultimately intends to analyse the adequacy of the transparency-related provisions of the GDPR, this logically entails, as an initial step, the definition of a measure, or a certain benchmark of transparency, to test and compare with the affordances provided by the Regulation. To this end, section 3 briefly provides the essential components of a technology-neutral and model-agnostic framework which aims to conceptualise and systemise the transparency requirements engendered by automated decisions — namely, the transparency desiderata. Intended as a generic template, the transparency desiderata may be seen as a legal reading of data-driven technologies together with their capacities and affordances. Taking into consideration the legal, economic and technical/computational impediments, section 3 concludes with a theoretical outline of the possible modes of implementation for the transparency desiderata.
Overall, the section seeks an answer to the question: what must be made transparent to render automated decisions reviewable, verifiable and justifiable as regulatory processes — either directly by human reason or through human-machine symbiotic mechanisms? Next, with a view to finding out to what extent the GDPR accommodates the transparency desiderata, sections 4 and 5 analyse the transparency affordances of the provisions specific to automated decisions in a two-pronged methodology. Accordingly, the scope and the implications of these provisions are studied in a dialectical entanglement — though as two different sets of obligations. First, section 4 provides a normative analysis of the relevant provisions formulated in the form of notification and disclosure duties under the “access rights” (Art. 13, 14, 15). The second set of transparency-related legal remedies to be analysed under the GDPR is the right not to be subject to automated decisions and its derivatives as formulated in Article 22. Section 5 is reserved for the transparency implications of Article 22; and, as a novel approach, the right to human intervention and contestation (Art. 22/3) is treated as a different type of obligation which is complementary to the “access rights”, but distinct in nature. Based on the systemic view taken, section 5 elaborates on the possible content and the procedural aspects of the right to contest as provided in Article 22/3 — namely the contestation scheme: an initial abstraction to be further developed and refined to accommodate different decision-making domains and methodologies.
INTERNATIONAL JOURNAL FOR LEGAL RESEARCH AND ANALYSIS, 2023
Paladyn, Journal of Behavioral Robotics
Can we have personal robots without giving away personal data? And what is the role of a robot’s Privacy Policy in that question? This work explores for the first time privacy in the context of consumer robotics through the lens of information communicated to users through Privacy Policies and Terms and Conditions. Privacy, personal and non-personal data are discussed in light of the human–robot relationship, while we attempt to draw connections to dimensions related to personalization, trust, and transparency. We introduce a novel methodology to assess how the “Organization for Economic Cooperation and Development Guidelines Governing the Protection of Privacy and Trans-Border Flows of Personal Data” are reflected in the publicly available Privacy Policies and Terms and Conditions in the consumer robotics field. We draw comparisons between the ways eight consumer robotic companies approach privacy principles. Current findings demonstrate significant deviations in the s...
Ethical Dimensions of the GDPR, AI Regulation, and Beyond
Direito Público
Our digital society is changing rapidly, with emerging new technologies such as artificial intelligence (AI) and machine learning, robotics, and the internet of things. These changes trigger new fundamental ethical questions relating to privacy, data protection and other values, including human rights and the way they are affected by the extensive and intensive use of data for analytical and practical innovations. This article explores these ethical dimensions and the extent to which the European Union’s General Data Protection Regulation (GDPR) of 2018 takes ethics into account in relation to these socio-technical developments. More briefly, it looks in a similar but more selective way at the EU’s proposed AI Act of 2021, which aims to regulate AI in relation to levels of risk. It concludes with some observations on desirable institutional arrangements for making and applying ethical judgements in the regulation of advanced technologies that use personal data.