Sandra Wachter | University of Oxford

Papers by Sandra Wachter

The GDPR and the Internet of Things: A Three-Step Transparency Model

The Internet of Things (IoT) requires pervasive collection and linkage of user data to provide personalised experiences based on potentially invasive inferences. Consistent identification of users and devices is necessary for this functionality, which poses risks to user privacy. The forthcoming General Data Protection Regulation (GDPR) contains numerous provisions relevant to these risks, which may nonetheless be insufficient to ensure a fair balance between users' and developers' interests. A three-step transparency model is described based on known privacy risks of the IoT, the GDPR's governing principles, and weaknesses in its relevant provisions. Eleven ethical guidelines are proposed for IoT developers and data controllers on how information about the functionality of the IoT should be shared with users, above and beyond the GDPR's legally binding requirements. Two use cases demonstrate how the guidelines apply in practice: IoT in public spaces and connected cities, and connected cars.

Normative challenges of identification in the Internet of Things: Privacy, profiling, discrimination, and the GDPR

In the Internet of Things (IoT), identification and access control technologies provide essential infrastructure to link data from a user's devices to unique identities and to provide seamless, linked-up services. At the same time, profiling methods based on linked records can reveal unexpected details about users' identity and private life, which can conflict with privacy rights and lead to economic, social, and other forms of discriminatory treatment. A balance must be struck between the identification and access control required for the IoT to function and users' rights to privacy and identity. Striking this balance is not an easy task because of weaknesses in cybersecurity and anonymisation techniques. The EU General Data Protection Regulation (GDPR), set to come into force in May 2018, may provide essential guidance to achieve a fair balance between the interests of IoT providers and users. Through a review of academic and policy literature, this paper maps the inherent tension between privacy and identifiability in the IoT. It focuses on four challenges: (1) profiling, inference, and discrimination; (2) control and context-sensitive sharing of identity; (3) consent and uncertainty; and (4) honesty, trust, and transparency. The paper then examines the extent to which several standards defined in the GDPR will provide meaningful protection for privacy and control over identity for users of the IoT. The paper concludes that, to minimise the privacy impact of the conflicts between data protection principles and identification in the IoT, GDPR standards urgently require further specification and implementation into the design and deployment of IoT technologies.

COUNTERFACTUAL EXPLANATIONS WITHOUT OPENING THE BLACK BOX: AUTOMATED DECISIONS AND THE GDPR

Harvard Journal of Law & Technology, 2018

There has been much discussion of the “right to explanation” in the EU General Data Protection Regulation: its existence, merits, and disadvantages. Implementing a right to explanation that opens the ‘black box’ of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions about their value. Data controllers have an interest in not disclosing information about their algorithms that contains trade secrets, violates the rights and freedoms of others (e.g. privacy), or allows data subjects to game or manipulate decision-making.

Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. Looking at explanations as a means to help a data subject act rather than merely understand, one could gauge the scope and content of explanations according to the specific goal or action they are intended to support.

From the perspective of individuals affected by automated decision-making, we propose three aims for explanations:

(1) to inform and help the individual understand why a particular decision was reached,

(2) to provide grounds to contest the decision if the outcome is undesired, and

(3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model.

We assess how each of these goals finds support in the GDPR, and the extent to which they hinge on opening the ‘black box’. We suggest data controllers should offer a particular type of explanation, ‘unconditional counterfactual explanations’, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the “closest possible world.” As multiple variables or sets of variables can lead to one or more desirable outcomes, multiple counterfactual explanations can be provided, corresponding to different choices of nearby possible worlds for which the counterfactual holds. Counterfactuals describe a dependency on the external facts that lead to that decision without the need to convey the internal state or logic of an algorithm. As a result, counterfactuals serve as a minimal solution that bypasses the current technical limitations of interpretability, while striking a balance between transparency and the rights and freedoms of others (e.g. privacy, trade secrets).
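To make the idea concrete, here is a minimal sketch of a counterfactual search in Python. It assumes a scikit-learn-style classifier exposing predict_proba over purely numeric features; the function name, the fixed-weight loss, the L1 distance, and all hyperparameters are illustrative choices under those assumptions, not the paper's reference implementation.

```python
# Minimal counterfactual search: find a small change to x that flips the
# model's prediction, i.e. a nearby "closest possible world". Illustrative only.
import numpy as np

def find_counterfactual(model, x, target_class, lam=1.0, lr=0.05,
                        steps=500, eps=1e-4):
    """Numerically minimise  lam * (f(x') - 1)^2 + ||x' - x||_1,
    where f(x') is the model's probability of target_class at x'."""
    x = np.asarray(x, dtype=float)
    x_cf = x.copy()

    def loss(z):
        p = model.predict_proba(z.reshape(1, -1))[0, target_class]
        return lam * (p - 1.0) ** 2 + np.abs(z - x).sum()

    for _ in range(steps):
        # Finite-difference gradient estimate, one coordinate at a time,
        # so the model is only ever queried, never opened.
        grad = np.zeros_like(x_cf)
        for i in range(x_cf.size):
            d = np.zeros_like(x_cf)
            d[i] = eps
            grad[i] = (loss(x_cf + d) - loss(x_cf - d)) / (2 * eps)
        x_cf -= lr * grad
        if model.predict(x_cf.reshape(1, -1))[0] == target_class:
            break  # prediction flipped; x_cf is our counterfactual
    return x_cf
```

Read against a loan-denial example: if x is the applicant's feature vector and target_class the 'approved' label, the difference between x_cf and x is the explanation, e.g. 'had your annual income been higher by this amount, the loan would have been approved'. Re-running the search from different starting points or with different distance functions yields the multiple counterfactuals described above.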

A RIGHT TO REASONABLE INFERENCES: RE-THINKING DATA PROTECTION LAW IN THE AGE OF BIG DATA AND AI

Columbia Business Law Review, 2019

Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice (ECJ).

This Article shows that individuals are granted little control and oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively ‘economy class’ personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Art 13-15), rectify (Art 16), delete (Art 17), object to (Art 21), or port (Art 20) personal data are significantly curtailed for inferences. The GDPR also provides insufficient protection against sensitive inferences (Art 9) or remedies to challenge inferences or important decisions based on them (Art 22(3)).

This situation is not accidental. In standing jurisprudence the ECJ has consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectify, block, or erase it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent. Current policy proposals addressing privacy protection (the ePrivacy Regulation and the EU Digital Content Directive) and Europe’s new Copyright Directive and Trade Secrets Directive also fail to close the GDPR’s accountability gaps concerning inferences.

This Article argues that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’, meaning inferences drawn from Big Data analytics that damage privacy or reputation, or have low verifiability in the sense of being predictive or opinion-based while being used in important decisions. This right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data form a normatively acceptable basis from which to draw inferences; (2) why these inferences are relevant and normatively acceptable for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged.
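Purely as a hypothetical illustration of the proposal's structure, a controller's record of the three ex-ante justification elements and the ex-post challenge hook might look like the sketch below; the class and field names are invented for illustration and do not come from the Article.

```python
# Hypothetical record of the three ex-ante justification elements,
# plus a log for ex-post challenges. All names are invented.
from dataclasses import dataclass, field

@dataclass
class InferenceJustification:
    inference: str                # e.g. "creditworthiness score"
    source_data: list[str]        # input data from which the inference is drawn
    data_acceptability: str       # (1) why these data are a normatively acceptable basis
    purpose_relevance: str        # (2) why the inference is relevant and acceptable for the purpose
    reliability_evidence: str     # (3) accuracy and statistical reliability of data and methods
    challenges: list[str] = field(default_factory=list)

    def challenge(self, grounds: str) -> None:
        """Record an ex-post challenge to an allegedly unreasonable inference."""
        self.challenges.append(grounds)
```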

Transparent, Explainable, and Accountable AI for Robotics

Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Transparent, explainable, and accountable AI for robotics. Science Robotics, 2(6), eaan6080.

Algorithms and AI are the future, but we must not allow them to become a shield for injustice

We must hold algorithms to at least the same standards as humans, making sure that we do not blindly trust them, and retaining the right to question and understand their decisions.

Artificial Intelligence and the 'Good Society': the US, EU, and UK approach

In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions of how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a 'good AI society'. To do so, we examine how each report addresses the following three topics: (a) the development of a 'good AI society'; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports adequately address various ethical, social, and economic topics, but fall short of providing an overarching political vision and long-term strategy for the development of a 'good AI society'. To help fill this gap, we suggest a two-pronged approach in the conclusion.

The Ethics of Algorithms: Mapping the Debate

In information societies, operations, decisions, and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals, groups, and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation: it provides a prescriptive map to organise the debate; it reviews the current discussion of ethical aspects of algorithms; and it assesses the available literature to identify areas requiring further work to develop the ethics of algorithms.

Drafts by Sandra Wachter

Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation

Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a 'right to explanation' of decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive limited information (Articles 13-15) about the logic involved in, and the significance and envisaged consequences of, automated decision-making systems, which we term a 'right to be informed'. Further, the ambiguity and limited scope of the 'right not to be subject to automated decision-making' contained in Article 22 (from which the alleged 'right to explanation' stems) raise questions over the protection actually afforded to data subjects. These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless. We propose a number of legislative steps that, if taken, may improve the transparency and accountability of automated decision-making when the GDPR comes into force in 2018.

Privacy: Primus Inter Pares - Privacy as a precondition for self-development, personal fulfilment and the free enjoyment of fundamental human rights

The General Data Protection Regulation (GDPR) is Europe’s new approach to enhancing privacy, as it promises to enforce harmonised data protection standards in the Member States. However, even though the regulation has the dedicated goal of enhancing privacy, it also tries to balance competing rights, such as the free flow of data, transparency, national security, and overriding economic interests. As a result, the supervisory authorities will determine new data protection standards. Their assessment, and how they evaluate the importance of privacy, will be the benchmark. However, supervisory authorities will argue from a standpoint that assumes that all competing interests are equal. By analysing and interpreting the jurisprudence of the European Court of Human Rights (ECtHR), I will argue that the dominant theoretical position treating all human rights as equal must be abandoned. Rather, I will show that the jurisprudence contains an inherent hierarchy among certain rights, in which privacy occupies an elevated position. The reasons are threefold: first, privacy is a critical element of personal fulfilment and self-development, which has intrinsic value for human beings and for a democratic society, as it is the basis for pluralism. Second, free and undisturbed development of personality is a necessary precondition for the free exercise of certain human rights, e.g. the right to education; freedom of expression; freedom of thought, conscience, and religion; free elections; and freedom of assembly and association. Third, some level of privacy has to be ensured in order to freely exercise these human rights. I will conclude that these issues become even more pressing with the universal implementation of digital technologies. Informational self-determination is one effective tool to guarantee privacy and to guard against discrimination, public humiliation, or self-imposed stigma. I will also argue for effective remedies in cases of privacy infringement, and urge consideration of stricter laws prohibiting the request or receipt of certain information (e.g. about race, sexual orientation, health status, or gender) that could form the basis of discrimination.
