The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision Making Systems

Fairness in Algorithmic Decision-making

Amicus Curiae, 2019

This article discusses conceptions of fairness in algorithmic decision-making within the context of the UK’s legal system. Using practical operational examples of algorithmic tools, it argues that such practices involve inherent technical trade-offs over multiple, competing notions of fairness, which are further exacerbated by policy choices made by the public authorities who use them. This raises major concerns about the capacity of such choices to affect legal issues in decision-making and to transform legal protections without adequate legal oversight or a clear legal framework. This is not to say that the law lacks the capacity to regulate and ensure fairness, but that a more expansive idea of its function is required.

Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics

AI and Ethics, 2021

There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness against which an algorithm can be tested, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented within narrow and targeted fairness toolkits for algorithm assessments that are difficult to integrate into an algorithm’s broader ethical assessment. In this paper, we derive lessons from ethical philosophy and welfare economics as they relate to the contextual factors relevant for fairness. In particular, we highlight the debate around the acceptability of particular inequalities and the inextricable links between fairness, welfare and a...
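As a concrete illustration of how standard fairness definitions can pull in different directions, the following minimal sketch (not drawn from the paper; the metrics, group labels, and toy data are illustrative assumptions) computes two widely used group criteria, demographic parity and equal opportunity, for the same set of predictions. When the two groups have different base rates of positive outcomes, even a perfectly accurate classifier satisfies one criterion while violating the other.

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    rate = lambda g: sum(p for p, grp in zip(y_pred, group) if grp == g) / group.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates (recall) between the two groups."""
    def tpr(g):
        positives = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(positives) / len(positives)
    return abs(tpr("A") - tpr("B"))

# Hypothetical data: group A has a higher base rate of positive outcomes.
group  = ["A"] * 10 + ["B"] * 10
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0,  1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
y_pred = list(y_true)  # a perfectly accurate classifier

print(demographic_parity_gap(y_pred, group))         # 0.3 -> demographic parity is violated
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.0 -> equal opportunity is satisfied

The point mirrors the paper's concern: which of these (or other) criteria should govern a given decision is a contextual, normative question that the metrics themselves cannot settle.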

“Let the algorithm decide”: is human dignity at stake?

Revista Brasileira de Políticas Públicas

The goal of this article is to argue that the debate regarding algorithmic decision-making and its impact on fundamental rights can be better addressed in order to allow for adequate regulatory policies regarding recent technological developments in automation. Through a review of the literature on algorithms and an analysis of Articles 6 (IX) and 20 of the Brazilian Federal Law n° 13.709/2018 (LGPD), this article concludes that claims that algorithmic decisions are unlawful because of profiling or because they replace human analysis are imprecise and could be better framed. Profiles are nothing more than generalizations, largely accepted in legal systems, and there are many kinds of decisions based on generalizations which algorithms can adequately make with no human intervention. In this context, this article aims to restate the debate about automated decisions and fundamental rights, focusing on two main obstacles: (i) the potential for discrimination by algorithmic systems and (ii) the accountability of their decision-making processes. Lastly, the arguments put forward are applied to the current case of the COVID-19 pandemic to illustrate the challenges ahead.

Fair, Transparent, and Accountable Algorithmic Decision-making Processes

Philosophy & Technology, 2017

The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective, and thus potentially fairer, decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper we provide an overview of available technical solutions to enhance fairness, accountability and transparency in algorithmic decision-making. We also highlight the critical importance and urgency of engaging multidisciplinary teams of researchers, practitioners, policy makers and citizens to co-develop, deploy and evaluate, in the real world, algorithmic decision-making processes designed to maximize fairness and transparency. In doing so, we describe the Open Algorithms (OPAL) project as a step towards...
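One family of technical measures for accountability of the kind surveyed in such work is decision logging: recording each automated decision together with the inputs and model version that produced it, so that it can be audited and contested afterwards. The sketch below is a minimal illustration under assumed names and fields; it is not the OPAL design or any system described in the paper.

import json, hashlib
from datetime import datetime, timezone

def log_decision(record_store, model_version, applicant_features, decision, reasons):
    """Append an auditable record of a single automated decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw features so the log can be shared with an auditor
        # without exposing personal data directly.
        "input_digest": hashlib.sha256(
            json.dumps(applicant_features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
        "reasons": reasons,  # e.g. the top features behind the score
    }
    record_store.append(entry)
    return entry

audit_log = []
log_decision(audit_log, "credit-model-v2.3",
             {"income": 41000, "prior_defaults": 1},
             decision="deny",
             reasons=["prior default", "debt-to-income ratio"])
print(json.dumps(audit_log, indent=2))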

Algorithmic Discrimination and Responsibility: Selected Examples from the United States of America and South America

ICAI 2019, 2019

This paper discusses examples and activities that promote consumer protection through the adoption of non-discriminatory algorithms. The casual observer of data, from smartphones to artificial intelligence, believes in technological determinism: to them, data reveal real trends, with neutral decision-makers that are not prejudiced. However, machine learning technologies are created by people, and so their creators' biases can appear in decisions based on algorithms used for surveillance, social profiling, and business intelligence. This paper adapts Lawrence Lessig's framework (laws, markets, code, and social norms). It highlights cases in the USA and South America where algorithms discriminated and how statutes tried to mitigate the negative consequences. Global companies such as Facebook and Amazon are among those discussed in the case studies. In the case of Ecuador, neither the algorithms nor citizens' personal data are regulated or protected in the treatment of information arising on the social networks used by public and private institutions. Consequently, individual rights are not strictly shielded by national or international laws, nor through regulations of telecommunications and digital networks. In the USA, a proposed bill, the "Algorithmic Accountability Act", would require large companies to audit their machine-learning-powered automated systems, such as facial recognition or ad-targeting algorithms, for bias. The Federal Trade Commission (FTC) would create rules for evaluating automated systems, while companies would evaluate the algorithms powering these tools for bias or discrimination, including threats to consumer privacy or security.

Algorithmic Authority: The Ethics, Politics, and Economics of Algorithms that Interpret, Decide, and Manage

This panel will explore algorithmic authority as it manifests and plays out across multiple domains. Algorithmic authority refers to the power of algorithms to manage human action and influence what information is accessible to users. Algorithms increasingly have the ability to affect everyday life, work practices, and economic systems through automated decision-making and interpretation of "big data". Cases of algorithmic authority include algorithmically curating news and social media feeds, evaluating job performance, matching dates, and hiring and firing employees. This panel will bring together researchers of quantified self, healthcare, digital labor, social media, and the sharing economy to deepen the emerging discourses on the ethics, politics, and economics of algorithmic authority in multiple domains.

The Proxy Problem: Fairness and Artificial Intelligence

Developers of predictive systems use proxies when they cannot directly observe attributes relevant to the predictions they would like to make. Proxies have always been used, but today their use means that one area of one’s life can have significant consequences for another, seemingly disconnected area, and that raises concerns about fairness and freedom, as the following example illustrates. Sally defaults on a $50,000 credit card debt and declares bankruptcy. The debt was the result of paying for lifesaving treatment for her daughter, and despite her best efforts, she could not afford even the minimum credit card payments. A credit scoring system predicts that Sally is a poor risk even though post-bankruptcy Sally is a good risk, her daughter having recovered. Sally’s car insurance company uses credit ratings as a proxy for safe driving (as many US insurance companies in fact do). Is it fair that Sally’s life-saving effort forces her down a disadvantageous path? Our starting point for addressing fairness is the economist John Roemer’s observation in Equality of Opportunity that a conception of “equality of opportunity . . . prevalent today in Western democracies . . . says that society should do what it can to ‘level the playing field’ among individuals who compete for positions.” Does the insurance company unfairly tilt the playing field against Sally when it uses her credit score to set her insurance premium? More generally, as the Sally example illustrates, one factor that affects level-playing-field fairness is the social structure of information processing itself. The use of proxies can profoundly alter the social structure of information processing. When does their use do so unfairly? To address that question, we adapt an approach suggested in an influential article by the computer scientist Cynthia Dwork (who cites Roemer as a source of her approach). Computer science has recently seen an explosion of articles about AI and fairness, and one of our goals is to bring those discussions more centrally into the discussion of legal scholars. It may seem we have chosen badly, however. One criticism of Dwork et al. is that the fairness criterion they offer is of little practical value since it requires determining relevant differences among people in ways that are (or at least appear to be) highly problematic in real-life cases. We defend the approach against this criticism by showing how a regulatory process addressing the use of proxies in AI could make reasonable determinations of relevant differences among individuals and assign an important role to the Dwork et al. criterion of fairness. The regulatory process would promote level-playing-field fairness even in the face of proxy-driven AI.
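For readers less familiar with the criterion being adapted here, the rough idea associated with Dwork et al. is that similar individuals should receive similar outcomes, where "similar" is judged by a task-specific metric supplied from outside the model. The sketch below is a minimal illustration of that idea under assumed names and toy data (it is not code from the article or from Dwork et al.); the dissimilarity function d encodes exactly the kind of judgement about relevant differences that, on the article's proposal, a regulatory process would make.

def violates_individual_fairness(score, d, individuals, tolerance=0.0):
    """Return pairs whose difference in outcome exceeds their dissimilarity."""
    violations = []
    for i, x in enumerate(individuals):
        for y in individuals[i + 1:]:
            if abs(score(x) - score(y)) > d(x, y) + tolerance:
                violations.append((x["name"], y["name"]))
    return violations

# Hypothetical drivers: identical on driving-relevant attributes, differing
# only in a post-bankruptcy credit score used as a proxy.
drivers = [
    {"name": "Sally", "annual_miles": 12000, "claims": 0, "credit": 550},
    {"name": "Robin", "annual_miles": 12000, "claims": 0, "credit": 780},
]

score = lambda x: 0.9 if x["credit"] < 600 else 0.3    # proxy-driven premium score in [0, 1]
d = lambda x, y: abs(x["claims"] - y["claims"]) / 5.0  # dissimilarity based only on driving record

print(violates_individual_fairness(score, d, drivers))
# [('Sally', 'Robin')] -> similar drivers, very different outcomes: the proxy fails the criterion

Swapping in a metric that treats credit history as a relevant difference would make the same outcomes pass the test, which is why the article locates that choice in a regulatory process rather than in the model.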

Three lessons for and from algorithmic discrimination

Res Publica, 2023

Algorithmic discrimination has rapidly become a topic of intense public and academic interest. This article explores three issues raised by algorithmic discrimination: (1) the distinction between direct and indirect discrimination, (2) the notion of disadvantageous treatment, and (3) the moral badness of discriminatory automated decision-making. It argues that some conventional distinctions between direct and indirect discrimination appear not to apply to algorithmic discrimination, that algorithmic discrimination may often be discrimination between groups, as opposed to against groups, and that morally bad algorithmic discrimination does not necessarily give us reason not to use automated decision-making. For each of the three issues, the article explores implications for algorithmic discrimination, suggests some alternative answers, and clarifies how we may want to think of discrimination more broadly in light of lessons drawn from the context of algorithmic discrimination.