Fairness Perceptions of Algorithmic Decision-Making: A Systematic Review of the Empirical Literature

Opportunities for a More Interdisciplinary Approach to Measuring Perceptions of Fairness in Machine Learning

Equity and Access in Algorithms, Mechanisms, and Optimization, 2021

As machine learning (ML) is deployed in high-stakes domains, such as disease diagnosis or prison sentencing, questions of fairness have become an area of concern in its development. This interest has produced a variety of statistical fairness definitions, derived from classical performance metrics, which further expand the decisions that ML practitioners must make in building a system. The need to choose between these definitions raises questions about what conditions influence people to perceive an algorithm as fair or not. Recent results highlight the heavily contextual nature of fairness perceptions and the specific conditions under which psychological principles such as framing can reliably sway these perceptions. Additional interdisciplinary insights include lessons from the replication crisis within psychology, from which we can glean best practices for reproducible empirical research. We survey key research at the intersection of ML and psychology, focusing on the psychological mechanisms underlying fairness preferences. We conclude by underscoring the continued need for interdisciplinary research and the best practices that can inform state-of-the-art work. We consider this research to be descriptive in nature, enabling a deeper understanding and a substantiated discussion.
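
For concreteness, here is a minimal sketch of two widely used statistical fairness definitions of the kind this abstract refers to, demographic parity and equalized odds, computed from a classifier's predictions. The data and the biased toy classifier are invented for illustration.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true/false-positive rates between groups 0 and 1."""
    gaps = []
    for label in (0, 1):                      # condition on the true label
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (group == 0)].mean()
                        - y_pred[mask & (group == 1)].mean()))
    return max(gaps)

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)              # binary protected attribute
y_true = rng.integers(0, 2, 1000)             # ground-truth outcomes
# Toy classifier whose positive rate depends on group membership:
y_pred = (rng.random(1000) < 0.4 + 0.2 * group).astype(int)

print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"equalized odds gap:     {equalized_odds_gap(y_true, y_pred, group):.3f}")
```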

Algorithmic Fairness

2020

An increasing number of decisions regarding the daily lives of human beings are being controlled by artificial intelligence (AI) algorithms, in spheres ranging from healthcare, transportation, and education to college admissions, recruitment, the provision of loans, and many more realms. Since they now touch on so many aspects of our lives, it is crucial to develop AI algorithms that are not only accurate but also objective and fair. Recent studies have shown that algorithmic decision-making may be inherently prone to unfairness, even when there is no intention for it. This paper presents an overview of the main concepts for identifying, measuring, and improving algorithmic fairness when using AI algorithms. The paper begins by discussing the causes of algorithmic bias and unfairness and the common definitions and measures of fairness. Fairness-enhancing mechanisms are then reviewed and divided into pre-process, in-process, and post-process mechanisms. A comprehensive comparison of the mechanisms...
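
As a concrete instance of the post-process category mentioned above, a minimal sketch of group-specific decision thresholds applied to a fixed model's scores, one common post-hoc adjustment in the spirit of equalized-odds post-processing; the scores and threshold values here are invented for illustration.

```python
import numpy as np

def postprocess(scores, group, thr_by_group):
    """Post-process mechanism: leave the trained model untouched and
    adjust only its outputs, via a separate threshold per group."""
    thresholds = np.where(group == 1, thr_by_group[1], thr_by_group[0])
    return (scores >= thresholds).astype(int)

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 10)    # binary protected attribute
scores = rng.random(10)           # risk scores from some fixed, pre-trained model
# Hypothetical thresholds, chosen (e.g., on a validation set) to equalize error rates:
decisions = postprocess(scores, group, {0: 0.55, 1: 0.45})
print(decisions)
```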

Cognitive and Emotional Response to Fairness in AI – A Systematic Review

2019

Artificial intelligence is increasingly used to make decisions that can have a significant impact on people's lives. These decisions can disadvantage certain groups of individuals. A central question that follows is the feasibility of justice in AI applications. It should therefore be considered what demands such applications have to meet, and where the transfer of social order to algorithmic contexts still needs to be overhauled. Previous research efforts in the context of discrimination come from different disciplines and shed light on problems from specific perspectives, on the basis of various definitions. An interdisciplinary approach to this topic is still lacking, which is why it seems sensible to systematically summarise research findings across disciplines in order to find parallels and combine common fairness requirements. This endeavour is the aim of this paper. As a result of the systematic review, it can be stated that the individual perception of fairness is...

Fairness in Algorithmic Decision-making

Amicus Curiae, 2019

This article discusses conceptions of fairness in algorithmic decision-making, within the context of the UK's legal system. Using practical operational examples of algorithmic tools, it argues that such practices involve inherent technical trade-offs over multiple, competing notions of fairness, which are further exacerbated by the policy choices made by the public authorities who use them. This raises major concerns about the ability of such choices to affect legal issues in decision-making, and to transform legal protections, without adequate legal oversight or a clear legal framework. This is not to say that the law lacks the capacity to regulate and ensure fairness, but that a more expansive idea of its function is required.

Framework for developing algorithmic fairness

Bulletin of Electrical Engineering and Informatics

In a world where algorithms can control the lives of society, it is not surprising that complications arise in determining the fairness of algorithmic decisions. Machine learning has become the de facto tool for forecasting problems that humans cannot predict reliably without injecting some amount of subjectivity (i.e., for eliminating the "irrational" nature of human judgment). In this paper, we propose a framework for defining a fair algorithm metric by compiling information and propositions from various papers into a single summarised list of fairness requirements, akin to a guideline. Researchers can then adopt it as a foundation or reference to aid them in developing their own interpretation of algorithmic fairness, giving future work in this domain a more straightforward development process. While structuring this framework, we also found that developing a concept of fairness that everyone can accept would require collaboration with other...

Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality

2019

As virtually all aspects of our lives are increasingly impacted by algorithmic decision making systems, it is incumbent upon us as a society to ensure such systems do not become instruments of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. We consider the problem of determining whether the decisions made by such systems are discriminatory, through the lens of causal models. We introduce two definitions of group fairness grounded in causality: fair on average causal effect (FACE), and fair on average causal effect on the treated (FACT). We use the Rubin-Neyman potential outcomes framework for the analysis of cause-effect relationships to robustly estimate FACE and FACT. We demonstrate the effectiveness of our proposed approach on synthetic data. Our analyses of two real-world data sets, the Adult income data set from the UCI repository (with gender as the protected attribute), and the NYC Stop and Frisk data set (with race as the protected attribute), show that the evidence of discrimination obtained by FACE and FACT, or lack thereof, is often in agreement with the findings from other studies. We further show that FACT, being somewhat more nuanced compared to FACE, can yield findings of discrimination that differ from those obtained using FACE.
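
The paper's estimators are more involved than this, but as a minimal sketch of the potential-outcomes idea behind FACE and FACT: on fully synthetic data, both potential outcomes Y(0) and Y(1) can be generated directly and their averages compared (on real data only one is observed per person, which is why the Rubin-Neyman machinery is needed). Variable names and the data-generating process are ours.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
skill = rng.normal(size=n)                     # legitimate covariate

# Potential outcomes Y(a): the decision each individual would receive
# were their protected attribute set to a = 0 or a = 1.
y0 = (skill + rng.normal(size=n) > 0.0).astype(float)
y1 = (skill + rng.normal(size=n) > -0.3).astype(float)  # built-in advantage for a = 1

a = rng.integers(0, 2, n)                      # observed protected attribute

# FACE-style quantity: average causal effect over everyone, E[Y(1) - Y(0)].
face = (y1 - y0).mean()
# FACT-style quantity: the same effect among the "treated", E[Y(1) - Y(0) | A = 1].
fact = (y1 - y0)[a == 1].mean()

print(f"FACE ~ {face:.3f}, FACT ~ {fact:.3f}")  # values far from 0 signal discrimination
```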

The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision Making Systems

2021

Automated decision-making systems implemented in public life are typically standardized. One algorithmic decision-making system can replace thousands of human deciders. Each of the humans so replaced had her own decision-making criteria: some good, some bad, and some arbitrary. Is such arbitrariness of moral concern? We argue that an isolated arbitrary decision need not morally wrong the individual whom it misclassifies. However, if the same algorithms are applied across a public sphere, such as hiring or lending, a person could be excluded from a large number of opportunities. This harm persists even when the automated decision-making systems are "fair" on standard metrics of fairness. We argue that such arbitrariness at scale is morally problematic and propose technically informed solutions that can lessen the impact of algorithms at scale and so mitigate or avoid the moral harms we identify.
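
A hedged toy simulation of the scale argument (all numbers invented): when every gatekeeper reuses one model, its errors are perfectly correlated and the same qualified applicants are rejected everywhere; with equally accurate but independent deciders, being rejected everywhere is vanishingly rare.

```python
import numpy as np

rng = np.random.default_rng(7)
n_applicants, n_firms, accuracy = 10_000, 20, 0.9
qualified = rng.random(n_applicants) < 0.5

def frac_qualified_shut_out(shared_model: bool) -> float:
    """Fraction of qualified applicants rejected by every firm."""
    if shared_model:
        # One model: a single (possibly wrong) verdict reused by all firms.
        err = rng.random(n_applicants) > accuracy
        accepted_somewhere = np.where(err, ~qualified, qualified)
    else:
        # Independent deciders: each firm makes its own errors.
        err = rng.random((n_firms, n_applicants)) > accuracy
        accepted_somewhere = np.where(err, ~qualified, qualified).any(axis=0)
    return float((~accepted_somewhere[qualified]).mean())

print("shared model:        ", frac_qualified_shut_out(True))   # ~ 1 - accuracy
print("independent deciders:", frac_qualified_shut_out(False))  # ~ (1 - accuracy)**n_firms
```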

Fair, Transparent, and Accountable Algorithmic Decision-making Processes

Philosophy & Technology, 2017

The combination of the increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective, and thus potentially fairer, decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper we provide an overview of available technical solutions for enhancing fairness, accountability, and transparency in algorithmic decision-making. We also highlight the criticality and urgency of engaging multidisciplinary teams of researchers, practitioners, policy makers, and citizens to co-develop, deploy, and evaluate real-world algorithmic decision-making processes designed to maximize fairness and transparency. In doing so, we describe the Open Algorithms (OPAL) project as a step towards...

Perception of fairness in algorithmic decisions: Future developers' perspective

Patterns, 2021

Highlights:
- Appropriate factors used in decision-making do not ensure perceived fairness
- Trust in the system is strongly affected by the user's perception of fairness
- Algorithmic fairness is defined as the use of objective factors
- Sensitive attributes can be the most likely cause of unfairness