Consumer Fairness in Recommender Systems: Contextualizing Definitions and Mitigations

Fairness in Recommender Systems: Research Landscape and Future Directions

arXiv (Cornell University), 2022

Recommender systems can strongly influence which information we see online, e.g., on social media, and thus impact our beliefs, decisions, and actions. At the same time, these systems can create substantial business value for different stakeholders. Given the growing potential impact of such AI-based systems on individuals, organizations, and society, questions of fairness have gained increased attention in recent years. However, research on fairness in recommender systems is still a developing area. In this survey, we first review the fundamental concepts and notions of fairness that were put forward in the area in the recent past. Afterward, through a review of more than 160 scholarly publications, we present an overview of how research in this field is currently operationalized, e.g., in terms of general research methodology, fairness measures, and algorithmic approaches. Overall, our analysis of recent works points to certain research gaps. In particular, we find that in many research works in computer science, very abstract problem operationalizations are prevalent and questions of the underlying normative claims and what represents a fair recommendation in the context of a given application are often not discussed in depth. These observations call for more interdisciplinary research to address fairness in recommendation in a more comprehensive and impactful manner.

Exploring User Opinions of Fairness in Recommender Systems

2020

Algorithmic fairness for artificial intelligence has become increasingly relevant as these systems become more pervasive in society. One realm of AI, recommender systems, presents unique challenges for fairness due to trade-offs between optimizing accuracy for users and fairness to providers. But what is fair in the context of recommendation, particularly when there are multiple stakeholders? In an initial exploration of this problem, we ask users what their ideas of fair treatment in recommendation might be, and why. We analyze what might cause discrepancies or changes in users' opinions towards fairness, to eventually help inform the design of fairer and more transparent recommendation algorithms.

A flexible framework for evaluating user and item fairness in recommender systems

User Modeling and User-Adapted Interaction, 2021

One common characteristic of research works focused on fairness evaluation (in machine learning) is that they call for some form of parity (equality), either in treatment (ignoring information about users' memberships in protected classes during training) or in impact (enforcing proportional beneficial outcomes for users in different protected classes). In the recommender systems community, fairness has been studied with respect to both users' and items' memberships in protected classes defined by some sensitive attributes (e.g., gender or race for users, revenue in a multi-stakeholder setting for items). Here too, the concept has commonly been interpreted as some form of equality, i.e., the degree to which the system meets the information needs of all its users equally. In this work, we propose a probabilistic framework based on generalized cross entropy (GCE) to measure the fairness of a given recommendation model. The framework comes with a suite of advantages: first, it allows the system designer to define and measure fairness for both users and items, and it can be applied to any classification task; second, it can incorporate various notions of fairness because it does not rely on specific, predefined probability distributions, which can instead be specified at design time; finally, its design includes a gain factor that can be flexibly defined to compute fairness on top of different accuracy-related metrics, whether decision-support metrics (e.g., precision, recall) or rank-based measures (e.g., NDCG, MAP). An experimental evaluation on four real-world datasets shows the nuances captured by our proposed metric regarding fairness on different user and item attributes, where nearest-neighbor recommenders tend to obtain good results under equality constraints. We observed that when users are clustered based on both their interaction with the system and other sensitive attributes, such as age or gender, algorithms with similar overall performance behave differently with respect to user fairness, owing to the different ways they process data for each user cluster.
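To make the GCE idea concrete, here is a minimal sketch (our own, not the paper's code) of a generalized cross entropy between a target "fair" benefit distribution p_f over groups and the distribution p_m a model actually delivers; the beta value and the toy group shares are illustrative assumptions.

```python
import numpy as np

def gce(p_fair, p_model, beta=2.0):
    """Generalized cross entropy between a target 'fair' distribution of
    benefit over groups (p_fair) and the distribution the recommender
    actually produces (p_model). GCE is 0 iff the two distributions match;
    values farther from zero indicate larger deviation from the target."""
    p_fair = np.asarray(p_fair, dtype=float)
    p_model = np.asarray(p_model, dtype=float)
    return (np.sum(p_fair**beta * p_model**(1.0 - beta)) - 1.0) / (beta * (1.0 - beta))

# Example: two user groups. The target encodes equality of benefit,
# but the model delivers 70% of its accuracy-based gain to group 0.
print(gce([0.5, 0.5], [0.7, 0.3]))   # nonzero: deviates from parity
print(gce([0.5, 0.5], [0.5, 0.5]))   # zero: model matches the target exactly
```

Note how the framework's flexibility shows up here: a uniform p_fair encodes equality, while any non-uniform p_fair encodes a different normative target without changing the metric itself.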

A Survey on Fairness-Aware Recommender Systems

As information filtering services, recommender systems have greatly enriched our daily lives by providing personalized suggestions and supporting people in decision-making, which makes them vital and indispensable to human society in the information era. However, as people become more dependent on them, recent studies show that recommender systems can have unintended impacts on society and individuals because of their unfairness (e.g., gender discrimination in job recommendations). To develop trustworthy services, it is crucial to devise fairness-aware recommender systems that can mitigate these bias issues. In this survey, we summarise existing methodologies and practices of fairness in recommender systems. Firstly, we present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods for promoting fairness in different stages of recommender systems. Next, after introducing the datasets and evaluation metrics used to assess the fairness of recommender systems, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications. Subsequently, we highlight the connections between fairness and other principles of trustworthy recommender systems, aiming to consider trustworthiness principles holistically while advocating for fairness. Finally, we summarize this review, spotlighting promising opportunities in understanding fairness concepts and frameworks, balancing accuracy and fairness, and strengthening the ties with trustworthiness, with the ultimate goal of fostering the development of fairness-aware recommender systems.

Fairness in Recommendation Ranking through Pairwise Comparisons

Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining

Recommender systems are one of the most pervasive applications of machine learning in industry, with many services using them to match users to products or information. As such it is important to ask: what are the possible fairness risks, how can we quantify them, and how should we address them? In this paper we offer a set of novel metrics for evaluating algorithmic fairness concerns in recommender systems. In particular we show how measuring fairness based on pairwise comparisons from randomized experiments provides a tractable means to reason about fairness in rankings from recommender systems. Building on this metric, we offer a new regularizer to encourage improving this metric during model training and thus improve fairness in the resulting rankings. We apply this pairwise regularization to a large-scale, production recommender system and show that we are able to significantly improve the system's pairwise fairness.
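The pairwise notion is straightforward to prototype. Below is a minimal sketch (our simplification, not the paper's implementation) of estimating per-group pairwise ranking accuracy from preference pairs; all function names and the toy data are hypothetical, and the gap between groups is the signal a pairwise regularizer would then shrink during training.

```python
from collections import defaultdict

def pairwise_accuracy_by_group(pairs, scores, group_of):
    """Estimate per-group pairwise ranking accuracy.

    pairs:    iterable of (user, preferred_item, other_item) comparisons,
              e.g. harvested from randomized experiments.
    scores:   dict (user, item) -> model score used for ranking.
    group_of: dict item -> sensitive group of the preferred item.
    Returns {group: fraction of pairs where the preferred item outscores
    the other}; the spread between groups is the (un)fairness signal.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for user, preferred, other in pairs:
        g = group_of[preferred]
        total[g] += 1
        if scores[(user, preferred)] > scores[(user, other)]:
            correct[g] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical toy data: group "B" items win their comparisons less often.
pairs = [("u1", "a", "b"), ("u1", "c", "a"), ("u2", "b", "c")]
scores = {("u1", "a"): 0.9, ("u1", "b"): 0.4, ("u1", "c"): 0.2,
          ("u2", "b"): 0.3, ("u2", "c"): 0.7}
group_of = {"a": "A", "b": "B", "c": "A"}
print(pairwise_accuracy_by_group(pairs, scores, group_of))  # {'A': 0.5, 'B': 0.0}
```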

Recommendation Fairness: From Static to Dynamic

arXiv (Cornell University), 2021

Driven by the need to capture users' evolving interests and optimize their long-term experiences, more and more recommender systems have started to model recommendation as a Markov decision process and employ reinforcement learning to address the problem. Shouldn't research on the fairness of recommender systems follow the same trend, from static evaluation and one-shot intervention to dynamic monitoring and non-stop control? In this paper, we first portray the recent developments in recommender systems and then discuss how fairness could be baked into the reinforcement learning techniques used for recommendation. Moreover, we argue that in order to make further progress in recommendation fairness, we may want to consider multi-agent (game-theoretic) optimization, multi-objective (Pareto) optimization, and simulation-based optimization, in the general framework of stochastic games.
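As a toy illustration of the "non-stop control" view, here is a bandit-style sketch in which the reward seen by the policy is shaped by a running exposure-balance bonus across provider groups; the shaping form, the weight lam, and the engagement probabilities are our assumptions, not a method from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
groups = np.array([0, 0, 0, 1, 1, 1])   # provider group of each of 6 items
lam = 0.5                                # fairness-shaping weight (assumed)
exposure = np.zeros(2)                   # running exposure per group
q = np.zeros(len(groups))                # value estimates (stand-in for an RL policy)
counts = np.zeros(len(groups))

for t in range(5000):
    # epsilon-greedy action selection over items
    item = rng.integers(len(groups)) if rng.random() < 0.1 else int(np.argmax(q))
    g = groups[item]
    engaged = rng.random() < (0.7 if g == 0 else 0.5)   # group-0 items engage more
    # dynamic fairness control: bonus for recommending the under-exposed group
    bonus = lam if g == int(np.argmin(exposure)) else 0.0
    reward = float(engaged) + bonus
    exposure[g] += 1
    counts[item] += 1
    q[item] += (reward - q[item]) / counts[item]        # incremental mean update

print("exposure share per group:", exposure / exposure.sum())
```

With lam = 0 the greedy policy drifts toward the more engaging group; with the bonus, exposure stays roughly balanced over time, which is the kind of ongoing trade-off the stochastic-game framing is meant to formalize.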

Recommender Systems and Their Fairness for User Preferences: A Literature Study (DOI: 10.13140/RG.2.2.35330.12487)

A Recommender System (RS) is an information system that provides its users with suggestions for items of information. These suggestions can improve users' choices among the items they typically need. RSs have proven useful in commercial environments such as Amazon, and important in scientific environments such as ScienceDirect and Citeseer, among others. Nowadays, RSs are used extensively in social media environments such as Facebook, Twitter, and LinkedIn. However, the methods used for ranking the recommended items of information and data may be biased, directing users towards unfair decisions. In this literature study, we introduce background knowledge on most of the RS techniques mentioned in the literature. Then, we identify the limitations of each technique that may be the cause of biased recommendations to the user.

Fairness and Popularity Bias in Recommender Systems: an Empirical Evaluation

2021

In this paper, we present the results of an empirical evaluation investigating how recommendation algorithms are affected by popularity bias. Popularity bias causes more popular items to be recommended more frequently than less popular ones, making it one of the most significant issues limiting the fairness of recommender systems. In particular, we define an experimental protocol based on two state-of-the-art datasets containing users' preferences on movies and books and three different recommendation paradigms, i.e., collaborative filtering, content-based filtering, and graph-based algorithms. In order to evaluate the overall fairness of the recommendations, we use well-known metrics such as Catalogue Coverage, Gini Index, and Group Average Popularity (ΔGAP). The goal of this paper is: (i) to provide a clear picture of how recommendation techniques are affected by popularity bias; (ii) to trigger further research in the area aimed at introducing methods to mitigate or reduce biases i...
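For readers who want to reproduce this style of analysis, here is a minimal sketch of the three metrics computed over top-N lists; the exact formulations in the paper may differ (e.g., ΔGAP is often reported per user group rather than globally), and all names and toy data below are ours.

```python
import numpy as np

def catalogue_coverage(rec_lists, n_items):
    """Fraction of the catalogue appearing in at least one recommendation list."""
    return len({i for recs in rec_lists for i in recs}) / n_items

def gini(rec_lists, n_items):
    """Gini index of recommendation frequency: 0 = items recommended equally
    often, values near 1 = exposure concentrated on very few items."""
    freq = np.zeros(n_items)
    for recs in rec_lists:
        for i in recs:
            freq[i] += 1
    freq = np.sort(freq)                      # ascending order
    k = np.arange(1, n_items + 1)
    return float(np.sum((2 * k - n_items - 1) * freq) / (n_items * freq.sum()))

def delta_gap(rec_lists, profiles, popularity):
    """ΔGAP: relative change in Group Average Popularity between what users
    already consumed (profiles) and what they are recommended. Positive
    values mean the recommender amplifies popularity bias."""
    gap_rec = np.mean([np.mean([popularity[i] for i in r]) for r in rec_lists])
    gap_pro = np.mean([np.mean([popularity[i] for i in p]) for p in profiles])
    return (gap_rec - gap_pro) / gap_pro

# Hypothetical top-2 lists for 3 users over a 6-item catalogue.
recs = [[0, 1], [0, 2], [0, 1]]
profiles = [[3, 4], [2, 3], [4, 5]]
pop = np.array([0.9, 0.6, 0.4, 0.2, 0.1, 0.05])   # item popularity scores
print(catalogue_coverage(recs, 6), gini(recs, 6), delta_gap(recs, profiles, pop))
```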

Multisided Fairness for Recommendation

ArXiv, 2017

Recent work on machine learning has begun to consider issues of fairness. In this paper, we extend the concept of fairness to recommendation. In particular, we show that in some recommendation contexts, fairness may be a multisided concept, in which fair outcomes for multiple individuals need to be considered. Based on these considerations, we present a taxonomy of classes of fairness-aware recommender systems and suggest possible fairness-aware recommendation architectures.

A Fairness-aware Hybrid Recommender System

ArXiv, 2018

Recommender systems are used in a variety of domains affecting people's lives. This has raised concerns about possible biases and discrimination that such systems might exacerbate. There are two primary kinds of bias inherent in recommender systems: observation bias and bias stemming from imbalanced data. Observation bias exists due to a feedback loop that causes the model to learn to predict only recommendations similar to previous ones. Imbalance in data occurs when systematic societal, historical, or other ambient bias is present in the data. In this paper, we address both biases by proposing a hybrid fairness-aware recommender system. Our model provides efficient and accurate recommendations by incorporating multiple user-user and item-item similarity measures, content, and demographic information, while addressing recommendation biases. We implement our model using a powerful and expressive probabilistic programming language called probabilistic soft logic. We experimental...
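To give a feel for the hybrid idea, the sketch below blends a user-based and an item-based neighborhood estimate into one prediction. The paper realizes this combination (together with content, demographic, and fairness signals) as weighted rules in probabilistic soft logic, which works quite differently; this plain-Python blend, including every name and the toy matrix, is only our illustrative assumption.

```python
import numpy as np

def hybrid_predict(R, user_sim, item_sim, u, i, w_user=0.5):
    """Blend a user-based and an item-based neighborhood estimate of R[u, i].
    R is a ratings matrix with np.nan marking unobserved entries."""
    rated_i = ~np.isnan(R[:, i])     # users who rated item i
    rated_u = ~np.isnan(R[u, :])     # items rated by user u
    rated_i[u] = rated_u[i] = False  # exclude the target cell itself
    user_est = (user_sim[u, rated_i] @ R[rated_i, i] /
                max(np.abs(user_sim[u, rated_i]).sum(), 1e-9))
    item_est = (item_sim[i, rated_u] @ R[u, rated_u] /
                max(np.abs(item_sim[i, rated_u]).sum(), 1e-9))
    return w_user * user_est + (1 - w_user) * item_est

# Toy 3-user x 3-item ratings matrix (np.nan = unrated).
R = np.array([[5.0, np.nan, 3.0],
              [4.0, 2.0, np.nan],
              [np.nan, 1.0, 4.0]])
user_sim = np.corrcoef(np.nan_to_num(R))     # crude user-user similarity
item_sim = np.corrcoef(np.nan_to_num(R).T)   # crude item-item similarity
print(hybrid_predict(R, user_sim, item_sim, u=0, i=1))
```

In the paper's formulation, each similarity source becomes a weighted logical rule, so the relative trust in user-based, item-based, content, and demographic evidence is learned rather than fixed like w_user here.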