Fairness Perceptions of Algorithmic Decision-Making: A Systematic Review of the Empirical Literature

Uploaded 2022 | ArXiv

Abstract

Algorithmic decision-making (ADM) increasingly shapes people's daily lives. Given that such autonomous systems can cause severe harm to individuals and social groups, fairness concerns have arisen. A human-centric approach, demanded by scholars and policymakers, requires taking people's fairness perceptions into account when designing and implementing ADM. We provide a comprehensive, systematic literature review synthesizing the existing empirical insights on perceptions of algorithmic fairness from 39 empirical studies spanning multiple domains and scientific disciplines. Through thorough coding, we systematize the current empirical literature along four dimensions: (1) algorithmic predictors, (2) human predictors, (3) comparative effects (human decision-making vs. algorithmic decision-making), and (4) consequences of ADM. While we identify much heterogeneity around the theoretical concepts and empirical measurements of algorithmic fairness, the insights come almost exclusively from Western-democratic contexts.

FAQs

What are the main dimensions of perceived algorithmic fairness identified in the literature?

The review identifies four dimensions: algorithmic predictors, human predictors, comparative effects, and consequences of ADM.
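
As a rough illustration, these four dimensions can be read as a coding template applied to each reviewed study. The following Python sketch is hypothetical: the field names and example values are illustrative stand-ins, not the review's actual codebook.

```python
# Hypothetical sketch: the review's four coding dimensions as a data structure.
# Field names and example values are illustrative, not the authors' codebook.
from dataclasses import dataclass, field


@dataclass
class StudyCoding:
    """One reviewed study, coded along the review's four dimensions."""
    citation: str
    algorithmic_predictors: list[str] = field(default_factory=list)  # design features of the ADM system
    human_predictors: list[str] = field(default_factory=list)        # characteristics of the perceivers
    comparative_effect: bool = False  # human vs. algorithmic decision-making compared?
    consequences: list[str] = field(default_factory=list)            # downstream effects of (un)fairness


# Illustrative entry; the coded values are made up for demonstration.
example = StudyCoding(
    citation="Lee (2018)",
    algorithmic_predictors=["task type"],
    human_predictors=["prior experience with algorithms"],
    comparative_effect=True,
    consequences=["trust", "emotion"],
)
print(example)
```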

How does political ideology affect perceptions of algorithmic fairness?

Research indicates that political ideology influences fairness perceptions; for instance, conservatives perceive the use of sensitive features as fairer than liberals do.

What methodological approach was used to conduct the systematic literature review?

The review followed a seven-step procedure involving predefined eligibility criteria, a systematic literature search, and rigorous screening of the retrieved records.
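
For intuition, the screening stage of such a review can be sketched as a filter over candidate records against predefined inclusion criteria. The snippet below is a toy Python illustration; the criteria shown are hypothetical stand-ins, not the review's actual eligibility rules.

```python
# Toy sketch of a screening step in a systematic review pipeline.
# The inclusion criteria are hypothetical stand-ins for predefined criteria.
records = [
    {"title": "Study A", "empirical": True, "measures_fairness_perception": True},
    {"title": "Study B", "empirical": False, "measures_fairness_perception": True},
    {"title": "Study C", "empirical": True, "measures_fairness_perception": False},
]


def passes_screening(record: dict) -> bool:
    """Keep only empirical studies that measure perceived fairness of ADM."""
    return record["empirical"] and record["measures_fairness_perception"]


included = [r for r in records if passes_screening(r)]
print(f"Retained {len(included)} of {len(records)} records")  # Retained 1 of 3 records
```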

What are the practical implications of perceived fairness in algorithmic decision-making?

Perceived algorithmic unfairness can damage institutional reputation and trust, which in turn affects user engagement and compliance.

How do algorithmic predictors influence user perceptions of fairness?

Algorithmic design features such as input reliability, relevance, and potential outcomes significantly shape users' fairness perceptions.

