Auditing Risk Prediction of Long-Term Unemployment

Time to Question if We Should: Data-Driven and Algorithmic Tools in Public Employment Services

Algorithmic and data-driven systems have been introduced to assist Public Employment Services (PES) in various countries. However, their deployment has been heavily criticized. This paper is based on a workshop organized by a distributed team of researchers in AI ethics and adjacent fields, which brought together academics, system developers, representatives from the public sector, civil-society organizations, and participants from industry. We report on the workshop and analyze three salient discussion topics, organized around our research questions: (1) the challenge of representing individuals with data, (2) the role of job counsellors and data-driven systems in PES, and (3) questions around the interactions between job seeker, counsellor, and system. Finally, we consider lessons learned from the workshop and describe plans for involving a multiplicity of stakeholders in a co-design process.

Algorithmic Tools in Public Employment Services: Towards a Jobseeker-Centric Perspective

FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022

Data-driven and algorithmic systems have been introduced to support Public Employment Services (PES) throughout the world. Their deployment has sparked public controversy and, as a consequence, some of these systems have been removed from use or their role has been reduced. Yet the implementation of similar systems continues. In this paper, we use a participatory approach to determine a course forward for research and development in this area. We draw attention to the needs and expectations of the people directly affected by these systems, i.e., jobseekers. Our investigation comprises two workshops: the first, a fact-finding workshop with academics, system developers, public-sector representatives, and civil-society organizations; the second, a co-design workshop with 13 unemployed migrants in Germany. Based on the discussion in the fact-finding workshop, we identified challenges of existing algorithmic PES systems. From the co-design workshop, we identified our participants' needs and desires when contacting PES: the need for human contact, the expectation to receive genuine orientation, and the desire to be seen as a whole human being. We map these expectations to three design considerations for data-driven and algorithmic systems in PES: the importance of interpersonal interaction, jobseeker assessment as direction, and the challenge of mitigating misrepresentation. Finally, we argue that the limitations and risks of current systems cannot be addressed through minor adjustments but require a more fundamental change to the role of PES.

"We Would Never Write That Down": Classifications of Unemployed and Data Challenges for AI

This paper draws attention to new complexities of deploying artificial intelligence (AI) in sensitive contexts, such as welfare allocation. AI is increasingly used in public administration with the promise of improving decision-making through predictive modelling. To predict accurately, it needs all the agreed criteria used as part of decisions, formal and informal. This paper empirically explores the informal classifications used by caseworkers to make unemployed welfare seekers 'fit' into the formal categories applied in a Danish job centre. Our findings show that these classifications are documentable, and hence traceable to AI. However, to the caseworkers, they are at odds with the stable explanations assumed by any bureaucratic recording system, as they involve negotiated and situated judgments of people's character. Thus, for moral reasons, caseworkers find them ill-suited for formal representation and predictive purposes and choose not to write them down. As a result, although classification work is crucial to the job centre's activities, AI is denuded of the real-world (and real work) character of decision-making in this context. This is an important finding for CSCW, as the issue is not only whether AI can 'do' decision-making in particular contexts, as previous research has argued; this paper shows that problems may also be caused by people's unwillingness to provide data to systems. We present the empirical results of this research, followed by a discussion of implications for AI-supported practice and research.

What Makes an Ideal Unemployed Person? Values and Norms Encapsulated in a Computerized Profiling Tool

Social Work and Society. International Journal Online, vol. 18, no. 1, 2020

This article provides insights into a computer-based profiling tool implemented in Poland from 2014 to 2019 to measure the employability of unemployed individuals and to decide upon the allocation of active labor market policies. We propose to treat the profiling tool as a source of information about what state authorities expected from unemployed citizens and which attitudes the state perceived as “desirable” or “demanding adjustment.” We show how the profiling technology served to shape the conduct of the unemployed population, and how it imposed upon them a certain ideal of social citizenship. Our findings indicate that the normative assumptions underlying the profiling relate directly to key aspects of welfare-state transformations: the new social contract, which delegitimizes financial benefits and puts forward activation; the new concept of a citizen as an entrepreneurial and self-reliant actor; and the new, individualized perception of social risks. Results are based on an analysis of the profiling questionnaire and scoring mechanism, as well as a reconstruction of the policy-making process.
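As the abstract implies, tools of this kind typically turn questionnaire answers into a numeric employability score and then map that score to a profile that gates access to support. The sketch below illustrates that general mechanism only; the questions, weights, cutoffs, and profile labels are invented for illustration and do not reproduce the actual Polish instrument.

```python
# Hypothetical sketch of a questionnaire-based profiling rule.
# Questions, weights, and cutoffs are invented for illustration;
# they do not reproduce the actual Polish instrument.

ANSWER_WEIGHTS = {
    "education": {"higher": 0, "secondary": 2, "primary": 4},
    "months_unemployed": {"<6": 0, "6-24": 3, ">24": 6},
    "willing_to_relocate": {"yes": 0, "no": 3},
}

# Higher totals are treated as lower "employability".
PROFILE_CUTOFFS = [(4, "Profile I"), (9, "Profile II")]


def assign_profile(answers: dict[str, str]) -> tuple[int, str]:
    """Sum the per-answer weights and map the total to a profile."""
    score = sum(ANSWER_WEIGHTS[q][a] for q, a in answers.items())
    for cutoff, profile in PROFILE_CUTOFFS:
        if score <= cutoff:
            return score, profile
    return score, "Profile III"


print(assign_profile({
    "education": "secondary",
    "months_unemployed": "6-24",
    "willing_to_relocate": "no",
}))  # -> (8, 'Profile II')
```

Note how normative judgments enter through the weights themselves: in this sketch, answering "no" to relocation is scored as reduced employability, which is exactly the kind of encoded expectation the article analyzes.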

Why Predictive Algorithms are So Risky for Public Sector Bodies

2020

This paper collates multidisciplinary perspectives on the use of predictive analytics in government services. It moves away from the hyped narratives of “AI” or “digital”, and from the broad usage of the notion of “ethics”, to focus on the possible risks of using prediction algorithms in public administration. Guidelines for AI use in public bodies are currently available; however, there is little evidence that they are being followed or that they are being written into new mandatory regulations. The use of algorithms is not just a question of whether they are fair and safe to use, but of whether they abide by the law and whether they actually work. Particularly in public services, there are many things to consider before implementing predictive analytics algorithms, as flawed use in this context can lead to harmful consequences for citizens, individually and collectively, and for public sector workers. All stages of the implementation process are discussed, from the specification of the problem and model design through to the context of use and the outcomes. Evidence is drawn from case studies in child welfare services, the US justice system, and UK public examination grading in 2020. The paper argues that the risks and drawbacks of such technological approaches need to be understood more comprehensively, and tested in the operational setting, before they are implemented. It concludes that while algorithms may be useful in some contexts and help to solve problems, those that predict real-life outcomes appear to have a long way to go before they are safe and trusted for use. Because “ethics” are located in time, place, and social norms, the authors suggest that in the context of public administration, laws on human rights, statutory administrative functions, and data protection (all within the principles of the rule of law) provide the basis for appraising the use of algorithms, with maladministration, rather than a breach of “ethics”, being the primary concern.

Targeting labour market programmes: results from a randomized experiment

Institute for the Study of Labor (IZA), 2007

We evaluate a randomized experiment of a statistical support system developed to assist caseworkers in Swiss employment offices in choosing appropriate active labour market programmes for their unemployed clients. The system predicted the labour market outcome for each programme and thereby suggested an 'optimal' programme for each unemployed person. It was piloted in several employment offices; in those pilot offices, half of the caseworkers used the system and the other half acted as a control group, with caseworkers allocated to treatment and control at random. The experiment was designed such that caseworkers retained full discretion over the choice of active labour market programmes, and the evaluation results showed that caseworkers largely ignored the statistical support system. This indicates that stronger incentives are needed for caseworkers to comply with statistical profiling and targeting systems.
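The design described, one outcome prediction per programme followed by an 'optimal' suggestion, can be sketched as follows. This is a minimal illustration, assuming logistic-regression outcome models trained on synthetic data; the programme names and features are placeholders, not the Swiss system's actual specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
PROGRAMMES = ["job_search_training", "language_course", "wage_subsidy"]

# Synthetic history: four client features and a binary re-employment
# outcome per programme. A real system would use administrative records.
models = {}
for prog in PROGRAMMES:
    X = rng.normal(size=(500, 4))  # e.g. age, tenure, skills, region
    y = (X @ rng.normal(size=4) + rng.normal(size=500) > 0).astype(int)
    models[prog] = LogisticRegression().fit(X, y)


def suggest_programme(client: np.ndarray) -> tuple[str, dict[str, float]]:
    """Predict re-employment probability under each programme and
    return the highest-scoring one as a non-binding suggestion."""
    preds = {prog: float(m.predict_proba(client.reshape(1, -1))[0, 1])
             for prog, m in models.items()}
    return max(preds, key=preds.get), preds


suggestion, scores = suggest_programme(rng.normal(size=4))
print(suggestion, scores)
```

The "non-binding" part matters: as the experiment found, caseworkers kept full discretion and largely ignored such suggestions, so a rule like the one above describes the system's recommendation, not the actual allocation.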

A Danish Profiling System

2004

We describe the statistical model used for profiling newly unemployed workers in Denmark. When a worker, during his or her first six months of unemployment, enters the employment office for the first time, the model predicts whether he or she will remain unemployed for more than six months from that date. The caseworkers' assessment of how to treat the person is partially based upon this prediction.
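A model of this form reduces, at its simplest, to a binary classifier evaluated at the first office visit. The sketch below assumes a logistic specification with invented coefficients; the Danish model's actual features and weights are not given in the abstract.

```python
import math

# Illustrative logistic model of P(unemployed > 6 more months),
# evaluated at the worker's first visit. Features and coefficients
# are invented; they are not the Danish model's specification.
COEFS = {
    "intercept": -1.2,
    "age_over_50": 0.8,
    "no_vocational_training": 0.6,
    "prior_ui_spells": 0.4,  # per previous unemployment-insurance spell
}


def p_long_term(age_over_50: int, no_vocational_training: int,
                prior_ui_spells: int) -> float:
    """Return the predicted probability of long-term unemployment."""
    z = (COEFS["intercept"]
         + COEFS["age_over_50"] * age_over_50
         + COEFS["no_vocational_training"] * no_vocational_training
         + COEFS["prior_ui_spells"] * prior_ui_spells)
    return 1 / (1 + math.exp(-z))


# The prediction informs, but does not determine, the caseworker's
# assessment ("partially based upon" in the abstract).
print(f"{p_long_term(1, 1, 2):.2f}")  # -> 0.73
```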