The Case for a Ban on Facial Recognition Surveillance in Canada

Justice Sub-Committee on Policing - the Scottish Parliament: Facial Recognition: How Policing in Scotland Makes Use of This Technology

2019

As a result of interest from the Sub-Committee in my recent study exploring the impacts of body-worn cameras (BWC), I hereby provide further evidence exploring the views expressed by UK police officers on the use of live facial recognition (LFR) technology.

1. Background and methodology

This research project explores police-public encounters mediated by BWC. Methodologically, we consider police officers' perceptions of and engagement with these technologies in their professional practice. In total, 26 semi-structured interviews were conducted with police officers from two British police forces in different geographic locations (the South and North of the UK). These interviews were audio recorded and transcribed verbatim, without recording participants' names.

2. Findings

These interviews focused on the use of BWC and considered how these devices might be accompanied by other emerging technologies (such as LFR). However, optimism and confidence in the potential uses of this technology were largely overshadowed by scepticism and disbelief about the scenarios in which LFR is currently possible or sensible. In this short paper we explore some of the concerns raised by police officers when considering the potential use of LFR.

2.1. Acceptance and resistance

The participants discussed how the adoption of an emerging technology such as LFR is subject to a process of either acceptance or resistance from both members of the public and police organisations. Several examples were used to illustrate how technologies accepted in the past are now used on a daily basis (such as automatic number plate recognition) and how LFR could be just a "step further" in order to "read the picture, the image, of the person" in the future (Larry, 12 years of service). Nonetheless, officers agreed that they will face backlash from the public if LFR is not deemed to work effectively.
For instance, PC Mark (27 years of service, firearms unit) believed that: "We will be using a lot in the future. I wouldn't say debug it and get one that works but then it is just one of those things that will be used to fight crime. I would imagine lots of people would moan about it to start with, but I would imagine once we get a system that works properly we will end up using it." Even if LFR is reliable in the long run, the police officers discussed strategies of resistance that will be deployed in order to avoid being recognised by the technology.

Burning Bridges: The Automated Facial Recognition Technology and Public Space Surveillance in the Modern State

2021

Live automated facial recognition technology, rolled out in public spaces and cities across the world, is transforming the nature of modern policing. R (on the application of Bridges) v Chief Constable of South Wales Police, decided in August 2020, is the first successful legal challenge to automated facial recognition technology in the world. In Bridges, the United Kingdom’s Court of Appeal held that the South Wales Police force’s use of automated facial recognition technology was unlawful. This landmark ruling could influence future policy on facial recognition in many countries. The Bridges decision imposes some limits on the police’s previously unconstrained discretion to decide whom to target and where to deploy the technology. Yet, while the decision requires that the police adopt a clearer legal framework to limit this discretion, it does not, in principle, prevent the use of facial recognition technology for mass-surveillance in public places, nor for monitoring political p...

Your face is not new to me – Regulating the surveillance power of facial recognition technologies

Internet Policy Review, 12(1), 2023

Facial recognition technologies (FRTs) represent one of the cutting-edge applications of artificial intelligence and big data for surveillance purposes. The uses of these biometric technologies are widespread in our cities. However, they may result in serious abuses against the rights of people and minorities, or even in new kinds of mass surveillance. The article focuses on “real-time” and “live” use by law enforcement authorities, one of the most discussed deployments of FRTs. The analysis addresses, from a constitutional point of view, whether banning these technologies is inevitable, or whether it is possible to regulate them in a way that allows their use while protecting the fundamental rights at stake and preserving democratic order and the rule of law. The principle of proportionality is the standard for defining appropriate regulatory measures. The article starts off by providing an overview of how FRTs work and some of the consequent ethical, technical, societal and legal concerns that arise. It then provides a critical analysis of EU data protection legislation and the AI Act proposal to examine their strengths and shortcomings in addressing the proportionate use of FRTs.

Policing faces: the present and future of intelligent facial surveillance

Information & Communications Technology Law, 2021

In this paper, we discuss the present and future uses of intelligent facial surveillance (IFS) in law enforcement. We present an empirical and legally focused case study of live automated facial recognition technologies (LFR) in British policing. In Part I, we analyse insights from 26 frontline police officers exploring their concerns and current scepticism about LFR. We analyse recent UK case law on LFR use by police which raises concerns around human rights, data protection and anti-discrimination laws. In Part II, we consider frontline officers' optimism around future uses of LFR and explore emerging forms of IFS, namely emotional AI (EAI) technologies. A key novelty of the paper is our analysis on how the proposed EU AI Regulation (AIR) will shape future uses of IFS in policing. AIR makes LFR a prohibited form of AI and EAI use by law enforcement will be regulated as high-risk AI that has to comply with new rules and design requirements. Part III presents a series of 10 practical lessons, drawn from our reflections on the legal and empirical perspectives. These aim to inform any future law enforcement use of IFS in the UK and beyond.

Picturing algorithmic surveillance: The politics of facial recognition systems

Surveillance & Society, 2004

This paper opens up for scrutiny the politics of algorithmic surveillance through an examination of Facial Recognition Systems (FRSs) in video surveillance, showing that seemingly mundane design decisions may have important political consequences that ought to be subject to scrutiny. It first focuses on the politics of technology and algorithmic surveillance systems in particular: considering the broad politics of technology; the nature of algorithmic surveillance and biometrics, claiming that software algorithms are a particularly important domain of techno-politics; and finally considering both the growth of algorithmic biometric surveillance and the potential problems with such systems. Secondly, it gives an account of FRSs, the algorithms upon which they are based, and the biases embedded therein. In the third part, the ways in which these biases may manifest themselves in real-world implementations of FRSs are outlined. Finally, some policy suggestions for the future development of FRSs are made; it is noted that the most common critiques of such systems are based on notions of privacy which seem increasingly at odds with the world of automated systems.

The sensitive nature of facial recognition: Tensions between the Swedish police and regulatory authorities

Information Polity, 2022

Emerging technologies with artificial intelligence (AI) and machine learning are laying the foundation for surveillance capabilities of a magnitude never seen before. This article focuses on facial recognition, now rapidly introduced in many police authorities around the world, with expectations of enhanced security but also subject to concerns related to privacy. The article examines a recent case where the Swedish police used the controversial facial recognition application Clearview AI, which led to a supervisory investigation that deemed the police's use of the technology illegitimate. The following research question guided the study: How do the trade-offs between privacy and security unfold in the police use of facial recognition technology? The study was designed as a qualitative document analysis of the institutional dialogue between the police and two regulatory authorities; theoretically, we draw on technological affordance and legitimacy. The results show how the police's use of facial recognition gives rise to various tensions that force the police, as well as policy makers, to rethink and further articulate the meaning of privacy. By identifying these tensions, the article contributes insights into the controversial legitimacy issues that may arise around rules governing the availability and use of facial recognition.

Has the horse bolted? Dealing with legal and practical challenges of facial recognition

MediaLaws, 2022

Facial recognition is a technology widely used by both individuals and public authorities. Whilst it has great potential, especially in law enforcement, it may lead to unforeseen outcomes. This is why the European Union (EU) has begun, within the framework of the AI Act, to consider how to regulate this technology so as to avoid cases similar to Clearview AI. Analysing the EU approach to biometric identification systems and the case of Clearview AI, this article explores the legal and practical challenges that facial recognition poses.

Resisting the rise of facial recognition

Nature, 2020

Growing use of surveillance technology has prompted calls for bans and stricter regulation. By Antoaneta Roussi.

[Image caption: Cameras watch over Belgrade's Republic Square.]