Constitutional Law in the Algorithmic Society
Related papers
Constitutional democracy and technology in the age of artificial intelligence
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
Given the foreseeable pervasiveness of artificial intelligence (AI) in modern societies, it is legitimate and necessary to ask how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy. This paper first describes the four core elements of today's digital power concentration, which must be viewed cumulatively and which, taken together, threaten both democracy and functioning markets. It then recalls the experience of the lawless Internet, the relationship between technology and law as it has developed in the Internet economy, and the experience with the GDPR, before turning to the key question for AI in democracy: which challenges of AI can safely and in good conscience be left to ethics, and which must be addressed by rules that are enforceable and carry the legitimacy of the democratic process, that is, by laws. The paper closes with a call for a new culture of inc...
Fundamental Rights and the Rule of Law in the Algorithmic Society
Constitutional Challenges in the Algorithmic Society
2.1 New Technologies and the Rise of the Algorithmic Society
New technologies offer human agents entirely new ways of doing things.1 However, as history shows, 'practical' innovations always bring with them more significant changes. Each new option introduced by technological evolution, by allowing new forms, affects the substance, eventually changing the way humans think and relate to each other.2 The transformation is especially true when we consider information and communication technologies (so-called ICT); as Marshall McLuhan put it, 'the medium is the message'.3 Furthermore, this scenario has been accelerated by the appearance of artificial intelligence systems (AIS) based on the application of machine learning (ML). These new technologies not only allow people to find information at incredible speed; they also recast decision-making processes once in the exclusive remit of human beings.4 By learning from vast amounts of data (so-called Big Data), AIS offer predictions, evaluations, and hypotheses that go beyond the mere application of pre-existing rules or programs. They instead 'induce' their own rules of action from data analysis; in a word, they make autonomous decisions.5
1 Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books 2015).
2 One of the most prominent prophets of the idea of a new kind of progress generated through the use of technologies is surely Jeremy Rifkin. See his book The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism (St. Martin's Press 2014).
3 Marshall McLuhan and Quentin Fiore, The Medium Is the Massage (Ginko Press 1967).
4 Committee of Experts on Internet Intermediaries of the Council of Europe (MSI-NET), 'Algorithms and Human Rights. Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications' (2016) DGI(2017)12.
5 According to the European Parliament, 'Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))' (P8_TA(2017)0051, Bruxelles), 'a robot's autonomy can be defined as the ability to take decisions and implement them in the outside world, independently of external control or influence.'
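The excerpt above describes systems that 'induce' their own rules of action from data rather than applying rules fixed in advance by a programmer. As a purely illustrative sketch, not drawn from the chapter, the following fragment trains a small decision-tree classifier on a hypothetical toy dataset; the feature names, labels, and the use of scikit-learn are assumptions made for the example.

```python
# Minimal sketch (not from the chapter): a model that "induces" its own
# decision rules from data instead of applying rules written by a programmer.
# The toy dataset and feature names are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [applicant_income, prior_defaults]; label: 1 = approve, 0 = deny.
X = [[52_000, 0], [18_000, 2], [75_000, 1], [23_000, 0], [61_000, 3], [30_000, 1]]
y = [1, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)  # the decision rules are learned from the examples, not hand-coded

# The induced rules can be printed, but they originate from the data,
# not from an explicit program written in advance.
print(export_text(model, feature_names=["income", "prior_defaults"]))
print(model.predict([[40_000, 1]]))  # classification of a new, unseen case
```

The printed tree contains rules no programmer wrote, which is the sense in which such systems are said to decide 'autonomously'.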
Taming the Digital Leviathan: Automated Decision-Making and International Human Rights
AJIL Unbound, 2020
Enthusiasm abounds about the potential of artificial intelligence to automate public decision-making. The rise of machine learning and computational text analysis together with the proliferation of digital platforms has raised the prospect of “robo-judging” and “robo-administrators.” From a human rights perspective, the reaction has been mixed, and on balance negative. Optimists herald the possibilities of democratizing legal services and making decision-making more predictable and efficient. Critics warn, however, of the specter of new forms of social control, arbitrariness, and inequality. This essay examines the concerns over the turn to automation from the perspective of two international human rights: the rights to social security and a fair trial. It argues that while the critiques deserve a full hearing, they should be evidence-based, informed by an understanding of “technological systems,” and cognizant of the trade-offs between human and machine failure.
The Democratization of Artificial Intelligence, 2019
Digital technologies are in the process of reconfiguring our democracy. While we look for orientation and guidance in this process, the relationship between technology and democracy is unclear and seems to be in flux. Are technology and democracy mirroring each other? 1 The internet was first hailed as a genuinely democratic technology and the ultimate enabler of democracy. It is now often perceived as a major threat to democracy. The story of artificial intelligence (AI) might turn out to be quite the opposite. While there are many reflections on AI as a threat to or even as the end of democracy, 2 some voices highlight the democratic potentials of AI. 3 As is often the case, the research results depend on the premises underlying the research. This chapter is based on the assertion that technologies and media shape human affairs to a large extent, but that technology in turn is also shaped by human choices and decisions. There is a huge potential to endanger, game or even abolish democratic processes. Conversely, there might also be opportunities to further democracy. Therefore, the extent to which AI impacts democracy depends on the paths that are chosen in the research, development and application of AI in society. The main purpose of this chapter is to highlight the room for choice in the construction of AI and its impacts on the future of democracy. It will also inquire into how law and jurisprudence relate to these questions. From this perspective, the current impacts of AI on democracy have an important indicative function. But in the face of further possibilities of invention and regulative measures on different levels, they are only precursors to what will and should be possible. In that sense, this chapter is also an attempt to deal with developments and inventions we cannot yet grasp. The main argument is that it might be possible to influence them nevertheless. Therefore, the chapter will reflect on the possibility and necessity of democratizing AI from a legal and jurisprudential perspective. It will then look at different ways to democratize AI.
Democracy and the Algorithmic Turn
In the current moment of democratic upheaval, the role of technology has been gaining increasing space in the democratic debate, owing both to its role in facilitating political debates and to how users' data is gathered and used. This article discusses the relationship between democracy and the "algorithmic turn", which the authors define as the "central and strategic role of data processing and automated reasoning in electoral processes, governance and decision making." In doing so, the authors help us understand how this phenomenon is influencing society, both positively and negatively, and what practical implications follow from it.
Algorithmic transparency as a fundamental right in the democratic rule of law
Brazilian Journal of Law Technology and Innovation, 2023
This article scrutinizes the escalating apprehensions surrounding algorithmic transparency, positing it as a pivotal facet of ethics and accountability in the development and deployment of artificial intelligence (AI) systems. By delving into legislative and regulatory initiatives across various jurisdictions, the article discerns how different countries and regions endeavor to institute guidelines fostering ethical and responsible AI systems. Both the US Algorithmic Accountability Act of 2022 and the European Artificial Intelligence Act share a common objective of establishing governance frameworks to hold errant entities accountable, ensuring the ethical, legal, and secure implementation of AI systems. A key emphasis in both pieces of legislation is placed on algorithmic transparency and the elucidation of system functionalities, with the overarching goal of instilling accountability in AI operations. This examination extends to Brazil, where legislative proposals such as PL 2.338/2023 grapple with the intricacies of AI deployment and algorithmic transparency. Furthermore, PEC 29/2023 endeavors to enshrine algorithmic transparency as a fundamental right, recognizing its pivotal role in safeguarding users' mental integrity in the face of advancing neurotechnology and algorithmic utilization. To ascertain the approaches adopted by Europe, the United States, and Brazil in realizing the concept of Algorithmic Transparency in AI systems employed for decision-making, a comparative and deductive methodology is used, aligned with bibliographical analysis and incorporating legal doctrines, legislative texts, and jurisprudential considerations from the respective legal systems. The analysis encompasses Algorithmic Transparency, Digital Due Process, and Accountability as inherent legal constructs, offering a comprehensive comparative perspective. However, mere accessibility of source code is deemed insufficient to guarantee effective comprehension and scrutiny by end-users. Recognizing this, the imperative of explainability in elucidating how AI systems function becomes evident, enabling citizens to comprehend the rationale behind decisions made by these systems. Legislative initiatives, exemplified by Resolution No. 332/2020 of the National Council of Justice (CNJ), underscore the acknowledgment of the imperative for transparency and accountability in AI systems utilized within the Judiciary.
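Since the abstract stresses that access to source code alone is not enough and that explainability must let citizens grasp the rationale behind a decision, a minimal sketch may help. It assumes a simple linear scoring model built with scikit-learn; the feature names and training data are hypothetical. A per-decision breakdown of feature contributions is one rudimentary form such an explanation could take.

```python
# Minimal sketch (assumption: a linear scoring model). Publishing source code alone
# does not tell an affected person *why* a decision came out as it did, but a
# per-decision breakdown of feature contributions can. Feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "outstanding_debt", "years_employed"]
X_train = np.array([[40, 5, 2], [20, 30, 1], [60, 10, 8], [25, 25, 0], [55, 2, 6]], dtype=float)
y_train = np.array([1, 0, 1, 0, 1])  # 1 = benefit granted, 0 = refused

model = LogisticRegression().fit(X_train, y_train)

def explain(applicant: np.ndarray) -> dict:
    """Return each feature's signed contribution to the decision score."""
    contributions = model.coef_[0] * applicant
    return dict(zip(feature_names, contributions.round(3)))

applicant = np.array([30, 20, 1], dtype=float)
print("decision:", model.predict([applicant])[0])
print("why:", explain(applicant))  # a rationale a reviewer or affected person can inspect
```

Real systems are rarely this simple, but the design point stands: the explanation is produced per decision and in terms of the inputs, not by handing over the code.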
Constitutional Challenges in the Algorithmic Society
Cambridge University Press eBooks, 2021
Is a future in which our emotions are being detected in real time and tracked, both in private and public spaces, dawning? Looking at recent technological developments, studies, patents, and ongoing experimentation, this may well be the case.1 In its Declaration on the manipulative capabilities of algorithmic processes of February 2019, the Council of Europe's Committee of Ministers alerts us to the growing capacity of contemporary machine learning tools not only to predict choices but also to influence emotions, thoughts, and even actions, sometimes subliminally.2 This certainly adds a new dimension to existing computational means, which increasingly make it possible to infer intimate and detailed information about individuals from readily available data, facilitating the microtargeting of individuals based on profiles in a way that may profoundly affect
* The chapter is based on the keynote delivered by P. Valcke at the inaugural conference 'Constitutional Challenges in the Algorithmic Society' of the IACL Research Group on 'Algorithmic State, Market & Society - Constitutional Dimensions', which was held from 9 to 11 May 2019 in Florence (Italy). It draws heavily from the PhD thesis of D. Clifford, entitled 'The Legal Limits to the Monetisation of Online Emotions' and defended at KU Leuven - Faculty of Law on 3 July 2019, to which the reader is referred for a more in-depth discussion.
1 For some illustrations, see B. Doerrfeld, '20+ Emotion Recognition APIs That Will Leave You Impressed, and Concerned' (Article 2015) https://nordicapis.com/20-emotion-recognition-apis-thatwill-leave-you-impressed-and-concerned/ accessed 11 June 2020; M. Zhao, F. Adib and D. Katabi, 'EQ-Radio: Emotion Recognition using Wireless Signals' (Paper 2016) http://eqradio.csail.mit.edu/ accessed 11 June 2020; CB Insights, 'Facebook's Emotion Tech: Patents Show New Ways for Detecting and Responding to Users' Feelings' (Article 2017) www.cbinsights.com/research/facebookemotion-patents-analysis/ accessed 11 June 2020; R. Murdoch et al., 'How to Build a Responsible Future for Emotional AI' (Research Report 2020) www.accenture.com/fi-en/insights/softwareplatforms/emotional-ai accessed 11 June 2020. Gartner predicts that by 2022, 10 per cent of personal devices will have emotion AI capabilities, either on-device or via cloud services, up from less than 1 per cent in 2018: Gartner, 'Gartner Highlights 10 Uses for AI-Powered Smartphones' (Press Release 2018) www.gartner.com/en/newsroom/press-releases/2018-03-20-gartner-highlights-10-uses-for-ai-powered-smartphones accessed
Automating Government Decision-Making: Implications for the Rule of Law
Technology, Innovation and Access to Justice; Edinburgh University Press, 2021
Automation promises to improve a wide range of processes. The introduction of controlled procedures and systems in place of human labour can enhance efficiency as well as certainty and consistency. It is thus unsurprising that automation is being embraced by the private sector in fields including pharmaceuticals, retail, banking and transport. Automation also promises similar benefits to government. It has the potential to make governments – and even whole democratic systems – more accurate, more efficient and fairer. As a result, several nations have become enthusiastic adopters of automation in fields such as welfare allocation and the criminal justice system. While not a recent development, automated systems that support or replace human decision-making in government are increasingly being used. This chapter assesses the benefits and the challenges to the rule of law posed by the automation of government decision-making. We focus narrowly on aspects of the rule of law that have the widest acceptance across political and national systems, notably that it requires governance in which the law is predictable, stable and accessible, and in which everyone is equal before the law. These rule of law values are applied to four case studies: automated debt-collection in Australia, data-driven risk assessment by judges in the United States, social credit scoring in China, and automated welfare in Sweden.
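As a hedged illustration of how the rule-of-law values named in the abstract could be probed in practice, the sketch below phrases predictability and equality before the law as simple testable properties of a decision function. The scoring rule, inputs, and test cases are hypothetical and are not drawn from any of the chapter's four case studies.

```python
# Illustrative sketch only: hypothetical scoring rule and hypothetical test cases.
def risk_score(prior_offences: int, age: int) -> float:
    """A stand-in automated scoring rule (not any real system)."""
    return min(1.0, 0.15 * prior_offences + (0.1 if age < 25 else 0.0))

def is_deterministic(decide, cases) -> bool:
    """Predictability/stability: the same facts always yield the same outcome.
    Trivially true for a pure function; the check matters once a system calls
    external services, randomised models, or changing data sources."""
    return all(decide(*c) == decide(*c) for c in cases)

def average_gap(decide, group_a, group_b) -> float:
    """Equality before the law: compare average outcomes for two groups of
    otherwise comparable cases."""
    def avg(group):
        return sum(decide(*c) for c in group) / len(group)
    return abs(avg(group_a) - avg(group_b))

cases = [(0, 30), (2, 22), (5, 40)]
print(is_deterministic(risk_score, cases))                              # True
print(average_gap(risk_score, [(1, 30), (2, 35)], [(1, 22), (2, 23)]))  # roughly 0.1
```

The point of the sketch is only that such values can be made checkable: a non-zero gap between like groups, or non-repeatable outputs, would flag exactly the rule-of-law concerns the chapter examines.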
When Is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis
German Law Journal, 2024
This Article addresses the pressing issues surrounding the use of automated systems in public decision-making, specifically focusing on migration, asylum, and mobility. Drawing on empirical data, this Article examines the potential and limitations of the General Data Protection Regulation and the Artificial Intelligence Act in effectively addressing the challenges posed by automated decision-making (ADM). The Article argues that the current legal definitions and categorizations of ADM fail to capture the complexity and diversity of real-life applications, where automated systems assist human decision-makers rather than replace them entirely. To bridge the gap between ADM in law and in practice, this Article proposes to move beyond the concept of "automated decisions" and to complement the legal protection in the GDPR and AI Act with a taxonomy that can inform a fundamental rights analysis. This taxonomy enhances our understanding of ADM and allows us to identify the fundamental rights at stake and the sector-specific legislation applicable to ADM. The Article calls for empirical observations and input from experts in other areas of public law to enrich and refine the proposed taxonomy, thus ensuring clearer conceptual frameworks to safeguard individuals in our increasingly algorithmic society.
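By way of illustration only, and without reproducing the Article's actual taxonomy, the sketch below shows how a taxonomy of automation roles could be encoded so that categories map onto applicable legal protections. The category names, fields, and the rough GDPR Article 22 heuristic are assumptions made for the example.

```python
# Illustrative sketch: one way a taxonomy of automation in decision-making could be
# encoded for analysis. The categories below are assumptions for illustration,
# not the taxonomy proposed in the Article.
from dataclasses import dataclass
from enum import Enum, auto

class Role(Enum):
    FULLY_AUTOMATED = auto()       # the system issues the decision itself
    DECISION_SUPPORT = auto()      # the system recommends, an official decides
    TRIAGE = auto()                # the system sorts or prioritises cases
    INFORMATION_GATHERING = auto() # the system collects or matches data

@dataclass
class ADMSystem:
    name: str
    role: Role
    sector: str          # e.g. migration, asylum, mobility
    human_review: bool   # is meaningful human review available?

    def gdpr_art22_candidate(self) -> bool:
        # Rough heuristic only: Article 22 GDPR targets decisions based solely
        # on automated processing, so assistive tools tend to fall outside it.
        return self.role is Role.FULLY_AUTOMATED and not self.human_review

visa_triage = ADMSystem("visa triage tool", Role.TRIAGE, "migration", human_review=True)
print(visa_triage.gdpr_art22_candidate())  # False: support tools escape the narrow definition
```

Encoding the categories this way makes the Article's gap visible: many systems that shape outcomes in practice do not register as "automated decisions" under the narrow legal test.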
Artificial Intelligence and Human Rights
Journal of Democracy, 2019
In democratic societies, concern about the consequences of our growing reliance upon artificial intelligence (AI) is rising. The term AI, coined by John McCarthy in 1956, is elusive in its precise meaning but today broadly refers to machines that can go beyond their explicit programming by making choices in ways that mirror human reasoning. In other words, AI automates decisions that people used to make. 1 While AI promises many benefits, there are also risks associated with the swift advancement and adoption of the technology. Perhaps the darkest concerns relate to the misuse of AI by authoritarian regimes. Even in free societies, however, and even when the intended application is for clearly good purposes, there is significant potential for unintended harms such as reduced privacy, lost accountability, and embedded bias. In digitally connected democracies, talk of what could go wrong with AI now touches on everything from massive job loss caused by automation to machines that make discriminatory hiring decisions, and even to threats posed by "killer robots." These concerns have darkened public attitudes and made this a key moment to either build or destroy public trust in AI. How did we get to this point? In the connected half of the world, the shift to the "data-driven" society has been quick and quiet, so quick and quiet that we have barely begun to come to grips with what our growing reliance on machine-made decisions in so many areas of life will mean for human agency, democratic accountability, and the enjoyment of human rights. Many governments have been formulating national AI strategies to keep from being left behind by the AI revolution, but few have been grap