Governance of algorithms: options and limitations

Governance by and of Algorithms on the Internet: Impact and Consequences

Oxford Research Encyclopedia of Communication, 2020

Internet-based services that build on automated algorithmic selection processes, for example search engines, computational advertising, and recommender systems, are booming, and platform companies that provide such services are among the most valuable corporations worldwide. Algorithms on and beyond the Internet are increasingly influencing, aiding, or replacing human decision-making in many life domains. Their far-reaching, multifaceted economic and social impact, which results from the governance by algorithms, is widely acknowledged. However, suitable policy reactions, that is, the governance of algorithms, are the subject of controversy in academia, politics, industry, and civil society. This governance by and of algorithms is to be understood in the wider context of current technical and societal change, and in connection with other emerging trends. In particular, the expanding algorithmization of life domains is closely interrelated with and dependent on growing datafication and big data on the one hand, and on rising automation and artificial intelligence in modern, digitized societies on the other. Consequently, the assessments and debates of these central developmental trends in digitized societies overlap extensively. Research on the governance by and of algorithms is highly interdisciplinary. Communication studies contributes to the formation of so-called "critical algorithm studies" with its wide set of sub-fields and approaches, applying both qualitative and quantitative methods. Its contributions focus on the impact of algorithmic systems on traditional media, journalism, and the public sphere, and also cover effect analyses and risk assessments of algorithmic-selection applications in many domains of everyday life. The latter includes the whole range of public and private governance options to counter or reduce these risks and to safeguard ethical standards and human rights, including communication rights in a digital age.

Prospectus and limitations of algorithmic governance: an ecological evaluation of algorithmic trends

Digital Policy, Regulation and Governance, 2019

Purpose: The purpose of this study is to offer a roadmap for work on the ethical and societal implications of algorithms and AI. Based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems in Korea, this work conducts socioecological evaluations of the governance of algorithmic transparency and accountability. Design/methodology/approach: This paper analyzes algorithm design and development from critical socioecological angles: social, technological, cultural and industrial phenomena that represent the strategic interaction among people, technology and society, touching on sensitive issues of a legal, cultural and ethical nature. Findings: Algorithm technologies are part of a social ecosystem, and their development should be based on user interests and rights within a social and cultural milieu. An algorithm represents an interrelated, multilayered ecosystem of networks, protocols, applications, services, practices and users. Practical impl...

Digital intermediaries and the public interest standard in algorithm governance

2014

Philip Napoli is Professor of Journalism & Media Studies in the School of Communication & Information at Rutgers University, where his research focuses on media institutions and policy. He has provided testimony on media policy issues to the U.S. Senate, the FCC and the FTC, and has been featured in media outlets such as NBC Nightly News, NPR and the Los Angeles Times. Here he discusses the need for a broader debate about algorithm governance with a more robust notion of the public interest in the digital era.

Algorithmic governance: Developing a research agenda through the power of collective intelligence

We are living in an algorithmic age where mathematics and computer science are coming together in powerful new ways to influence, shape and guide our behaviour and the governance of our societies. As these algorithmic governance structures proliferate, it is vital that we ensure their effectiveness and legitimacy. That is, we need to ensure that they are an effective means of achieving a legitimate policy goal and that they are also procedurally fair, open and unbiased. But how can we ensure that algorithmic governance structures are both? This article shares the results of a collective intelligence workshop that addressed exactly this question. The workshop brought together a multidisciplinary group of scholars to consider (a) barriers to legitimate and effective algorithmic governance and (b) the research methods needed to address the nature and impact of specific barriers. An interactive management workshop technique was used to harness the collective intelligence of this multidisciplinary group. This method enabled participants to produce a framework and research agenda for those who are concerned about algorithmic governance. We outline this research agenda below, providing a detailed map of key research themes, questions and methods that our workshop felt ought to be pursued. This builds upon existing work on research agendas for critical algorithm studies in a unique way through the method of collective intelligence.

A governance framework for algorithmic accountability and transparency

2019

Algorithmic systems are increasingly used as part of decision-making processes in the public and private sectors, with potentially significant consequences for individuals, organisations and societies. However, the very properties of scale, capability to handle complex datasets, and autonomous learning that make these systems useful also make it difficult to provide clear explanations for the decisions they make. This lack of transparency risks undermining meaningful scrutiny and accountability, which is a significant concern in relation to decision-making processes that can have a considerable impact on fundamental human rights. On the basis of a review of existing proposals for the governance of algorithmic systems, the study offers four sets of policy options, each addressing a different aspect of algorithmic transparency and accountability: (i) awareness raising through education, journalism and whistleblowers; (ii) accountability in public sector use of algorithmic systems; (iii) regulatory oversight and legal liability; and (iv) global coordination of algorithmic governance.

Governance by Algorithms: Reality Construction by Algorithmic Selection on the Internet

Media, Culture & Society, 2017

This paper explores the governance by algorithms in information societies. Theoretically, it builds on (co-)evolutionary innovation studies in order to adequately grasp the interplay of technological and societal change, and combines these with institutional approaches to incorporate governance by technology, or rather software, as institutions. Methodologically, it draws from an empirical survey of Internet-based services that rely on automated algorithmic selection, a functional typology derived from it, and an analysis of associated potential social risks. It shows how algorithmic selection has become a growing source of social order, of a shared social reality in information societies. It argues that, similar to the construction of realities by traditional mass media, automated algorithmic selection applications shape daily lives and realities, affect the perception of the world, and influence behavior. However, the co-evolutionary perspective on algorithms as institutions, ideologies, intermediaries and actors highlights differences that are to be found first in the growing personalization of constructed realities, and second in the constellation of involved actors. Altogether, compared to reality construction by traditional mass media, algorithmic reality construction tends to increase individualization, commercialization, inequalities and deterritorialization, and to decrease transparency, controllability and predictability.

What is algorithmic governance?

Sociology Compass, 2021

This article contributes a coherent framework to the rich literature emerging in the field of algorithmic governance while also resolving conflicting understandings. Tracing the history of algorithmic governance to the broad architecture of the universal Turing Machine, the article identifies a common thread of critical concern in the literature on algorithmic governance: the growing institutional capabilities to move contestable issues to a space of reduced negotiability, raising questions of social asymmetry, inequity, and inequality. Within the social context of algorithmic governance, the article highlights three general areas of concern where the social negotiability of processes is threatened: the problem of power (surveillance), discrimination (social bias), and identification (system identity).

A guideline for understanding and measuring algorithmic governance in everyday life

Internet Policy Review, 2019

Mining Governance Mechanisms: Innovation Policy, Practice, and Theory Facing Algorithmic Decision-Making

Handbook of Cyber-Development, Cyber-Democracy, and Cyber-Defense, 2018

The shift from governance of to governance by information infrastructures has major implications for innovation policy. With algorithmic governance, regimes of inclusion/exclusion "sink" into information infrastructures that act as decision-makers. Inclusive governance of innovation thus needs to dig deeper into technological details. This chapter focuses on one major aspect that characterizes algorithmic decision-making, namely the overlap between policy and practice. Drawing upon the innovation dance metaphor, we ask whether any space for theory can be acknowledged when algorithmic governance tightly couples policy and practice. We first attempt to answer this question theoretically by introducing the Science and Technology Studies notion of "de-scription" as a translation of rules and behaviours from extra-somatic material devices into explicit textual instructions. We propose that space for innovation theory can be conceived of as a de-scriptive activity. We then exemplify the overlapping argument through the case of blockchain technologies. Blockchains are the algorithmic software underpinning peer-to-peer electronic payment systems, the most renowned of which is Bitcoin. We argue that blockchains "inscribe trust" into software, and thus constitute self-standing governance mechanisms. By analysing a recent controversy in the Bitcoin community, we show that space for theory is more likely to emerge when a controversy arises that requires de-scription in order to recruit new allies. This evidence suggests that the relationship between theory and inclusion might be inverted: inclusion might not be the outcome of theory; rather, space for theory is the result of controversies in which opposing factions carry out recruitment strategies.
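The claim that blockchains "inscribe trust" into software can be made concrete with a minimal sketch, not drawn from the chapter itself: a hash-linked ledger whose validity rule lives entirely in code rather than in an external institution. All function names and the simplified block structure are illustrative assumptions, not a description of Bitcoin's actual protocol.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form (sort_keys makes it deterministic).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    # Each new block commits to the hash of its predecessor.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})
    return chain

def chain_is_valid(chain):
    # The inscribed governance rule: every link must match, or the
    # history is rejected automatically, with no arbiter involved.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
append_block(ledger, {"from": "alice", "to": "bob", "amount": 5})
append_block(ledger, {"from": "bob", "to": "carol", "amount": 2})
assert chain_is_valid(ledger)

# Tampering with recorded history violates the rule inscribed in the code.
ledger[0]["data"]["amount"] = 500
assert not chain_is_valid(ledger)
```

The point of the sketch is the chapter's: the rule deciding which histories count as legitimate is not written in a statute or enforced by a court; it is embedded in the validation code itself, making the software a self-standing governance mechanism.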

Four Crises in Algorithmic Governance

Annual Review of Law and Ethics, 2018

Algorithmic copyright governance engenders a series of overlapping crises which collectively threaten to destabilize the Western liberal state. The quantization of culture, typified by the reduction of nuanced aesthetic judgment to a standardized system of scores and rankings, undermines the autonomy of individuals and communities as vital contributors to, and arbiters of, cultural meaning. The convergence of institutions, typified in the algorithm's unified role as legislator, executor, and judge of copyright, betokens the dissolution of the democratic separation of powers, opening the door for authoritarian rule. The expansion of scale creates and normalizes the conditions under which traditional means of interpreting and executing the law appear woefully inadequate, and private algorithms become the logical and necessary solution to this failure by the state. Finally, the destabilization of ontology erodes the very conceptual foundations of liberal democracy, replacing an already imbalanced system of power relations with a much steeper hierarchy in which nearly every member of society is relegated to the inferior (and posterior) position of copyist and consumer. Collectively, these developments suggest that we must radically reimagine our copyright system if we are to preserve the individual agency and collective autonomy that have served as the foundational principles of liberalism and modernity.