Platform power and regulatory politics: Polanyi for the twenty-first century

The emergence of platform regulation in the UK: an empirical-legal study

Social Science Research Network, 2021

Online platforms have emerged as a new kind of regulatory object. In this article, we empirically map the emergence of the field of platform regulation in one country: the United Kingdom (UK). We focus on the 18-month period between September 2018 and February 2020, when an upsurge of regulatory activism reflected increasing sensitivity to national sovereignty in the context of Brexit. Through an empirical-legal content analysis of eight official reports issued by the UK government, parliamentary committees, and regulatory agencies, we code the online harms to which regulation is being asked to respond; identify the relevant subject domains of law (such as data protection and privacy, competition, education, media and broadcasting, consumer protection, tax law and financial regulation, intellectual property law, and security law); and analyze the agencies referred to in the reports for their centrality in the regulatory network and their regulatory powers.

The neo-regulation of internet platforms in the United Kingdom

Policy & Internet, 2022

In 2020, a new platform-regulatory model was initiated in the United Kingdom. The main focus of this article is the development of the Digital Regulation Cooperation Forum (DRCF), set up to address the continually changing challenges posed by major platforms. The British model has drawn on UK competition regulation, the collegial governance of international banking, and international 'agile' regulation debates. The backdrop to this 'neo-regulatory' development has been Brexit (the UK's withdrawal from the European Union). In late 2021, for the first time, the DRCF became a serious focus of parliamentary scrutiny, with recommendations for reform. Assessment of its effectiveness now largely awaits the enactment of statutory powers by the UK Parliament regarding, first, 'online harms' and, second, the use of pro-competitive regulation in digital markets. Whether the DRCF's model will have exemplary appeal beyond the United Kingdom is an open question.

The neo-regulation of internet platforms in the UK

Zenodo (CERN European Organization for Nuclear Research), 2021

In 2020, the UK entered a consolidating phase in the development of platform regulation. The present 'neo-regulatory' moment has been shaped, first, by Brexit (the UK's withdrawal from the European Union) and, second, by the creation of a voluntary collaborative forum by a diverse group of regulators. The regulatory forum is modelled on established UK competition regulation and also draws inspiration from the colleges used in international banking regulation. It is founded on a pro-competition approach. This article retraces the key steps taken in establishing the present regulatory posture and machinery. It also illustrates how Brexit has informed recent policy moves. It is too early to know how effective the new framework will be because it still awaits the enactment of two statutes: the first regarding 'online harms' and the second regarding the use of pro-competitive powers in digital markets.

Digital Platforms and Their Normative Role: Looking Through the Lens of European Fundamental Values

Pravo ì suspìlʹstvo, 2022

The article is devoted to digital platforms and their impact on individuals and societies, including the legal systems and values on which the European legal order is based. It provides a brief overview of how the term "digital platforms" is understood and how it relates to "online platforms"; for the purposes of this research, the term is narrowed to two types of platform: (1) those intended for the exchange of information, goods or services between producers and consumers, mainly through the provision of services, and (2) so-called "social media", that is, platforms previously seen as communities and places for the exchange of opinions. The article questions whether the nature and activities of modern digital platforms are compatible with the requirements of European fundamental values, in particular human rights, democracy and the rule of law. Special attention is paid to the normative role of digital platforms, which includes both a directly regulatory role and the broader role of forming and maintaining certain social norms. This role manifests itself in regulatory intervention, changes to the social landscape, and the replacement of public institutions in their key activities and in how individuals perceive them. More specifically, the influence of digital platforms is expressed in the promotion of specific regulatory norms, regimes for the protection of human rights and interpretations of the essence of particular fundamental rights; the regulation of social relations through design; the spread of certain types of contractual relations; and the replacement of the justice system with the restrictive dispute resolution procedures offered by the platforms. Digital platforms foster habituation to, and tolerance of, certain business models, including the dependence of people and governments on the platforms' decisions and actions, the absence of alternatives, and a monopoly position not only in the market but in a broader context. By gaining public trust, platforms supplant traditional public institutions and increase their influence on public opinion and democratic processes. At the same time, platforms and their owners largely avoid both legal and moral responsibility for the consequences of their activities for human rights, the rule of law and democracy.

Digital Platform Policy and Regulation: Toward a Radical Democratic Turn

International Journal of Communication, 2020

This article considers challenges to policy and regulation presented by the dominant digital platforms. A radical democratic framing of the deliberative process is developed to acknowledge the full complexity of power relations that are at play in policy and regulatory debates, and this view is contrasted with a traditional liberal democratic perspective. We show how these different framings have informed historical and contemporary approaches to the challenges presented by conflicting interests in economic value and a range of public values in the context of media content, communication infrastructure, and digital platform policy and regulation. We argue for an agonistic approach to digital platform policy and regulatory debate so as to encourage a denaturalization of the prevailing logics of commercial datafication. We offer some suggestions about how such a generative discourse might be encouraged so that it begins to yield a new common sense about the further development of digital platforms, one that might favor a digital ecology better attuned to consumer and citizen interests in democratic societies.

States as platforms following the new EU regulations on online platforms

2022

The European Parliament's recent adoption of the Digital Services Act means that, when the Act comes into effect, the term ‘online platforms’ will be formally introduced into EU law. In effect, between the Digital Services Act and the Digital Markets Act, a comprehensive framework for the regulation of online platforms is being introduced into EU law, the first of its kind both in Europe and internationally. However, European regulatory innovation invites a different viewpoint: Could states be considered platforms? What if this new regulatory framework were applied to states themselves? This article first outlines the regulations on online platforms in EU law. It then discusses the role of states as information brokers in order to support its main argument, that states can be viewed as (online) platforms. A discussion of the consequences of such a conclusion is included in the final part of this analysis.

Towards a Regulatory Theory of Platform Rule: Corporate "Sovereignty" Through Immunities

St. Mary's Law Journal, 2024

The scale of inflammatory, divisive, false and harmful online content has prompted much soul-searching about its sources, causes and possible responses. This has brought the sweeping immunity in section 230 of the Communications Decency Act (intended to empower platforms as moderators) under intense scrutiny. Far from providing relief, it appears to have turned platforms into a source of the problem. This Article offers a fresh take on section 230, which, despite its apparent commonplaceness, is shown to be an extraordinary legal intervention: it gives important actors, otherwise key to controlling a space, a “carte blanche immunity for wrongful conduct.” That extraordinariness requires an explanation going beyond standard arguments about giving young internet companies some “breathing space” or removing disincentives for content moderation. The discussion starts with the proposition that an immunity entails self-governance, not as a matter of cause and effect, but in purely analytical terms: being immune means to self-govern within the scope of the immunity, that is, to act without legal accountability. Building on this basic understanding of an immunity as self-governance, the Article traces the provenance of section 230 and its sweeping application to online platforms through three very different, but complementary, legal contexts: first, within the landscape of immunities as extraordinary legal devices often employed in support of governing activity; second, within the conception of the corporation as a self-governing institution embedded in immunities and impunities; and, third, within the constitutional framework and its capacity to recognise the “sovereignty-sharing” arrangement of government and platform in cyberspace. The Article’s overarching argument is that section 230 taps into the governing propensity of platforms not just as intermediaries or gatekeepers of online content, but as corporate actors which are, it is argued, inherently immune/self-governing actors with a long-standing history of “sovereignty-sharing” with government. Through this corporate prism the extraordinary “sovereign” role of platforms in cyberspace becomes intelligible. Normatively, the argument recasts platforms as hybrid private-public actors, consistent with the body of corporate scholarship that postulates the sui generis nature of the corporation as neither quite private nor quite public. Section 230 intensifies this argument in the case of online platforms. Repositioning online platforms as sitting “on the fence” of the private-public constitutional divide then provides the foundation for asking how constitutional restraints applicable to government may be adapted to ensure platform accountability.

Emerging platform governance: antitrust reform and non-competitive harms in digital platform markets

Information, Communication & Society, 2023

Following the Snowden revelations, Cambridge Analytica, and a policy vacuum created by technological convergence and neoliberal reforms, policy efforts to articulate oversight of digital platform markets gathered policymaker support and public attention internationally. In the U.S., examined here as a case study of these international policy efforts, competition policy emerged as a prominent governance mechanism over digital platforms, resulting in the current antitrust scrutiny of tech giants like Google and Meta. Drawing on policy documents, fieldwork, and expert interviews, I trace how antitrust reform proposals, pitched as reclaiming democratic governance over private markets, came to dominate platform policy discussions. I examine how policy efforts to address platform power via competition grappled with non-competitive harms arising in digital markets, such as threats to user privacy and disinformation flows. Finally, I show how these debates began to converge on the contours of an emergent governance paradigm for digital platform oversight. I argue that this governance framework, which seeks to optimize market mechanisms to discipline platform markets, has significant limitations, notably in addressing issues associated with big data commodification and quantification.

Platforms' Governance: Analyzing Digital Platforms' Policy Preferences

Growing political distrust in digital platforms has galvanized policy debates about how best to address issues associated with their market power and ad-driven business models, including the proliferation of misinformation, privacy threats, and electoral interference. The range of proposed solutions includes growing calls for public-private policy regimes, such as co-regulation. Such proposals envision a role for digital platforms in addressing platform-related problems, a role whose contours remain to be defined. In this article, we examine how platform companies attempt to influence these debates and define this role, focusing on the biggest U.S. digital platform companies: Amazon, Apple, Google, Facebook, and Microsoft. We conduct a content analysis of a sample of 2019 public policy blogs, statements, and testimonies by key personnel at these companies to gain insight into (a) the policy issues they engage, (b) the policy preferences they communicate, and (c) what these communications reveal about their regulatory philosophies and visions of platform governance. The findings shed light on the politics underlying the debates over platform governance and provide insight into what co-regulatory approaches might look like in practice. We call these policy paradigms “frictionless regulation”: light and narrow regulatory oversight confined to baseline standard-setting, receptive to the private sector’s ongoing feedback, and prioritizing fast responsiveness to market needs over the slow and deliberative responsiveness to the public that is typical of democratic governance.