Toward the Agile and Comprehensive International Governance of AI and Robotics [Point of View]

An Agile Ethical/Legal Model for the International and National Governance of AI and Robotics

2017

The accelerating pace of emerging technologies such as AI has revealed a total mismatch between existing governmental approaches and what is needed for effective ethical/legal oversight. To address this "pacing gap," the authors proposed governance coordinating committees (GCCs) in 2015 as a new, more agile approach for the coordinated oversight of emerging technologies. In this paper, we briefly reintroduce the reasons why AI and robotics require more agile governance and the potential role of the GCC model in meeting that need. Second, we flesh out the roles for government, engineering, and ethics in forging a comprehensive approach to the oversight of AI/robotics mediated by a GCC. We argue for an international GCC with complementary regional bodies in light of the transnational nature of AI concerns and risks. We also propose a series of new mechanisms for enforcing (directly or indirectly) "soft law" approaches for AI through coordinated institutional controls by insurers, jo...

AI Governance in a Complex and Rapidly Changing Regulatory Landscape

Humanities and Social Sciences Communications, 2024

The rapid advancement and deployment of Artificial Intelligence (AI) poses significant regulatory challenges for societies. While it has the potential to bring many benefits, the risks of commercial exploitation or unknown technological dangers have led many jurisdictions to seek a legal response before measurable harm occurs. However, the lack of technical capability to regulate this sector, despite the urgency to do so, has resulted in regulatory inertia. Given the borderless nature of this issue, an internationally coordinated response is necessary. This article focuses on the theoretical framework being established for the development of international law applicable to AI and the regulatory authority to create and monitor enforcement of that law. The authors argue that, despite current attempts to that end, the road ahead remains full of obstacles that must be tackled before these elements see the light of day.

Regulating Advanced Technologies: Policy Papers of the Jean Monnet Centre of Excellence on the Regulation of Robotics and AI

2022

The policy papers collected here represent the outcome of the work carried out by EURA and its teams over the entire project duration (August 2018 to today), and rest upon analysis conducted across a number of events and debates, including previous editions of the yearly EURA Conference and seminars, all of which may still be watched on our website (https://www.eura.santannapisa.it/) and social media, in particular the EURA YouTube channel. Indeed, one of the focal points of our Jean Monnet Centre of Excellence (CoE) is to establish a constant dialogue with policy-makers, identifying topics of relevance for societal debate and selecting trends and issues relevant for businesses and investors. Our members participate in various bodies that support policy building in the area of robotics and the embedding of ethical standards in AI systems. In order to convey the outcomes of its research and activities to legislative and governmental entities, EURA released a number of opinions and policy papers providing early identification of relevant issues, assessments and recommendations, participated in many debates leading to the adoption of regulation, and advanced proposals for action. EURA as an entity, as well as its fellows in their individual capacity, has participated in the policy debates on Artificial Intelligence, in particular on the AI Act, with a focus on prohibited practices and user deception, civil liability, risk management and insurance, product safety, industrial and social robotics applications, and platform regulation.

CALL FOR PAPERS - ICPP 2019 Panel T13P04 on Governance of AI and the Special Issue on Governance of AI and Robotics

2019

4th International Conference on Public Policy (ICPP4), June 26-28, 2019 – Montreal, Canada

CALL FOR PAPERS – ICPP Panel on Governance of AI and the Special Issue on Governance of AI and Robotics
T13P04 - Governing Artificial Intelligence and Autonomous Systems
http://www.ippapublicpolicy.org/conference/icpp4-montreal-2019/panel-list/10/panel/governing-artificial-intelligence-and-autonomous-systems/860
Panel Chair and Special Issue Editor: Araz Taeihagh, LKYSPP NUS
Abstract submission deadline: 30 January 2019

Developments in Artificial Intelligence (AI) and Autonomous Systems (AS) offer various benefits that will revolutionise all aspects of society, ranging from search algorithms for online advertising (Goodfellow et al. 2016), signal processing (Karaboga et al. 2014), credit scoring (Tsai & Wu 2008; Brown & Mues 2012), medical diagnosis (Russell & Norvig 2016; Amato et al. 2013), autonomous vehicles (Fagnant & Kockelman 2015; Milakis et al. 2017; Taeihagh & Lim 2018) and robotic medical assistants (Stahl and Coeckelbergh 2016) to autonomous weapon systems in warfare (Krishnan 2016). The rapid adoption of these technologies threatens to outpace the regulatory responses of governments around the world, which must grapple with the increasing magnitude and speed of these transformations. The societal benefits of AI and AS have been widely acknowledged (Buchanan 2005; Taeihagh & Lim 2018; Ramchurn et al. 2012), but these technologies also introduce risks and unintended consequences. New risks include, but are not limited to, unemployment (Acemoglu & Restrepo 2018; Frey & Osborne 2017; Peters 2017; Osoba & Welser IV 2017), safety risks (Taeihagh & Lim 2018; Kalra & Paddock 2016), privacy risks (Russell et al. 2015; Lim & Taeihagh 2018; Litman 2017), liability risks (Marchant & Lindor 2012; Čerka et al. 2015; Taeihagh & Lim 2018) and inequality (Makridakis 2017; Acemoglu & Restrepo 2018), which require appropriate governance mechanisms to be mitigated. Traditional policy instruments may be ineffective due to insufficient information on industry developments, technological and regulatory uncertainties, coordination challenges between multiple regulatory bodies (Guihot et al. 2017), and the opacity of the underlying technology (Scherer 2016), all of which necessitate more nuanced approaches to governing AI and AS. Many studies have highlighted the urgency for and the challenges of governing AI and AS (Arkin 2009; Simshaw et al. 2015; Guihot et al. 2017; Scherer 2016; Krishnan 2016; Taeihagh & Lim 2018; Lim & Taeihagh 2018), which need to be addressed by answering the following key research questions:

• What are the types of unintended consequences and risks that can arise from the adoption of AI and AS in different domains (e.g. ICT, transport, energy, public sector, healthcare, water management etc.), and how can they be effectively managed and governed?
• How can AI and AS be responsibly deployed by public administrators?
• What are the implications of AI and AS for incumbent industries, and how can the relationship between these technologies and incumbent industries be reconciled?
• Theoretical, conceptual and empirical approaches to understanding new and unconventional regulatory approaches, governance strategies, institutions and discourses to govern the risks arising from AI and AS.
• What types of standards or guidelines have been developed in industry and government to manage the risks arising from AI and AS?
• How are risks arising from AI and AS allocated among different stakeholders, vertically through the value chain (manufacturers, third-party service providers, consumers) and horizontally across different domains (transport, healthcare, financial sector, public agencies, ICT, education etc.)?
• Single and comparative case studies of governance responses across different countries, regions and domains to address the risks arising from AI and AS.

ICPP 2019 Abstract submission deadline: 30 January 2019

A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities

Regulation is nothing without enforcement. This holds particularly for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU's new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation's objectives. Taken together, the article explores how the AIA may be implemented by national and EU institutional bodies, encompassing longstanding bodies, such as the European Commission, and those newly established under the AIA, such as the AI Office. It investigates their roles across supranational and national levels, emphasizing how EU regulations influence institutional structures and operations. These regulations may not only directly dictate the structural design of institutions but also indirectly require the administrative capacities needed to enforce the AIA.

Governance of Artificial Intelligence: Emerging International Trends and Policy Frames

Ulnicane, I., W. Knight, T. Leach, B.C. Stahl and W.G. Wanjiku (2022) 'Governance of Artificial Intelligence: Emerging international trends and policy frames', in M. Tinnirello (Ed.) The Global Politics of Artificial Intelligence. CRC Press, pp. 29-55. https://doi.org/10.1201/9780429446726-2, 2022

In recent years, national governments, international organizations and stakeholders have launched numerous Artificial Intelligence (AI) strategies and reports. Recent research has mostly focused on AI ethics, while topics of AI policy and governance have received less attention. To address this research gap, this chapter addresses two main questions: what is driving fast-developing AI policies around the world, and what are the main frames of emerging AI policies. To make sense of recent AI policy developments, this chapter draws on the literature on emerging technologies, in particular on studies of the performative function of hype and expectations as well as of collaboration and competition dynamics in emerging fields. The analysis demonstrates that the fast development of AI policy is largely driven, first, by the wide range of impacts of AI and, second, by international assemblies such as the World Economic Forum and the Organization for Economic Co-operation and Development as well as by cross-national policy learning. However, AI policy developments are unevenly distributed around the world and are predominantly concentrated in the most developed regions. This chapter identifies three main AI policy frames: first, framing AI as a revolutionary, transformative and disruptive technology; second, closely interconnected global competition and collaboration in the field of AI; and third, a three-pillar approach of realising opportunities, mitigating risks and ensuring responsible AI. The chapter highlights that AI policy developments influenced by perceptions of hype, positive and negative expectations, and global competition and collaboration can have not only positive but also problematic effects on resource allocation and political prioritization.

Towards an effective transnational regulation of AI

AI & Society, 2021

Law, and the legal system through which law is effected, are very powerful, yet the power of the law has always been limited by the laws of nature, upon which the law has no direct grip. Human law now faces an unprecedented challenge: the emergence of a second limit on its grip, a new "species" of intelligent agents (AI machines) that can perform cognitive tasks that until recently only humans could. What happens, as a matter of law, when another species interacts with us, can be integrated into human minds and bodies, makes "real-world" decisions (not through human proxies, but directly), and does all this "intelligently", with what one could call autonomous agency or even a "mind" of its own? The article starts from the clear premise that control cannot be exercised directly on AI machines through human law; control can only be effected through laws that apply to humans. This has several regulatory implications. The article first discusses what, in any attempt to regulate AI machines, the law can achieve. Having identified what the law can do, the article then canvasses what the law should aim to achieve overall. The article encapsulates its analysis in a list of both doctrinal and normative principles that should underpin any regulation aimed at AI machines. Finally, the article compares three transnational options for implementing the proposed regulatory approach.

Governance of artificial intelligence

Policy and Society, 2021

The rapid developments in Artificial Intelligence (AI) and the intensification in the adoption of AI in domains such as autonomous vehicles, lethal weapon systems, robotics and the like pose serious challenges to governments, as they must manage the scale and speed of the socio-technical transitions occurring. While there is considerable literature emerging on various aspects of AI, governance of AI is a significantly underdeveloped area. The new applications of AI offer opportunities for increasing economic efficiency and quality of life, but they also generate unexpected and unintended consequences and pose new forms of risk that need to be addressed. To enhance the benefits of AI while minimising the adverse risks, governments worldwide need to better understand the scope and depth of the risks posed and develop regulatory and governance processes and structures to address these challenges. This introductory article unpacks AI and describes why the governance of AI should be gaining far more attention, given the myriad challenges it presents. It then summarises the special issue articles and highlights their key contributions. This special issue introduces the multifaceted challenges of the governance of AI, including emerging governance approaches to AI, policy capacity building, legal and regulatory challenges of AI and robotics, and outstanding issues and gaps that need attention. The special issue showcases the state of the art in the governance of AI, aiming to enable researchers and practitioners to appreciate the challenges and complexities of AI governance and to highlight future avenues for exploration.