ELS issues in robotics and steps to consider them - Part 2: Robotics and Regulations (RockEU)
Related papers
Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues
Law, Innovation and Technology, 2017
Robots are slowly, but certainly, entering people's professional and private lives. They require the attention of regulators due to the challenges they present to existing legal frameworks and the new legal and ethical questions they raise. This paper discusses four major regulatory dilemmas in the field of robotics: how to keep up with technological advances; how to strike a balance between stimulating innovation and the protection of fundamental rights and values; whether to affirm prevalent social norms or nudge social norms in a different direction; and, how to balance effectiveness versus legitimacy in techno-regulation. The four dilemmas are each treated in the context of a particular modality of regulation: law, market, social norms, and technology as a regulatory tool; and for each, we focus on particular topics, such as liability, privacy, and autonomy, that often feature as the major issues requiring regulatory attention. The paper then highlights the role and potential of the European framework of rights and values, responsible research and innovation, smart regulation and soft law as means of dealing with the dilemmas.
The Future of Legal and Ethical Regulations for Autonomous Robotics
2018 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO)
Autonomous robotics" promise significant improvements across a host of different complex systems, which will need to be managed within regulatory frameworks to promote, at a minimum, device safety. Contrary to how they are often portrayed, however, these systems do not necessarily require fundamentally new approaches to engineering or regulatory challenges, i.e., the development of a novel "autonomy framework" applicable to different types of devices. Rather, because autonomous systems generally represent a progressive improvement of existing complex systems, preexisting regulatory scheme offer the best guidance for considering future regulation of autonomous elements. Moreover, the regulatory landscape differs considerably based on the type of device at issue (e.g., consumer electronics vis-à-vis medical devices). This paper argues that users and regulators must consider future autonomy regulations within the specific framework those devices currently inhabit, rather than focusing on a novel set of rules divorced from the preexisting context.
Ethical Regulations on Robotics In Europe
AI & Society, 2008
There are only a few ethical regulations that deal explicitly with robots, in contrast to a vast number of regulations, which may be applied. We will focus on ethical issues with regard to "responsibility and autonomous robots", "machines as a replacement for humans", and "tele-presence". Furthermore, we will examine examples from special fields of application (medicine and healthcare, armed forces, and entertainment). We do not claim to present a complete list of ethical issues, nor of regulations in the field of robotics, but we will demonstrate that there are legal challenges with regard to these issues.
The European strategy on robotics and artificial intelligence: too much ethics, too little security
European Cybersecurity Journal
There is an increasing interest in the ethical design of robots. As evidence of this fact, one may refer to some recent reports and the European Parliament's resolution on civil law rules on robotics. The latter will be the primary focus of this analysis, since the EU Parliament is the first legal institution in the world to have initiated work on a law on robots and artificial intelligence. The European strategy on robotics seems affected by two main problems: an excessive emphasis on ethics at the expense of security, and, more generally, a lack of awareness of the critical role played by the operation of striking a balance between competing interests. Balancing is pivotal to the interpretation and application of the law, and the current development of AI technologies does not enable the delegation of this operation to robots. Certainly, the most controversial point regards the status of robots as electronic persons. Even though the suggestion may seem extreme, it may prove to be successful, for at least three reasons. First, robots are becoming more and more similar to humans (anthropomorphisation and AI). Second, humans are becoming increasingly akin to robots (artificial enhancement). Third, the robot's legal personality would be profitable for the robotics industry.
Reporting on the case study of autonomous robots (policy recommendations)
2015
This report documents the EPINET project's investigations into assessments of ethical, legal and societal aspects of autonomous robots in the European Union. Its main objective is to evaluate the state of the art in assessments in this domain, especially focusing on the ways in which these may interact with or become integrated into main research and innovation networks, including the making of research and policy agendas. These recommendations are aimed at, and relevant to, different groups and networks involved in robotics and governance at European and national levels. At one level of policy action there are the many advisory and expert groups involved in the making of robotics agendas, such as the ELS Topic group of euRobotics, follow-ups to ISTAG (information society advisory group), the SPARC PPP, and advisory bodies to DG Research (Robotics unit), DG Connect and DG Health. Also relevant are the expert groups participating in the shaping of the Horizon 2020 ICT programme (Societal Challenges and LEIT) and the European Institute of Innovation and Technology, as well as national research councils and their advisory bodies. Our recommendations are especially relevant to so-called cross-cutting actions in Horizon 2020, in particular those relating to Responsible Research and Innovation and Social and Humanistic Sciences. Next, our recommendations are also directed to national and EU legislators and regulators charged with adapting to and accommodating the actions of policy makers and the robotics community. Finally, our recommendations are directed to the technology assessment community, including those dedicated to Responsible Research and Innovation, integrated ELSA and impact assessments.
Background and approach taken
Recent policy agendas and research trends are openly pushing for "smarter", more dynamic and more autonomous robotics systems (e.g. European Commission, 2008; EUROP, 2009; euRobotics, 2014; Robot Companions for Citizens, 2012). Future robots are expected to help address the grand societal challenges for Europe, in particular those of an ageing population and sustainable healthcare and welfare. Such developments raise a number of questions across sector domains and disciplines, and among the potential and real users of robotics systems and services. This becomes especially evident when seen in the light of parallel efforts towards Responsible Research and Innovation, according to which it is expected that different actors come together in ways conducive to more responsible, sustainable and socially robust innovation policies (von Schomberg 2011, Owen 2013, European Commission 2012). EPINET chose to study the implications of the development of more autonomous robots to be used for care and companionship. A main reason for this is that this policy agenda finds itself at the crossroads of several of the complicated issues emanating from present-day and near-future robotics. This recommendation does not deal with particular (ethical, societal) issues, although it builds on an extensive mapping of many such issues. Rather, recommendations are made on the level of: interactions between sectorial domains (i.e. science, law, politics); on the level of
2022
The policy papers collected here are the outcome of the work carried out by EURA and its teams over the entire project duration (August 2018 to today), and rest upon the analysis carried out over a number of events and debates, including previous editions of the yearly EURA Conference and seminars, all of which may still be watched on our website (https://www.eura.santannapisa.it/) and social media, in particular the EURA YouTube channel. Indeed, one of the focal points of our Jean Monnet Centre of Excellence (CoE) is to establish a constant dialogue with policy-makers, identifying topics of relevance for societal debate, and selecting trends and issues relevant for businesses and investors. Our members participate in various bodies that support policy-building in the area of robotics and the embedding of ethical standards in AI systems. In order to convey the outcome of its research and activities to legislative and governmental entities, EURA released a number of opinions and policy papers, providing early identification of relevant issues, assessments and recommendations, participating in many debates leading to the adoption of regulation, and advancing proposals for actions to be taken. EURA as an entity, as well as its fellows in their individual capacity, has participated in the policy debates on Artificial Intelligence, in particular about the AI Act, with a focus on prohibited practices and user deception, civil liability, risk management and insurance, product safety, industrial and social robotics applications, and platform regulation.
RoboLaw: Towards a European framework for robotics regulation
Robotics and Autonomous Systems, 2016
- Robotics as special case of regulation
- The direct and indirect role played by ethics in regulating technology
- Transformation of vulnerabilities and capabilities to be weighted in policy decisions
- The strategic role of liability in technology development
Is it Time to Consider EU Criminal Law Rules on Robotics?
Croatian Yearbook of European Law and Policy, 2020
This paper is devoted to issues which have not been sufficiently explored in European literature, and which have received only fragmented consideration in comparative literature. These issues raise the question of whether the EU legislator should develop a framework of criminal law rules which would regulate the use of Artificial Intelligence (hereinafter: AI) in the near future, and what such rules should specifically address. The authors recognise two issues of particular importance for the future regulation of AI development within the EU, and offer their perspective on the areas which should be subjected to regulation in this regard. In order to provide a systematic overview of this topic, the paper starts with a description of the recent regulatory action of the EU in the field of AI, with special reflection on the Ethics Guidelines for Trustworthy AI. The authors then describe what are, in their opinion, the most important intersections of AI and criminal law in the broader sense, and in conclusion present their views on which areas should be specifically regulated by the EU legislature in this context.
2019
This report presents the results of SIENNA research on legal developments and approaches to specific legal issues and human rights challenges related to artificial intelligence (AI) and robotics at the international, EU and national level (12 countries, EU and non-EU). The report broadly discusses the legal issues and human rights challenges of AI and robotics and analyses relevant international, EU and regional laws and human rights standards. It summarises and compares the results of the country studies on law, AI and robotics. It also discusses existing norms and standards, identifies gaps, and presents some recommendations and ways to overcome those gaps. This report will feed into the forthcoming SIENNA work on enhancing the existing legal framework, which will identify potential changes needed in dialogue with legislators and relevant committees.