Rights in the Age of Data

Human Rights, Big Data and Artificial Intelligence: Elements of a Complex Algorithm

Security and Defence: Ethical and Legal Challenges in the Face of Current Conflict, 2022

The concept of data is not new, and its use is not exclusively a human characteristic. However, the concept currently appears in a specific context, where it is inextricably linked to three phenomena: digitalisation, big data, and the Internet. The interaction of these three elements has changed our perception of reality. In some manner, we have amplified both our capacity to perceive information about the reality that surrounds us and our capacity to process and convey information and to interact with other amplified human beings. At the same time, we are living in a virtual world that is expanding at an unprecedented rate. In fact, the dialogue between machines exceeds human intervention, creating an autonomous process. In the context of big data and the Internet, the protection of personal data, associated with the comprehensive development of human dignity (human rights), is needed. As individuals, we are subject to the shaping power of new technologies used by other powers (multinationals, banks, political ideologies), because machines have enormous computing power for processing information that can be used to direct and control wills. Through the study of the legal concepts of big data and artificial intelligence, and of their regulation, we can propose juridical considerations, such as the use of the dynamogenesis of values and law linked with Design for Values and the HRESIA model, to create spaces of human security and freedom in a future where the power of technology could become one of the powers of the state.

What is data justice? The case for connecting digital rights and freedoms on the global level

The increasing availability of 'data fumes' (Thatcher, 2014) – data produced as a byproduct of people's use of technological devices and services – has both political and practical implications for the way people are seen and treated by the state and by the private sector. Yet the data revolution is so far primarily a technical one: the power of data to sort, categorise and intervene has not yet been explicitly connected to a social justice agenda. In fact, while data-driven discrimination is advancing at exactly the same pace as data processing technologies, awareness and mechanisms for combating it are not. This paper posits that just as an idea of justice is needed in order to establish the rule of law, an idea of data justice is necessary to determine ethical paths through a datafying world. Based on three proposed pillars of a notion of data justice, namely (in)visibility, (dis)engagement with technology and antidiscrimination, I propose a vision that integrates positive with negative rights and freedoms. The resulting framework encourages us to debate both the basis of current data protection regulations and the growing assumption that being visible through the data we emit is part of the contemporary social contract.

A Posthuman Data Subject? The Right to Be Forgotten and Beyond

The general assumption in the West is that there still is an inherent difference between persons and things. This divide informs how “the human” and human subjectivity are constructed as distinct from all others. Recently, the distinction has been challenged in posthumanist theory, where it has been argued that the divide between human and nonhuman agents (or rather, bodies) is always an effect of a differential set of powers. For this reason, the boundaries between human and nonhuman are always in flux. As posthumanist theorists have argued, this change in boundaries may be specifically visualized in relation to digital technology. Today, such technologies obfuscate the boundaries between persons and things; the extensive utilization of smartphones, social media, and online search engines offers just three common examples. In parallel to the continuous expansion of digital technologies, critical understandings of how “data” and human personhood are produced are increasingly raised in legal theory. Recent developments establishing increased privacy online through EU law, including the new General Data Protection Regulation and the famous Right to Be Forgotten case, could possibly be understood to have struck a balance between the interests of the human (in the form of privacy) and the digital (in the form of information diffusion). In this Article, a posthumanist theoretical perspective is utilized to show how the new data protection legislation, with a focus on the Right to Be Forgotten, produces such protection yet continuously withdraws data as a separate body from human bodies. For this reason, it is argued that the construction of new human rights, such as those concerning data protection, would benefit from understanding how the separation is, in itself, an effect of advanced capitalism.

The Right to Data Protection Versus “Security”

Revista Direitos Culturais, 2020

The protection of personal data in cyberspace has been an issue of concern for quite some time. However, with the revolutions in information technology, big data and the internet of things, data privacy protection has become paramount in an era of free information flows. Considering this context, this research intends to shine a light on the experience of Brazil regarding data privacy protection through the analysis of a new bill passed by Congress: the Brazilian General Personal Data Protection Act (LGPD). Our assessment of the legislation was made from the perspective of a human rights-based approach to data, aiming to analyze the advancements, limitations and contradictions of the rights discourse in the LGPD. Our main conclusions were that the (public and national) security rhetoric, also present in the bill, can create a state of exception regarding the processing of personal data of those considered “enemies of the state”, which may result in violations of fundamental rights...

Data Rights and Collective Needs: A New Framework for Social Protection in a Digitized World

Digital New Deal - IT for Change, 2020

All social programs employ some ‘legibility’ scheme to make citizens visible, readable, and verifiable to the state. Today, this trait is combined with and enhanced by the datafication process. Social protection systems around the world are becoming increasingly computerized and reliant on beneficiaries’ data for related decision-making. Digital technologies that are capable of collecting and verifying large amounts of data are employed to this end, impacting the exercise of both digital and social rights. In this essay, we address the differential impacts of the datafication of social protection on marginalized populations, using examples from the existing literature and our own research. We then engage with existing reflections on social protection and datafication to highlight the importance of a data justice framework for the current global political and economic context.

Fundamental Rights and the Rule of Law in the Algorithmic Society

Constitutional Challenges in the Algorithmic Society

2.1 New Technologies and the Rise of the Algorithmic Society

New technologies offer human agents entirely new ways of doing things.[1] However, as history shows, 'practical' innovations always bring with them more significant changes. Each new option introduced by technological evolution, by allowing new forms, affects the substance, eventually changing the way humans think and relate to each other.[2] The transformation is especially true when we consider information and communication technologies (so-called ICT); as Marshall McLuhan indicated, 'the medium is the message'.[3] Furthermore, this scenario has been accelerated by the appearance of artificial intelligence systems (AIS) based on the application of machine learning (ML). These new technologies not only allow people to find information at incredible speed; they also recast decision-making processes once in the exclusive remit of human beings.[4] By learning from vast amounts of data (the so-called Big Data), AIS offer predictions, evaluations, and hypotheses that go beyond the mere application of pre-existing rules or programs. They instead 'induce' their own rules of action from data analysis; in a word, they make autonomous decisions.[5]

[1] Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books 2015).
[2] One of the most prominent prophets of the idea of a new kind of progress generated through the use of technologies is surely Jeremy Rifkin. See his book The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism (St. Martin's Press 2014).
[3] Marshall McLuhan and Quentin Fiore, The Medium Is the Massage (Gingko Press 1967).
[4] Committee of Experts on Internet Intermediaries of the Council of Europe (MSI-NET), 'Algorithms and Human Rights: Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications' (2016) DGI(2017)12.
[5] According to the European Parliament, 'Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))' (P8_TA(2017)0051, Brussels), 'a robot's autonomy can be defined as the ability to take decisions and implement them in the outside world, independently of external control or influence.'

The Right to Privacy and Data Protection in the Information Age

Journal of Siberian Federal University. Humanities & Social Sciences, 2020

The article considers the legality of mass surveillance and the protection of personal data in the context of international human rights law and the right to respect for private life. Special attention is paid to the protection of data on the Internet, where the personal data of billions of people are stored. The author emphasizes that mass surveillance and technology that allows the storage and processing of the data of millions of people pose a serious threat to the right to privacy guaranteed by Article 8 of the ECHR of 1950. Few companies comply with human rights principles in their operations when providing user data in response to requests from public authorities. In this regard, States must prove that any interference with the personal integrity of an individual is necessary and proportionate to address a particular security threat. Mandatory data storage, where telephone companies and Internet service providers are required to store metadata about their users’ communications ...

A Posthuman Data Subject in the European Data Protection Regime

The general assumption in the West is that there still is an inherent difference between persons and things. This divide informs how “the human” and human subjectivity are constructed as opposite to all others. Recently, the distinction has been challenged in posthumanist theory, where it has been argued that the divide between human and nonhuman agents (or rather, bodies) is always an effect of the workings of a differential set of powers. For this reason, the boundaries between human and nonhuman are always in flux. As posthumanist theorists have argued, this change in boundaries may be specifically visualized in relation to digital technology. Today, such technologies obfuscate the boundaries between persons and things; the use of smartphones, social media, and online search engines offers just three common examples. In parallel to the continuous expansion of digital technologies, critical understandings of how “data” and human personhood are produced are increasingly raised in legal theory. Recent developments establishing increased privacy online through EU law, including the new General Data Protection Regulation and the famous Right to Be Forgotten case, could possibly be understood to strike a balance between the interests of the human (in the form of privacy) and digital technology (in the form of information diffusion). In this article, a posthumanist theoretical perspective is utilized to show how the new data protection legislation, with a focus on the Right to Be Forgotten, produces such protection yet continuously withdraws data as a separate body from human bodies. For this reason, it is argued that the construction of new human rights, such as those concerning data protection, would benefit from understanding how the separation is, in itself, an effect of advanced capitalism.

Human Rights in the Big Data World

CGHR Cambridge Working Paper, 2018

An ethical approach to human rights conceives and evaluates law through underlying value concerns. This paper examines human rights after the introduction of big data using an ethical approach to rights. First, the central value concerns, such as equity, equality, sustainability and security, are derived from the history of the digital technological revolution. Then, the properties and characteristics of big data are analyzed to understand emerging value concerns such as accountability, transparency, traceability, explainability and disprovability. Using these value points, this paper argues that big data calls for two types of evaluations regarding human rights. The first is the reassessment of existing human rights in the digital sphere, predominantly through the right to equality and the right to work. The second is the conceptualization of new digital rights, such as the right to privacy and the right against propensity-based discrimination. The paper concludes that, as we increasingly share the world with intelligent systems, these new values expand and modify the existing human rights paradigm.