Yadlin-Segal, A., & Oppenheim, Y. (2021). Whose Dystopia is it Anyway? Deepfake Technology and Social Media Regulation. Convergence: The International Journal of Research into New Media Technologies, 27(1), 36–51.
Related papers
Deepfakes: A Digital Transformation Leads to Misinformation
2021
Deepfakes are a product of artificial intelligence (AI) and software applications used to create convincing falsified audiovisual content. Linguistically, the term is a portmanteau combining the "deep" of deep learning with the "fake" of doctored or falsified content, yielding deepfakes and the misinformation they carry. A variety of sophisticated software programs and their exacting algorithms create high-quality video and manipulated audio of people who may not exist, or distort people who do, with the potential to spread serious misinformation carrying serious consequences. This digital phenomenon is proliferating rapidly, its sources are challenging to verify, and detection struggles to keep pace, causing alarm. Examples of the pervasive information warfare associated with deepfakes range from identity theft, the discrediting of public figures and celebrities, cyberbullying, blackmail, threats to national security, and invasions of personal privacy to intensified pornography and sexual exploitation, cybersecurity attacks, the baiting of hate crimes, the abuse of social media platforms, and the manipulation of metadata. Deepfakes that are difficult to cite, acquire, or track share attributes with grey literature by that definition. Often detectable yet still problematic, activities such as phishing and robocalling are common vehicles for deepfake activity that threatens and interrupts the rhythms of daily life. The growing number of online personas that many people create or assume contributes to this fake content and to the potential for escalated exploitation, given the technical ability to copy and reimagine details that are not true. AI image generators can, within seconds, create entirely false images of people who simply do not exist, and those images are nearly impossible to track. While AI is perceived as a benefit to science and policy, it can play negative roles in this new AI-threatened environment. Deepfakes have crossover targets in common business applications and in society at large. Examples of this blur include targeted advertising, undetected security cameras in public spaces, blockchain, the tabloid press and paparazzi, entertainment, computer games, online publishing, data and privacy, courtroom testimony, public opinion, scientific evidence, political campaigns, and rhetoric. This paper explores the impact and intersections of these behaviors and activities, products of AI, and emerging technologies, and considers how digital grey and the optics of grey expose the dangers of deepfakes in everyday life. Applying a security and privacy lens, we offer insights on how libel and slander may extend into more serious criminal behavior as deepfakes become more pervasive, distorting reality and endangering the personal, social, and global safety nets that underpin the new normal we assume today. How we became sensitized to misinformation and fake news tells the story of deepfakes.
Studies in Communication and Media, 2021
Using artificial intelligence, it is becoming increasingly easy to create highly realistic but fake video content, so-called deepfakes. As a result, it is no longer always possible to distinguish real recordings from machine-generated ones with the naked eye. Despite the novelty of this phenomenon, regulators and industry players have started to address the risks associated with deepfakes. Yet research on deepfakes is still in its infancy. This paper presents findings from a systematic review of English-language deepfake research to identify salient discussions. We find that, to date, deepfake research is driven by computer science and law, with studies focusing on deepfake detection and regulation. While a number of studies address the potential of deepfakes for political disinformation, few have examined user perceptions of and reactions to deepfakes. Other notable research topics include challenges to journalistic practices and pornographic applications of deepfakes. We identify r...
Media and Communication, 2021
Deepfakes, one of the most novel forms of misinformation, have become a real challenge in the communicative environment due to their spread through online news and social media spaces. Although fake news has existed for centuries, its circulation is now more harmful than ever before, thanks to the ease of its production and dissemination. At this juncture, technological development has led to the emergence of deepfakes: videos, audio, or photos doctored using artificial intelligence. Since their inception in 2017, the tools and algorithms that enable the modification of faces and sounds in audiovisual content have evolved to the point where mobile apps and web services allow average users to perform such manipulation. This research examines how three renowned media outlets (The Wall Street Journal, The Washington Post, and Reuters) and three of the biggest Internet-based companies (Google, Facebook, and Twitter) are dealing with the spread of this new form of fake news. Results show that identifying deepfakes is a common practice for both types of organizations. However, while the media outlets focus on training journalists in detection, the online platforms tend to fund research projects aimed at developing or improving media forensics tools.
Politics and porn: how news media characterizes problems presented by deepfakes
Critical Studies in Media Communication, 2020
"Deepfake" is a form of machine learning that creates fake videos by superimposing the face of one person on to the body of another in a new video. The technology has been used to create nonconsensual fake pornography and sexual imagery, but there is concern that it will soon be used for politically nefarious ends. This study seeks to understand how the news media has characterized the problem(s) presented by deepfakes. We used discourse analysis to examine news articles about deepfakes, finding that news media discuss the problems of deepfakes in four ways: as (too) easily produced and distributed; as creating false beliefs; as undermining the political process; and as nonconsensual sexual content. We provide an overview of how news media position each problem followed by a discussion about the varying degrees of emphasis given to each problem and the implications this has for the public's perception and construction of deepfakes.
Computational Propaganda and Misinformation: AI Technologies as Tools of Media Manipulation
World Journal of Advanced Research and Reviews, 2025
The purpose of this study was to investigate how artificial intelligence (AI) influences and improves computational propaganda and misinformation efforts. The researcher was motivated by the growing complexity of AI-driven technologies, such as deepfakes, bots, and algorithmic manipulation, which have turned conventional propaganda strategies into more widespread and damaging media manipulation techniques. The study used a mixed-methods approach, combining quantitative data analysis from academic studies and digital forensic investigations with qualitative case studies of misinformation efforts. The results brought to light important tactics, including the platform-specific use of X (formerly Twitter) to propagate false information, emotional exploitation through fear-based messaging, and purposeful amplification through bot networks. According to this research, AI technologies amplified controversial content by taking advantage of algorithmic biases, thereby generating echo chambers and eroding confidence in democratic processes. The study also emphasized the ethical and sociopolitical issues raised by deepfake technologies and their ability to manipulate the emotions of susceptible populations. To counteract AI-generated misinformation, the study suggested promoting digital literacy and creating more potent detection methods, such as digital watermarking. Future studies should concentrate on the long-term psychological effects of AI-driven misinformation on democratic participation and public trust, and on regulatory reactions in various countries. Furthermore, investigating how new AI technologies are influencing other media, such as video games and virtual reality, may help us better comprehend their effects on society as a whole.
“All Around Me Are Synthetic Faces”: The Mad World of AI-Generated Media
IT Professional
Advances in artificial intelligence and deep neural networks have led to a rise in synthetic media, i.e., automatically and artificially generated or manipulated photo, audio, and video content. Synthetic media today is highly believable and "true to life," so much so that we will no longer be able to trust that what we see or hear is unadulterated and genuine. Among the different forms of synthetic media, the most concerning are deepfakes and generative adversarial networks (GANs). For IT professionals, it is important to understand what these new phenomena are. In this article, we explain what deepfakes and GANs are and how they work, and we discuss the threats and opportunities resulting from these forms of synthetic media. Anticipating a "mad world": Barack Obama's public service announcement starts with the usual backdrop of American flags within the Oval Office. His distinctive vocal pauses and hand gestures lend credibility to his address about the modern threat of digital technologies and artificial intelligence (AI). But suddenly, his address takes a strange turn, culminating in an alarming and out-of-character statement: "President Trump is a total and complete dipsh%t." Wait, what? Obama pauses to clarify, "See, now I would never say these things, at least not in a public address. But someone else would."
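As brief technical background (our addition, not drawn from the article itself): a GAN pairs a generator G, which maps random noise z to synthetic samples, against a discriminator D, which estimates the probability that a sample is real. The two are trained adversarially on the standard minimax objective introduced by Goodfellow et al. (2014):

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

As D gets better at flagging fakes, G is pushed to produce ever more realistic output, which is why GAN-generated faces and deepfake video have become so hard to distinguish from genuine footage.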
“The Word Real Is No Longer Real”: Deepfakes, Gender, and the Challenges of AI-Altered Video
Open Information Science, 2019
It is near-impossible for casual consumers of images to authenticate digitally altered images without a keen understanding of how to “read” the digital image. As Photoshop did for photographic alteration, so too have advances in artificial intelligence and computer graphics made seamless video alteration seem real to the untrained eye. The colloquialism used to describe these videos is “deepfakes”: a portmanteau of deep learning AI and faked imagery. The implications of these videos serving as authentic representations matter, especially in rhetorics around “fake news.” Yet this alteration software, deployable both through high-end editing software and free mobile apps, remains critically underexamined. One troubling example of deepfakes is the superimposing of women’s faces into pornographic videos. The implication here is a reification of women’s bodies as a thing to be visually consumed, here circumventing consent. This use is confounding considering the very bodies used ...
No innocents: Platforms, politics, and media struggling with digital governance
Communications
In retrospect, the communication world was very different in February 2020, when scholarly members of the Euromedia Research Group applied to become a Jean Monnet Network focusing on media and platform policy (EuromediApp). Shortly after the application was sent off, Covid-19 conquered the planet and jeopardized the main objective of such networks, namely to strengthen ties between network nodes. When the three-year network started operating in October 2020, it immediately became clear that fake news and harmful content online would be dominant features of the pandemic. It was also evident that digital platforms would play an even more central role in opinion-shaping during lockdowns than they had before. Over the following three years, the concept of the EuromediApp network proved sound. Focusing on digital platforms, their relations to mass communication, and their performance regarding democracy and human rights allowed the network to organize cutting-edge workshops and conferences, for which it invited scholars to contribute state-of-the-art scientific texts and presentations on this fast-moving topic. This special issue of Communications consolidates the learnings from that journey and addresses burning issues in digital platform governance in a timely fashion. It explores questions such as how to limit hate speech and other harmful content online, how to hold digital platforms accountable for publishing it, how to accommodate automated decision-making (a.k.a. artificial intelligence), and how to economically balance platform profits achieved at the expense of mass media. Several attempts have been made over the last years to allow digital platform communication to thrive within the boundaries of the wider policy concept of
Defending the state from digital deceit: the reflexive securitization of deepfake
Critical Studies in Media Communication, 2021
Recent revelations of disinformation campaigns conducted by external adversaries on social media platforms have triggered anxiety among Western liberal democracies. One focus of this anxiety has been the emerging technology known as deepfake. In examining the related controversy, I use the theoretical lens of securitization to establish how communicative reflexivity shapes the attribution of threat to digital media. Next, focusing on the case of the U.S. government, I critique deepfake’s securitization by applying two theories of media and state (in-)security. I argue that deepfake sustains the liberal state’s conventional dread of mimetic threats posed to its ontological security. I then challenge this narrative by exploring satire as an alternate configuration of deepfake’s capabilities. I conclude by summarizing the implications of this case for the ongoing study of digital media, conflict, and politics.
Deepfake: A Multifaceted Dilemma in Ethics and Law
Journal of Information Ethics, 2023
The present paper explores the relationship between deepfake and fake news through the tools of the law. The paper first introduces the conceptual basis, then presents the relationship between disinformation and deepfake, the relevant U.S., European, and alternative regulations in the context of unlawful deepfake content, and possible solutions. Particular attention is paid to the legal perception of disinformation in the context of deepfake technology, highlighting the harmful social and legal processes involved. The structure of the study is based on a review of national and international regulations and relevant literature, together with the authors' proposed solutions to the controversies caused by deepfake disinformation.