Deepfakes and cheap fakes

Deepfakes: The Threat to Data Authenticity and Public Trust in the Age of AI-Driven Manipulation of Visual and Audio Content

Journal of AI-Assisted Scientific Discovery, 2022

The advent of artificial intelligence (AI) has revolutionized numerous industries, but it has also introduced profound risks, particularly through the development of deepfake technology. Deepfakes, which are AI-generated synthetic media that manipulate visual and audio content to create hyper-realistic but entirely fabricated representations, present a significant threat to data authenticity and public trust. The rapid advancements in machine learning, specifically in generative adversarial networks (GANs), have fueled the proliferation of deepfakes, enabling the creation of nearly indistinguishable digital forgeries that can easily deceive viewers and listeners. This paper explores the multifaceted threat posed by deepfakes in undermining the authenticity of digital content and eroding public confidence in media and information. In an era where visual and auditory content is heavily relied upon for communication, governance, and decision-making, the rise of deepfakes brings forth unprecedented challenges in maintaining the integrity of information.

This research examines the technical mechanisms driving deepfake creation, emphasizing the role of GANs and neural networks in producing lifelike simulations of human faces, voices, and behaviors. A detailed analysis is provided of how these technologies can be weaponized for nefarious purposes, such as the dissemination of political misinformation, character defamation, and even identity theft. As the accessibility of AI-driven tools expands, malicious actors are increasingly leveraging deepfakes to manipulate public opinion, disrupt democratic processes, and compromise cybersecurity. The paper highlights the alarming potential of deepfakes to distort reality, making it challenging for individuals and institutions to differentiate between authentic and manipulated content.

The paper also delves into the technical countermeasures being developed to detect and mitigate the spread of deepfakes. Current detection methodologies, such as deep learning-based classifiers, digital watermarking, and forensic techniques, are critically evaluated for their effectiveness in identifying manipulated content. However, the ongoing arms race between deepfake creation and detection technologies poses significant challenges, as adversaries continuously refine their models to evade detection systems. This research underscores the need for continued innovation in detection algorithms and the integration of AI-driven solutions to stay ahead of increasingly sophisticated forgeries.

Furthermore, the legal and regulatory landscape surrounding deepfakes is scrutinized, with an emphasis on the inadequacies of current frameworks to effectively address the complexities introduced by this technology. The paper discusses potential policy interventions, such as stricter digital content verification laws and international cooperation to combat the proliferation of deepfake-driven misinformation. Legal efforts to hold creators of malicious deepfakes accountable are explored, alongside the ethical considerations involved in balancing free speech with the need for data integrity.

Beyond the technical and legal dimensions, this paper also examines the broader societal implications of deepfakes. The erosion of trust in digital media has far-reaching consequences, particularly in the realms of politics, journalism, and corporate governance. Public trust in authoritative sources of information is essential for the functioning of democratic institutions, and deepfakes pose a direct threat to this trust. The paper argues that the widespread dissemination of manipulated content can lead to a destabilization of public discourse, the spread of disinformation, and the breakdown of social cohesion. In addition, the psychological and cultural impacts of deepfakes are explored, highlighting how individuals' perceptions of reality can be shaped and distorted by AI-generated content.

The research concludes by offering recommendations for a multi-stakeholder approach to addressing the deepfake phenomenon. This includes fostering collaboration among AI researchers, technologists, policymakers, and civil society organizations to develop comprehensive strategies for mitigating the risks associated with deepfakes. The paper emphasizes the need for a proactive, rather than reactive, approach to deepfake technology, advocating for the development of robust technical solutions, legal frameworks, and public awareness campaigns to protect the integrity of digital information.
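To make the adversarial mechanism described in this abstract concrete, the following is a minimal GAN training sketch, not any specific deepfake system: a generator and a discriminator trained against each other, written here in PyTorch with illustrative layer sizes, random stand-in data, and hyperparameters chosen only for readability.

```python
# Minimal GAN sketch: a generator learns to fool a discriminator,
# the adversarial dynamic underlying deepfake synthesis.
# Random tensors stand in for real data; in practice the real
# batch would be face images. All sizes are illustrative.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 64, 784  # e.g., flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit: real vs. fake
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, DATA_DIM)   # stand-in for a batch of real samples
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: push the discriminator to output "real" on fakes.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Real face-synthesis pipelines scale this same loop up to convolutional architectures trained on large face datasets; the "arms race" the abstract describes arises because detector failures can be fed back into exactly this kind of adversarial objective.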

Deepfakes: A Digital Transformation Leads to Misinformation

2021

Deepfakes are a product of artificial intelligence (AI) and software applications used to create convincing falsified audiovisual content. Linguistically, the term is a portmanteau: the "deep" of deep learning joined to the doctored or falsified enhancements that render the content fake, yielding "deepfake" and, with it, misinformation. A variety of sophisticated software programs with exacting algorithms create high-quality video and manipulated audio of people who may not exist, or distort people who do, creating the potential for the spread of serious misinformation, often with serious consequences. This digital phenomenon is proliferating exponentially, and its sources are challenging to verify, raising alarm. Examples of the pervasive information warfare associated with deepfakes range from identity theft, the discrediting of public figures and celebrities, cyberbullying, blackmail, and threats to national security and personal privacy, to intensified pornography and sexual exploitation, cybersecurity attacks, the baiting of hate crimes, the abuse of social media platforms, and the manipulation of metadata. Deepfakes that are difficult to cite, acquire, or track share some attributes with grey literature by that definition. Often detectable yet still problematic, activities such as phishing and robocalling may be common forms of deepfake activity that threaten and interrupt the rhythms of daily life. The growing number of online personas that many people create or assume contributes to this fake content and to the potential for escalated exploitation, given the technical ability to copy and reimagine details that are not true. AI image generators create completely false images of people who simply don't exist within seconds, and these images are nearly impossible to track. While AI is perceived as a positive benefit for science and policy, it can play negative roles in this new AI-threatened environment. Deepfakes have crossover targets in common business applications and in society at large. Examples of this blur include targeted advertising, undetected security cameras in public spaces, blockchain, the tabloid press and paparazzi, entertainment, computer games, online publishing, data and privacy, courtroom testimony, public opinion, scientific evidence, political campaigns, and rhetoric. This paper explores the impact and intersections of these behaviors and activities, products of AI, and emerging technologies, and how digital grey and the optics of grey expose the dangers of deepfakes in everyday life. Applying a security and privacy lens, we offer insights into how libel and slander may extend into more serious criminal behavior as deepfakes become more pervasive, distorting reality and endangering personal, social, and global safety nets, adding to the new normal we assume today. How we became more sensitized to misinformation and fake news tells the story of deepfakes.
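As a small illustration of why tracking manipulated images is hard but not hopeless, the sketch below uses perceptual hashing, one rudimentary provenance technique (not discussed in the paper itself): visually similar images hash to nearby values, so a lightly edited copy can be matched back to a known original. It assumes the third-party Pillow and ImageHash packages; the filenames and the distance threshold are hypothetical.

```python
# Perceptual-hash sketch: near-duplicate detection as one (limited)
# way to trace recirculated or lightly edited images.
# Requires: pip install Pillow ImageHash
from PIL import Image
import imagehash

def phash_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between perceptual hashes; small means likely the same image."""
    return imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))

if __name__ == "__main__":
    # Hypothetical files: a known original and a suspected re-edit.
    dist = phash_distance("original.jpg", "suspect.jpg")
    print("likely match" if dist <= 8 else "different", f"(distance={dist})")
```

Perceptual hashes survive recompression and resizing but are defeated by heavier edits, which is precisely why wholly generated images, with no original to match against, are so hard to trace.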

Configuring Fakes: Digitized Bodies, the Politics of Evidence, and Agency

Social Media + Society

This comparative case study analysis used more than 200 examples of audiovisual manipulation, collected from 2016 to 2021, to understand manipulated audiovisual and visual content produced by artificial intelligence, machine learning, and unsophisticated methods. The article includes a chart that categorizes the methods used to produce and disseminate audiovisual content featuring false personation, as well as the harms that result. The findings address broad questions about the politics of evidence and the harms of audiovisual manipulation, harassment, privacy violation, and silencing, and offer suggestions for reconfiguring the public’s agency over technical systems and envisioning ways forward that meaningfully promote justice.

“The Word Real Is No Longer Real”: Deepfakes, Gender, and the Challenges of AI-Altered Video

Open Information Science, 2019

It is near-impossible for casual consumers of images to authenticate digitally altered images without a keen understanding of how to “read” the digital image. As Photoshop did for photographic alteration, so too have advances in artificial intelligence and computer graphics made seamless video alteration appear real to the untrained eye. The colloquialism used to describe these videos is “deepfakes”: a portmanteau of deep-learning AI and faked imagery. The implications of these videos serving as authentic representations matter, especially in rhetorics around “fake news.” Yet this alteration software, deployable through both high-end editing software and free mobile apps, remains critically underexamined. One troubling example of deepfakes is the superimposing of women’s faces onto pornographic videos. The implication is a reification of women’s bodies as things to be visually consumed, circumventing consent. This use is confounding considering the very bodies used ...

Dealing with deepfakes – an interdisciplinary examination of the state of research and implications for communication studies

Studies in Communication and Media, 2021

Using artificial intelligence, it is becoming increasingly easy to create highly realistic but fake video content - so-called deepfakes. As a result, it is no longer always possible to distinguish real recordings from machine-generated ones with the naked eye. Despite the novelty of this phenomenon, regulators and industry players have started to address the risks associated with deepfakes. Yet research on deepfakes is still in its infancy. This paper presents findings from a systematic review of English-language deepfake research to identify salient discussions. We find that, to date, deepfake research is driven by computer science and law, with studies focusing on deepfake detection and regulation. While a number of studies address the potential of deepfakes for political disinformation, few have examined user perceptions of and reactions to deepfakes. Other notable research topics include challenges to journalistic practices and pornographic applications of deepfakes. We identify r...

Don't Trust Your Eyes: Image Manipulation in the Age of DeepFakes

Frontiers in Communication, 2021

We review the phenomenon of deepfakes, a novel technology enabling inexpensive manipulation of video material through the use of artificial intelligence, in the context of today’s wider discussion on fake news. We discuss the foundations and recent developments of the technology as well as its differences from earlier manipulation techniques, and we investigate technical countermeasures. While the threat of deepfake videos with substantial political impact has been widely discussed in recent years, the political impact of the technology has so far been limited. We investigate the reasons for this and extrapolate the types of deepfake videos we are likely to see in the future.

Fighting Deepfakes: Media and Internet Giants' Converging and Diverging Strategies Against Hi-Tech Misinformation

Media and Communication, 2021

Deepfakes, one of the most novel forms of misinformation, have become a real challenge in the communicative environment due to their spread through online news and social media spaces. Although fake news has existed for centuries, its circulation is now more harmful than ever before, thanks to the ease of its production and dissemination. At this juncture, technological development has led to the emergence of deepfakes: videos, audio, or photos doctored using artificial intelligence. Since their inception in 2017, the tools and algorithms that enable the modification of faces and sounds in audiovisual content have evolved to the point where mobile apps and web services allow average users to perform such manipulation. This research shows how three renowned media outlets (The Wall Street Journal, The Washington Post, and Reuters) and three of the biggest Internet-based companies (Google, Facebook, and Twitter) are dealing with the spread of this new form of fake news. Results show that the identification of deepfakes is a common practice for both types of organizations. However, while the media outlets focus on training journalists in detection, the online platforms tend to fund research projects whose objective is to develop or improve media forensics tools.
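Many of the media forensics tools referenced above reduce, at their core, to supervised classifiers trained on known real and fake material. The sketch below shows that idea in its simplest form: fine-tuning a pretrained CNN as a frame-level real/fake classifier with PyTorch and torchvision. The directory layout, model choice, and hyperparameters are assumptions for illustration, not a description of any outlet's or platform's actual system.

```python
# Sketch of a frame-level deepfake detector: fine-tune a pretrained
# CNN as a binary real/fake classifier. Paths, model, and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Expects video frames extracted to data/real/*.jpg and data/fake/*.jpg
# (hypothetical layout; ImageFolder derives labels from subfolder names).
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("data", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: fake / real

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):
    for frames, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(frames), labels)
        loss.backward()
        opt.step()
```

The weakness the surrounding literature keeps noting applies here too: such a classifier only recognizes the artifacts present in its training data, so each new generation method can require retraining.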

The Spiral of Digital Falsehood

The International Journal for the Semiotics of Law, 2023

The article defines the research field of a semiotically oriented philosophy of digital communication. It lays out its methodological perspective, pointing out how the fake has always been at the center of semiotic research. It traces the origin of deepfakes back to the conception of GANs, whose essential semiotic workings it expounds on. It enucleates the specificities of the digital fake, especially in the production of artificial faces. It reviews the deepfake phenomenon, enunciating its most recent statistics, prevalent areas of application, risks, and opportunities. It surveys the most current literature. It concludes by emphasizing the novelty of a situation in which the fake, in human societies and cultures, is produced mostly by machines. It stresses the desirability for a semiotic and interdisciplinary study of these productions.

“All Around Me Are Synthetic Faces”: The Mad World of AI-Generated Media

IT Professional

Advances in artificial intelligence and deep neural networks have led to a rise in synthetic media, i.e., automatically and artificially generated or manipulated photo, audio, and video content. Synthetic media today is highly believable and "true to life"; so much so that we will no longer be able to trust that what we see or hear is unadulterated and genuine. Among the different forms of synthetic media, the most concerning forms are deepfakes and media generated by generative adversarial networks (GANs). For IT professionals, it is important to understand what these new phenomena are. In this article, we explain what deepfakes and GANs are and how they work, and we discuss the threats and opportunities resulting from these forms of synthetic media.

Anticipating a "Mad World": Barack Obama's public service announcement starts with the usual backdrop of American flags within the Oval Office. His distinctive vocal pauses and hand gestures lend credibility to his address about the modern threat of digital technologies and artificial intelligence (AI). But suddenly, his address starts to take a strange turn, culminating in an alarming and out-of-character statement: "President Trump is a total and complete dipsh%t." Wait, what? Obama pauses to clarify, "See, now I would never say these things, at least not in a public address. But someone else would."
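For readers wanting a structural picture of "how they work": alongside GANs, the classic face-swap deepfake is commonly built from two autoencoders that share a single encoder, so a face encoded from person A can be decoded as person B. The sketch below shows only that architecture's skeleton; the dimensions and names are illustrative, and real systems use convolutional networks on aligned face crops rather than flat fully connected layers.

```python
# Structural sketch of the shared-encoder / two-decoder autoencoder
# behind classic face-swap deepfakes. All dimensions are illustrative.
import torch
import torch.nn as nn

class FaceSwapper(nn.Module):
    def __init__(self, dim: int = 64 * 64 * 3, latent: int = 256):
        super().__init__()
        # One encoder learns identity-agnostic face structure...
        self.encoder = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
        # ...while each decoder learns to render one person's face.
        self.decoder_a = nn.Sequential(nn.Linear(latent, dim), nn.Sigmoid())
        self.decoder_b = nn.Sequential(nn.Linear(latent, dim), nn.Sigmoid())

    def reconstruct(self, x: torch.Tensor, who: str) -> torch.Tensor:
        z = self.encoder(x)
        return self.decoder_a(z) if who == "a" else self.decoder_b(z)

# Training reconstructs A's faces with decoder_a and B's with decoder_b.
# The swap happens at inference: encode a frame of person A, then decode
# it with decoder_b, yielding B's face with A's pose and expression.
model = FaceSwapper()
frame_of_a = torch.rand(1, 64 * 64 * 3)
swapped = model.reconstruct(frame_of_a, who="b")
```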