DeFaking Deepfakes: Understanding Journalists’ Needs for Deepfake Detection
Related papers
Media and Communication, 2021
Deepfakes, one of the most novel forms of misinformation, have become a real challenge in the communicative environment due to their spread through online news and social media spaces. Although fake news has existed for centuries, its circulation is now more harmful than ever, given the ease of its production and dissemination. At this juncture, technological development has led to the emergence of deepfakes: videos, audio, or photos doctored using artificial intelligence. Since their inception in 2017, the tools and algorithms that enable the modification of faces and sounds in audiovisual content have evolved to the point where mobile apps and web services allow average users to perform such manipulation. This research shows how three renowned media outlets (The Wall Street Journal, The Washington Post, and Reuters) and three of the biggest Internet-based companies (Google, Facebook, and Twitter) are dealing with the spread of this new form of fake news. Results show that identification of deepfakes is a common practice for both types of organizations. However, while the media outlets focus on training journalists to detect deepfakes, the online platforms tend to fund research projects aimed at developing or improving media forensics tools.
Journal of AI-Assisted Scientific Discovery, 2022
The advent of artificial intelligence (AI) has revolutionized numerous industries, but it has also introduced profound risks, particularly through the development of deepfake technology. Deepfakes, which are AI-generated synthetic media that manipulate visual and audio content to create hyper-realistic but entirely fabricated representations, present a significant threat to data authenticity and public trust. The rapid advancements in machine learning, specifically in generative adversarial networks (GANs), have fueled the proliferation of deepfakes, enabling the creation of indistinguishable digital forgeries that can easily deceive viewers and listeners. This paper explores the multifaceted threat posed by deepfakes in undermining the authenticity of digital content and eroding public confidence in media and information. In an era where visual and auditory content is heavily relied upon for communication, governance, and decision-making, the rise of deepfakes brings forth unprecedented challenges in maintaining the integrity of information. This research examines the technical mechanisms driving deepfake creation, emphasizing the role of GANs and neural networks in producing lifelike simulations of human faces, voices, and behaviors. A detailed analysis is provided on how these technologies can be weaponized for nefarious purposes, such as the dissemination of political misinformation, character defamation, and even identity theft. As the accessibility of AI-driven tools expands, malicious actors are increasingly leveraging deepfakes to manipulate public opinion, disrupt democratic processes, and compromise cybersecurity. The paper highlights the alarming potential of deepfakes to distort reality, making it challenging for individuals and institutions to differentiate between authentic and manipulated content. The paper also delves into the technical countermeasures being developed to detect and mitigate the spread of deepfakes. Current detection methodologies, such as deep learning-based classifiers, digital watermarking, and forensic techniques, are critically evaluated for their effectiveness in identifying manipulated content. However, the ongoing arms race between deepfake creation and detection technologies poses significant challenges, as adversaries continuously refine their models to evade detection systems. This research underscores the need for continued innovation in detection algorithms and the integration of AI-driven solutions to stay ahead of increasingly sophisticated forgeries. Furthermore, the legal and regulatory landscape surrounding deepfakes is scrutinized, with an emphasis on the inadequacies of current frameworks to effectively address the complexities introduced by this technology. The paper discusses potential policy interventions, such as stricter digital content verification laws and international cooperation to combat the proliferation of deepfake-driven misinformation. Legal efforts to hold creators of malicious deepfakes accountable are explored, alongside the ethical considerations involved in balancing free speech with the need for data integrity. Beyond the technical and legal dimensions, this paper also examines the broader societal implications of deepfakes. The erosion of trust in digital media has far-reaching consequences, particularly in the realms of politics, journalism, and corporate governance. 
Public trust in authoritative sources of information is essential for the functioning of democratic institutions, and deepfakes pose a direct threat to this trust. The paper argues that the widespread dissemination of manipulated content can lead to a destabilization of public discourse, the spread of disinformation, and the breakdown of social cohesion. In addition, the psychological and cultural impacts of deepfakes are explored, highlighting how individuals' perceptions of reality can be shaped and distorted by AI-generated content. The research concludes by offering recommendations for a multi-stakeholder approach to addressing the deepfake phenomenon. This includes fostering collaboration between AI researchers, technologists, policymakers, and civil society organizations to develop comprehensive strategies for mitigating the risks associated with deepfakes. The paper emphasizes the need for a proactive, rather than reactive, approach in dealing with deepfake technology, advocating for the development of robust technical solutions, legal frameworks, and public awareness campaigns to protect the integrity of digital information.
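To make the adversarial dynamic described above concrete, the following is a minimal sketch of a GAN training loop, assuming PyTorch; the toy data, layer sizes, and hyperparameters are illustrative placeholders, not any particular deepfake system's architecture.

```python
# Minimal GAN sketch: a generator learns to fool a discriminator, the
# adversarial "arms race" behind deepfake synthesis. Toy data and sizes.
import torch
import torch.nn as nn

latent_dim = 100
img_dim = 64 * 64 * 3  # flattened stand-in for a 64x64 RGB face crop

# Generator: maps random noise to a fake image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks.
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(100):  # a real system trains far longer, on real face data
    real = torch.rand(32, img_dim)        # placeholder for real face crops
    fake = G(torch.randn(32, latent_dim))

    # Discriminator update: learn to separate real from fake.
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: learn to make the discriminator call fakes real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The same loop also explains the detection arms race the paper describes: any fixed detector effectively plays the discriminator's role, and generators can be retrained against it.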
Journal of Communication and Management, 2023
Machine learning and artificial intelligence in journalism are aids to, not replacements for or challenges to, a journalist's ability. Artificial intelligence-backed fake news, characterized by misinformation and disinformation, is the new emerging threat in our broken information ecosystem. Deepfakes erode trust in visual evidence, making it increasingly challenging to discern real from fake. Deepfakes are an increasing cause for concern since they can be used to propagate false information, fabricate news, or deceive people. While artificial intelligence is used to create deepfakes, the same technology is also used to detect them. Digital media literacy, along with technological deepfake detection tools, is an effective solution to the menace of deepfakes. The paper reviews the creation and detection of deepfakes using machine learning and deep learning models. It also discusses the implications of cognitive biases and social identity theories in deepfake creation, and strategies for establishing a trustworthy information ecosystem. The researchers have developed a prototype deepfake detection model, which can lay a foundation for exposing deepfake videos. The prototype model correctly identified 35 out of 50 deepfake videos, achieving 70% accuracy; videos scoring 65% or above are classified as "fake" and those scoring below 65% as "real". Fifteen videos were incorrectly classified as real, potentially due to model limitations and the quality of the deepfakes, which were highly convincing and flawless. Deepfakes have a high potential to damage reputations and are often obscene or vulgar. There is no specific law for deepfakes, but general laws require offensive or fake content to be taken down. Deepfakes are often used to spread misinformation or harm someone's reputation; they are designed to harass, intimidate, or spread fear. A significant majority of deepfake videos are pornographic and target female celebrities.
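The reported decision rule is a simple threshold on the model's fake-probability score. The sketch below illustrates it, assuming scores in [0, 1]; the example scores and ground-truth labels are invented placeholders, not the study's data.

```python
# Threshold rule from the abstract: scores of 65% and above -> "fake",
# below 65% -> "real". Scores and labels here are invented for illustration.
def classify(fake_probability: float, threshold: float = 0.65) -> str:
    return "fake" if fake_probability >= threshold else "real"

scores = [0.91, 0.42, 0.78, 0.55, 0.30]            # hypothetical model outputs
labels = ["fake", "real", "fake", "fake", "real"]  # hypothetical ground truth

correct = sum(classify(s) == y for s, y in zip(scores, labels))
print(f"accuracy: {correct / len(scores):.0%}")  # the study's 35/50 gives 70%
```

A borderline fake such as the 0.55 score above is labelled "real", mirroring the fifteen convincing deepfakes the prototype missed.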
Fake News and Deepfakes: A Dangerous Threat for 21st-Century Information Security
Fake news, often referred to as junk news or pseudo-news, is a form of yellow journalism or propaganda created to distribute deliberate disinformation or false news using traditional print or online social media. Fake news has become a significant problem globally in the past few years. It has become common to find popular individuals and even members of the state using misinformation to influence individuals' actions, whether consciously or subconsciously. The latest trend is using Artificial Intelligence (AI) to create fake videos known as "deepfakes". Deepfake, a portmanteau of "deep learning" and "fake", is an artificial intelligence-based human image synthesis technique. It is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique called a "generative adversarial network" (GAN). The combination of the existing and source videos results in a fake video that shows a person or persons performing an action at an event that never occurred in reality. This paper provides an overview of currently available techniques for creating and detecting fake news and deepfakes. The outcome of this paper provides the reader with an adequate literature review that summarises the current state of fake news and deepfakes, with special attention given to the tools and technologies that can be used to both create and detect fake news or deepfake material.
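As one concrete illustration of the detection side surveyed here, one family of forensic techniques inspects frequency-domain artifacts that GAN pipelines often leave behind. The NumPy sketch below is a deliberately naive heuristic under that assumption; the low-frequency band radius and decision cutoff are invented, not values from this paper.

```python
# Naive forensic cue: GAN-generated images often show anomalous
# high-frequency spectra. Band radius and cutoff are arbitrary here.
import numpy as np

def high_freq_energy(gray: np.ndarray) -> float:
    """Fraction of spectral energy outside a central low-frequency band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    cy, cx, r = h // 2, w // 2, min(h, w) // 8
    low = spectrum[cy - r:cy + r, cx - r:cx + r].sum()
    return 1.0 - low / spectrum.sum()

img = np.random.rand(256, 256)        # stand-in for a grayscale face crop
score = high_freq_energy(img)
print("suspicious" if score > 0.5 else "plausible")  # invented cutoff
```

Production detectors replace heuristics like this with trained classifiers, which is why the paper treats creation and detection tooling as two sides of the same literature.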
Studies in Communication and Media, 2021
Using artificial intelligence, it is becoming increasingly easy to create highly realistic but fake video content - so-called deepfakes. As a result, it is no longer always possible to distinguish real from machine-generated recordings with the naked eye. Despite the novelty of this phenomenon, regulators and industry players have started to address the risks associated with deepfakes. Yet research on deepfakes is still in its infancy. This paper presents findings from a systematic review of English-language deepfake research to identify salient discussions. We find that, to date, deepfake research is driven by computer science and law, with studies focusing on deepfake detection and regulation. While a number of studies address the potential of deepfakes for political disinformation, few have examined user perceptions of and reactions to deepfakes. Other notable research topics include challenges to journalistic practices and pornographic applications of deepfakes. We identify r...
Political Deepfakes Are As Credible As Other Fake Media And (Sometimes) Real Media
2021
We demonstrate that fabricated videos of public officials synthesized by deep learning ("deepfakes") are credible to a large portion of the American public - up to 50% of a representative sample of 5,750 subjects - though no more so than equivalent misinformation in extant modalities like text headlines or audio recordings. Moreover, across subgroups there are no meaningful heterogeneities in these credibility perceptions, nor greater affective responses relative to other mediums. However, when asked to discern real videos from deepfakes, partisanship explains a large gap in viewers' detection accuracy, but only for real videos, not deepfakes. Brief informational messages or accuracy primes only sometimes (and somewhat) attenuate deepfakes' effects. Above all else, broader literacy in politics and digital technology increases discernment between deepfakes and authentic videos of political elites. Our findings come from two experiments testing exposure to a novel ...
Deepfakes: A Digital Transformation Leads to Misinformation
2021
Deepfakes are a product of artificial intelligence (AI) and software applications used to create convincing falsified audiovisual content. Linguistically, the portmanteau combines the "deep learning" of AI with the "fake" of doctored or falsified content, and deepfake misinformation results. A variety of sophisticated software programs with exacting algorithms create high-quality videos and manipulated audio of people who may not exist, or twist others who do, creating the potential for the spread of serious misinformation, often with serious consequences. This digital phenomenon is proliferating exponentially, and its sourcing is challenging to verify, causing alarm. Examples of this pervasive information warfare associated with deepfakes range from identity theft, discrediting public figures and celebrities, cyberbullying, blackmail, threats to national security and personal privacy, intensified pornography and sexual exploitation, and threats to cybersecurity, to baiting hate crimes, abusing social media platforms, and manipulating metadata. Deepfakes that are difficult to cite, acquire, or track share some attributes with grey literature by that definition. Often detectable, yet problematic, activities such as phishing and robocalling may be common vehicles for deepfake activity that threatens and interrupts the rhythms of daily life. The increasing online personas that many people create or assume contribute to this fake content and the potential for escalated exploitation, given technical abilities to copy and reimagine details that are not true. AI image generators create, within seconds, completely false images of people who simply do not exist, and these are nearly impossible to track. While AI is perceived as a positive benefit for science and policy, it can play negative roles in this new AI-threatened environment. Deepfakes have crossover targets in common business applications and society at large. Examples of this blur are targeted advertising, undetected security cameras in public spaces, blockchain, tabloid press/paparazzi, entertainment, computer games, online publishing, data and privacy, courtroom testimony, public opinion, scientific evidence, political campaigns, and rhetoric. This paper explores the impact and intersections of these behaviors and activities, products of AI, and emerging technologies, and how digital grey and the optics of grey expose the dangers of deepfakes in everyday life. Applying a security and privacy lens, we offer insights into how libel and slander may extend into more serious criminal behavior as deepfakes become more pervasive, distorting reality and endangering personal, social, and global safety nets, adding to the new normal we assume today. How we became more sensitized to misinformation and fake news tells the story of deepfakes.
On the Truth Claims of Deepfakes: Indexing Images and Semantic Forensics
The Journal of Media Art Study and Theory, 2022
When news media shared a video of outgoing president Donald Trump acknowledging the victory of president-elect Joe Biden, some social media users claimed, despite evidence to the contrary, that it was a deepfake, a synthetic image made with machine learning (ML) algorithms. Taking this example as a point of departure, I focus on how images generate veracity through the interrelated actions of humans and machine learning algorithms. I argue that ML presents an opportunity to revisit the semiotic infrastructures of images as an approach towards asking how photorealistic images produce truth claims in ways that exceed the purely visual. Drawing from photographic theories of the image index and diagrammatic understandings of ML, I argue that meaning, described here as what images do in the world, is a product of negotiation between multiple technological processes and social registers, spanning data sets, engineering decisions, and human biases. Focusing on Generative Adversarial Networks (GANs), I analyze sociopolitical and scientific discourses around deepfakes to understand the ways in which ML affords hegemonic ways of seeing. I conclude that ML operationalizes the evidentiary power of images, generating new thresholds of visibility to manage uncertainty. My aim is to critically challenge post-truth paranoias by analyzing how ML algorithms come to have ethicopolitical agency in visual culture, with implications for how images are made to matter in post-truth media ecologies.
Expert Systems with Applications, 2024
Due to the fast spread of data through digital media, individuals and societies must assess the reliability of information. Deepfakes are not a novel idea, but they are now a widespread phenomenon. The impact of deepfakes and disinformation can range from infuriating individuals to affecting and misleading entire societies and even nations. There are several ways to detect and generate deepfakes online. Through a systematic literature analysis, in this study we explore key automatic detection and generation methods, frameworks, algorithms, and tools for identifying deepfakes (audio, images, and videos), and how these approaches can be employed in different situations to counter the spread of deepfakes and the generation of disinformation. Moreover, we explore state-of-the-art frameworks related to deepfakes to understand how emerging machine learning and deep learning approaches affect online disinformation. We also highlight practical challenges and trends in implementing policies to counter deepfakes. Finally, we provide policy recommendations based on an analysis of how emerging artificial intelligence (AI) techniques can be employed to detect and generate deepfakes online. This study benefits the community and readers by providing a better understanding of recent developments in deepfake detection and generation frameworks. The study also sheds light on the potential of AI in relation to deepfakes.