Towards a New Science of Disinformation

Fake News and Deepfakes: A Dangerous Threat for 21st Century Information Security

Fake news, often referred to as junk news or pseudo-news, is a form of yellow journalism or propaganda created with the purpose of distributing deliberate disinformation or false news through traditional print or online social media. Fake news has become a significant global problem in recent years. It has become common to find public figures and even state actors using misinformation to influence individuals' actions, whether consciously or subconsciously. The latest trend is using Artificial Intelligence (AI) to create fake videos known as "deepfakes". Deepfake, a portmanteau of "deep learning" and "fake", is an artificial intelligence-based human image synthesis technique. It combines and superimposes existing images and videos onto source images or videos using a machine learning technique called a "generative adversarial network" (GAN). The combination of the existing and source videos results in a fake video that shows a person or persons performing an action at an event that never occurred in reality. This paper provides an overview of the currently available creation and detection techniques for identifying fake news and deepfakes. The outcome of this paper provides the reader with an adequate literature review that summarises the current state of fake news and deepfakes, with special attention given to the tools and technologies that can be used to both create and detect fake news or deepfake material.
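To make the GAN technique named above concrete, here is a minimal sketch of the adversarial training loop in PyTorch. Everything in it is an illustrative assumption rather than any real deepfake system: the tiny fully connected generator and discriminator, the latent size, and the random tensors standing in for real images.

```python
# Minimal GAN sketch: two networks trained against each other.
# Architectures and data are toy placeholders, not a real deepfake pipeline.
import torch
import torch.nn as nn

LATENT_DIM = 64    # size of the noise vector fed to the generator (assumed)
IMG_DIM = 28 * 28  # flattened image size; real systems use far larger inputs

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),       # outputs a fake "image" in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),          # probability the input is real
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(100):
    real = torch.rand(32, IMG_DIM) * 2 - 1    # stand-in for a batch of real images
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Discriminator step: label real data 1 and generated data 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The core idea the abstract describes is visible in the two loss terms: the discriminator is rewarded for telling real samples from generated ones, while the generator is rewarded for fooling it, and the two improve in lockstep.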

Fighting Deepfakes: Media and Internet Giants' Converging and Diverging Strategies Against Hi-Tech Misinformation

Media and Communication, 2021

Deepfakes, one of the most novel forms of misinformation, have become a real challenge in the communicative environment due to their spread through online news and social media spaces. Although fake news has existed for centuries, its circulation is now more harmful than ever before, thanks to the ease of its production and dissemination. At this juncture, technological development has led to the emergence of deepfakes: doctored videos, audios, or photos produced using artificial intelligence. Since their inception in 2017, the tools and algorithms that enable the modification of faces and sounds in audiovisual content have evolved to the point where mobile apps and web services allow average users to perform such manipulation. This research shows how three renowned media outlets (The Wall Street Journal, The Washington Post, and Reuters) and three of the biggest Internet-based companies (Google, Facebook, and Twitter) are dealing with the spread of this new form of fake news. Results show that identification of deepfakes is a common practice for both types of organizations. However, while the media outlets focus on training journalists in detection, the online platforms tend to fund research projects whose objective is to develop or improve media forensics tools.

Unmasking the Deepfake Infocalypse: Debunking Manufactured Misinformation with a Prototype Model in the AI Era: “Seeing and hearing, no longer believing.”

Journal of Communication and Management, 2023

Machine learning and artificial intelligence in journalism are aids to, not replacements for or challenges to, a journalist's ability. AI-backed fake news, characterized by misinformation and disinformation, is the new emerging threat in our broken information ecosystem. Deepfakes erode trust in visual evidence, making it increasingly challenging to discern real from fake. They are an increasing cause for concern since they can be used to propagate false information, fabricate news, or deceive people. While artificial intelligence is used to create deepfakes, the same technology is also used to detect them. Digital media literacy, along with technological deepfake detection tools, is an effective response to the menace of deepfakes. The paper reviews the creation and detection of deepfakes using machine learning and deep learning models. It also discusses the implications of cognitive biases and social identity theories in deepfake creation, and strategies for establishing a trustworthy information ecosystem. The researchers have developed a prototype deepfake detection model, which can lay a foundation for exposing deepfake videos. The prototype correctly identified 35 out of 50 deepfake videos, achieving 70% accuracy; videos scoring above 65% are classified as "fake" and those at or below 65% as "real". Fifteen videos were incorrectly classified as real, potentially due to model limitations and the quality of those deepfakes, which were highly convincing and flawless. Deepfakes have a high potential to damage reputations and are often obscene or vulgar. There is no specific law for deepfakes, but general laws require offensive or fake content to be taken down. Deepfakes are often used to spread misinformation or harm someone's reputation, and are designed to harass, intimidate, or spread fear. A significant majority of deepfake videos are pornographic and target female celebrities.
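As a concrete illustration of the scoring rule the abstract describes, the sketch below applies a 65% cutoff to per-video fake-probability scores. The scores, labels, and function names are fabricated placeholders; the paper's actual prototype model is not reproduced here.

```python
# Toy sketch of threshold-based deepfake labelling: a detector assigns each
# video a "fake" probability, and a 65% cutoff decides the label.
THRESHOLD = 0.65  # scores above this are labelled "fake" (per the abstract)

def classify(score: float) -> str:
    """Map a detector's fake-probability score to a label."""
    return "fake" if score > THRESHOLD else "real"

def accuracy(scores, labels):
    """Fraction of videos whose thresholded label matches the ground truth."""
    predictions = [classify(s) for s in scores]
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Fabricated example: 4 known-fake videos; two convincing fakes score under
# the cutoff and are misclassified as real, mirroring the failure mode the
# paper reports for highly polished deepfakes.
scores = [0.91, 0.72, 0.58, 0.40]
labels = ["fake", "fake", "fake", "fake"]
print(accuracy(scores, labels))  # 0.5 on this toy data
```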

Deepfakes: The Threat to Data Authenticity and Public Trust in the Age of AI-Driven Manipulation of Visual and Audio Content

Journal of AI-Assisted Scientific Discovery, 2022

The advent of artificial intelligence (AI) has revolutionized numerous industries, but it has also introduced profound risks, particularly through the development of deepfake technology. Deepfakes, which are AI-generated synthetic media that manipulate visual and audio content to create hyper-realistic but entirely fabricated representations, present a significant threat to data authenticity and public trust. The rapid advancements in machine learning, specifically in generative adversarial networks (GANs), have fueled the proliferation of deepfakes, enabling the creation of indistinguishable digital forgeries that can easily deceive viewers and listeners. This paper explores the multifaceted threat posed by deepfakes in undermining the authenticity of digital content and eroding public confidence in media and information. In an era where visual and auditory content is heavily relied upon for communication, governance, and decision-making, the rise of deepfakes brings forth unprecedented challenges in maintaining the integrity of information.

This research examines the technical mechanisms driving deepfake creation, emphasizing the role of GANs and neural networks in producing lifelike simulations of human faces, voices, and behaviors. A detailed analysis is provided on how these technologies can be weaponized for nefarious purposes, such as the dissemination of political misinformation, character defamation, and even identity theft. As the accessibility of AI-driven tools expands, malicious actors are increasingly leveraging deepfakes to manipulate public opinion, disrupt democratic processes, and compromise cybersecurity. The paper highlights the alarming potential of deepfakes to distort reality, making it challenging for individuals and institutions to differentiate between authentic and manipulated content.

The paper also delves into the technical countermeasures being developed to detect and mitigate the spread of deepfakes. Current detection methodologies, such as deep learning-based classifiers, digital watermarking, and forensic techniques, are critically evaluated for their effectiveness in identifying manipulated content. However, the ongoing arms race between deepfake creation and detection technologies poses significant challenges, as adversaries continuously refine their models to evade detection systems. This research underscores the need for continued innovation in detection algorithms and the integration of AI-driven solutions to stay ahead of increasingly sophisticated forgeries.

Furthermore, the legal and regulatory landscape surrounding deepfakes is scrutinized, with an emphasis on the inadequacies of current frameworks to effectively address the complexities introduced by this technology. The paper discusses potential policy interventions, such as stricter digital content verification laws and international cooperation to combat the proliferation of deepfake-driven misinformation. Legal efforts to hold creators of malicious deepfakes accountable are explored, alongside the ethical considerations involved in balancing free speech with the need for data integrity.

Beyond the technical and legal dimensions, this paper also examines the broader societal implications of deepfakes. The erosion of trust in digital media has far-reaching consequences, particularly in the realms of politics, journalism, and corporate governance. Public trust in authoritative sources of information is essential for the functioning of democratic institutions, and deepfakes pose a direct threat to this trust. The paper argues that the widespread dissemination of manipulated content can lead to a destabilization of public discourse, the spread of disinformation, and the breakdown of social cohesion. In addition, the psychological and cultural impacts of deepfakes are explored, highlighting how individuals' perceptions of reality can be shaped and distorted by AI-generated content.

The research concludes by offering recommendations for a multi-stakeholder approach to addressing the deepfake phenomenon. This includes fostering collaboration between AI researchers, technologists, policymakers, and civil society organizations to develop comprehensive strategies for mitigating the risks associated with deepfakes. The paper emphasizes the need for a proactive, rather than reactive, approach in dealing with deepfake technology, advocating for the development of robust technical solutions, legal frameworks, and public awareness campaigns to protect the integrity of digital information.
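Of the countermeasures surveyed above, digital watermarking is the most compact to illustrate. The toy scheme below hides a known bit pattern in the least significant bits of an image so that later edits break the pattern; it is a simplified fragile watermark under assumed conventions, not a production forensic technique.

```python
# Toy fragile watermark: a signature lives in the least significant bit of
# each pixel, so any re-synthesis or edit destroys it and flags tampering.
import numpy as np

def embed_watermark(image: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Write a binary mark into the least significant bit of each pixel."""
    return (image & 0xFE) | (mark & 1)

def verify_watermark(image: np.ndarray, mark: np.ndarray) -> bool:
    """Check whether the embedded bits still match the expected mark."""
    return bool(np.array_equal(image & 1, mark & 1))

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # fake "image"
mark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)        # secret pattern

watermarked = embed_watermark(original, mark)
print(verify_watermark(watermarked, mark))   # True: content intact

tampered = watermarked.copy()
tampered[0, 0] ^= 1                          # a single-pixel edit
print(verify_watermark(tampered, mark))      # False: tampering detected
```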

Deepfakes: A Digital Transformation Leads to Misinformation

2021

Deepfakes are a product of artificial intelligence (AI) and software applications used to create convincing falsified audiovisual content. Linguistically, the term is a portmanteau combining the "deep learning" aspect of AI with the doctored or falsified enhancements that render the content fake, yielding "deepfake" and, with it, misinformation. A variety of sophisticated software programs with exacting algorithms create high-quality videos and manipulated audio of people who may not exist, or twist the words and actions of others who do exist, with the potential to spread serious misinformation, often with serious consequences. This digital content is proliferating exponentially, and its sourcing is challenging to verify, causing alarm. Examples of this pervasive information warfare associated with deepfakes range from identity theft, discrediting public figures and celebrities, cyberbullying, blackmail, and threats to national security and personal privacy, to intensifying pornography and sexual exploitation, undermining cybersecurity, baiting hate crimes, abusing social media platforms, and manipulating metadata. Deepfakes that are difficult to cite, acquire, or track share some attributes with grey literature by that definition. Often detectable yet still problematic activities, such as phishing and robocalling, may be common forms of deepfake activity that threaten and interrupt the rhythms of daily life. The increasing number of online personas that many people create or assume contributes to this fake content and to the potential for escalated exploitation, given the technical ability to copy and reimagine details that are not true. AI image generators create completely false images of people who simply don't exist within seconds, and these are nearly impossible to track. While AI is perceived as a positive benefit for science and policy, it can play negative roles in this new AI-threatened environment. Deepfakes have crossover targets in common business applications and society at large. Examples of this blur include targeted advertising, undetected security cameras in public spaces, blockchain, the tabloid press and paparazzi, entertainment, computer games, online publishing, data and privacy, courtroom testimony, public opinion, scientific evidence, political campaigns, and rhetoric. This paper explores the impact and intersections of these behaviors and activities, products of AI, and emerging technologies, showing how digital grey and the optics of grey expose the dangers of deepfakes in everyday life. Applying a security and privacy lens, we offer insights into extending libel and slander into more serious criminal behavior as deepfakes become more pervasive, distorting reality and endangering personal, social, and global safety nets, adding to the new normal we assume today. How we became more sensitized to misinformation and fake news tells the story of deepfakes.

Design Lessons from Building Deep Learning Disinformation Generation and Detection Solutions

European Conference on Cyber Warfare and Security

In its essence, social media is on its way to representing the superposition of all digital representations of human concepts, ideas, beliefs, attitudes, and experiences. In this realm, information is not only shared but also {mis, dis}interpreted, either unintentionally or intentionally, guided by (some kind of) awareness, uncertainty, or offensive purposes. This can produce implications and consequences such as societal and political polarization, and can influence or alter human behaviour and beliefs. To tackle the issues arising from social media manipulation mechanisms like disinformation and misinformation, a diverse palette of efforts has been proposed, spanning governmental and social media platform strategies, policies, and methods, plus academic and independent studies and solutions. However, such solutions are, from a technical standpoint, mainly based on gaming or AI-based techniques and technologies, but often only consider the defender's perspective and address in a ...

Leveraging Artificial Intelligence (AI) by a Strategic Defense against Deepfakes and Digital Misinformation

International Journal of Scientific Research and Modern Technology (IJSRMT), Volume 3, Issue 11, 2024

Amid rapid technological advancement, deepfakes and digital misinformation have emerged as both a powerful tool and a formidable challenge. Deepfakes, realistic yet fabricated media generated through artificial intelligence, threaten media credibility, public perception, and democratic integrity. This study explores the intersection of AI technology with these concerns, highlighting AI's role both as a driver of innovation and as a defense mechanism. Through an in-depth review of the literature, an analysis of current technologies, and an examination of case studies, this research evaluates AI-based strategies for identifying and addressing misinformation. Additionally, it considers the ethical and policy implications, calling for greater transparency, accountability, and media literacy. By examining present AI techniques and predicting future trends, this paper underscores the importance of collaborative efforts among tech companies, government agencies, and the public to uphold truth and integrity in the digital age.
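As one example of the AI-based identification strategies this study evaluates, a common baseline is a supervised text classifier. The sketch below pairs TF-IDF features with logistic regression over a fabricated four-document training set; real systems train on large annotated corpora and keep a human reviewer in the loop.

```python
# Toy misinformation-style classifier: TF-IDF features + logistic regression.
# The inline dataset and labels are fabricated for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Scientists confirm new exoplanet after peer-reviewed study",
    "Official statistics released by the national health agency",
    "SHOCKING cure they don't want you to know about, share now!!!",
    "Anonymous insider reveals secret plot, no evidence needed",
]
train_labels = [0, 0, 1, 1]  # 0 = credible style, 1 = misinformation style

# A transparent, widely used baseline pipeline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Probability that a new headline resembles the misinformation class.
print(model.predict_proba(["Miracle pill melts fat overnight, doctors hate it"])[0][1])
```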

Computational Propaganda and Misinformation: AI Technologies as Tools of Media Manipulation

World Journal of Advanced Research and Reviews, 2025

The purpose of this study was to investigate how artificial intelligence (AI) influences and improves computational propaganda and misinformation efforts. The researcher was motivated by the growing complexity of AI-driven technologies, like deepfakes, bots, and algorithmic manipulation, which have turned conventional propaganda strategies into more widespread and damaging media manipulation techniques. The study used a mixed-methods approach, combining quantitative data analysis from academic studies and digital forensic investigations with qualitative case studies of misinformation efforts. The results brought to light important tactics, including the platform-specific use of X (formerly Twitter) to propagate false information, emotional exploitation through fear-based messaging, and purposeful amplification through bot networks. According to this research, AI technologies amplify controversial content by exploiting algorithmic biases, thereby generating echo chambers and eroding confidence in democratic processes. The study also emphasized the ethical and sociopolitical issues presented by deepfake technologies and their ability to manipulate the emotions of susceptible populations. To counteract AI-generated misinformation, the study suggested promoting digital literacy and creating more potent detection methods, such as digital watermarking. Future studies should concentrate on the long-term psychological effects of AI-driven misinformation on democratic participation, public trust, and regulatory reactions in various countries. Furthermore, investigating how new AI technologies are influencing other media, like video games and virtual reality, may improve our understanding of their effect on society as a whole.
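One of the amplification tactics identified above, bot networks posting near-identical messages from many accounts, can be illustrated with a crude coordination heuristic. The sketch below is an assumption-laden toy, not a production bot detector: it simply clusters posts by normalized text and flags messages shared by several distinct accounts.

```python
# Toy coordination heuristic: bot networks often amplify a message by posting
# near-identical text from many accounts; cluster posts by normalized text
# and flag clusters spanning several distinct accounts.
from collections import defaultdict

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so trivial edits still cluster together."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def flag_amplified(posts, min_accounts=3):
    """Return messages posted by at least `min_accounts` distinct accounts."""
    groups = defaultdict(set)
    for account, text in posts:
        groups[normalize(text)].add(account)
    return {msg: accts for msg, accts in groups.items() if len(accts) >= min_accounts}

# Fabricated example feed: three lightly edited copies of the same message.
posts = [
    ("@bot1", "The election was RIGGED, share this!"),
    ("@bot2", "the election was rigged share this"),
    ("@bot3", "The election was rigged... share this!"),
    ("@alice", "Here are the official vote tallies."),
]
print(flag_amplified(posts))
```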

The Role of Technology In Combating Fake and Malicious Contents

Advances in Multidisciplinary and Scientific Research Journal, 2024

Technology presents a double-edged sword in the fight against fake content online. This paper investigates the multifaceted roles of technology in the dissemination of fake content online. It explores how advancements like automation, social media algorithms, and deepfakes facilitate the spread of misinformation. Conversely, it examines how technologies like artificial intelligence (AI), fact-checking tools, and content moderation can be harnessed to mitigate this challenge. Finally, the discussion turns to policy considerations and potential future technological solutions for fostering a more trustworthy online environment (Shu et al., 2017; Tandoc et al., 2018).

Fake Data and AI: Debunking Fake News to Educate and Enhance Media Literacy – A Study

Advances in Computer Science Research, 2023

Everyone in this modern, technology-driven era relies on various online news sources for quick access to information. In addition, with the rise in popularity of social networking sites, rumors circulate among millions of users within a short span of time. Fake news is a threat to democratic societies and political systems, fostering hatred through a variety of methods, including satirical or fake data, imposter information, fabricated content, fake links, false context, and manipulated content spread through social media platforms such as WhatsApp, Facebook, and Twitter [1] (Quandt, 2018). To halt the dissemination of fake content in emerging nations like India, it has become the need of the hour to educate every person in the debunking of false information, thereby building digital media literacy. This research paper is a descriptive study that explores and analyses the various digital tools and technologies available for debunking fabricated reality and fake news in the media. The study reveals that in India a rise in machine-based digital literacy is required, so that people become familiar with reliable artificial-intelligence-supported fact-checking mechanisms and aware of the styles and techniques available for easily identifying and debunking fake news.
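To illustrate the kind of AI-supported fact-checking mechanism the study points readers toward, the sketch below matches an incoming claim against a small database of already debunked claims by string similarity. The database entries, threshold, and function names are illustrative assumptions; real services rely on curated corpora and semantic matching models.

```python
# Toy fact-checking lookup: match a claim against known debunked claims.
# Entries and threshold are fabricated placeholders for illustration.
from difflib import SequenceMatcher

DEBUNKED = {
    "drinking hot water cures the virus": "False: no clinical evidence.",
    "5g towers spread disease": "False: debunked by health authorities.",
}

def check_claim(claim: str, threshold: float = 0.6):
    """Return the verdict for the most similar debunked claim, if close enough."""
    best_match, best_score = None, 0.0
    for known in DEBUNKED:
        score = SequenceMatcher(None, claim.lower(), known).ratio()
        if score > best_score:
            best_match, best_score = known, score
    if best_score >= threshold:
        return DEBUNKED[best_match]
    return "Unverified: no close match in the database."

print(check_claim("Hot water can cure the virus"))        # matches a debunked claim
print(check_claim("The moon landing photos were edited")) # no close match
```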