Blame It on the Algorithm? Russian Government-Sponsored Media and Algorithmic Curation of Political Information on Facebook
Related papers
This paper uses artificial intelligence to identify a Russian disinformation narrative and track it to its original sources online. Using online content collected and categorized by the VAST (Veracity Authentication Systems Technology) OSINT system, this project identifies and analyzes content associated with Russian propaganda and draws strategic narrative insights from it. We use the example of accusations of Nazism in Ukraine, specifically related to the Ukrainian Azov regiment, to demonstrate how different stories within this propaganda narrative appear on far-right U.S. websites. At the same time, our study shows little engagement with these stories in the mainstream U.S. media. This paper demonstrates how to scale human content analysis by using artificial intelligence to analyze how foreign propaganda penetrates the U.S. media ecosystem. Using this technology, we can identify disinformation ‘supply chains’ and hopefully disrupt this supply more effectively than we have in the past.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Amidst growing concern over media manipulation, NLP attention has focused on overt strategies like censorship and "fake news". Here, we draw on two concepts from the political science literature to explore subtler strategies for government media manipulation: agenda-setting (selecting what topics to cover) and framing (deciding how topics are covered). We analyze 13 years (100K articles) of the Russian newspaper Izvestia and identify a strategy of distraction: articles mention the U.S. more frequently in the month directly following an economic downturn in Russia. We introduce embedding-based methods for cross-lingually projecting English frames to Russian, and discover that these articles emphasize U.S. moral failings and threats to the U.S. Our work offers new ways to identify subtle media manipulation strategies at the intersection of agenda-setting and framing.
Human Communication Research
Social bots, or algorithmic agents that amplify certain viewpoints and interact with selected actors on social media, may influence online discussion, news attention, or even public opinion through coordinated action. Previous research has documented the presence of bot activities and developed detection algorithms. Yet, how social bots influence attention dynamics of the hybrid media system remains understudied. Leveraging a large collection of both tweets (N = 1,657,551) and news stories (N = 50,356) about the early COVID-19 pandemic, we employed bot detection techniques, structural topic modeling, and time series analysis to characterize the temporal associations between the topics Twitter bots tend to amplify and subsequent news coverage across the partisan spectrum. We found that bots represented 8.98% of total accounts, selectively promoted certain topics, and predicted coverage aligned with partisan narratives. Our macro-level longitudinal description highlights the role of bots in the attention dynamics of the hybrid media system.
Proceedings of the 7th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
This study relies on natural language processing to explore the nature of online communication in Russia during the war on Ukraine in 2022. The analysis of a large corpus of publications in traditional media and on social media identifies massive state interventions aimed at manipulating public opinion. The study relies on expertise in media studies and political science to trace the major themes and strategies of propagandist narratives on three major Russian social media platforms over several months, as well as their perception by users. Distributions of several keyworded pro-war and anti-war topics are examined to reveal the cross-platform specificity of social media audiences. We release WarMM-2022, a corpus of 1.7M posts. This corpus includes publications related to the Russia-Ukraine war, which appeared in Russian mass media (February to September 2022) and on social networks (July to September 2022). The corpus can be useful for the development of NLP approaches to propaganda detection and subsequent studies of propaganda campaigns in the social sciences, in addition to traditional methods such as content analysis, focus groups, surveys, and experiments.
Agenda-Setting Via Exploitation of Facebook Ad Targeting
University of California Santa Barbara, 2018
This paper is a content analysis exploring the events of the 2016 United States election with regard to Russian interference. It highlights the ways in which the strategies used relate to findings of previous research and provides insight into the persuasive capabilities of agenda-setting theory, as well as its conceptual components. Furthermore, it seeks to extend the theoretical scope of agenda-setting theory by factoring in the dynamic agenda-setting effects of social media outlets and how these outlets allow for a new form of gatekeeping.
“Donald Trump Is My President!”: The Internet Research Agency Propaganda Machine
Social Media + Society, 2019
This article presents a typological study of the Twitter accounts operated by the Internet Research Agency (IRA), a company specialized in online influence operations based in St. Petersburg, Russia. Drawing on concepts from 20th-century propaganda theory, we modeled the IRA operations along propaganda classes and campaign targets. The study relies on two historical databases and data from the Internet Archive’s Wayback Machine to retrieve 826 user profiles and 6,377 tweets posted by the agency between 2012 and 2017. We manually coded the source as identifiable, obfuscated, or impersonated and classified the campaign target of IRA operations using an inductive typology based on profile descriptions, images, location, language, and tweeted content. The qualitative variables were analyzed as relative frequencies to test the extent to which the IRA’s black, gray, and white propaganda are deployed with clearly defined targets for short-, medium-, and long-term propaganda strategies. The results show that source classification from propaganda theory remains a valid framework to understand IRA’s propaganda machine and that the agency operates a composite of different user accounts tailored to perform specific tasks, including openly pro-Russian profiles, local American and German news sources, pro-Trump conservatives, and Black Lives Matter activists.
SSRN Electronic Journal, 2018
Ever since the 2016 U.S. elections, disquiet about the role of Russian propaganda in the U.S. media system has grown into outrage and fear. How has one authoritarian state been able to wreak so much havoc in the U.S. media system? The answer lies in a relatively small area of political communication scholarship: the study of national media models. Scholars ranging from Fred Siebert to Paolo Mancini have eloquently articulated how closely media systems reflect their national political systems and cultures. While this debate has remained mostly academic, it now holds the key to understanding (and trying to control) the vulnerabilities to disinformation in the U.S. media ecosystem. This paper pushes back against the idea that media literacy of the audience is the critical problem in combatting 'fake news' created as disinformation by other countries. The U.S. media audience has not fundamentally changed in the past two decades. What has changed is the way in which media is supplied to the American public, notably the decline of traditional news, the fragmentation of the information space online, the rise of news distributed within trusted circles via social media platforms, and the flooding of the U.S. news supply with both foreign and domestic disinformation. In thinking about the role of Russian propaganda as one central challenge to the U.S. media system, it is clear the affordances of the Russian media system strongly favor the ability of Russia to exploit the U.S. media sphere. Under the Russian media system, journalists are considered mouthpieces for political interests and mold information to support those in power, unfettered by the U.S. ideal of media in service to the public or a greater truth. The U.S. media cannot completely ignore these messages, but must expend precious resources refuting or attempting to find some sort of impossible 'balance' between disinformation and actual news. Nor can the mainstream U.S.
media counter with its own disinformation, as this violates American media ethics. This paper will discuss evidence from the 2016 U.S. elections to showcase the role Russian disinformation has played in undermining the U.S. media system and how it has dovetailed with other challenges to the supply of information to U.S. citizens. The paper also will suggest possible solutions to the issue of Russian disinformation, with an emphasis on how social media platforms such as Facebook and Twitter should acknowledge their essential role in preserving the free media system and significantly increase their efforts to help the American audience to identify disinformation.
A GUIDE FOR CONTENT CREATORS TO IDENTIFY AND COUNTER RUSSIAN PROPAGANDA IN THE LATEST TECHNOLOGIES
Institute of Innovative Governance, 2024
The widespread adoption of artificial intelligence tools, such as ChatGPT, has significantly increased the risk of disseminating disinformation and propaganda. These technologies can rapidly generate diverse forms of content, including the distortion of historical facts and news, posing critical challenges in the current global political landscape and the context of the ongoing war in Ukraine. This guide explores the capabilities of artificial intelligence in content creation and provides a detailed framework for identifying Russian propaganda generated through AI in the information space. It highlights the essential role of fact-checking and verification as key strategies to combat the spread of misinformation and safeguard information integrity.
Computational propaganda on social media: A new challenge to democracy in the digital age
Red’shine Publications, 2023
With the advent of social media platforms enabled by digital technology, coupled with the disruptive technologies of Artificial Intelligence (AI) and Machine Learning (ML), the design of propaganda is also evolving at a fast pace, one previously unknown to us and whose implications are yet to be understood. Recent research on social media usage has indicated that more studies are required to understand online campaigns, especially computational propaganda on social media platforms. The objective of this article is to shed light on computational propaganda used on social media platforms and the threats associated with it.
International Journal of Communication, 2021
In 2018, the election of Jair Bolsonaro to the Brazilian presidency was associated with dubious propaganda strategies implemented through social media. The purpose of this article is to understand the early development of key communication strategies of his presidential campaign, beginning in 2016. We used a combination of observational, discourse, and content analysis based on digital trace data to investigate how Bolsonaro had been testing his campaign targets and segmentation, as well as cultivating bot accounts and botnets on Twitter during the 2016 Rio de Janeiro municipal election. Our research suggests that the automation of different supporter profiles to target potential voter identities and the experimental dissemination of divisive narratives ensured the effectiveness of his persuasive communication. This finding contributes to the growing body of knowledge regarding his controversial online efforts, adding to the urgent research agenda on Brazil's democratic setback.