Exploiting the Right: Inferring Ideological Alignment in Online Influence Campaigns Using Shared Images

Examining Similar and Ideologically Correlated Imagery in Online Political Communication

arXiv (Cornell University), 2021

This paper investigates visual media shared by US national politicians on Twitter, how the variety of image types a politician shares reflects their political position, and a hazard in using standard image-characterization methods in this context. While past work has yielded valuable results on politicians' use of imagery in social media, that work has focused primarily on photographic media, which may not be sufficient given the variety of visual media shared in such spaces (e.g., infographics, illustrations, or memes). Leveraging three popular deep learning models to characterize politicians' visuals, this work uses clustering to identify eight types of visual media shared on Twitter, several of which are not photographic in nature. Results also show that individual politicians share a variety of these types, and that the distributions of their imagery across these clusters are correlated with their overall ideological position; e.g., liberal politicians appear to share a larger proportion of infographic-style images, and conservative politicians appear to share more patriotic imagery. At the same time, manual assessment reveals that these image-characterization models group images with vastly different semantic meaning into the same clusters, as confirmed in a post-hoc analysis of hateful memetic imagery. These results suggest that, while image-characterization techniques do identify general types of imagery that correlate with political ideology, these methods miss critical semantic (and therefore politically relevant) differences among images. Consequently, care should be taken when dealing with the varieties of imagery shared in online spaces, especially in political contexts.
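The pipeline described above, embedding images with deep models and then clustering the embeddings into visual types, can be sketched in miniature. This is an illustrative reconstruction, not the paper's code: the two-dimensional "embeddings" below are synthetic stand-ins for real deep-network features, and a plain k-means loop substitutes for whatever clustering procedure the authors actually used.

```python
import math
import random

def kmeans(points, k, iters=50):
    """Plain k-means: assign each point to its nearest centroid,
    then recompute centroids as cluster means, for a fixed number of rounds."""
    # Deterministic init: k points spread evenly through the data.
    centroids = [points[i * len(points) // k] for i in range(k)]
    assignments = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by Euclidean distance.
        for i, p in enumerate(points):
            assignments[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: mean of the points in each cluster.
        for c in range(k):
            members = [p for p, a in zip(points, assignments) if a == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assignments, centroids

# Synthetic "embeddings": two well-separated blobs standing in for
# two visual types (the paper finds eight on real image features).
rng = random.Random(1)
blob_a = [[rng.gauss(0.0, 0.1), rng.gauss(0.0, 0.1)] for _ in range(20)]
blob_b = [[rng.gauss(5.0, 0.1), rng.gauss(5.0, 0.1)] for _ in range(20)]
labels, _ = kmeans(blob_a + blob_b, k=2)
```

The paper's cautionary finding applies exactly here: proximity in embedding space does not guarantee shared semantic meaning, so cluster labels need manual inspection.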

Ideological Asymmetry in the Reach of Pro-Russian Digital Disinformation to United States Audiences

Journal of Communication, 2019

Despite concerns about the effects of pro-Russian disinformation on Western public opinion, evidence of its reach remains scarce. We hypothesize that conservative individuals will be more likely than liberals to be potentially exposed to pro-Russian disinformation in digital networks. We evaluate the hypothesis using a large data set of U.S.-based Twitter users, testing how ideology is associated with disinformation about the 2014 crash of the MH17 aircraft over eastern Ukraine. We find that potential exposure to disinformation is concentrated among the most conservative individuals. Moving from the most liberal to the most conservative individuals in the sample is associated with a change in the conditional probability of potential exposure to disinformation from 6.5% to 45.2%. We corroborate the finding using a second, validated data set on individual party registration. The results indicate that the reach of online, pro-Russian disinformation into U.S. audiences is distinctly ideologically asymmetric.

Ideological Congruence and Social Media Text as Data

Representation, 2019

Earlier studies on ideological congruence mostly rely on public opinion surveys to measure voter ideology, while politicians' ideology is measured by instruments such as roll call votes, expert surveys, and legislative texts. One crucial problem with such approaches is that the tools used to measure the elites' ideology are not identical to those used to measure the voters' ideology. The rapid growth of social media use offers a unique opportunity to directly examine the ideological overlap of elites and the electorate on a common platform using a common technique. This study examines over four million Twitter posts by legislative candidates from four major Turkish parties and their supporters between 2012 and 2016. After applying machine-learning algorithms to clean non-political content from the data, we employ the Wordfish text-scaling technique to extract policy positions and compare each party's position to those of the other parties and to those of its supporters.

Characterizing the 2016 Russian IRA influence campaign

Social Network Analysis and Mining, 2019

Until recently, social media were seen to promote democratic discourse on social and political issues. However, this powerful communication ecosystem has come under scrutiny for allowing hostile actors to exploit online discussions in an attempt to manipulate public opinion. A case in point is the ongoing U.S. Congress investigation of Russian interference in the 2016 U.S. election campaign, with Russia accused of, among other things, using trolls (malicious accounts created for the purpose of manipulation) and bots (automated accounts) to spread propaganda and politically biased information. In this study, we explore the effects of this manipulation campaign, taking a closer look at users who re-shared the posts produced on Twitter by the Russian troll accounts publicly disclosed by the U.S. Congress investigation. We collected a dataset of 13 million election-related posts shared on Twitter in 2016 by over a million distinct users. This dataset includes accounts associated with the identified Russian trolls as well as users sharing posts in the same time period on a variety of topics around the 2016 elections. We use label propagation to infer the users' ideology based on the news sources they share. We are able to classify a large number of the users as liberal or conservative.
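The label-propagation step in this abstract, inferring a user's ideology from the leanings of the news sources they share, can be sketched on a toy bipartite graph. This is an illustrative reconstruction under simple assumptions, not the authors' implementation: seed sources carry a fixed score (-1.0 liberal, +1.0 conservative), and a user's score is the mean score of the sources they shared. All domain names and seed values here are hypothetical examples.

```python
# Toy user-to-source sharing graph: which news domains each user tweeted.
shares = {
    "user_a": ["motherjones.com", "huffpost.com"],
    "user_b": ["breitbart.com", "foxnews.com"],
    "user_c": ["huffpost.com", "foxnews.com"],
}

# Seed labels for sources: -1.0 = liberal, +1.0 = conservative.
# (Hypothetical seed values, for illustration only.)
source_score = {
    "motherjones.com": -1.0,
    "huffpost.com": -1.0,
    "breitbart.com": 1.0,
    "foxnews.com": 1.0,
}

def propagate(shares, source_score):
    """One propagation pass: a user's score is the mean score
    of the sources that user shared."""
    return {
        user: sum(source_score[s] for s in sources) / len(sources)
        for user, sources in shares.items()
    }

user_score = propagate(shares, source_score)
```

In a real setting only some sources are seeded, so the pass alternates (users average their sources, unlabeled sources then average their users) until scores stabilize; with every source seeded, as here, a single pass suffices.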

Characterizing networks of propaganda on twitter: a case study

Applied Network Science

The daily exposure of social media users to propaganda and disinformation campaigns has reinvigorated the need to investigate the local and global patterns of diffusion of different (mis)information content on social media. Echo chambers and influencers are often deemed responsible for both the polarization of users in online social networks and the success of propaganda and disinformation campaigns. This article adopts a data-driven approach to investigate the structure of communities and propaganda networks on Twitter in order to assess the correctness of these imputations. In particular, the work aims at characterizing networks of propaganda extracted from a Twitter dataset by combining the information gained by three different classification approaches, focused respectively on (i) using Tweets content to infer the “polarization” of users around a specific topic, (ii) identifying users having an active role in the diffusion of different propaganda and disinformation items, and...

FACTOID: A New Dataset for Identifying Misinformation Spreaders and Political Bias

2022

Proactively identifying misinformation spreaders is an important step towards mitigating the impact of fake news on our society. In this paper, we introduce a new contemporary Reddit dataset for fake news spreader analysis, called FACTOID, monitoring political discussions on Reddit since the beginning of 2020. The dataset contains over 4K users with 3.4M Reddit posts, and includes, beyond the users' binary labels, their fine-grained credibility level (very low to very high) and their political bias strength (extreme right to extreme left). As far as we are aware, this is the first fake news spreader dataset that simultaneously captures both the long-term context of users' historical posts and the interactions between them. To create the first benchmark on our data, we provide methods for identifying misinformation spreaders by utilizing the social connections between the users along with their psycho-linguistic features. We show that the users' social interactions can, on their own, indicate misinformation spreading, while the psycho-linguistic features are mostly informative in non-neural classification settings. In a qualitative analysis, we observe that detecting affective mental processes correlates negatively with right-biased users, and that the openness-to-experience factor is lower for those who spread fake news.

Quantifying Political Leaning from Tweets, Retweets, and Retweeters

The widespread use of online social networks (OSNs) to disseminate information and exchange opinions, by the general public, news media, and political actors alike, has enabled new avenues of research in computational political science. In this paper, we study the problem of quantifying and inferring the political leaning of Twitter users. We formulate political leaning inference as a convex optimization problem that incorporates two ideas: (a) users are consistent in their actions of tweeting and retweeting about political issues, and (b) similar users tend to be retweeted by a similar audience. We then apply our inference technique to 119 million election-related tweets collected over seven months during the 2012 U.S. presidential election campaign. On a set of frequently retweeted sources, our technique achieves 94 percent accuracy and high rank correlation as compared with manually created labels. By studying the political leaning of 1,000 frequently retweeted sources, 232,000 ordinary users who retweeted them, and the hashtags used by these sources, our quantitative study sheds light on the political demographics of the Twitter population, and the temporal dynamics of political polarization as events unfold.
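The two modeling ideas in this abstract, tweeting/retweeting consistency and audience similarity, can be captured in a toy convex quadratic: penalize the gap between a user's leaning and the observed leaning of their tweets, plus the gap between users retweeted by similar audiences. The sketch below is a loose analogue on invented data with a plain gradient-descent solver, not the paper's actual formulation.

```python
# Toy setup: 3 users with observed tweet-content leanings, plus one
# "similar audience" pair whose leanings should be pulled together.
tweet_score = [-1.0, 1.0, 0.0]   # observed content leaning per user (invented)
similar_pairs = [(1, 2)]         # users 1 and 2 share retweeters
lam = 1.0                        # weight on the audience-similarity term

def objective(x):
    """Consistency term plus audience-similarity term, both convex quadratics."""
    consistency = sum((xi - ti) ** 2 for xi, ti in zip(x, tweet_score))
    similarity = sum((x[i] - x[j]) ** 2 for i, j in similar_pairs)
    return consistency + lam * similarity

def gradient(x):
    g = [2 * (xi - ti) for xi, ti in zip(x, tweet_score)]
    for i, j in similar_pairs:
        g[i] += 2 * lam * (x[i] - x[j])
        g[j] -= 2 * lam * (x[i] - x[j])
    return g

# Plain gradient descent; the objective is convex, so this converges
# to the unique minimizer.
x = [0.0, 0.0, 0.0]
for _ in range(500):
    g = gradient(x)
    x = [xi - 0.1 * gi for xi, gi in zip(x, g)]
```

Note how the similarity term moderates the estimates: users 1 and 2 land at 2/3 and 1/3 rather than at their raw tweet scores of 1.0 and 0.0, while the uncoupled user 0 stays at -1.0.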

#bias: Measuring the Tweeting Behavior of Propagandists

Proceedings of the International AAAI Conference on Web and Social Media

Twitter is an efficient conduit of information for millions of users around the world. Its ability to quickly spread information to a large number of people makes it an efficient way to shape information and, hence, shape public opinion. We study the tweeting behavior of Twitter propagandists, users who consistently express the same opinion or ideology, focusing on two online communities: the 2010 Nevada senate race and the 2011 debt-ceiling debate. We identify several extreme tweeting patterns that could characterize users who spread propaganda: (1) sending high volumes of tweets over short periods of time, (2) retweeting while publishing little original content, (3) quickly retweeting, and (4) colluding with other, seemingly unrelated, users to send duplicate or near-duplicate messages on the same topic simultaneously. These four features appear to distinguish tweeters who spread propaganda from other more neutral users and could serve as a starting point for developing behavioral-based propaganda...
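The behavioral signals this abstract lists reduce to simple per-user statistics over a tweet log. A minimal sketch, assuming an invented tweet record with a timestamp, a retweet flag, and, for retweets, the seconds elapsed since the original post (all field names are hypothetical, not from the paper):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tweet:
    ts: float                              # posting time, seconds since epoch
    is_retweet: bool
    retweet_delay: Optional[float] = None  # seconds after the original post

def behavior_features(tweets):
    """Per-user features mirroring three of the paper's four signals:
    tweet volume over time, retweet ratio, and speed of retweeting.
    (The fourth signal, collusion, needs cross-user comparison and is omitted.)"""
    tweets = sorted(tweets, key=lambda t: t.ts)
    span = max(tweets[-1].ts - tweets[0].ts, 1.0)
    retweets = [t for t in tweets if t.is_retweet]
    delays = sorted(t.retweet_delay for t in retweets
                    if t.retweet_delay is not None)
    return {
        "tweets_per_hour": len(tweets) / (span / 3600.0),
        "retweet_ratio": len(retweets) / len(tweets),
        "median_retweet_delay_s": delays[len(delays) // 2] if delays else None,
    }

# One hour of invented activity: mostly fast retweets, little original content.
log = [
    Tweet(ts=0.0, is_retweet=False),
    Tweet(ts=600.0, is_retweet=True, retweet_delay=12.0),
    Tweet(ts=1200.0, is_retweet=True, retweet_delay=30.0),
    Tweet(ts=3600.0, is_retweet=True, retweet_delay=8.0),
]
features = behavior_features(log)
```

A high retweet ratio combined with short retweet delays is exactly the profile the abstract flags as propagandist-like; thresholds for "high" and "short" would have to be calibrated against neutral users.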

Birds of a Feather Tweet Together: Integrating Network and Content Analyses to Examine Cross-Ideology Exposure on Twitter

Journal of Computer-Mediated Communication, 2013

This study integrates network and content analyses to examine exposure to cross-ideological political views on Twitter. We mapped the Twitter networks of 10 controversial political topics, discovered clusters (subgroups of highly self-connected users), and coded messages and links in them for political orientation. We found that Twitter users are unlikely to be exposed to cross-ideological content from the clusters of users they followed, as these were usually politically homogeneous. Links pointed at grassroots web pages (e.g., blogs) more frequently than traditional media websites. Liberal messages, however, were more likely to link to traditional media. Last, we found that more specific topics of controversy had both conservative and liberal clusters, while in broader topics, dominant clusters reflected conservative sentiment.

“Donald Trump Is My President!”: The Internet Research Agency Propaganda Machine

Social Media + Society, 2019

This article presents a typological study of the Twitter accounts operated by the Internet Research Agency (IRA), a company specializing in online influence operations based in St. Petersburg, Russia. Drawing on concepts from 20th-century propaganda theory, we modeled the IRA operations along propaganda classes and campaign targets. The study relies on two historical databases and data from the Internet Archive’s Wayback Machine to retrieve 826 user profiles and 6,377 tweets posted by the agency between 2012 and 2017. We manually coded the source as identifiable, obfuscated, or impersonated and classified the campaign target of IRA operations using an inductive typology based on profile descriptions, images, location, language, and tweeted content. The qualitative variables were analyzed as relative frequencies to test the extent to which the IRA’s black, gray, and white propaganda are deployed with clearly defined targets for short-, medium-, and long-term propaganda strategies. The results show that source classification from propaganda theory remains a valid framework to understand the IRA’s propaganda machine and that the agency operates a composite of different user accounts tailored to perform specific tasks, including openly pro-Russian profiles, local American and German news sources, pro-Trump conservatives, and Black Lives Matter activists.