Are Proactive Interventions for Reddit Communities Feasible?
Related papers
To Act or React: Investigating Proactive Strategies For Online Community Moderation
ArXiv, 2019
Reddit administrators have generally struggled to prevent or contain such discourse for several reasons, including: (1) the inability of a handful of human administrators to track and react to millions of posts and comments per day and (2) fear of backlash as a consequence of administrative decisions to ban or quarantine hateful communities. Consequently, as shown in our background research, administrative actions (community bans and quarantines) are often taken in reaction to media pressure, after offensive discourse within a community spills into the real world with serious consequences. In this paper, we investigate the feasibility of proactive moderation on Reddit -- i.e., proactively identifying communities at risk of committing offenses that previously resulted in bans for other communities. Proactive moderation strategies show promise for two reasons: (1) they have the potential to narrow down the communities that administrators need to monitor for hateful content and (2) th...
Tuning Out Hate Speech on Reddit: Automating Moderation and Detecting Toxicity in the Manosphere
AoIR Selected Papers of Internet Research, 2020
Over the past two years social media platforms have been struggling to moderate at scale. At the same time, they have come under fire for failing to mitigate the risks of perceived ‘toxic’ content or behaviour on their platforms. In an effort to better cope with content moderation, to combat hate speech, ‘dangerous organisations’ and other bad actors present on platforms, discussion has turned to the role that automated machine-learning (ML) tools might play. This paper contributes to thinking about the role and suitability of ML for content moderation on community platforms such as Reddit and Facebook. In particular, it looks at how ML tools operate (or fail to operate) effectively at the intersection between online sentiment within communities and social and platform expectations of acceptable discourse. Through an examination of the r/MGTOW subreddit we problematise current understandings of the notion of ‘toxicity’ as applied to cultural or social sub-communities online and explai...
Hate speech, Censorship, and Freedom of Speech: The Changing Policies of Reddit
Journal of Data Mining & Digital Humanities
This paper examines the shift in focus of content policies and user attitudes on the social media platform Reddit. We do this by focusing on comments from general Reddit users on five posts made by admins (moderators) about updates to the Reddit Content Policy. All five concern what kind of content is allowed to be posted on Reddit and which measures will be taken against content that violates these policies. We use topic modeling to probe how the general discourse among Redditors has changed around limitations on content and, later, limitations on hate speech, or speech that incites violence against a particular group. We show that there is a clear shift in both content and user attitudes that can be linked to contemporary societal upheaval as well as newly passed laws and regulations, and we contribute to the wider discussion on hate speech moderation.
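As a rough illustration of the kind of topic-modeling probe described above, here is a minimal sketch using scikit-learn's LDA on a toy comment corpus. The corpus, parameters, and preprocessing are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical toy example: fit LDA over a handful of policy-discussion
# comments and print the top words per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "the new content policy bans hate speech",
    "free speech matters more than moderation",
    "admins should enforce the content policy",
    "moderation of speech is censorship",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [vocab[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```

A study like the one above would fit such a model per policy-announcement period and compare how topic distributions shift over time.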
Reddit quarantined: can changing platform affordances reduce hateful material online?
Internet Policy Review, 2020
This paper studies the efficacy of Reddit's quarantine, increasingly implemented on the platform as a means of restricting and reducing misogynistic and other hateful material. Using the case studies of r/TheRedPill and r/Braincels, the paper argues the quarantine successfully cordoned off affected subreddits and associated hateful material from the rest of the platform. It did not, however, reduce the levels of hateful material within the affected spaces. Instead, many users reacted by leaving Reddit for less regulated spaces, with Reddit making this hateful material someone else's problem. The paper therefore argues that the success of the quarantine as a policy response is mixed. This paper is part of Trust in the system, a special issue of Internet Policy Review guest-edited by Péter Mezei and Andreea Verteş-Olteanu.

Content moderation is an integral part of the political economy of large social media platforms (Gillespie, 2018). While social media companies position themselves as platforms which offer unlimited potential for free expression (Gillespie, 2010), these same sites have always engaged in some form of content moderation (Marantz, 2019). In recent years, in response to increasing pressure from the public, lawmakers and advertisers, many large social media companies have given up much of their free speech rhetoric and have become more active in regulating abusive, misogynistic, racist and homophobic language on their platforms. This has occurred in particular through banning and restricting users and channels (Marantz, 2019). In 2018, for example, a number of large social media companies banned the high-profile conspiracy theorist Alex Jones and his platform InfoWars (Hern, 2018), while in 2019 the web infrastructure company Cloudflare deplatformed the controversial site 8chan (Prince, 2019). In 2020 a number of platforms even began regulating material from President Donald Trump, with Twitter placing fact-checks and warnings on some of his tweets and the platform Twitch temporarily suspending his account (Copland and Davis, 2020). As one of the largest digital platforms in the world, Reddit has not been immune from this pressure. Built upon a reputation of being a bastion of free speech (Ohanian, 2013), Reddit has historically resisted censoring its users, despite the prominence of racist, misogynistic, homophobic and explicitly violent material on the platform (for examples,
Toward a Standard Approach for Echo Chamber Detection: Reddit Case Study
Applied Sciences, 2021
In a digital environment, the term echo chamber refers to an alarming phenomenon in which beliefs are amplified or reinforced by communication repetition inside a closed system, insulated from rebuttal. To date, a formal definition, as well as a platform-independent approach for its detection, is still lacking. This paper proposes a general framework to identify echo chambers on online social networks, built on top of features they commonly share. Our approach is based on a four-step pipeline that involves (i) the identification of a controversial issue; (ii) the inference of users’ ideology on the controversy; (iii) the construction of users’ debate network; and (iv) the detection of homogeneous meso-scale communities. We further apply our framework in a detailed case study on Reddit, covering the first two and a half years of Donald Trump’s presidency. Our main purpose is to assess the existence of Pro-Trump and Anti-Trump echo chambers among three sociopolitical issues, as w...
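A minimal sketch of steps (iii) and (iv) of this pipeline, assuming ideology scores from step (ii) are already in hand; the variable names and the greedy-modularity community detector are assumptions, not the paper's implementation.

```python
# Toy debate network: nodes are users, edge weights count interactions.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

leaning = {"u1": +1, "u2": +1, "u3": -1, "u4": -1}  # step (ii): inferred ideology
debates = [("u1", "u2"), ("u1", "u2"), ("u3", "u4"), ("u2", "u3")]

G = nx.Graph()  # step (iii): construct the debate network
for a, b in debates:
    w = G.get_edge_data(a, b, {"weight": 0})["weight"]
    G.add_edge(a, b, weight=w + 1)

# Step (iv): detect meso-scale communities; homogeneous leaning within a
# community is the echo-chamber signal.
for comm in greedy_modularity_communities(G, weight="weight"):
    scores = [leaning[u] for u in comm]
    print(sorted(comm), "mean leaning:", sum(scores) / len(scores))
```

A community whose mean leaning sits near +1 or -1 is ideologically homogeneous, which is what the framework flags as a candidate echo chamber.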
When the Echo Chamber Shatters: Examining the Use of Community-Specific Language Post-Subreddit Ban
2021
Community-level bans are a common tool against groups that enable online harassment and harmful speech. Unfortunately, the efficacy of community bans has only been partially studied and with mixed results. Here, we provide a flexible unsupervised methodology to identify in-group language and track user activity on Reddit both before and after the ban of a community (subreddit). We use a simple word frequency divergence to identify uncommon words overrepresented in a given community, not as a proxy for harmful speech but as a linguistic signature of the community. We apply our method to 15 banned subreddits, and find that community response is heterogeneous between subreddits and between users of a subreddit. Top users were more likely to become less active overall, while random users often reduced use of in-group language without decreasing activity. Finally, we find some evidence that the effectiveness of bans aligns with the content of a community. Users of dark humor communities ...
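The core idea, uncommon words overrepresented in one community relative to a baseline, can be sketched as a smoothed log frequency ratio; this toy version captures the spirit of the method and is not the paper's exact divergence measure.

```python
import math
from collections import Counter

# Toy corpora; real inputs would be tokenized subreddit comments.
community = "kek based kek redpill lol the the a".split()
baseline = "lol the the the a a nice day lol".split()

c, b = Counter(community), Counter(baseline)
nc, nb = sum(c.values()), sum(b.values())

def divergence(word, alpha=0.5):
    """Smoothed log-ratio of a word's relative frequency in the
    community versus the baseline corpus."""
    pc = (c[word] + alpha) / (nc + alpha * len(c))
    pb = (b[word] + alpha) / (nb + alpha * len(b))
    return math.log(pc / pb)

# Words with the highest divergence form the community's linguistic signature.
for w in sorted(set(community), key=divergence, reverse=True)[:3]:
    print(w, round(divergence(w), 2))
```

Tracking how often former members keep using these signature words after a ban is then a direct way to measure whether the in-group language survived the intervention.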
Bot Detection in Reddit Political Discussion
Proceedings of the Fourth International Workshop on Social Sensing - SocialSense'19, 2019
The existence of social media bots on political forums can muddle the perception of public opinion. Bot detection has been successful on platforms such as Twitter, Facebook and YouTube. However, our research focuses on characterizing suspicious behavior on the social media platform Reddit; this platform structurally differs from its peers in that users subscribe to page topics and contribute to the discussion through comments or topical posts. We hypothesize that persons who intend to influence public opinion would deploy paid users, or social media bots, to artificially amplify a political sentiment. We validate our hypothesis by using a network-based approach that reveals a fully connected band of users which includes users with the word "bot" in their name, as well as users who otherwise exhibit bot-like behavior.
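A minimal sketch of such a network-based screen, under toy assumptions: users who co-comment in the same threads are linked, dense cliques are extracted, and those containing bot-like names are flagged. The data, clique-size threshold, and name heuristic are all illustrative, not the paper's method.

```python
# Hypothetical co-commenting graph built from thread participation.
import networkx as nx
from itertools import combinations

threads = {
    "t1": ["newsbot1", "freedom_bot", "alice"],
    "t2": ["newsbot1", "freedom_bot", "bob"],
    "t3": ["alice", "bob"],
}

G = nx.Graph()
for users in threads.values():
    G.add_edges_from(combinations(users, 2))  # co-commenting tie

# A fully connected band of users is a maximal clique; inspect the ones
# that contain accounts with "bot" in the name.
for clique in nx.find_cliques(G):
    if len(clique) >= 3 and any("bot" in u for u in clique):
        print("suspicious band:", sorted(clique))
```

On real data the clique threshold would be much higher, and name matching would be only one of several behavioral signals.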
Studying Anti-Social Behaviour on Reddit with Communalytic
The chapter presents a new social media research tool for studying subreddits (i.e., groups) on Reddit called Communalytic. It is an easy-to-use, web-based tool that can collect, analyze and visualize publicly available data from Reddit. In addition to collecting data, Communalytic can assess the toxicity of Reddit posts and replies using a machine learning API. The resulting anti-social scores from the toxicity analysis are then added as weights to each tie in a "who replies to whom" communication network, allowing researchers to visually identify and study toxic exchanges happening within a subreddit. The chapter consists of two parts: first, it introduces our methodology and Communalytic’s main functionalities. Second, it presents a case study of a public subreddit called r/metacanada. This subreddit, popular among the Canadian alt-right, was selected due to its polarizing nature. The case study demonstrates how Communalytic can support researchers studying toxicity in ...
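A minimal sketch of the "who replies to whom" network weighted by toxicity, with a stub scorer standing in for the external ML toxicity API that Communalytic calls; the reply data and scoring rule are toy assumptions.

```python
# Toy toxicity-weighted reply network.
import networkx as nx

def toxicity(text):
    # Stand-in for a real ML toxicity classifier returning a 0..1 score.
    return 0.9 if "idiot" in text.lower() else 0.1

replies = [
    ("alice", "bob", "Interesting point."),
    ("carol", "bob", "You absolute idiot."),
    ("bob", "carol", "No need for that."),
]

G = nx.DiGraph()
for src, dst, text in replies:
    G.add_edge(src, dst, weight=toxicity(text))

# High-weight edges mark the toxic exchanges a researcher would inspect.
for src, dst, data in G.edges(data=True):
    print(f"{src} -> {dst}: toxicity {data['weight']:.1f}")
```

Visualizing this graph with edge thickness proportional to weight reproduces, in miniature, how the tool lets researchers spot toxic exchanges within a subreddit.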
ArXiv, 2021
Most platforms, including Reddit, face a dilemma when applying interventions such as subreddit bans to toxic communities: do they risk angering their user base by proactively enforcing stricter controls on discourse, or do they defer interventions at the risk of eventually triggering negative media reactions which might impact their advertising revenue? In this paper, we analyze Reddit’s previous administrative interventions to understand one aspect of this dilemma: the relationship between the media and administrative interventions. More specifically, we make two primary contributions. First, using a mediation analysis framework, we find evidence that Reddit’s interventions against communities that violate its content policy on toxic content occur because of media pressure. Second, using interrupted time series analysis, we show that media attention on communities with toxic content only increases the problematic behavior associated with that community (both within the community itself and across ...
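For the interrupted time series component, a standard segmented-regression sketch on synthetic weekly activity shows the shape of such an analysis; the specification, variable names, and data are assumptions, not the paper's actual model.

```python
# Segmented regression: level and slope change after a media event.
import numpy as np
import statsmodels.api as sm

weeks = np.arange(20)
event = (weeks >= 10).astype(float)   # indicator: media attention begins
since = np.clip(weeks - 10, 0, None)  # weeks elapsed since the event
rng = np.random.default_rng(0)
y = 50 + 0.5 * weeks + 8 * event + 1.5 * since + rng.normal(0, 2, 20)

X = sm.add_constant(np.column_stack([weeks, event, since]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # [baseline, pre-trend, level change, slope change]
```

Positive estimates on the level-change and slope-change terms would correspond to the finding that activity rises after media attention rather than falling.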
Kek, Cucks, and God Emperor Trump: A Measurement Study of 4chan's Politically Incorrect Forum and Its Effects on the Web
Proceedings of the International AAAI Conference on Web and Social Media, 2017
The discussion-board site 4chan has been part of the Internet's dark underbelly since its inception, and recent political events have put it increasingly in the spotlight. In particular, /pol/, the “Politically Incorrect” board, has been a central figure in the outlandish 2016 US election season, as it has often been linked to the alt-right movement and its rhetoric of hate and racism. However, 4chan remains relatively unstudied by the scientific community: little is known about its user base, the content it generates, and how it affects other parts of the Web. In this paper, we start addressing this gap by analyzing /pol/ along several axes, using a dataset of over 8M posts we collected over two and a half months. First, we perform a general characterization, showing that /pol/ users are well distributed around the world and that 4chan's unique features encourage fresh discussions. We also analyze content, finding, for instance, that YouTube links and hate speech are p...