Operationalising ‘toxicity’ in the manosphere: Automation, platform governance and community health
Convergence: The International Journal of Research into New Media Technologies
Abstract
Social media platforms have been struggling to moderate at scale. In an effort to better cope with content moderation, discussion has turned to the role that automated machine-learning (ML) tools might play. The development of automated systems by social media platforms is a notoriously opaque process, and public values that pertain to the common good are at stake within these often-obscured processes. One site in which social values are being negotiated is the framing of what platforms consider ‘toxic’ in the development of automated moderation processes. This study takes into consideration differing notions of toxicity – community, platform and societal – by examining three measures of toxicity and community health (the ML tool Perspective API; Reddit’s 2020 Content Policy; and the Sense of Community Index-2) and how they are operationalised in the context of r/MGTOW – an antifeminist group known for its misogyny. Several stages of content analysis were conducted on the top...