Computational Propaganda and Misinformation: AI Technologies as Tools of Media Manipulation
The purpose of this study was to investigate how artificial intelligence (AI) influences and amplifies computational propaganda and misinformation efforts. The researcher was motivated by the growing sophistication of AI-driven technologies, such as deepfakes, bots, and algorithmic manipulation, which have transformed conventional propaganda strategies into more pervasive and damaging techniques of media manipulation. The study used a mixed-methods approach, combining quantitative data analysis from academic studies and digital forensic investigations with qualitative case studies of misinformation campaigns. The results highlighted key tactics, including the platform-specific use of X (formerly Twitter) to spread false information, emotional exploitation through fear-based messaging, and deliberate amplification through bot networks. According to this research, AI technologies amplified controversial content by exploiting algorithmic biases, thereby generating echo chambers and eroding confidence in democratic processes. The study also emphasized the ethical and sociopolitical issues raised by deepfake technologies and their capacity to manipulate the emotions of susceptible populations. To counteract AI-generated misinformation, the study recommended promoting digital literacy and developing more effective detection methods, such as digital watermarking. Future studies should concentrate on the long-term psychological effects of AI-driven misinformation on democratic participation and public trust, as well as on regulatory responses across countries. Furthermore, investigating how emerging AI technologies are shaping other media, such as video games and virtual reality, may help us better understand their effects on society as a whole.