misinformation – Techdirt

Ctrl-Alt-Speech: Is This The Real Life? Is This Just Fakery?

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Ben is joined by guest host Cathryn Weems, who has held T&S roles at Yahoo, Google, Dropbox, Twitter and Epic Games. They cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Concentrix, the technology and services leader driving trust, safety, and content moderation globally. In our Bonus Chat at the end of the episode, clinical psychologist Dr Serra Pitts, who leads the psychological health team for Trust & Safety at Concentrix, talks to Ben about how to keep moderators healthy and safe at work and the innovative use of heart rate variability technology to monitor their physical response to harmful content.
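As a bit of technical context on that bonus chat: heart rate variability is commonly summarized with metrics like RMSSD, the root mean square of successive differences between heartbeats, where lower values are often read as a physiological stress response. Below is a minimal, purely illustrative sketch of that calculation; the function and sample numbers are hypothetical and have nothing to do with Concentrix's actual tooling.

```python
# Illustrative only: summarizing heart rate variability (HRV) from
# inter-beat (RR) intervals using RMSSD, a standard short-term HRV metric.
# Sample data and function are hypothetical, not Concentrix's tooling.
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals (ms).

    Lower RMSSD generally indicates reduced parasympathetic activity,
    which is one reason HRV is used as a proxy for stress.
    """
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR intervals (milliseconds) from a wearable sensor.
baseline = [812, 790, 845, 830, 805, 820]
during_review = [700, 695, 710, 702, 698, 705]  # less variation, more stress

print(f"baseline RMSSD:      {rmssd(baseline):.1f} ms")
print(f"during review RMSSD: {rmssd(during_review):.1f} ms")
```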

Filed Under: ai, artificial intelligence, content moderation, disinformation, elon musk, misinformation
Companies: google, telegram, twitter, x

Newsom’s Unconstitutional AI Bills Draw First Amendment Lawsuit Within Minutes Of Signing

from the stop-passing-shit-bills dept

I do not understand why California Governor Gavin Newsom thinks he has to be the Democratic equivalent of Texas Governor Greg Abbott or Florida Governor Ron DeSantis, signing obviously unconstitutional laws for the sake of winning culture war arguments.

It’s really shameful. It’s cheap political pandering, while disrespecting the rights of everyone he’s supposed to represent.

Newsom and his Attorney General, Rob Bonta, keep losing First Amendment lawsuits challenging the bad internet laws he keeps signing (even if Newsom pretends he won them). And he’s wasting California taxpayer money fighting losing battles while engaging in petty political stunts.

The latest are a pair of obviously unconstitutional AI bills, AB 2655 and AB 2839, about AI and deepfakes (and possibly AB 2355, which might be slightly more defensible, but not much). While much of the media coverage has been about SB 1047, an equally bad bill Newsom seems unlikely to sign, the California legislature spent this session coming up with a ton of awful and unconstitutional ideas.

The laws at issue here place limits on “election-related deepfakes.” That gets a bit tricky from a First Amendment standpoint, because two things push in opposite directions. The first is that election-related political speech is considered some of the most strongly protected, most untouchable speech under the First Amendment.

A big reason why we have that First Amendment is that the founders wanted to encourage a vigorous and sometimes contentious debate on the issues and our leaders. For that reason, I think courts will pretty clearly toss out these laws as unconstitutional.

The one thing pushing back on this is that there is one area where courts have granted states more leeway to say certain election-related information is not protected: lies about the actual mechanics of voting, such as where, when, and how to vote. There was a recent paper looking at some of these restrictions.

But the problem is that the laws Newsom just signed are not, in any way, limited in that manner. AB 2839 bans the sharing of some election-related deepfakes around election time.

AB 2655 then requires “a large online platform” to block political deepfakes around election time. Notably, it exempts broadcast TV, newspapers, magazines, and vaguely defined “satire or parody” content, which adds to the list of reasons it’s clearly unconstitutional. Similar laws have been thrown out for being “underinclusive” in not covering other, similar content, since that proves the government’s restriction is not actually necessary.

Of course, that “satire or parody” exception just means everyone sharing these videos will claim they’re satire or parody. Any lawsuits would then be fought over whether or not they’re satire or parody, and that’s something judges shouldn’t be deciding.

AB 2355 requires political ads to disclose if they used AI. This is… kinda meaningless? As digital creation tools will increasingly use AI in the background for all sorts of things (fix the lighting! adjust the cloud cover!), this gets kind of silly.

Gavin Newsom tweeted about how he would use these laws to force Elon to remove a stupid, obvious deepfake of Kamala Harris that Elon had posted, as if to play up that this is an unconstitutional stunt.

[Screenshot: Newsom’s tweet]

Look, I get that Newsom isn’t big on the First Amendment. But tweeting out that you signed a bill to make sure a specific piece of content gets removed from social media is pretty much waving a giant red flag that says “HEY, I’M HERE VIOLATING THE FIRST AMENDMENT, LOOK AT ME, WHEEEEEEEE!!!”

Anyway, probably within minutes of Newsom signing the bills into law, the first lawsuit challenging 2655 and 2839 was filed. It was filed by Christopher Kohls, who created the video that Elon shared, and which Newsom directly called out as one he intended to forcibly remove. As expected, Kohls claims his video (which is, I assure you, very, very stupid) is a “parody.”

On July 26, 2024, Kohls posted a video parodying candidate Kamala Harris’s first presidential campaign ad. The humorous YouTube video (“the July 26 video”) is labeled “parody” and acknowledges “Sound or visuals were significantly edited or digitally generated.”

The July 26 video features AI-generated cuts of a voice sounding like Vice President Harris narrating why she should be President. In the video “Harris” announces she is the “Democrat candidate for President because Joe Biden”—her prior running mate, current boss, and the President—“finally exposed his senility at the” infamous presidential debate with former President Trump on June 27, 2024. The video’s voiceover closely resembles Harris’s voice and the production itself mirrors the aesthetic of a real campaign ad—using clips from Harris’s own campaign videos—but the comedic effect of the video becomes increasingly clear with over-the-top assertions parodying political talking points about Harris and her mannerisms. “She” claims to have been “selected because [she is] the ultimate diversity hire and a person of color, so if you criticize anything [she] say[s], you’re both sexist and racist.” The “Harris” narrator claims that “exploring the significance of the insignificant is in itself significant,” before the video cuts to a clip of the real Harris making similarly incomprehensible remarks about “significance.”

Kohls, who is ideologically opposed to Harris’ political agenda, created this content to comment about Harris’s candidacy in humorous fashion

So here’s the thing. If Newsom had kept his mouth shut, California AG Rob Bonta could have turned around and said Kohls has no standing to sue here, because the video is clearly a parody and the law exempts parody videos. But Newsom, wanting to be a slick social media culture warrior, opened his trap and told the world that the intent of this law was to remove videos exactly like Kohls’ stupid video.

And as stupid as I think Kohls’ video is, and as pathetic as it is that Elon would retweet it, the lawsuit is correct on this:

Political speech like Kohls’ is protected by the First Amendment

I don’t think the lawsuit is particularly well done. It’s a bit sloppy, and the arguments are not as strong as they might otherwise be. I think there could be better filings challenging these laws, but this law seems so blatantly unconstitutional that even a poorly argued case should be able to win.

Yes, this is just as bad as the awful laws being passed in Florida and Texas (and to some extent New York). It’s kind of incredible how these four states (two strongly Republican, two strongly Democratic) just keep passing the worst, most obviously unconstitutional internet/speech laws, thinking that just because partisan idiots cheer them on, it must be fine.

Filed Under: 1st amendment, ab 2355, ab 2655, ab 2839, ai, california, christopher kohls, deepfake videos, deepfakes, elections, elon musk, free speech, gavin newsom, misinformation, moral panic, parody, politics, rob bonta
Companies: twitter, x

Educate, Don’t Isolate: How To Combat Elon Musk’s Misinformation Machine

from the elon-is-not-a-nation-state dept

You may have heard that Elon Musk and the UK are fighting. And both of them are looking ridiculous.

Riots are happening across the UK in response to the stabbing deaths of three children. The background: a bunch of shitlord agitators used Telegram to organize further nonsense on other social media channels, spreading misleading claims about who was responsible, and riots followed. Elon Musk, as he’s been known to do, has been a gullible simp for the lies and conspiracy theories the agitators are pushing out blaming asylum seekers and refugees, not just endorsing the nonsense, but spreading it further.

Most normal people would recognize that this is not great. But there’s a question of what to do about it, and the UK seems to be choosing the worst possible approach: yelling at Elon Musk, which only enables him to pretend he’s a martyr for free speech.

I wrote a piece at the Daily Beast talking about how the UK’s response to Musk is extremely counterproductive. The key point is that the UK’s Secretary for Innovation and Technology, Peter Kyle, says they need to treat Elon as if he’s a nation state. But, as I argue in my piece, that makes no sense, in part because nation states and individuals are very different, and because the affordances for dealing with each are totally different. But, also, because it only works to Musk’s advantage here.

The different realities and the different ways that nation states and private entities can and do interact lead to very different affordances and very different outcomes.

Also, most of the interactions over the past decade were along the lines of: “you need to be better about limiting the spread of content designed to incite violence.” Often, these are situations where the companies agree and want to limit the spread of such content for their own reasons, but may just disagree on how to do so.

It’s entirely different when one of the companies is owned and operated by an individual who himself is one of the leading spreaders of that kind of content.

Most companies don’t want to be spreading that kind of information because it’s bad for their own reputation and the willingness of both advertisers and users to continue to use the platform. But, in Elon’s case, there appear to be extraordinarily different incentives at play.

As I note, Elon lives for this kind of thing, and it only fuels him:

Elon relishes fighting with certain governments (so long as they’re not run by his ideological kindred spirits among right-leaning authoritarians). Pushing or threatening Elon over this is likely to just lead him to playing “free speech martyr” as he’s done in the past. And, to some extent, he wouldn’t be wrong.

The key is understanding that the UK isn’t necessarily mad about general disinformation trending on ExTwitter. They’re mad about what Elon himself is saying. And, yes, even in the UK, with its much weaker free speech protections, if you’re arguing over the right of one person to say nonsense, it just becomes a fight over free speech. And there are plenty of others saying the same thing; Musk isn’t the only one doing this.

So, I suggest in the article that rather than focusing on yelling about Elon, we should do more to educate people not to fall for the kind of nonsense that Elon falls for. We should focus on enabling more competition and other places for people to speak, so that one person isn’t in control of one of the most popular spots to speak.

In other words, focus on making fewer people care what he has to say, rather than focusing on trying to silence Elon.

And, of course, some of that should be to take a page from the Democratic party of the last few weeks and move away from scolding to mockery. Elon is the richest man in the world, has access to any expert he wants, and can absolutely find out what’s true. Shouldn’t people start calling out how absolutely ridiculous it is that he instead decides to get his information from a rando named CatTurd2?

There’s a lot more in the full article, so go check it out. I fear we’re just going to keep making the same mistake over and over again, which only plays into the framing that we need people like Elon controlling our speech platforms. He’s not a nation state. He’s a very, very wealthy gullible sucker spreading conspiracy theories. Just point out how ridiculous he is and move on.

Filed Under: elon musk, misinformation, peter kyle, riots, uk
Companies: twitter, x

Techdirt Podcast Episode 397: The People Who Turn Lies Into Reality

from the checking-in dept

It was over six years ago when we last had Renée DiResta on the podcast for a detailed discussion about misinformation and disinformation on social media. Since then, she’s not only led extensive research on the subject, she’s also become a central figure in the fever-dream conspiracy theories of online disinformation peddlers. Her new book Invisible Rulers: The People Who Turn Lies Into Reality dives deep into the modern ecosystem of online disinformation, and she joins us again on this week’s episode to discuss the many things that have changed in the past six years.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Filed Under: disinformation, misinformation, podcast, renee diresta, social media

Ctrl-Alt-Speech: The Internet Is (Still) For Porn, With Yoel Roth

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Yoel Roth, former head of trust & safety at Twitter and now head of trust & safety at Match Group. Together they cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: content moderation, deplatforming, digital services act, disinformation, dsa, eu, gaza, israel, misinformation
Companies: meta, shein, temu, twitch, twitter, x

Grandma’s Retweets: How Suburban Seniors Spread Disinformation

from the get-the-karens-to-log-out dept

In recent years, there have been concerns about social media and disinformation. The narrative has three dominant threads: (1) foreign troll farms pushing disinfo, (2) grifter “influencers” pushing disinfo, and (3) the poor kids these days suckered in by disinformation.

A new study in Science suggests that instead of the kids or the trolls, perhaps we should be concerned about suburban moms. We discussed this on the most recent Ctrl-Alt-Speech episode, but let’s look more closely at the details.

The authors of the report got access to data on over 600,000 registered voters on Twitter (back when it was still Twitter), looking at what they shared during the 2020 election. They found a small number of “supersharers” of false information, who were disproportionately older, suburban, Republican women.

We found that supersharers were important members of the network, reaching a sizable 5.2% of registered voters on the platform. Supersharers had a significant overrepresentation of women, older adults, and registered Republicans. Supersharers’ massive volume did not seem automated but was rather generated through manual and persistent retweeting. These findings highlight a vulnerability of social media for democracy, where a small group of people distort the political reality for many.
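To make the methodology concrete, here is a minimal sketch (not the authors’ code) of how one might flag “supersharers” in a dataset like the one described: matched voter/Twitter records with counts of fake-news links shared, with the heaviest sharers flagged at a percentile cutoff. The dataframe schema and the 99th-percentile threshold here are illustrative assumptions; the paper defines its own cutoff.

```python
# A minimal sketch (not the study's actual pipeline) of flagging the
# heaviest sharers of fake news in a matched voter/account dataset.
# The schema, numbers, and cutoff are hypothetical.
import pandas as pd

# Hypothetical schema: one row per registered voter matched to an account.
users = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5],
    "age": [67, 34, 58, 71, 29],
    "party": ["R", "D", "R", "R", "D"],
    "fake_news_shares": [4210, 3, 57, 2890, 0],
})

# Flag the top sliver of the fake-news-sharing distribution; the paper
# defines the exact "supersharer" threshold.
cutoff = users["fake_news_shares"].quantile(0.99)
users["supersharer"] = users["fake_news_shares"] >= cutoff

# Compare demographics of supersharers vs. everyone else, mirroring the
# paper's comparison that found older Republican women overrepresented.
print(users.groupby("supersharer")[["age"]].mean())
print(users[users["supersharer"]]["party"].value_counts())
```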

The researchers found that although the number of supersharers seemed low, they had a decent following. It’s not surprising, as people are more likely to follow those who share “useful” links (though, obviously it depends on what people consider “useful”).

… we found that supersharers had significantly higher network influence than both the panel and the SS-NF groups (P < 0.001). The median supersharer ranked in the 86th percentile in the panel in terms of network influence and measured 29% higher than the median SS-NF (supplementary materials, section S11). Next, we measured engagement with supersharers’ content as the fraction of panelists who replied, retweeted, or quoted supersharers’ tweets relative to their number of followers in the panel. More supersharers had people engaging with their content compared with the panel (P < 0.001), and more panelists engaged with supersharers’ content compared with all groups

None of this is to say that there aren’t Democrats who share fake news (there are), or men (obviously, there are), or young people (again, duh). But there appears to be a cluster of older Republican women who do so at a ridiculous pace. The chart below is fairly damning: even though the panel skewed Democratic, Democrats were far more likely to show up among the supersharers of “non-fake” news (“SS-NF”) than among the fake news supersharers, and much less likely to be “supersharers” at all.

[Chart: party affiliation of supersharers vs. non-fake news supersharers and the panel]

The age distribution is pretty notable as well:

[Chart: age distribution of supersharers vs. the other groups]

Basically, the further down the false-info-spreading chart you go, the more likely you are to be older.

This isn’t wholly surprising. It’s long been said that the worst misinfo spreaders are boomers on social media who lack the media literacy to understand that Turducken301384 isn’t a reliable source. But it’s nice to see a study backing that up.

What will be more interesting is to see what happens over time. Will the issue of disinformation and misinformation diminish as younger, internet-savvy generations grow up, or will new issues arise?

My sense is that part of this is just the “adjustment” period to a new communication medium. A decade and a half ago, Clay Shirky talked about the generational divide over new technologies, and how it took more or less a century of upheaval before people became comfortable with the printing press existing and being able to produce things that (*gasp*) everyone might read.

It feels like we might be going through something similar with the internet. Though it’s frustrating that the policy discussion is mostly dominated by some of that older generation who really, really, really wants to blame the tools and the young people, rather than maybe taking a harder look at themselves.

Filed Under: disinformation, misinformation, republicans, research, senior citizens, supersharers, superspreaders

Vivek Ramaswamy Buys Pointless Buzzfeed Stake So He Can Pretend He’s ‘Fixing Journalism’

from the puffery-and-performance dept

Fri, May 31st 2024 05:30am - Karl Bode

We’ve noted repeatedly how the primary problem with U.S. media and journalism often isn’t the actual journalists, or even the sloppy automation being used to cut corners; it’s the terrible, trust fund brunchlords that fail upwards into positions of power. The kind of owners and managers who, through malice or sheer incompetence, turn the outlets they oversee into either outright propaganda mills (Newsweek), or money-burning, purposeless mush (Vice, Buzzfeed, The Messenger, etc., etc.)

Very often these collapses are framed with the narrative that doing journalism online somehow simply can’t be profitable; something quickly disproven every time a group of journalists go off to start their own media venture without a useless executive getting outsized compensation and setting money on fire (see: 404 Media and countless other successful worker-owned journalistic ventures).

Of course these kinds of real journalistic outlets still have to scrap and fight for every nickel. At the same time, there’s just an unlimited amount of money available if you want to participate in the right wing grievance propaganda engagement economy, telling young white males that all of their very worst instincts are correct (see: Rogan, Taibbi, Rufo, Greenwald, Tracey, Tate, Peterson, etc. etc. etc. etc.).

One key player in this far right delusion farm, failed presidential opportunist Vivek Ramaswamy, recently tried to ramp up his own make-believe efforts to “fix journalism.” He did so by purchasing an 8 percent stake in what’s left of Buzzfeed, after it basically gave up on trying to do journalism last year.

Ramaswamy’s demands are silly toddler gibberish. He wants the outlet to pivot to video and to hire such intellectual heavyweights as Tucker Carlson and Aaron Rodgers:

“Mr. Ramaswamy is pushing BuzzFeed to add three new members to its board of directors, to hone its focus on audio and video content and to embrace “greater diversity of thought,” according to a copy of his letter shared with The New York Times.”

By “greater diversity of thought,” he means pushing facts-optional right wing grievance porn and propaganda pretending to be journalism, in a bid to further distract the public from issues of substance, and fill American heads with pudding.

But it sounds like Ramaswamy couldn’t even do that successfully. For one thing, Buzzfeed simply isn’t relevant as a news company any longer. Gone is the real journalism peppered between cutesy listicles, replaced mostly with mindless engagement bullshit. For another, Buzzfeed CEO Jonah Peretti (and affiliates) still hold 96 percent of the Class B stock, giving them 50 times the voting rights of Ramaswamy’s shares.

So as Elizabeth Lopatto at The Verge notes, Ramaswamy is either trying to goose and then sell his stock, or is engaging in a hollow and performative PR exercise where he can pretend that he’s “fixing liberal media.” Or both. The entire venture is utterly purposeless and meaningless:

“You’ve picked Buzzfeed because the shares are cheap, and because you have a grudge against a historically liberal outlet. It doesn’t matter that Buzzfeed News no longer exists — you’re still mad that it famously published the Steele dossier and you want to replace a once-respected, Pulitzer-winning brand with a half-assed “creators” plan starring Tucker Carlson and Aaron Rodgers. Really piss on your enemies’ graves, right, babe?”

While Ramaswamy’s bid is purely decorative, it, of course, was treated as a very serious effort to “fix journalism” by other pseudo-news outlets like the NY Post, The Hill, and Fox Business. It’s part of the broader right wing delusion that the real problem with U.S. journalism isn’t that it’s improperly financed and broadly mismanaged by raging incompetents, but that it’s not dedicated enough to coddling wealth and power. Or telling terrible, ignorant people exactly what they want to hear.

Of course none of this is any dumber than what happens in the U.S. media sector every day, as the Vice bankruptcy or the $50 million Messenger implosion so aptly illustrated. U.S. journalism isn’t just dying; the corpses of what remains are being abused by terrible, wealthy puppeteers with no ideas and nothing of substance to contribute (see the postmortem abuse of Newsweek or Sports Illustrated), and in that sense Vivek fits right in.

Filed Under: disinformation, journalism, media, misinformation, politics, propaganda, vivek ramaswamy
Companies: buzzfeed

Freshly Indicted Biden Deepfaker Prompts Uncharacteristically Fast FCC Action On AI Political Robocalls

from the careful-what-you-wish-for dept

Fri, May 24th 2024 05:30am - Karl Bode

Earlier this year you probably saw the story about how a political consultant used a (sloppy) deepfake of Joe Biden in a bid to trick voters into staying home during the presidential primary. It wasn’t particularly well done; nor was it clear it reached all that many people or had much of an actual impact.

But it clearly spooked the government, which was already nervously watching AI get quickly integrated in global political propaganda and disinformation efforts.

The Biden deepfake quickly resulted in an uncharacteristically efficient joint investigation by the FCC and state AGs that identified multiple culprits, including Life Corp., a Texas telecom marketing company; a political consultant by the name of Steve Kramer; and a magician named Paul Carpenter, who apparently “holds a world record in straitjacket escapes.”

But Kramer was the “mastermind” of the effort, and when busted back in February, claimed to NBC News that he was secretly trying to prompt regulatory action on robocalls, likening himself to American Revolutionary heroes Paul Revere and Thomas Paine (seriously):

“This is a way for me to make a difference, and I have,” he said in the interview. “For $500, I got about $5 million worth of action, whether that be media attention or regulatory action.”

This week Kramer was indicted in New Hampshire, and now faces five counts that include bribery, intimidation and suppression. Now that he’s been formally indicted, Kramer, likely heeding the advice of counsel, is significantly less chatty than he was earlier this year.

Whether he’s telling the truth about his intentions or not, Kramer has gotten his wish. The whole mess has prompted numerous new AI-related efforts by the historically somewhat feckless FCC. Back in February, the FCC proposed a new rule declaring such calls illegal under the Telephone Consumer Protection Act (TCPA), which it already uses to combat robocalls (often badly).

And this week, the FCC announced it would also be considering new rules requiring disclosure of the use of AI in political ads:

“As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used,” [FCC boss Jessica] Rosenworcel said in a news release. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”

We’ve explored in great detail how the FCC has been a bit of a feckless mess when it comes to policing robocalls. In part that’s because it’s had its legal authority chipped away by industry lobbying and dodgy court rulings for years, but it’s also because big telecom giants (affixed to our domestic surveillance apparatus) and “legit” marketing companies lobby revolving door regulators for rule loopholes.

Everything the FCC does, however wimpy, inevitably faces a major court challenge by corporations keen on making money off of whatever the FCC is trying to protect the public from. It’s why, in the year 2024, scammers and scumbags have rendered our voice communications networks nearly unusable. Hopefully the FCC’s efforts to combat AI deepfake political robocalls result in a more productive outcome.

Filed Under: ai, automation, deepfake, disinformation, fcc, misinformation, politics, propaganda, robocalls

Fake ‘Pink Slime’ Propaganda Newspapers Surge Ahead Of Fall Election

from the I've-got-a-head-full-of-pudding dept

Mon, Apr 15th 2024 05:26am - Karl Bode

For decades, academics have been trying to warn anybody who’d listen that the death of your local newspaper and the steady consolidation of local TV broadcasters was creating either “news deserts,” or local news that’s mostly just low-calorie puffery and simulacrum. Despite claims that the “internet would fix this,” fixing local journalism just wasn’t profitable enough for the dipsy brunchlords that fail upward into positions of prominence at most media companies, so the internet… didn’t.

Those same academics will then tell you that the end result is an American populace that’s decidedly less informed and more divided, something that not only has a measurable impact on electoral outcomes, but paves the way for more state and local corruption (since fewer journalists are reporting on stuff like local city council meetings or local political decisions). It also opened the door to authoritarianism.

Every six months or so, a news report will emerge showing how all manner of political propagandists and bullshit artists have rushed to fill the vacuum created by longstanding policy failures and our refusal to competently fund local journalism at scale.

A particular problem has been so-called “pink slime” newspapers: fake local newspapers built by partisan operatives to seed misinformation and propaganda in the minds of poorly educated and already misinformed local voters.

Priyanjana Bengani, a senior researcher at the Tow Center for Digital Journalism at Columbia University, studied the phenomenon in 2022 and found that there were 1,200 bogus local news outlets around the country, all feeding gullible readers a steady diet of misleading bullshit (on top of the bullshit they already consume online).

And, as expected, the problem is accelerating as we head into another election season. The total number of such fake newspapers has tripled since 2019, and now roughly equals the number of real journalism organizations in America. In many instances, these networks are better funded and better organized than real journalism orgs, which find themselves relentlessly under fire from those with wealth and power who’d prefer journalism simulacrum over hard-nosed reporting:

“Kathleen Carley, a computer science professor at Carnegie Mellon University, said her research suggests that following the 2022 midterms “a lot more money” is being poured into pink slime sites, including advertising on Meta.

“A lot of these sites have had makeovers and look more realistic,” she said. “I think we’ll be seeing a lot more of that moving forward.”

When the “both sides” press covers this pink slime phenomenon, they sometimes try to imply that this kind of stuff is happening perfectly symmetrically among both major parties (as this Financial Times story does). But one NPR report indicates that roughly 95 percent of the fake newspapers they tracked were created to aid Republican candidates.

Angry at the factual reality espoused by academia, science, and journalism, the ever-more-extremist U.S. right wing has engaged in a very successful 45+ year effort to undermine U.S. journalism, academia, and even libraries at every turn, and then replace them with a vast and highly successful propaganda and delusion network across AM radio, broadcast TV, cable TV, and now the internet.

It’s a massive propaganda ecosystem that extends way beyond fake newspapers. It’s a self-contained participatory alternate reality where ideology is king and facts no longer matter. It’s everything academics spent decades warning us about. And, if you somehow hadn’t noticed in the Trump era, it’s working. Just ask your family members who think a NYC real estate conman is pious.

Democrats tend to be feckless and incoherent when it comes to forceful counter-messaging against increasingly radical right wing propaganda. They also haven’t understood the severity of the problem, and have generally avoided adopting any kind of coherent media reform policy. If they do respond to the problem, it will likely involve behavior that looks similar.

Meanwhile many in the academic and journalism industries still don’t seem to have the slightest awareness they’re under systemic, existential attack, often blaming the implosion on ambiguous but somehow always unavoidable market realities.

But the evidence is everywhere you look. Journalists are being fired by the thousands; folks with expertise are being replaced by incompetent brunchlords; and the ad-engagement based infotainment economy continues to shift from real reporting to controversy-churning, distraction-engagement bait.

There’s plenty we could do to address the problem. We could adopt stronger education and media criticism standards, like Finland’s, to prepare kids for a world full of propaganda. We could staff outlets with competent leadership and find new and creative ways to fund real, independent journalism. We could adopt media policies that rein in mindless consolidation, which tends to steadily erode opinion diversity.

But we do absolutely none of that because it’s simply not profitable enough. And in a country where mindlessly chasing wealth always takes top priority, you ultimately get what you pay for.

Filed Under: consolidation, disinformation, fake news, fake newspapers, journalism, misinformation, pink slime, politics, propaganda

Elon’s Censorial Lawsuit Against Media Matters Inspiring Many More People To Find ExTwitter Ads On Awful Content

from the elon-should-learn-about-the-streisand-effect dept

We’ve already discussed the extremely censorial nature of ExTwitter’s lawsuit against Media Matters for accurately describing ads from major brands that appeared next to explicitly neo-Nazi content. The lawsuit outright admits that Media Matters did, in fact, see those ads next to that content. Its main complaint is that Elon thinks Media Matters implied such ads regularly appear next to that content, when (according to him) they only rarely do, even though he admits the site mostly allows that content.

Of course, there are a few rather large problems with all of this. The first is that the lawsuit admits that what Media Matters observed and said is truthful. The second is that while Elon and his fans keep insisting the problem is how often those ads appear next to such content, Media Matters never made any claim about frequency. And, as IBM noted in pulling its ads, it wants a zero tolerance policy on its ads showing up next to Nazi content, meaning that even if only Media Matters employees ever saw that pairing, that’s still one too many people.

But there’s a bigger problem: in making a big deal out of this and filing one of the worst SLAPP suits I’ve ever seen, all while claiming that Media Matters “manipulated” things (even as the lawsuit admits that it did no such thing), it is only begging more people to go looking for ads appearing next to terrible content.

And they’re finding them. Easily.

As the DailyDot pointed out, a bunch of users started looking around and found that ads were being served next to the tag #HeilHitler and “killjews” among other neo-Nazi content and accounts. Avi Bueno kicked things off, noting that he didn’t need to do any of the things the lawsuit accuses Media Matters of doing:

[Screenshots: Avi Bueno’s posts showing major brand ads next to neo-Nazi content]

Of course, lots of others found similar things, again without any sort of “manipulation,” showing that it was possible to see big name brand ads next to vile content in ways even easier to find than Media Matters ever implied.

[Screenshots: more brand ads found next to similar content]

Some users started calling for the #ElonHitlerChallenge, asking users to search the hashtag #heilhitler and screenshot the ads they found:

[Screenshot: a call for the #ElonHitlerChallenge]

Bizarrely, a bunch of people found that if you searched that hashtag, ExTwitter recommended you follow the fast food chain Jack in the Box.

[Screenshot: Jack in the Box recommended on a #heilhitler search]

On Sunday evening I tested this, and it’s true: if you do a search on #heilhitler and then look at the “people” it recommends you follow, it lists two authenticated accounts, Jack in the Box and Linda Yaccarino, followed by a bunch of accounts with “HeilHitler” either in their username or display name. Cool cool.

[Screenshot: recommended accounts for the #heilhitler search]

Meanwhile, if Musk thought that his SLAPP suits against the Center for Countering Digital Hate and Media Matters were somehow going to stop organizations from looking to see if big time company ads are showing up next to questionable content, he seems to have predicted poorly.

A few days after the lawsuit against Media Matters, NewsGuard released a report looking at ads that appeared “below 30 viral tweets that contained false or egregiously misleading information” regarding the Israeli/Hamas conflict. And, well, it’s not good news for companies that believe in trying to avoid having their ads appear next to nonsense.

These 30 viral tweets were posted by ten of X’s worst purveyors of Israel-Hamas war-related misinformation; these accounts have previously been identified by NewsGuard as repeat spreaders of misinformation about the conflict. These 30 tweets have cumulatively reached an audience of over 92 million viewers, according to X data. On average, each tweet was seen by 3 million people.

A list of the 30 tweets and the 10 accounts used in NewsGuard’s analysis is available here.

The 30 tweets advanced some of the most egregious false or misleading claims about the war, which NewsGuard had previously debunked in its Misinformation Fingerprints database of the most significant false and misleading claims spreading online. These include that the Oct. 7, 2023, Hamas attack against Israel was a “false flag” and that CNN staged footage of an October 2023 rocket attack on a news crew in Israel. Half of the tweets (15) were flagged with a fact-check by Community Notes, X’s crowd-source fact-checking feature, which under the X policy would have made them ineligible for advertising revenue. However, the other half did not feature a Community Note. Ads for major brands, such as Pizza Hut, Airbnb, Microsoft, Paramount, and Oracle, were found by NewsGuard on posts with and without a Community Note (more on this below).

In total, NewsGuard analysts cumulatively identified 200 ads from 86 major brands, nonprofits, educational institutions, and governments that appeared in the feeds below 24 of the 30 tweets containing false or egregiously misleading claims about the Israel-Hamas war. The other six tweets did not feature advertisements.

As NewsGuard notes, the accounts in question appear to pass the threshold to make money from the ads on their posts:

It is worth noting that to be eligible for X’s ad revenue sharing, account holders must meet three specific criteria: they must be subscribers to X Premium ($8 per month), have garnered at least five million organic impressions across their posts in the past three months, and have a minimum of 500 followers. Each of the 10 super-spreader accounts NewsGuard analyzed appears to fit those criteria.
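Expressed as a quick sanity check, those three criteria reduce to a simple predicate. The sketch below is illustrative only; the field names and function are hypothetical, not X’s actual API or enforcement logic.

```python
# A minimal sketch of the three eligibility criteria quoted above.
# Field names and the function are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Account:
    is_premium: bool               # subscribed to X Premium ($8/month)
    organic_impressions_90d: int   # organic impressions, past three months
    followers: int

def eligible_for_ad_revenue(acct: Account) -> bool:
    """Mirror the three criteria NewsGuard cites for ad revenue sharing."""
    return (
        acct.is_premium
        and acct.organic_impressions_90d >= 5_000_000
        and acct.followers >= 500
    )

# Example: an account resembling the "super-spreader" profile described.
spreader = Account(is_premium=True,
                   organic_impressions_90d=92_000_000,
                   followers=250_000)
print(eligible_for_ad_revenue(spreader))  # True
```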

Hell, NewsGuard even found that the FBI is paying for ads on ExTwitter, and they’re showing up next to nonsense:

For example, NewsGuard found an ad for the FBI on a Nov. 9, 2023, post from Jackson Hinkle that claimed a video showed an Israeli military helicopter firing on its own citizens. The post did not contain a Community Note and had been viewed more than 1.7 million times as of Nov. 20.

This seems especially noteworthy given the false Twitter Files claim (promoted by Elon Musk) that any time the FBI gives a company money, it’s for “censorship.” In that case, the FBI reimbursed Twitter for information lookups, which is required under the law.

[Screenshot: FBI ad appearing below the misleading post]

Either way, good job, Elon. By filing the world’s worst SLAPP suit against Media Matters and insisting that their report about big name brands appearing next to awful content was “manipulated,” you’ve made sure that lots of people tested that claim, and found that it was quite easy to see big brand ads next to terrible content.

Filed Under: ads, brand safety, elon musk, hate, misinformation, neonazis
Companies: media matters, newsguard, twitter, x