
GOP Really Committed To The Bit That Speech They Don’t Like Is Censorship

from the the-grifter-cinematic-universe-expands dept

The House Oversight Committee is investigating NewsGuard, a private company, for supposed “censorship” for the crime of… offering its own opinions on the quality of news sites. The old marketplace of ideas seems to keep getting rejected whenever Republicans find that their ideas aren’t selling quite as well as they’d hoped.

Up is down, left is right, black is white, day is night. The modern GOP, which has left any semblance of its historical roots in the trash, continues to make sure that “every accusation, a confession” is the basic party line. And now, it’s claiming that free speech is censorship.

Apparently Rep. James Comer was getting kinda jealous that his good buddy Rep. Jim Jordan was out there weaponizing the government to suppress speech, all while pretending it was in an attempt to stop the weaponization of the government to suppress speech.

Comer heads the House Committee on Oversight and Accountability. He has apparently decided that it’s his job to investigate companies for the kind of speech he dislikes. In this case, it’s NewsGuard he’s investigating.

Today, House Committee on Oversight and Accountability Chairman James Comer (R-Ky.) launched an investigation into the impact of NewsGuard on protected First Amendment speech and its potential to serve as a non-transparent agent of censorship campaigns. In a letter to NewsGuard Chief Executive Officers Steven Brill and Gordon Crovitz, Chairman Comer raises concerns over reports highlighting NewsGuard’s contracts with federal agencies and possible actions being taken to suppress accurate information. Chairman Comer’s letter includes requests seeking documents and information on NewsGuard’s business relationships with federal agencies and its adherence to its own policies in light of highly political social media activity by NewsGuard employees.

First off, it helps to understand what NewsGuard is. The organization was set up in 2018 by two journalism moguls, Steven Brill and Gordon Crovitz, in an effort to combat the rise of disinformation and nonsense peddling online. The basic product is rating journalism websites, giving each a score for how credible and reliable it is.

And, let me be upfront: I’m not a fan of NewsGuard’s methodology, which I think isn’t particularly useful for doing what they’re trying to do. It’s formulaic in a somewhat arcane way, which enables terrible news sites to get rated well, while dinging (especially smaller, newer) publications that don’t check off all the boxes NewsGuard demands.
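
To make concrete what a formulaic, box-checking rubric looks like, here is a minimal sketch of a checklist-style credibility score. The criteria and weights below are hypothetical stand-ins, not NewsGuard’s actual methodology; the point is just that a site can score well by ticking formal boxes regardless of the quality of its reporting.

```python
# Hypothetical checklist-style credibility score, illustrating how a
# box-checking rubric can reward sites that tick formal criteria while
# penalizing small or new outlets. The criteria and weights are invented
# for illustration; they are not NewsGuard's actual rubric.
CRITERIA_WEIGHTS = {
    "discloses_ownership": 15,
    "labels_opinion_content": 15,
    "has_corrections_policy": 20,
    "names_editorial_staff": 20,
    "separates_news_and_ads": 30,
}

def credibility_score(site_attributes: dict) -> int:
    """Sum the weights of every criterion the site satisfies (max 100)."""
    return sum(
        weight
        for criterion, weight in CRITERIA_WEIGHTS.items()
        if site_attributes.get(criterion, False)
    )

# A polished but low-quality site can check every formal box...
print(credibility_score({c: True for c in CRITERIA_WEIGHTS}))  # 100
# ...while a small, newer publication without boilerplate policy pages
# scores poorly regardless of the quality of its reporting.
print(credibility_score({"labels_opinion_content": True}))     # 15
```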

But, they’re allowed to do whatever they want. They are expressing their own First Amendment-protected opinion. And that’s a good thing. People don’t have to believe NewsGuard’s rankings (and my personal opinion is that everyone should take them with a large grain of salt). But it’s still their opinion. It’s their speech.

However, NewsGuard has been singled out as one of the enemies of free speech, like so much of the fantasy industrial complex that is making the rounds these days. This is because some of the nuttier nonsense-peddling grifters out there have been rated poorly by NewsGuard, and that’s resulted in some advertisers deciding to pull advertising.

Somehow that is supposed to be a form of censorship. Of course, it’s not: it’s speech by a private party, which other private parties can listen to and potentially act on, exercising their own rights of association.

But, as the Comer “investigation” calls out, some US government agencies have worked with NewsGuard, most notably the Defense Department. A few years back, the DoD signed a contract with NewsGuard, under which NewsGuard would flag content it found online that it believed was part of foreign influence campaigns. Basically, it’s the Defense Department contracting with some internet watchers to see if they spot anything the DoD should be aware of.

I have no idea if NewsGuard is any good at this, and frankly, I’d be surprised if the DoD actually got any value out of the deal. But, it’s got nothing to do with “censorship” of any kind. It’s still just more speech.

To date, Crovitz (who was formerly the publisher of the Wall Street Journal, so you’d think the GOP grifter class would realize he’s much closer to them politically speaking) has tried defending NewsGuard by (1) inserting some facts into a discussion that will reject such facts and (2) stupidly insisting that his is the only “non-partisan” rating service, and the rest are all leftists.

“We look forward to clarifying the misunderstanding by the committee about our work for the Defense Department,” Crovitz said in a statement to The Hill. “Our work for the Pentagon has been solely related to hostile disinformation efforts by Russian, Chinese and Iranian government-linked operations targeting Americans and our allies.”

Crovitz, a former publisher of The Wall Street Journal, also touted NewsGuard as “the only apolitical service” that rates news outlets, saying, “the others are either digital platforms with their secret ratings or left-wing partisan advocacy groups.”

In some ways, this strategy of responding to the investigation kinda serves to explain why NewsGuard has always been kinda useless. They bring fact-checking to a vibes fight. That doesn’t work.

If we’ve learned anything from the failures of media over the past decade, it is not that we had a lack of fact-checking or other “objective” ways of measuring news. It’s that people don’t want that. What we’ve discovered is that tons of people are in the market for the Confirmation Bias Times, and they’re going to lap up anything that confirms their priors and outright reject anything that challenges what they believe.

We’ve seen things like Stanford’s Internet Observatory try to respond to similar attacks by coming back with facts, only to have those facts distorted, twisted, and turned right back around to accuse them of even worse things. Crovitz and NewsGuard seem likely to go through the same nonsense wringer.

Because the whole point of this is that facts no longer matter to the modern GOP. If you bring facts that conflict with their feelings, they’re going to blame you for it and attack you.

Here, all that NewsGuard has done is add their opinions about news sources. Some people trust them. Others don’t. That’s the marketplace of ideas in action.

And that’s what Comer is trying to suppress.

Filed Under: 1st amendment, fact checking, free speech, gordon crovitz, james comer, news rating, opinion, steven brill
Companies: newsguard

We Can’t Have Serious Discussions About Section 230 If People Keep Misrepresenting It

from the that's-not-how-any-of-this-works dept

At the Supreme Court’s oral arguments about Florida and Texas’ social media content moderation laws, there was a fair bit of talk about Section 230. As we noted at the time, a few of the Justices (namely Clarence Thomas and Neil Gorsuch) seemed confused about Section 230 and also about what role (if any) it had regarding these laws.

The reality is that the only role for 230 is in preempting those laws. Section 230 has a preemption clause that basically says no state laws can go into effect that contradict Section 230 (in other words: no state laws that dictate how moderation must work). But that wasn’t what the discussion was about. The discussion was mostly about Thomas and Gorsuch’s confusion over 230, and their belief that the argument for Section 230 (that you’re not held liable for third-party speech) contradicts the arguments laid out by NetChoice/CCIA in these cases, where they talked about the platforms’ own speech.

Gorsuch and Thomas were mixing up two separate things, as both the lawyers for the platforms and the US made clear. There are multiple kinds of speech at issue here. Section 230 does not hold platforms liable for third-party speech. But the issue with these laws was whether they restricted the platforms’ ability to express themselves through how they moderate. That is, the editorial decisions expressing “this is what type of community we enable” are a form of public expression that the Florida & Texas laws seek to stifle.

That is separate from who is liable for individual speech.

But, as is the way of the world whenever it comes to discussions on Section 230, lots of people are going to get confused.

Today that person is Steven Brill, one of the founders of NewsGuard, a site that seeks to “rate” news organizations, including for their willingness to push misinformation. Brill publishes stories for NewsGuard on a Substack (!?!?) newsletter titled “Reality Check.” Unfortunately, Brill’s piece is chock full of misinformation regarding Section 230. Let’s do some correcting:

February marks the 28th anniversary of the passage of Section 230 of the Telecommunications Act of 1996. Today, Section 230 is notorious for giving social media platforms exemptions from all liability for pretty much anything their platforms post online. But in February of 1996, this three-paragraph section of a massive telecommunications bill aimed at modernizing regulations related to the nascent cable television and cellular phone industries was an afterthought. Not a word was written about it in mainstream news reports covering the passage of the overall bill.

The article originally claimed it was the 48th anniversary, though it was later corrected (without a correction notice — which is something NewsGuard checks for when rating the trustworthiness of publications). That’s not that big a deal, and I don’t think there’s anything wrong with “stealth” corrections for typos and minor errors like that.

But this sentence is just flat out wrong: “Section 230 is notorious for giving social media platforms exemptions from all liability for pretty much anything their platforms post online.” It’s just not true. Section 230 gives limited exemptions from some forms of liability for third party content that they had no role in creating. That’s quite different than what Brill claims. His formulation suggests they’re not liable for anything they, themselves, put online. That’s false.

Section 230 is all about putting the liability on whichever party created the content that violates the law. If a website is just hosting the content, but someone else created it, the liability should go to the creator of the content, not the host.

Courts have had no problem finding liability on social media platforms for things they themselves post online. We have a string of such cases, covering Roommates, Amazon, HomeAway, InternetBrands, Snap and more. In every one of those cases (contrary to Brill’s claims), the courts have found that Section 230 does not protect things these platforms post online.

Brill gets a lot more wrong. He discusses the Prodigy and CompuServe cases and then says this (though he gives too much credit to the idea that CompuServe’s lack of moderation was the reason the court ruled the way it did):

That’s why those who introduced Section 230 called it the “Protection for Good Samaritans” Act. However, nothing in Section 230 required screening for harmful content, only that those who did screen and, importantly, those who did not screen would be equally immune. And, as we now know, when social media replaced these dial-up services and opened its platforms to billions of people who did not have to pay to post anything, their executives and engineers became anything but good Samaritans. Instead of using the protection of Section 230 to exercise editorial discretion, they used it to be immune from liability when their algorithms deliberately steered people to inflammatory conspiracy theories, misinformation, state-sponsored disinformation, and other harmful content. As then-Federal Communications Commission Chairman Reed Hundt told me 25 years later, “We saw the internet as a way to break up the dominance of the big networks, newspapers, and magazines who we thought had the capacity to manipulate public opinion. We never dreamed that Section 230 would be a protection mechanism for a new group of manipulators — the social media companies with their algorithms. Those companies didn’t exist then.”

This is both wrong and misleading. First of all, nothing in Section 230 could “require” screening for harmful content, because both the First and Fourth Amendments would forbid that. So the complaint that it did not require such screening is not just misplaced, it’s silly.

We’ve gone over this multiple times. Pre-230, the understanding was that, under the First Amendment, liability of a distributor was dependent on whether or not the distributor had clear knowledge of the violative nature of the content. As the court in Smith v. California made clear, it would make no sense to hold someone liable without knowledge:

For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature.

That’s the First Amendment problem. But, we can take that a step further as well. If the state now requires scanning, you have a Fourth Amendment problem. Specifically, as soon as the government makes scanning mandatory, none of the content found during such scanning can ever be admissible in court, because no warrant was issued upon probable cause. As we again described a couple years ago:

The Fourth Amendment prohibits unreasonable searches and seizures by the government. Like the rest of the Bill of Rights, the Fourth Amendment doesn’t apply to private entities—except where the private entity gets treated like a government actor in certain circumstances. Here’s how that happens: The government may not make a private actor do a search the government could not lawfully do itself. (Otherwise, the Fourth Amendment wouldn’t mean much, because the government could just do an end-run around it by dragooning private citizens.) When a private entity conducts a search because the government wants it to, not primarily on its own initiative, then the otherwise-private entity becomes an agent of the government with respect to the search. (This is a simplistic summary of “government agent” jurisprudence; for details, see the Kosseff paper.) And government searches typically require a warrant to be reasonable. Without one, whatever evidence the search turns up can be suppressed in court under the so-called exclusionary rule because it was obtained unconstitutionally. If that evidence led to additional evidence, that’ll be excluded too, because it’s “the fruit of the poisonous tree.”

All of that seems kinda important?

Yet Brill charges ahead on the assumption that 230 could have, and should have, required mandatory scanning for “harmful” content.

Also, most harmful content remains entirely protected by the First Amendment, making this idea even more ridiculous. There would be no liability for it.

Brill seems especially confused about how 230 and the First Amendment work together, suggesting (incorrectly) that 230 gives them some sort of extra editorial benefit that it does not convey:

With Section 230 in place, the platforms will not only have a First Amendment right to edit, but also have the right to do the kind of slipshod editing — or even the deliberate algorithmic promotion of harmful content — that has done so much to destabilize the world.

Again, this is incorrect on multiple levels. The First Amendment gives them the right to edit. It also gives them the right to slipshod editing. And the right to promote harmful content via algorithms. That has nothing to do with Section 230.

The idea that “algorithmic promotion of harmful content… has done so much to destabilize the world” is a myth that has mostly been debunked. Some early algorithms weren’t great, but most have gotten much better over time. There’s little to no supporting evidence that “algorithms” have been particularly harmful over the long run.

Indeed, what we’ve seen is that while there were some bad algorithms a decade or so ago, pressure from the market has pushed the companies to improve. Users, advertisers, and the media have all pressured the companies to improve their algorithms, and it seems to have worked.

Either way, those algorithms still have nothing to do with Section 230. The First Amendment lets companies use algorithms to recommend things, because algorithms are, themselves, expressions of opinion (“we think you would like this thing more than the next thing”) and nothing in there would trigger legal liability even if you dropped Section 230 altogether.
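
To make that concrete, here is a minimal sketch of what a recommendation ranking is doing; the feature names and weights are hypothetical, but choosing them at all is the editorial judgment (the “opinion”) in question.

```python
# Hypothetical recommendation ranker. The weights below encode a judgment
# ("we think recency and posts from accounts you follow matter most"), which
# is the sense in which a ranking algorithm is an expression of opinion.
# Feature names and weights are invented for illustration.
WEIGHTS = {
    "recency": 0.5,               # newer posts score higher
    "from_followed_account": 0.3,
    "predicted_engagement": 0.2,
}

def rank_posts(posts: list[dict]) -> list[dict]:
    """Order posts by a weighted score: 'we think you'd like this more.'"""
    def score(post: dict) -> float:
        return sum(WEIGHTS[f] * post.get(f, 0.0) for f in WEIGHTS)
    return sorted(posts, key=score, reverse=True)

feed = rank_posts([
    {"id": 1, "recency": 0.9, "from_followed_account": 1.0, "predicted_engagement": 0.2},
    {"id": 2, "recency": 0.4, "from_followed_account": 0.0, "predicted_engagement": 0.9},
])
print([p["id"] for p in feed])  # [1, 2] -- the ordering reflects the chosen weights
```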

It’s a best (or worst) of both worlds, enjoyed by no other media companies.

This is simply false. Outright false. EVERY company that has a website that allows third-party content is protected by Section 230 for that third-party content. No company is protected for first-party content, online or off.

For example, last year, Fox News was held liable to the tune of $787 million for defaming Dominion Voting Systems by putting on guests meant to pander to its audience by claiming voter fraud in the 2020 election. The social media platforms’ algorithms performed the same audience-pleasing editing with the same or worse defamatory claims. But their executives and shareholders were protected by Section 230.

Except… that’s not how any of this works, even without Section 230. Fox News was held liable because the content was produced by Fox News. All of the depositions and transcripts were… Fox News executives and staff. Because they created the defamatory content.

The social media apps didn’t create the content.

This is the right outcome. The blame should always go to the party who violated the law in creating the content.

And Fox News is equally protected by Section 230 if defamation created by someone else is posted in a comment on a Fox News story (something that seems likely to happen frequently).

This whole column is misleading in the extreme, and simply wrong at other points. NewsGuard shouldn’t be publishing misinformation itself given that the company claims it’s promoting accuracy in news and pushing back against misinformation.

Filed Under: 1st amendment, 4th amendment, content moderation, section 230, steven brill
Companies: newsguard

Elon’s Censorial Lawsuit Against Media Matters Inspiring Many More People To Find ExTwitter Ads On Awful Content

from the elon-should-learn-about-the-streisand-effect dept

We’ve already discussed the extremely censorial nature of ExTwitter’s lawsuit against Media Matters for accurately describing ads from major brands that appeared next to explicitly neo-Nazi content. The lawsuit outright admits that Media Matters did, in fact, see those ads next to that content. Its main complaint is that Elon is mad because he thinks Media Matters implied such ads regularly appear next to that content, when (according to him) they only rarely do, next to content he admits the site mostly allows.

Of course, there are a few rather large problems with all of this. The first is that the lawsuit admits that what Media Matters observed and said is truthful. The second is that while Elon and his fans keep insisting the problem is about how often those ads appear next to such content, Media Matters never made any claim about frequency. And, as IBM noted in pulling its ads, it wants a zero-tolerance policy on its ads showing up next to Nazi content, meaning that even if it’s true that only Media Matters employees saw those ads, that’s still one too many people.

But there’s a bigger problem: in making a big deal out of this and filing one of the worst SLAPP suits I’ve ever seen, all while claiming that Media Matters “manipulated” things (even as the lawsuit admits that it did no such thing), it is only begging more people to go looking for ads appearing next to terrible content.

And they’re finding them. Easily.

As the DailyDot pointed out, a bunch of users started looking around and found that ads were being served next to the tag #HeilHitler and “killjews” among other neo-Nazi content and accounts. Avi Bueno kicked things off, noting that he didn’t need to do any of the things the lawsuit accuses Media Matters of doing:

[Screenshots: Avi Bueno’s posts showing major brand ads appearing next to neo-Nazi content]

Of course, lots of others found similar things, again without any sort of “manipulation,” and, if anything, showed that big name brand ads appear next to vile content even more readily than Media Matters ever implied.

[Screenshots: other users’ posts showing big name brand ads next to similar content]

Some users started calling for the #ElonHitlerChallenge, asking users to search the hashtag #heilhitler and screenshot the ads they found:

[Screenshot: post calling for the #ElonHitlerChallenge]

Bizarrely, a bunch of people found that if you searched that hashtag, ExTwitter recommended you follow the fast food chain Jack in the Box.

[Screenshot: Jack in the Box surfacing as a recommended account on the hashtag search]

On Sunday evening I tested this, and it’s true: if you do a search on #heilhitler and then look at the “people” it recommends you follow, it lists two authenticated accounts, Jack in the Box and Linda Yaccarino, followed by a bunch of accounts with “HeilHitler” in their username or display name. Cool cool.

[Screenshot: accounts recommended after searching #heilhitler]

Meanwhile, if Musk thought that his SLAPP suits against the Center for Countering Digital Hate and Media Matters were somehow going to stop organizations from looking to see if big time company ads are showing up next to questionable content, he seems to have predicted poorly.

A few days after the lawsuit against Media Matters, NewsGuard released a report looking at ads that appeared “below 30 viral tweets that contained false or egregiously misleading information” regarding the Israeli/Hamas conflict. And, well, it’s not good news for companies that believe in trying to avoid having their ads appear next to nonsense.

These 30 viral tweets were posted by ten of X’s worst purveyors of Israel-Hamas war-related misinformation; these accounts have previously been identified by NewsGuard as repeat spreaders of misinformation about the conflict. These 30 tweets have cumulatively reached an audience of over 92 million viewers, according to X data. On average, each tweet was seen by 3 million people.

A list of the 30 tweets and the 10 accounts used in NewsGuard’s analysis is available here.

The 30 tweets advanced some of the most egregious false or misleading claims about the war, which NewsGuard had previously debunked in its Misinformation Fingerprints database of the most significant false and misleading claims spreading online. These include that the Oct. 7, 2023, Hamas attack against Israel was a “false flag” and that CNN staged footage of an October 2023 rocket attack on a news crew in Israel. Half of the tweets (15) were flagged with a fact-check by Community Notes, X’s crowd-source fact-checking feature, which under the X policy would have made them ineligible for advertising revenue. However, the other half did not feature a Community Note. Ads for major brands, such as Pizza Hut, Airbnb, Microsoft, Paramount, and Oracle, were found by NewsGuard on posts with and without a Community Note (more on this below).

In total, NewsGuard analysts cumulatively identified 200 ads from 86 major brands, nonprofits, educational institutions, and governments that appeared in the feeds below 24 of the 30 tweets containing false or egregiously misleading claims about the Israel-Hamas war. The other six tweets did not feature advertisements.

As NewsGuard notes, the accounts in question appear to pass the threshold to make money from the ads on their posts:

It is worth noting that to be eligible for X’s ad revenue sharing, account holders must meet three specific criteria: they must be subscribers to X Premium ($8 per month), have garnered at least five million organic impressions across their posts in the past three months, and have a minimum of 500 followers. Each of the 10 super-spreader accounts NewsGuard analyzed appears to fit those criteria.
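
For what it’s worth, that eligibility test is simple enough to express directly. Here is a minimal sketch, assuming only the three criteria quoted above; the dataclass and field names are illustrative, not X’s actual code or API.

```python
# Sketch of X's stated ad-revenue-sharing eligibility test, per the three
# criteria NewsGuard quotes above. Field names and the dataclass are
# illustrative; this is not X's actual code or API.
from dataclasses import dataclass

@dataclass
class Account:
    is_premium_subscriber: bool    # pays the $8/month X Premium fee
    organic_impressions_90d: int   # impressions across posts, past 3 months
    followers: int

def eligible_for_revenue_sharing(account: Account) -> bool:
    return (
        account.is_premium_subscriber
        and account.organic_impressions_90d >= 5_000_000
        and account.followers >= 500
    )

# Per NewsGuard, each of the ten super-spreader accounts appears to clear
# all three thresholds.
print(eligible_for_revenue_sharing(
    Account(is_premium_subscriber=True,
            organic_impressions_90d=12_000_000,
            followers=250_000)))  # True
```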

Hell, NewsGuard even found that the FBI is paying for ads on ExTwitter, and they’re showing up next to nonsense:

For example, NewsGuard found an ad for the FBI on a Nov. 9, 2023, post from Jackson Hinkle that claimed a video showed an Israeli military helicopter firing on its own citizens. The post did not contain a Community Note and had been viewed more than 1.7 million times as of Nov. 20.

This seems especially noteworthy given the false Twitter Files claim (promoted by Elon Musk) that any time the FBI gives a company money, it’s for “censorship.” In that case, the FBI reimbursed Twitter for information lookups, which is required under the law.

[Screenshot: FBI ad appearing below the post]

Either way, good job, Elon. By filing the world’s worst SLAPP suit against Media Matters and insisting that its report about big name brands appearing next to awful content was “manipulated,” you’ve made sure that lots of people tested that claim, and found it quite easy to see big brand ads next to terrible content.

Filed Under: ads, brand safety, elon musk, hate, misinformation, neonazis
Companies: media matters, newsguard, twitter, x