sundar pichai – Techdirt

How Would Senator James Lankford React If A Democratic Senator Demanded Fox News Explain Its Editorial Policies?

from the intimidation-through-stupidity dept

A month ago, we wrote about a bizarre, nonsensical Twitter rant from Senator James Lankford of Oklahoma, which followed a bizarre, nonsensical appearance at the CPAC conference in which he lashed out at “big tech” for supposedly “censoring conservatives.” This fact-free grievance, that big tech is somehow out to get them, has become an article of blind faith among the Trumpist set. The smart ones know it’s not true, but it plays well to the base, so they play it up. The dumb ones truly believe it, even as the evidence shows that Twitter and Facebook have actually bent over backwards to give Republican politicians more leeway to violate the rules without facing any enforcement actions.

Still, a few weeks back, YouTube removed some videos from CPAC, not because of any anti-conservative bias, but for violating YouTube’s election integrity policy. You know what that means. YouTube has a policy that says you can’t mislead people about the election, and a bunch of Trumpists at CPAC whipped up the base into a frenzy with baseless conspiracy theories about the election.

Personally, I think YouTube should leave that content up. At some point in the future, it’s going to be important to study the collective madness that has taken over much of the Republican party, causing it to completely throw out any semblance of principles and coast on purely fictitious grievance culture wars, in which it must always be portrayed as the aggrieved victim. It would be nice to have a clear record of that.

However, YouTube has chosen to go in another direction and to actually enforce its policies, meaning a few such videos were removed. And, of course, this played right into the nonsense, fictitious grievance politics of the principle-less Republican Party, which sent out its derpiest politicians to whine about being censored.

Lankford, apparently not humiliated enough by the nonsense he said on stage, has decided to double down, sending Google CEO Sundar Pichai a hilariously stupid letter demanding to know why the CPAC videos were removed. The letter is like a greatest hits of wrongness and “that’s not how any of this works.” It accuses Google of censoring conservative voices, confuses what Section 230 actually does, and asks all sorts of detailed questions about the YouTube process that led to the videos being removed.

I could go through it bit by bit explaining how ridiculous each part of the letter is, but you can just read it yourself below and see.

But, just to demonstrate how ridiculous this letter is, all you have to do is replace “YouTube” with “Fox News” and replace any concept of “censorship of conservatives” with “failure to present liberal perspectives,” and you might see how unhinged this letter is. Imagine if a Democratic Senator, say Elizabeth Warren or Amy Klobuchar, sent a letter to Fox News saying:

It has come to my attention that Fox News recently refused to allow any liberal or Democratic commentators to comment on Joe Biden’s performance, and did not provide any details as to why it is only presenting one side of the story concerning the federal government

and then demanded details on how Fox News goes about choosing which viewpoints are allowed to air. Senator Lankford and tons of other Republicans would absolutely freak out. And rightly so. No politician should be demanding to know the editorial decision-making process of private media companies. To demand such information is a clear intimidation technique and should be seen as a violation of the 1st Amendment.

Senator Lankford has every right to spread nonsense, whether he believes it or not. But he doesn’t have the right, as a government official, to demand to know the editorial process of a private media company. Just as Senator Warren or Klobuchar should not and would not have the right to do the same for Fox News.

Filed Under: 1st amendment, anti-conservative bias, content moderation, editorial policies, election integrity, intimidation, james lankford, sundar pichai
Companies: google, youtube

It Can Always Get Dumber: Trump Sues Facebook, Twitter & YouTube, Claiming His Own Government Violated The Constitution

from the wanna-try-that-again? dept

Yes, it can always get dumber. The news broke last night that Donald Trump was planning to sue the CEOs of Facebook and Twitter for his “deplatforming.” This morning we found out that they were going to be class action lawsuits on behalf of Trump and other users who were removed, and now that they’re announced we find out that he’s actually suing Facebook & Mark Zuckerberg, Twitter & Jack Dorsey, and YouTube & Sundar Pichai. I expected the lawsuits to be performative nonsense, but these are… well… these are more performative and more nonsensical than even I expected.

These lawsuits are so dumb, and so bad, that there seems to be a decent likelihood Trump himself will be on the hook for the companies’ legal bills before this is all over.

The underlying claims in all three lawsuits are the same. Count one is that these companies removing Trump and others from their platforms violates the 1st Amendment. I mean, I know we’ve heard crackpots push this theory (without any success), but this is the former President of the United States arguing that private companies violated HIS 1st Amendment rights by conspiring with the government HE LED AT THE TIME to deplatform him. I cannot stress how absolutely laughably stupid this is. The 1st Amendment, as anyone who has taken a civics class should know, restricts the government from suppressing speech. It does not prevent private companies from doing so.

The arguments here are so convoluted. To avoid the fact that he ran the government at the time, he tries to blame the Biden transition team in the Facebook and Twitter lawsuits (in the YouTube one he tries to blame the Biden White House).

Pursuant to Section 230, Defendants are encouraged and immunized by Congress to censor constitutionally protected speech on the Internet, including by and among its approximately three (3) billion Users that are citizens of the United States.

Using its authority under Section 230 together and in concert with other social media companies, the Defendants regulate the content of speech over a vast swath of the Internet.

Defendants are vulnerable to and react to coercive pressure from the federal government to regulate specific speech.

In censoring the specific speech at issue in this lawsuit and deplatforming Plaintiff, Defendants were acting in concert with federal officials, including officials at the CDC and the Biden transition team.

As such, Defendants' censorship activities amount to state action.

Defendants' censoring the Plaintiff's Facebook account, as well as those Putative Class Members, violates the First Amendment to the United States Constitution because it eliminates the Plaintiffs and Class Member's participation in a public forum and the right to communicate to others their content and point of view.

Defendants' censoring of the Plaintiff and Putative Class Members from their Facebook accounts violates the First Amendment because it imposes viewpoint and content-based restrictions on the Plaintiffs' and Putative Class Members' access to information, views, and content otherwise available to the general public.

Defendants' censoring of the Plaintiff and Putative Class Members violates the First Amendment because it imposes a prior restraint on free speech and has a chilling effect on social media Users and non-Users alike.

Defendants' blocking of the Individual and Class Plaintiffs from their Facebook accounts violates the First Amendment because it imposes a viewpoint and content-based restriction on the Plaintiff and Putative Class Members' ability to petition the government for redress of grievances.

Defendants' censorship of the Plaintiff and Putative Class Members from their Facebook accounts violates the First Amendment because it imposes a viewpoint and content-based restriction on their ability to speak and the public's right to hear and respond.

Defendants' blocking the Plaintiff and Putative Class Members from their Facebook accounts violates their First Amendment rights to free speech.

Defendants' censoring of Plaintiff by banning Plaintiff from his Facebook account while exercising his free speech as President of the United States was an egregious violation of the First Amendment.

So, let’s just get this out of the way. I have expressed significant concerns about lawmakers and other government officials who have tried to pressure social media companies to remove content. I think they should not be doing so, and if they do so with implied threats to retaliate over the editorial choices of these companies, that is potentially a violation of the 1st Amendment. But that’s because it’s done by a government official.

It does not mean the private companies magically become state actors. It does not mean that the private companies can’t kick you off for whatever reason they want. Even if there were some sort of 1st Amendment violation here, it would be on the part of the government officials trying to intimidate the platforms into acting — and none of the examples in any of the lawsuits seem likely to reach even that level (and, again, the lawsuits are against the wrong parties anyway).

The second claim, believe it or not, is perhaps even dumber than the first. It asks for declaratory judgment that Section 230 itself is unconstitutional.

In censoring (flagging, shadow banning, etc.) Plaintiff and the Class, Defendants relied upon and acted pursuant to Section 230 of the Communications Decency Act.

Defendants would not have deplatformed Plaintiff or similarly situated Putative Class Members but for the immunity purportedly offered by Section 230.

Let’s just cut in here to point out that this assertion is absolutely, 100% wrong, and it completely destroys this entire claim. Section 230 does provide immunity from lawsuits, but that does not mean that without it no one would ever do any moderation at all. Most companies would still do content moderation — as that is still protected under the 1st Amendment itself. To claim that without 230 Trump would still be on these platforms is laughable. If anything, the opposite is the case. Without 230’s liability protections, if others sued the websites over Trump’s threats, attacks, potentially defamatory statements and so on, these companies would likely have pulled the trigger faster on removing Trump, because anything he (and others) said would represent a potential legal liability for the platforms.

Back to the LOLsuit.

Section 230(c)(2) purports to immunize social media companies from liability for action taken by them to block, restrict, or refuse to carry "objectionable" speech even if that speech is "constitutionally protected." 47 U.S.C. § 230(c)(2).

In addition, Section 230(c)(1) also has been interpreted as furnishing an additional immunity to social media companies for action taken by them to block, restrict, or refuse to carry constitutionally protected speech.

Section 230(c)(1) and 230(c)(2) were deliberately enacted by Congress to induce, encourage, and promote social media companies to accomplish an objective -- the censorship of supposedly "objectionable" but constitutionally protected speech on the Internet -- that Congress could not constitutionally accomplish itself.

Congress cannot lawfully "induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish." Norwood v. Harrison, 413 US 455, 465 (1973).

Section 230(c)(2) is therefore unconstitutional on its face, and Section 230(c)(1) is likewise unconstitutional insofar as it has been interpreted to immunize social media companies for action they take to censor constitutionally protected speech.

This is an argument that has been advanced in a few circles, and it’s absolute garbage. Indeed, the state of Florida tried this basic argument in its attempt to defend its social media moderation law and that failed miserably just last week.

And those are the only two claims in the various lawsuits: that these private companies making an editorial decision to ban Donald Trump (in response to worries about him encouraging violence) violates the 1st Amendment (it does not), and that Section 230 is unconstitutional because it somehow involves Congress encouraging companies to remove constitutionally protected speech. This is also wrong, because all of the cases related to this argument involve laws that actually pressure companies to act in this way. Section 230 involves no such pressure (indeed, many of the complaints from some in government are that 230 is a “free pass” for companies to do nothing at all if they so choose).

There is a ton of other garbage — mostly performative throat-clearing — in the lawsuits, but none of that really matters beyond the two laughably dumb claims. I did want to call out a few really, really stupid points though. In the Twitter lawsuit, Trump’s lawyers misleadingly cite the Knight 1st Amendment Institute’s suit against Trump for blocking users on Twitter:

In Biden v. Knight, 141 S. Ct. 1220 (2021), the Supreme Court discussed the Second Circuit's decision in Knight First Amendment Inst. at Columbia Univ. v. Trump, No. 18-1691, holding that Plaintiff's threads on Twitter from his personal account were, in fact, official presidential statements made in a "public forum."

Likewise, President Trump would discuss government activity on Twitter in his official capacity as President of the United States with any User who chose to follow him, except for seven (7) Plaintiffs in the Knight case, supra., and with the public at large.

So, uh, “the Supreme Court” did not discuss it. Only Justice Clarence Thomas did, and it was a weird, meandering, unbriefed set of musings that were unrelated to the case at hand. It’s a stretch to argue that “the Supreme Court” did that. Second, part of President Trump’s argument in the Knight case was that his Twitter account was not being used in his “official capacity,” but was rather his personal account that just sometimes tweeted official information. Literally. This was President Trump appealing to the Supreme Court in that case:

The government's response is that the President is not acting in his official capacity when he blocks users….

To then turn around in another case and claim that it was official action is just galaxy brain nonsense.

Another crazy point: in all three lawsuits, Donald Trump argues that government officials threatening the removal of Section 230 in response to social media companies’ content moderation policies itself proves that the decisions by those companies make them state actors. Here’s the version from the YouTube complaint (just insert the other two companies where it says YouTube to see what it is in the others):

Below are just some examples of Democrat legislators threatening new regulations, antitrust breakup, and removal of Section 230 immunity for Defendants and other social media platforms if YouTube did not censor views and content with which these Members of Congress disagreed, including the views and content of Plaintiff and the Putative Class Members

But, uh, Donald Trump spent much of his last year in office doing exactly the same thing. He literally demanded the removal of Section 230. He signed an executive order to try to remove Section 230 immunity from companies, then demanded that Congress repeal all of Section 230 before he would fund the military. On the antitrust breakup front, Trump demanded that Bill Barr file antitrust claims against Google prior to the election as part of his campaign against “big tech.”

It’s just absolutely hilarious that he’s now claiming that members of Congress doing the very same thing he did, but to a lesser degree and with less power, magically turns these platforms into state actors.

There was a lot of speculation as to what lawyers Trump would have found to file such a lawsuit, and (surprisingly) it’s not any of the usual suspects. There is the one local lawyer in Florida (required to file such a suit there), two lawyers with AOL email addresses, and then a whole bunch of lawyers from Ivey, Barnum, & O’Mara, a (I kid you not) “personal injury and real estate” law firm in Connecticut. If these lawyers have any capacity for shame, they should be embarrassed to file something this bad. But considering that the bio for the lead lawyer on the case hypes up his many, many media appearances, and even has a gallery of photos of him appearing on TV shows, you get the feeling that perhaps these lawyers know it’s all performative and will get them more media coverage. That coverage should be mocking them for filing an obviously vexatious and frivolous lawsuit.

The lawsuit is filed in Florida, which has an anti-SLAPP law (not a great one, but not a horrible one either). It does seem possible that these companies might file anti-SLAPP claims in response to this lawsuit, meaning that Trump could potentially be on the hook for the legal fees of all three. Of course, if the whole thing is a performative attempt at playing the victim, it’s not clear that that would matter.

Filed Under: 1st amendment, class action, content moderation, donald trump, jack dorsey, mark zuckerberg, section 230, state actor, sundar pichai
Companies: facebook, twitter, youtube

Beware Of Facebook CEOs Bearing Section 230 Reform Proposals

from the good-for-facebook,-not-good-for-the-world dept

As you may know, tomorrow Congress is having yet another hearing with the CEOs of Google, Facebook, and Twitter, in which various grandstanding politicians will seek to rake Mark Zuckerberg, Jack Dorsey, and Sundar Pichai over the coals regarding things those politicians think Facebook, Twitter, and Google “got wrong” in their moderation practices. Some of the politicians will argue that these sites left up too much content, while others will argue they took down too much — and either way they will demand to know “why” individual content moderation decisions were made differently than they, the grandstanding politicians, wanted them to be made. We’ve already highlighted one approach that the CEOs could take in their testimony, though that is unlikely to actually happen. This whole dog and pony show seems built around no one being willing to recognize one simple fact: it’s literally impossible to have a perfectly moderated platform at the scale of humankind.

That said, one thing to note about these hearings is that each time, Facebook’s CEO Mark Zuckerberg inches closer to pushing Facebook’s vision for rethinking internet regulations around Section 230. Facebook, somewhat famously, was the company that caved on FOSTA, and bit by bit, Facebook has effectively led the charge in undermining Section 230 (even as so many very wrong people keep insisting we need to change 230 to “punish” Facebook). That’s not true: Facebook is now perhaps the leading voice for changing 230, because the company knows that it can survive without it. Others? Not so much. Last February, Zuckerberg made it clear that Facebook was on board with the plan to undermine 230. Last fall, during another of these Congressional hearings, he more emphatically supported reforms to 230.

And, for tomorrow’s hearing, he’s driving the knife further into 230’s back by outlining a plan to further cut away at 230. The relevant bit from his testimony is here:

One area that I hope Congress will take on is thoughtful reform of Section 230 of the Communications Decency Act.

Over the past quarter-century, Section 230 has created the conditions for the Internet to thrive, for platforms to empower billions of people to express themselves online, and for the United States to become a global leader in innovation. The principles of Section 230 are as relevant today as they were in 1996, but the Internet has changed dramatically. I believe that Section 230 would benefit from thoughtful changes to make it work better for people, but identifying a way forward is challenging given the chorus of people arguing, sometimes for contradictory reasons, that the law is doing more harm than good.

Although they may have very different reasons for wanting reform, people of all political persuasions want to know that companies are taking responsibility for combatting unlawful content and activity on their platforms. And they want to know that when platforms remove harmful content, they are doing so fairly and transparently.

We believe Congress should consider making platforms' intermediary liability protection for certain types of unlawful content conditional on companies' ability to meet best practices to combat the spread of this content. Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades its detection -- that would be impractical for platforms with billions of posts per day -- but they should be required to have adequate systems in place to address unlawful content.

Definitions of an adequate system could be proportionate to platform size and set by a third party. That body should work to ensure that the practices are fair and clear for companies to understand and implement, and that best practices don't include unrelated issues like encryption or privacy changes that deserve a full debate in their own right.

In addition to concerns about unlawful content, Congress should act to bring more transparency, accountability, and oversight to the processes by which companies make and enforce their rules about content that is harmful but legal. While this approach would not provide a clear answer to where to draw the line on difficult questions of harmful content, it would improve trust in and accountability of the systems and address concerns about the opacity of process and decision-making within companies.

As reform ideas go, this is certainly less ridiculous and braindead than nearly every bill introduced so far. It attempts to deal with the largest concerns that most people have — what happens when illegal, or even “lawful but awful,” activity is happening on websites and those websites have “no incentive” to do anything about it (or, worse, incentive to leave it up). It also responds to some of the concerns about a lack of transparency. Finally, to some extent it makes a nod at the idea that the largest companies can handle some of this burden, while other companies cannot — and it makes it clear that it does not support anything that would weaken encryption.

But that doesn’t mean it’s a good idea. In some ways, this is the flip side of the discussion that Mark Zuckerberg had many years ago regarding how “open” Facebook should be regarding third party apps built on the back of Facebook’s social graph. In a now infamous email, Mark told someone that one particular plan “may be good for the world, but it’s not good for us.” I’d argue that this 230 reform plan that Zuckerberg lays out “may be good for Facebook, but not good for the world.”

But it takes some thought, nuance, and a few predictions about how this would actually play out to understand why.

First, let’s go back to the simple question of what problem are we actually trying to solve for. Based on the framing of the panel — and of Zuckerberg’s testimony — it certainly sounds like there’s a huge problem of companies not having any incentive to clean up the garbage on the internet. We’ve certainly heard many people claim that, but it’s just not true. It’s only true if you think that the only incentives in the world are the laws of the land you’re in. But that’s not true and has never been true. Websites do a ton of moderation/trust & safety work not because of what legal structure is in place but because (1) it’s good for business, and (2) very few people want to be enabling cesspools of hate and garbage.

If you don’t clean up garbage on your website, your users get mad and go away. Or, in other cases, your advertisers go away. There are plenty of market incentives to make companies take charge. And of course, not every website is great at it, but that’s always been a market opportunity — and lots of new sites and services pop up to create “friendlier” places on the internet in an attempt to deal with those kinds of failures. And, indeed, lots of companies have to keep changing and iterating in their moderation practices to deal with the fact that the world keeps changing.

Indeed, if you read through the rest of Zuckerberg’s testimony, it’s one example after another of things that the company has already done to clean up messes on the platform. And each one describes putting huge resources in terms of money, technology, and people to combat some form of disinformation or other problematic content. Four separate times, Zuckerberg describes programs that Facebook has created to deal with those kinds of things as “industry-leading.” But those programs are incredibly costly. He talks about how Facebook now has 35,000 people working in “safety and security,” which is more than triple the 10,000 people in that role five years ago.

So, these proposals to create a “best practices” framework, judged by some third party, in which you only get to keep your 230 protections if you meet those best practices, won’t change anything for Facebook. Facebook will argue that its practices are the best practices. That’s effectively what Zuckerberg is saying in this testimony. But that will harm everyone else who can’t match that. Most companies aren’t going to be able to do this, for example:

Four years ago, we developed automated techniques to detect content related to terrorist organizations such as ISIS, al Qaeda, and their affiliates. We've since expanded these techniques to detect and remove content related to other terrorist and hate groups. We are now able to detect and review text embedded in images and videos, and we've built media-matching technology to find content that's identical or near-identical to photos, videos, text, and audio that we've already removed. Our work on hate groups focused initially on those that posed the greatest threat of violence at the time; we've now expanded this to detect more groups tied to different hate-based and violent extremist ideologies. In addition to building new tools, we've also adapted strategies from our counterterrorism work, such as leveraging off-platform signals to identify dangerous content on Facebook and implementing procedures to audit the accuracy of our AI's decisions over time.

And, yes, he talks about making those rules “proportionate to platform size,” but there’s a whole lot of trickiness in making that work in practice. Size of what, exactly? Userbase? Revenue? How do you determine and where do you set the limits? As we wrote recently in describing our “test suite” of internet companies for any new internet regulation, there are so many different types of companies, dealing with so many different markets, that it wouldn’t make any sense to apply a single set of rules or best practices across each one. Because each one is very, very different. How do you apply similar “best practices” to a site like Wikipedia — where all the users themselves do the moderation — and to a site like Notion, where people are setting up their own database/project management setups, some of which may be shared with others? Or how do you set up the same best practices that will work in fan fiction communities and will also apply to something like Cameo?

And, even the “size” part can be problematic. In practice, it creates so many wacky incentives. The classic example of this is in France, where stringent labor laws kick in only for companies at 50 employees. So, in practice, there are a huge number of French companies that have 49 employees. If you create thresholds, you get weird incentives. Companies will seek to limit their own growth in unnatural ways just to avoid the burden, or if they’re going to face the burden, they may make a bunch of awkward decisions in figuring out how to “comply.”

And the end result is just going to be a lot of awkwardness and silly, wasteful lawsuits over whether companies somehow fail to meet “best practices.” At worst, you end up with an incredible level of homogenization. Platforms will feel the need to simply adopt content moderation policies identical to those that have already been adjudicated. It may create market opportunities for extractive third-party “compliance” companies that promise to run your content moderation practices exactly the way Facebook does, since those practices will, of course, be deemed “industry-leading.”

The politics of this obviously make sense for Facebook. It’s not difficult to understand how Zuckerberg gets to this point. Congress is putting tremendous pressure on him and continually attacking the company’s perceived (and certainly, sometimes real) failings. So, for him, the framing is clear: set up some rules to deal with the fake problem that so many insist is real, of there being “no incentive” for companies to do anything to deal with disinformation and other garbage, knowing full well that (1) Facebook’s own practices will likely define “best practices” or (2) that Facebook will have enough political clout to make sure that any third party body that determines these “best practices” is thoroughly captured so as to make sure that Facebook skates by. But all those other platforms? Good luck. It will create a huge mess as everyone tries to sort out what “tier” they’re in, and what they have to do to avoid legal liability — when they’re all already trying all sorts of different approaches to deal with disinformation online.

Indeed, one final problem with this “solution” is that you don’t deal with disinformation by homogenization. Disinformation and disinformation practices continually evolve and change over time. The amazing and wonderful thing that we’re seeing in the space right now is that tons of companies are trying very different approaches to dealing with it, and learning from those different approaches. That experimentation and variety is how everyone learns and adapts and gets to better results in the long run, rather than saying that a single “best practices” setup will work. Indeed, zeroing in on a single best practices approach, if anything, could make disinformation worse by helping those with bad intent figure out how to best game the system. The bad actors can adapt, while this approach could tie the hands of those trying to fight back.

Indeed, that alone is the very brilliance of Section 230’s own structure. It recognizes that the combination of market forces (users and advertisers getting upset about garbage on the websites) and the ability to experiment with a wide variety of approaches, is how best to fight back against the garbage. By letting each website figure out what works best for their own community.

As I started writing this piece, Sundar Pichai’s testimony for tomorrow was also released. And it makes the key point that 230, as is, is the best way to deal with misinformation and extremism online. In many ways, Pichai’s testimony is similar to Zuckerberg’s. It details all the different (often expensive and resource-intensive) steps Google has taken to fight disinformation. But when it gets to the part about 230, Pichai’s stance is the polar opposite of Zuckerberg’s. Pichai notes that Google was able to do all of these things because of 230, and that changing it would put many of these efforts at risk:

These are just some of the tangible steps we've taken to support high quality journalism and protect our users online, while preserving people's right to express themselves freely. Our ability to provide access to a wide range of information and viewpoints, while also being able to remove harmful content like misinformation, is made possible because of legal frameworks like Section 230 of the Communications Decency Act.

Section 230 is foundational to the open web: it allows platforms and websites, big and small, across the entire internet, to responsibly manage content to keep users safe and promote access to information and free expression. Without Section 230, platforms would either over-filter content or not be able to filter content at all. In the fight against misinformation, Section 230 allows companies to take decisive action on harmful misinformation and keep up with bad actors who work hard to circumvent their policies.

Thanks to Section 230, consumers and businesses of all kinds benefit from unprecedented access to information and a vibrant digital economy. Today, more people have the opportunity to create content, start a business online, and have a voice than ever before. At the same time, it is clear that there is so much more work to be done to address harmful content and behavior, both online and offline.

Regulation has an important role to play in ensuring that we protect what is great about the open web, while addressing harm and improving accountability. We are, however, concerned that many recent proposals to change Section 230, including calls to repeal it altogether, would not serve that objective well. In fact, they would have unintended consequences, harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.

We might better achieve our shared objectives by focusing on ensuring transparent, fair, and effective processes for addressing harmful content and behavior. Solutions might include developing content policies that are clear and accessible, notifying people when their content is removed and giving them ways to appeal content decisions, and sharing how systems designed for addressing harmful content are working over time. With this in mind, we are committed not only to doing our part on our services, but also to improving transparency across our industry.

That’s standing up for the law that helped enable the open internet, not tossing it under the bus because it’s politically convenient. It won’t make politicians happy. But it’s the right thing to say — because it’s true.

Filed Under: adaptability, best practices, content moderation, mark zuckerberg, section 230, sundar pichai, transparency
Companies: facebook, google

What I Hope Tech CEOs Will Tell Congress: 'We're Not Neutral'

from the just-lay-out-the-truth dept

The CEOs of Facebook, Google, and Twitter will once again testify before Congress this Thursday, this time on disinformation. Here's what I hope they will say:

Thank you Mister Chairman and Madam Ranking Member.

While no honest CEO would ever say that he or she enjoys testifying before Congress, I recognize that hearings like this play an important role — in holding us accountable, illuminating our blind spots, and increasing public understanding of our work.

Some policymakers accuse us of asserting too much editorial control and removing too much content. Others say that we don't remove enough incendiary content. Our platforms see millions of user-generated posts every day — on a global scale — but questions at these hearings often focus on how one of our thousands of employees handled a single individual post.

As a company we could surely do a better job of explaining — privately and publicly — our calls in controversial cases. Because it's sometimes difficult to explain in time-limited hearing answers the reasons behind individual content decisions, we will soon launch a new public website that will explain in detail our decisions on cases in which there is considerable public interest. Today, I'll focus my remarks on how we view content moderation generally.

Not "neutral"

In past hearings, I and my CEO counterparts have adopted an approach of highlighting our companies' economic and social impact, answering questions deferentially, and promising to answer detailed follow-up questions in writing. While this approach maximizes comity, I've come to believe that it can sometimes leave a false impression of how we operate.

So today I'd like to take a new approach: leveling with you.

In particular, in the past I have told you that our service is "neutral." My intent was to convey that we don't pick political sides, or allow commercial influence over our editorial content.

But I've come to believe that characterizing our service as "neutral" was a mistake. We are not a purely neutral speech platform, and virtually no user-generated-content service is.

Our philosophy

In general, we start with a Western, small-d democratic approach of allowing a broad range of human expression and views. From there, our products reflect our subjective — but scientifically informed — judgments about what information and speech our users will find most relevant, most delightful, most topical, or of the highest quality.

We aspire for our services to be utilized by billions of people around the globe, and we don't ever relish limiting anyone's speech. And while we generally reflect an American free speech norm, we recognize that norm is not shared by much of the world — so we must abide by more restrictive speech laws in many countries where we operate.

Even within the United States, however, we forbid certain types of speech which are legal, but which we have chosen to keep off our service: incitements to violence, hate speech, Holocaust denial, and adult pornography, just to name a few.

We make these decisions based not on the law, but on what kind of service we want to be for our users.

While some people claim to want "neutral" online speech platforms, we have seen that services with little or no content moderation whatsoever — such as Gab and Parler — become dominated by trolling, obscenities, and conspiracy theories. Most consumers reject this chaotic, noisy mess.

In contrast, we believe that millions of people use our service because they value our approach of airing a variety of views, but avoiding an "anything goes" cesspool.

We realize that some people won't like our rules, and will go elsewhere. I'm glad that consumers have choices like Gab and Parler, and that the open Internet makes them possible. But we want our service to be something different: a pleasant experience for the widest possible audience.

Complicated info landscape means tough calls

When we first started our service decades ago, content moderation was a much less fractious topic. Today, we face a more complicated speech and information landscape including foreign propaganda, bots, disinformation, misinformation, conspiracy theories, deepfakes, distrust of institutions, and a fractured media landscape. It challenges all of us who are in the information business.

All user-generated content services are grappling with new challenges to our default of allowing most speech. For example, we have recently chosen to take a more aggressive posture toward election- and vaccine-related disinformation because those of us who run our company ultimately don't feel comfortable with our platform being an instrument to undermine democracy or public health.

As much as we aim to create consistent rules and policies, many of the most difficult content questions we face are ones we've never seen before, or involve elected officials — so the questions often end up on my desk as CEO.

Despite the popularity of our services, I recognize that I'm not a democratically elected policymaker. I'm a leader of a private enterprise. None of us company leaders takes pleasure in making speech decisions that inevitably upset some portion of our user base – or world leaders. We may make the wrong call.

But our desire to make our platform a positive experience for millions of people sometimes demands that we make difficult decisions to limit or block certain types of controversial (but legal) content. The First Amendment prevents the government from making those extra-legal speech decisions for us. So it's appropriate that I make these tough calls, because each decision reflects and shapes what kind of service we want to be for our users.

Long-term experience over short-term traffic

Some of our critics assert that we are driven solely by "engagement metrics" or "monetizing outrage" like heated political speech.

While we use our editorial judgment to deliver what we hope are joyful experiences to our users, it would be foolish for us to be ruled by weekly engagement metrics. If platforms like ours prioritized quick-hit, sugar-high content that polarizes our users, it might drive short-term usage but it would destroy people's long-term trust and desire to return to our service. People would give up on our service if it's not making them happy.

We believe that most consumers want user-generated-content services like ours to maintain some degree of editorial control. But we also believe that as you move further down the Internet "stack" — from applications like ours, to app stores, then cloud hosting, then DNS providers, and finally ISPs — most people support a norm of progressively less content moderation at each layer.

In other words, our users may not want to see controversial speech on our service — but they don't necessarily support disappearing it from the Internet altogether.

I fully understand that not everyone will agree with our content policies, and that some people feel disrespected by our decisions. I empathize with those who feel overlooked or discriminated against, and I am glad that the open Internet allows people to seek out alternatives to our service. But that doesn't mean that the US government can or should deny our company's freedom to moderate our own services.

First Amendment and CDA 230

Some have suggested that social media sites are the "new public square" and that services should be forbidden by the government to block anyone's speech. But such a rule would violate our company's own First Amendment rights of editorial judgment within our services. Our legal freedom to prioritize certain content is no different than that of the New York Times or Breitbart.

Some critics attack Section 230 of the Communications Decency Act as a "giveaway" to tech companies, but their real beef is with the First Amendment.

Others allege that Section 230's liability protections are conditioned on our service following a false standard of political "neutrality." But Section 230 doesn't require this, and in fact it incentivizes platforms like ours to moderate inappropriate content.

Section 230 is primarily a legal routing mechanism for defamation claims — making the speaker responsible, not the platform. Holding speakers directly accountable for their own defamatory speech ultimately helps encourage their own personal responsibility for a healthier Internet.

For example, if car rental companies always paid for their renters' red light tickets instead of making the renter pay, all renters would keep running red lights. Direct consequences improve behavior.

If Section 230 were revoked, our defamation liability exposure would likely require us to be much more conservative about who and what types of content we allowed to post on our services. This would likely inhibit a much broader range of potentially "controversial" speech, but more importantly would impose disproportionate legal and compliance burdens on much smaller platforms.

Operating responsibly — and humbly

We're aware of the privileged position our service occupies. We aim to use our influence for good, and to act responsibly in the best interests of society and our users. But we screw up sometimes, we have blind spots, and our services, like all tools, get misused by a very small slice of our users. Our service is run by human beings, and we ask for grace as we remedy our mistakes.

We value the public's feedback on our content policies, especially from those whose life experiences differ from those of our employees. We listen. Some people call this "working the refs," but if done respectfully I think it can be healthy, constructive, and enlightening.

By the same token, we have a responsibility to our millions of users to make our service the kind of positive experience they want to return to again and again. That means utilizing our own constitutional freedom to make editorial judgments. I respect that some will disagree with our judgments, just as I hope you will respect our goal of creating a service that millions of people enjoy.

Thank you for the opportunity to appear here today.

Adam Kovacevich is a former public policy executive for Google and Lime, former Democratic congressional and campaign aide, and a longtime tech policy strategist based in Washington, DC.

Filed Under: 1st amendment, bias, big tech, congressional hearings, content moderation, disinformation, jack dorsey, mark zuckerberg, neutral, section 230, sundar pichai
Companies: facebook, google, twitter

The Senate Snowflake Grievance Committee Quizzes Tech CEOs On Tweets & Employee Viewpoints

from the this-is-not-how-any-of-this-works dept

On Wednesday morning, the Senate Commerce Committee held a nearly four-hour-long hearing, ostensibly about Section 230, with three internet CEOs: Mark Zuckerberg from Facebook, Sundar Pichai from Google, and Jack Dorsey from Twitter. The hearing went about as expected, meaning it was mostly ridiculous nonsense. You had multiple Republican Senators demanding that these CEOs explain why they had taken actions on certain content, with some silly “whataboutism” about other kinds of content where action wasn’t taken. Then you had multiple Democratic Senators demanding these CEOs explain why they hadn’t taken faster action on pretty much the same content Republicans had complained about being moderated at all.

The shorter summary was that Republicans were demanding that their own lies and propaganda be left alone, while Democrats demanded that lies and propaganda be removed faster. Both of these positions are anathema to the 1st Amendment, and the people advocating for them on both sides should be embarrassed. While each platform has the right, under the 1st Amendment, to host or not host whatever speech it wants, based on whatever policies it sets, Congress cannot, and should not, be in the position of either telling companies what content they need to host or what content they must take down. And yet, we saw examples of both during the hearing. On the Democratic side, Senators Markey and Baldwin, among a few others, pushed the companies to take down more content. This is extremely troubling on 1st Amendment grounds. On the Republican side, many, many Senators demanded that certain content be unblocked — in particular the NY Post’s Twitter account.

And there were a few (very limited) good points from both sides of the aisle. Senator Brian Schatz noted that the entire hearing was being done in bad faith by Senate Republicans to try to bully the companies into not removing disinformation in the final week of the election. He noted that, while he had many questions for the three CEOs, he would not participate in this “sham” by asking questions during this particular hearing. Kudos to him. On the Republican side, Senator Jerry Moran noted that changes to Section 230 were the kinds of things that the three companies before the Committee could handle, but which would hamstring smaller competitors (to be fair, Jack Dorsey made this point in his opening testimony as well).

But I wanted to focus on some specific grandstanding by a few key Senators who made particularly ridiculous statements. And, I will point out upfront that these all came from Republicans. I’m not pointing that out because I’m “biased” against them, but because of the simple objective fact that it was these Republican Senators who made the most ridiculous statements of the day. The key theme among them was a ridiculous sense of grievance, and a false belief that the companies’ moderation practices unfairly target “conservatives.” Nearly all of them assumed that because more Republicans were moderated, that was proof of bias — and not that, perhaps, Republicans do more things that violate the policies of these companies. In the same way, my picking on mostly Republican Senators here has more to do with their own actions than with any personal “bias.”

What was most frightening, however, in the comments from these Senators is how at home they would have been in the days of Joseph McCarthy. Multiple Senators demanded to know about the personal ideological viewpoints of people who worked for these companies. Both Dorsey and Mark Zuckerberg correctly pointed out that they do not ask their employees about their political leanings (Pichai stated that they hire from all over, implying that there was a diverse ideological pool within their workforce).

It is stunning and dangerous for Senators to be demanding to know the political leanings of employees at any particular company. Senators Mike Lee, Ron Johnson and Marsha Blackburn all asked questions along these lines. Lee, who historically has been aligned with libertarian viewpoints, completely misrepresented the content moderation policies of these companies and insisted that they disproportionately target conservatives. They do not. If conservatives are violating their policies more than others, then that’s on the people violating the policies, and not on the policies themselves. Lee also fell into the ridiculous myth that Google’s policies directly targeted conservatives in demonetizing The Federalist. As we’ve discussed multiple times, that’s utter bullshit. We received identical treatment to The Federalist. So did Slate and Buzzfeed. Lee, ridiculously, argued that the companies saying — accurately — that they do not target moderation decisions based on ideology perhaps violated laws against “unfair or deceptive trade practices.” Basically, because Lee falsely believes these companies target conservative speech (he’s so deep in his own filter bubble that he doesn’t even realize the same moderation hits others as well), he concludes that they’re engaging in deceptive practices.

Lee demanded that each company list “left leaning” accounts that had received similar treatment, and the various CEOs promised to get back to him, but this was a nonsense argument.

However the most ridiculous part of Lee’s grandstanding was his disingenuous framing of content moderation. He started asking about how these companies “censor” content. In the past, we’ve discussed how moderation and censorship differ, but Lee stretched the definition to insane levels:

I think the trend is clear, you almost always censor — meaning…. uh… when I use the word censor here I mean block content, fact check or label content, or demonetize websites.

In what fucking world does Senator Lee live that fact checking is censorship? This is utter nonsense. Indeed, when Sundar Pichai actually pushed back and said “we don’t censor,” Lee jumped in obnoxiously to say that “I used the word censor as a term of art there and I defined that word.” That’s not how it works. You can’t redefine a term to mean the literal opposite of what it means “as a term of art” and then demand that everyone agree that they “censored” when your own definition includes fact checking or responding to a statement with more speech.

Senator Ron Johnson’s time was particularly egregious. He read the following tweet into the record.

It’s a tweet from a “Mary T. Hagan” saying:

Sen Ron Johnson is my neighbor and strangled our dog, Buttons, right in front of my 4 yr old son and 3 yr old daughter. The police refuse to investigate. This is a complete lie but important to retweet and note that there are more of my lies to come.

Yes, he read that whole thing into the record, and then whined directly to Jack Dorsey that this should not have been left up, and that people might not go to the polls and vote for him if they read it. It’s hard to know where to begin on this one. Especially since it came right after Johnson was mad about other moderation choices Twitter had made to take down content. But the most incredible bit was that the obvious point of this tweet (which seemed to fly right over Johnson’s head) is to make fun of Johnson’s own willingness over the past few months to push Russian-originated propaganda talking points, and then whine that the media won’t do anything about it and that law enforcement won’t investigate.

So, a simple question for Johnson to answer would be: if he wants this tweet removed, how about removing his own efforts at pushing unverified propagandistic nonsense about Joe Biden?

But, really, this was par for the course for so much of the hearing. Senators (both Democrats and Republicans), showing a vast misunderstanding of how content moderation works, would call up a single example of a content moderation choice and demand an explanation — often ignoring the explanations from Dorsey and Zuckerberg, who would calmly explain what their policy was and why a certain piece of content did or did not violate that policy — and then scream louder as if they had found some sort of “gotcha” moment.

But honestly, the most insane moment of the hearing most likely involved Senator Marsha Blackburn from Tennessee. Blackburn has a long history of saying the complete opposite argument depending on which way the wind blows at any particular time. For example, during the net neutrality fight she screamed about how it was an example of government interference with innovation and that if we had net neutrality it would destroy “Facebook, YouTube, Twitter” (literally those three companies). Yet a few years later, she supported a bill to regulate the internet in the form of PIPA, a pro-censorship copyright bill.

Then, four years ago, she insisted that internet services had an obligation to delete “fake news.” Yet, in the hearing on Wednesday, she flipped out at the companies for trying to moderate any news at all.

And then she took it one step further, and demanded to know if a Google engineer who made fun of her was still employed at the company.

Blackburn (R-Tenn.) asked CEO Sundar Pichai whether Blake Lemoine, a senior software engineer and artificial intelligence researcher, still has a job at Google.

"He has had very unkind things to say about me and I was just wondering if you all had still kept him working there," Blackburn said during the hearing, where she and other GOP lawmakers accused tech companies of squelching free speech.

Pichai said he did not know Lemoine's employment status.

Breitbart News reported in 2018 that Lemoine had criticized Blackburn's legislative record in excerpts of internal company messages and said Google should not "acquiesce to the theatrical demands of a legislator."

"I'm not big on negotiation with terrorists," Lemoine said, according to Breitbart.

Having a sitting US Senator specifically call out an employee for criticizing her, and asking to know his employment status is fundamentally terrifying. It is, again, reminiscent of the McCarthy hearings, and having elected officials targeting people for their political viewpoints. If people were serious about calling out “cancel culture,” they’d be screaming about how dangerous it is that Blackburn was saying something like this.

The end result of the hearing was a lot more nonsense grandstanding, demonstrating that too many Senators simply do not understand the nature of content moderation, and believe that because they personally disagree with certain policies, or with how those policies are implemented, the companies must somehow be doing it wrong. But this is the very nature of content moderation. No single human being will agree with every decision. To immediately leap to an assumption of bad intent (or bad policies) because you disagree with a small sample set of decisions is intellectually lazy and dishonest.

Oh, and Ted Cruz also did some theatrical bullshit that was mostly performative idiocy, but since he only did it to get a few social media snippets and headlines, we’re not going to play that game. We’ll just say, simply, that Senator Cruz came off like a totally disingenuous jackass who wanted to make the hearing about himself rather than about anything even remotely substantive.

Filed Under: content moderation, ed markey, grandstanding, jack dorsey, mark zuckerberg, marsha blackburn, mike lee, ron johnson, section 230, senate commerce committee, sundar pichai, ted cruz
Companies: facebook, google, twitter

House Judiciary Spends 5.5 Hours Making Themselves Look Foolish, Without Asking Many Actual Tough Questions Of Tech CEOs

from the what-did-I-just... dept

How was your Wednesday? I spent five and a half hours of mine watching the most inane and stupid hearing put on by Rep. David Cicilline and the House Judiciary Committee's Subcommittee on Antitrust, Commercial & Administrative Law. The hearing was billed as a big antitrust showdown, in which the CEOs of Google, Facebook, Apple and Amazon would all answer questions regarding an antitrust investigation into those four companies. If you are also a glutton for punishment, you can now watch the whole thing yourself (though at least you can watch it at 2x speed). I'll save you a bit of time though: there was very little discussion of actual antitrust. There was plenty of airing of grievances, however, frequently with little to no basis in reality.

If you want to read my realtime reactions to the nonsense, there's a fairly long Twitter thread. If you want a short summary, it's this: everyone who spoke is angry about some aspect of these companies, but (and this is kind of important) there is no consensus about why, and the reasons for their anger are often contradictory. The most obvious example of this played out in the discussion of the decision earlier this week by YouTube and Facebook (and Twitter) to take down an incredibly ridiculous Breitbart video showing a group of "doctors" spewing dangerous nonsense regarding COVID-19 and how to treat it (and how not to treat it). The video went viral, and a whole bunch of people were sharing it, even though one of its main stars apparently believes in Alien DNA and Demon Sperm. Also, when Facebook took down the video, she suggested that God would punish Facebook by crashing its servers.

However, during the hearing, there were multiple Republican lawmakers who were furious at Facebook and YouTube for removing such content, and who tried to extract promises that the platforms would no longer "interfere." Amusingly (or, not really), at one point Jim Sensenbrenner even demanded that Mark Zuckerberg answer why Donald Trump Jr.'s account had been suspended for sharing such a video — which is kind of embarrassing, since it was Twitter, not Facebook, that temporarily suspended Junior's account (and it was for spreading disinfo about COVID, which that video absolutely was). Meanwhile, on the other side of the aisle, Rep. Cicilline was positively livid that 20 million people still saw that video, and couldn't believe that it took Facebook five full hours to decide to delete it.

So, you had Republicans demanding these companies keep those videos up, and Democrats demanding they take the videos down faster. What exactly are these companies supposed to do?

Similarly, Rep. Jim Jordan made some conspiracy theory claims saying that Google tried to help Hillary Clinton win in 2016 (the fact that she did not might raise questions about how Jordan could then argue they have too much power, but…) and demanded that they promise not to “help Biden.” On the other side of the aisle, Rep. Jamie Raskin complained about how Facebook allowed Russians and others to swing the election to Trump, and demanded to know how Facebook would prevent that in the future.

So… basically, both sides were saying that if these tools are used to influence elections, bad things will happen to the companies. It just depends on which side wins the election to determine which side gets to do the punishing.

Nearly all of the Representatives spent most of their time grandstanding — rarely about issues related to antitrust — and frequently demonstrating their own technological incompetence. Rep. Greg Steube whined that his campaign emails were being filtered to spam, and argued that this was Gmail unfairly handicapping conservatives. His "evidence" for this was that it didn't happen before he joined Congress last year, and that he'd never heard of it happening to Democrats (a few Democrats later noted that it does happen to them). Also, he said his own father found his campaign ads in spam, so clearly it wasn't because his father marked them as spam. Sundar Pichai had to explain to Rep. Steube that (1) Google doesn't spy on emails, so it has no way of knowing an email was between a father and son, and (2) emails go to spam based on a variety of factors, including how other recipients rate them. In other words, Steube's own campaign is (1) bad at email and (2) his constituents are probably trashing his messages. It's not anti-conservative bias.
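
As a purely hypothetical illustration of the point Pichai was making (none of this is Gmail's actual algorithm), spam placement tends to be driven by aggregate signals, such as the share of recipients who report a sender, which no single "not spam" click can outweigh:

def looks_like_spam(delivered: int, spam_reports: int, threshold: float = 0.01) -> bool:
    # Flag a sender once its complaint rate crosses a (made-up) threshold.
    return delivered > 0 and spam_reports / delivered >= threshold

# Thousands of recipients reporting a campaign blast easily outweigh one
# supportive father clicking "not spam".
print(looks_like_spam(delivered=50_000, spam_reports=1_200))  # True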

Rep. Ken Buck went on an unhinged rant, claiming that Google was in cahoots with communist China and against the US government.

On that front, Rep. Jim Jordan put on quite a show, repeatedly misrepresenting various content moderation decisions as "proof" of anti-conservative bias. He misrepresented nearly every one of the examples he raised. And when a few other Reps. pointed out that he was resorting to fringe conspiracy theories, he started shouting and had to be told repeatedly to stop interrupting (and to put on his mask). Later, at the end of the hearing, he went on a bizarre rant about "cancel culture" and demanded that each of the four CEOs state whether they thought cancel culture was good or bad. What that has to do with their companies, I do not know. What that has to do with antitrust, I have even less of an idea.

A general pattern, on both sides of the aisle, was that a Representative would describe a news story or scenario involving one of the platforms in a way that misrepresented what actually happened and painted the company in the worst possible light, and then would ask a "have you stopped beating your wife?" type of question. Each of the four CEOs, when put on the spot like that, would say something along the lines of "I must respectfully disagree with the premise…" or "I don't think that's an accurate representation…" at which point (like clockwork) they were cut off by the Representative, with a stern look and something along the lines of "so you won't answer the question?!?" or "I don't want to hear about that — I just want a yes or no!"

It was… ridiculous — in a totally bipartisan manner. Cicilline was just as bad as Jordan in completely misrepresenting things and pretending he'd "caught" these companies in some bad behavior, in ways that were not even remotely accurate. This is not to say the companies haven't done questionable things, but neither Cicilline nor Jordan demonstrated any knowledge of what those things were, preferring to push fringe conspiracy theories. Others pushing fringe wacko theories included Rep. Matt Gaetz on the Republican side (who was all over the map with just plain wrong things, including demanding that the platforms commit to supporting law enforcement) and Rep. Lucy McBath on the Democratic side, who seemed very, very confused about the nature of cookies on the internet. She also completely misrepresented how Apple handled a privacy situation, suggesting that protecting users' privacy by blocking certain apps that had privacy issues was anti-competitive.

There were a few Representatives who weren't totally crazy. On the Republican side, Rep. Kelly Armstrong asked some thoughtful questions about reverse warrants (not an antitrust issue, but an important 4th Amendment one) and about Amazon's use of competitive data (but… he also repeated the debunked claim that Google tried to "defund" The Federalist, and used the story about floods of DMCA notices going to Twitch to argue that Twitch should be forced to pre-license all music, a la the EU Copyright Directive — which, of course, would harm competition, since only a few companies could actually afford to do that). On the Democratic side, Rep. Raskin rightly pointed out the hypocrisy of Republicans who support Citizens United but were mad that companies might politically support candidates they don't like (what that has to do with antitrust is beyond me, but it was a worthwhile point). Rep. Joe Neguse asked some good questions that were actually about competition, but for which there weren't very clear answers.

All in all, some will say it was just another typical Congressional hearing in which Congress displays its technological ignorance. And that may be true. But it is disappointing. What could have been a useful and productive discussion with these four important CEOs was anything but. What could have been an actual exploration of questions around market power and consumer welfare… was not. It was all just a big performance. And that’s disappointing on multiple levels. It was a waste of time, and will be used to reinforce various narratives.

But, from this end, the only narrative it reinforced was that Congress is woefully ignorant about technology and how these companies operate. And they showed few signs of actually being curious in understanding the truth.

Filed Under: antitrust, bias, big tech, david cicilline, greg steube, house judiciary, jamie raskin, jeff bezos, jim jordan, jim sensenbrenner, joe neguse, kelly armstrong, ken buck, lucy mcbath, mark zuckerberg, matt gaetz, politics, sundar pichai, techlash, tim cook
Companies: amazon, apple, facebook, google

Google CEO Admits That It's Impossible To Moderate YouTube Perfectly; CNBC Blasts Him

from the wait,-but-why? dept

Over the weekend, Google CEO Sundar Pichai gave an interview to CNN in which he admitted to exactly what we've been screaming over and over again for a few years now: it's literally impossible to do content moderation at scale perfectly. This is for a variety of reasons: first off, no one agrees on what the "correct" level of moderation is. Ask 100 people and you will likely get 100 different answers (I know this, because we did this). What many people think must be mostly "black and white" choices actually involves a tremendous amount of gray. Second, even if there were clear and easy choices to make (which there are not), at the scale of most major platforms even a tiny error rate (of either false positives or false negatives) still translates into a very large absolute number of mistakes.

So Pichai’s comments to CNN shouldn’t be seen as controversial, so much as they are explaining how large numbers work:

“It’s one of those things in which let’s say we are getting it right over 99% of the time. You’ll still be able to find examples. Our goal is to take that to a very, very small percentage, well below 1%,” he added.

This shouldn’t be that complex. YouTube’s most recent stats say that over 500 hours of content are uploaded to YouTube every minute. Assuming, conservatively, that the average YouTube video is 5 minutes (Comscore recently put the number at 4.4 minutes per video) that means around 6,000 videos uploaded every minute. That means about 8.6 million videos per day. And somewhere in the range of 250 million new videos in a month. Now, let’s say that Google is actually 99.99% “accurate” (again, a non-existent and impossible standard) in its content moderation efforts. That would still mean ~26,000 “mistakes” in a month. And, I’m sure, eventually some people could come along and find 100 to 200 of those mistakes and make a big story out of how “bad” Google/YouTube are at moderating. But, the issue is not so much the quality of moderation, but the large numbers.

Anyway, that all seems fairly straightforward, but of course, because it's Google, nothing is straightforward, and CNBC decided to take this story and spin it hyperbolically as Google CEO Sundar Pichai: YouTube is too big to fix. That, of course, is not what he's saying at all. But it's already being picked up by various folks as proof that Google is obviously too big and needs to be broken up.

Of course, what no one will actually discuss is how you would solve this problem of the law of large numbers. You can break up Google, sure, but unless you think that consumers will suddenly shift so that not too many of them use any particular video platform, whatever leading video platforms there are will always have this general challenge. The issue is not that YouTube is “too big to fix,” but simply that any platform with that much content is going to make some moderation mistakes — and, with so much content, in absolute terms, even if the moderation efforts are pretty “accurate” you’ll still find a ton of those mistakes.

I’ve long argued that a better solution is for these companies to open up their platforms to allow user empowerment and competition at the filtering level, so that various 3rd parties could effectively “compete” to see who’s better at moderating (and to allow end users to opt-in to what kind of moderation they want), but that’s got nothing to do with a platform being “too big” or needing “fixing.” It’s a recognition that — as stated at the outset — there is no “right” way to moderate content, and no one will agree on what’s proper. In such a world, having a single standard will never make sense, so we might as well have many competing ones. But it’s hard to see how that’s a problem of being “too big.”

Filed Under: content moderation, content moderation at scale, errors, false negatives, false positives, large numbers, sundar pichai
Companies: google, youtube

Nice Work EU: You've Given Google An Excuse To Offer A Censored Search Engine In China

from the handing-authoritarian-states-easy-victories dept

We’ve already explained why we think Google is making exactly the wrong move in experimenting with a government-approved censored search engine in China, called Dragonfly. However, the company continues to move forward with this idea. CEO Sundar Pichai gave an interview with the NY Times, in which he defends this move by… arguing it’s the equivalent of the “Right to Be Forgotten” in the EU, with which Google is required to comply:

One of the things that's not well understood, I think, is that we operate in many countries where there is censorship. When we follow "right to be forgotten" laws, we are censoring search results because we're complying with the law. I'm committed to serving users in China. Whatever form it takes, I actually don't know the answer. It's not even clear to me that search in China is the product we need to do today.

A few people, whom I respect, have tried to argue that this analogy is unfair. Mathew Ingram has a story at the Columbia Journalism Review that rightly points out the difference between deleting content because the subject of that content complains and deleting it because the government wants it disappeared. Former Facebook Chief Security Officer Alex Stamos argued that the comparison is "amoral and mendacious." He too agrees that there are problems with the RTBF in the EU, but says China's censorship is to a different degree:

The “right to be forgotten” is a form of censorship that has been abused by many individuals and it’s application extra-territorially should be resisted. However, China’s censorship regime is a tool to maintain the absolute control of the party-state and is in no way comparable.

I both agree and disagree with this statement. What China is doing is to a different degree. But the mechanisms and the concepts behind them are the same. Indeed, we’ve pointed out for years that any move towards internet censorship in the Western World is almost immediately seized upon by China to justify that country’s much more aggressive and egregious political censorship. Remember, back when the US was considering SOPA/PIPA, which would have censored whole websites on the basis of claims of copyright infringement, the Chinese government gleefully pointed out that the US was copying China’s approach to the internet, and pushing for a “Great Firewall” for “harmful” information. It’s just that, in the US’s case, that “harmful information” was infringing information that hurt the bottom line of a few entertainment companies, while in China, they saw it as anything that might lead to political unrest. But, as they made clear, it was the same thing: you guys want to keep “harmful” information offline, and so do we.

That push for SOPA/PIPA gave the Chinese cover to continue censoring the internet — and now the EU and its silly Right to be Forgotten are doing the same thing. So, yes, the style and degree of the censorship are not the same — but the nature of what it is and how it's done continues to give massive cover to China in dismissing any complaints about its widespread censorship regime.

That said, you might reasonably point out that Sundar Pichai should not be helping the Chinese government make this argument on the pages of the NY Times… and I'd agree with you. But at least some of the blame must fall on the EU and other governments that have increasingly moved toward internet censorship regimes. Even if their rules are adopted for a different purpose, authoritarian regimes will always seize on them to excuse their own behavior.

Filed Under: censorship, china, dragonfly, eu, right to be forgotten, sundar pichai
Companies: google

Google Surprises Everyone By… Breaking Itself Up (Kinda)

from the huh? dept

For years, there have been efforts by various competitors and governments to try to break up Google. But now the company appears to have done it itself. Sort of. Taking basically everyone by surprise, Google announced that it has formed a new "holding company" called Alphabet, made Google a wholly owned subsidiary of Alphabet, and at the same time carved other businesses out of Google, making them separate from Google but still under the purview of Alphabet. The whole thing is… weird. There's lots of speculation going on as to why, and no one seems to agree. Larry Page's letter suggests it's to allow the overall company to be more innovative — which actually is a legitimate possibility. Just this morning we noted that Google's failure with Google+ shows how the company can sometimes lumber along while startups are much more nimble. Splitting the company into totally separate entities (even if they're owned by the same holding company) at the very least has the potential to force the separate units to focus on executing on their own businesses, without worrying about stepping on the toes of other divisions. But… it also gives up some of the ability to cross-subsidize parts of the business.

Others have speculated that this was also a way to “reward” top execs like Sundar Pichai, who is now Google’s CEO — while Larry Page becomes CEO of Alphabet (and Sergey Brin is President of Google). Even if he’s still reporting to Larry, having “Google CEO” on the business card has to be seen as a promotion.

The only other thing that came to my mind was that this was some sort of reaction to all of those lawsuits and investigations into possible antitrust violations. Not that reorganizing the company is going to "fool" any regulator, but at the very least, it perhaps sets things up so that if regulators do try to break up Google, there are preset "fissures" that let the company direct the cuts more strategically.

Frankly, the whole thing seems to be leaving a lot of people scratching their heads (myself included). It may turn out to be nothing beyond just a different take on a corporate restructuring — or it may be a prelude to the company doing something much bigger that would fit much more readily into this holding company structure.

Oh, and in case you’re wondering, the company (for now at least) has taken the URL abc.xyz and it includes a weird little Easter egg, giving tribute to the fictional Google-like company in HBO’s Silicon Valley, Hooli.

Filed Under: holding company, larry page, restructuring, sundar pichai
Companies: alphabet, google