best practices – Techdirt

Jim Jordan & Elon Musk Suppressed Speech; Don’t Let Them Pretend It’s A Win For Free Speech

from the that's-the-opposite-of-free-speech dept

Up is down, left is right, day is night. And now, to Jim Jordan and Elon Musk, clear, direct government censorship is, apparently, “free speech.”

This isn’t a huge surprise, but on Thursday, the World Federation of Advertisers shut down GARM, the Global Alliance for Responsible Media, in response to legal threats from ExTwitter and Rumble, and a bullshit Congressional investigation led by Jim Jordan.

As we have detailed, GARM was set up following the mosque shootings in New Zealand, which were livestreamed. Brand advertisers were accused (arguably unfairly) of profiting off of such content, so they put together this alliance to share information about best practices on social media advertising for brand safety.

GARM was specifically a way for advertisers to set up those best practices and share them with each other, but also to share them with social media sites, to say “hey, these are the kinds of trust & safety processes we expect if we’re going to advertise.”

I disagreed with GARM about lots of things, but in a free market, where there is free speech, they should absolutely be allowed to create best practices and to talk with platforms and advertisers and advocate for better trust & safety practices in order for brands to feel safe that their ads won’t show up next to dangerous content.

All of it was entirely voluntary. Advertisers didn’t have to abide by the standards, nor did platforms. This was literally just part of the marketplace of ideas. Some advertisers advocated for efforts to be made to protect their brand safety, and some platforms agreed while others, like Rumble, did not.

All GARM was at its core was advertisers using their own freedom of expression and rights of association to try to put some pressure on platforms to be better stewards, so that advertisers weren’t putting their brands at risk. You can (perhaps reasonably!) argue that they pushed too hard, or some of their requests were unreasonable, but it’s their free speech rights.

As we’ve detailed over the last month, ExTwitter had regularly used GARM’s standards to try to convince advertisers they were “safe” and officially “excitedly” rejoined GARM as a member just last month. A few days later, Jim Jordan’s House Judiciary Committee released a blisteringly stupid and misleading report, falsely claiming that GARM was engaged in antitrust-violating collusion to punish conservative media. None of that was ever true.

However, Elon announced that he would be suing GARM and hoped that criminal charges would be filed against GARM, perhaps not realizing his own organization had rejoined GARM a week earlier and touted that relationship in its effort to attract advertisers. Earlier this week, he carried through on that plan and sued GARM for alleged antitrust violations.

The lawsuit is absolutely ridiculous. It assumes that because GARM, at times, criticized Elon’s handling of trust & safety issues, that criticism amounted to collusion, abusing a monopoly position to get advertisers to stop advertising on ExTwitter.

It is one of the most entitled, spoiled-brat lawsuits you’ll ever see. Not only does it seem to suggest that not advertising on ExTwitter is an antitrust violation, it assumes that the only reason advertisers would remove their ads from the site was not any action by the company or Elon, but rather that it must be because GARM organized a boycott (which, notably, none of the evidence shows it did). One thing is quite clear from all this: Elon seems incapable of recognizing that the consequences of his own actions fall on him. He insists it must be everyone else’s fault.

Indeed, the sense of entitlement shines through from those involved in this whole process.

For example, Rumble’s CEO Chris Pavlovski more or less admitted that if a company turns him down when he asks it to advertise, it will now get sued. The sheer, unadulterated entitlement on display here is incredible:

[screenshot of Pavlovski’s tweet]

Rumble had sued GARM alongside ExTwitter, using some of the same lawyers that Elon did. When tweeting out the details to prove that these advertisers should be added to his lawsuit, Pavlovski only showed perfectly friendly emails from companies saying “hey, look, advertising on your site won’t be good for our reputation, sorry.”

[screenshot of the emails Pavlovski shared]

That’s not illegal. It’s not collusion. It’s the marketplace of ideas saying “hey, we don’t want to associate with you.” But, according to Rumble, that alone deserves a lawsuit.

Anyway, the World Federation of Advertisers has apparently given in to this lawfare from Elon and Jim Jordan, and announced on Thursday that it was shutting down GARM because of all of this.

In other words, Elon, Jordan, and others have used the power of the state, both in the form of lawsuits and congressional investigations, to browbeat advertisers into no longer speaking up about ways to keep social media sites safe for their brands.

This is the exact opposite of free speech. It’s literally using the power of the state to shut up companies that were expressing views Elon and Jordan didn’t like.

And, so, of course, they and their fans are celebrating this state-backed censorship as a “win for free speech.” It’s ridiculously Orwellian.


This is not a “win” for the First Amendment in any way. It is, in every way, the opposite. The House Judiciary Committee, under Jim Jordan, abused the power of the state to stop companies from talking about which sites they felt were safe for brands, or what those sites could do to be better.

And, of course, a bunch of other very foolish people repeated more of this kind of nonsense, including some of MAGA’s favorite journalists, who pretend to support free speech. Ben Shapiro called it an “important win for free speech principles,” which is just disconnected from reality.

Linda Yaccarino claims it proves that “no small group should be able to monopolize what gets monetized.” This makes no sense at all. No small group monopolized anything. They just tried to put in place some basic best practices to protect their brands and no one had to agree with them at all (and many didn’t).

And if Linda or Elon thinks this will magically make advertisers want to come back to ExTwitter, they’re even more delusional than I thought. Who would ever want to advertise on a platform that sued advertisers for leaving?

Filed Under: 1st amendment, advertising, antitrust, best practices, censorship, elon musk, entitlement, free speech, garm, jim jordan
Companies: garm, rumble, twitter, wfa, world federation of advertisers, x

How The EARN IT Act Is Significantly More Dangerous Than FOSTA

from the do-not-underestimate-the-danger dept

I’ve already explained the dangers of the EARN IT Act, which is supported by 19 Senators, who are misleading people with a “fact” sheet that is mostly full of myths. As Senator Wyden has explained, EARN IT will undoubtedly make the problem of child sexual abuse material (CSAM) worse, not better.

In my initial posts, I compared it to FOSTA, because EARN IT repeats the basics of the FOSTA playbook. But — and this is very important since EARN IT appears to have significant momentum in Congress — it’s not just FOSTA 2.0, it’s significantly more dangerous in multiple different ways that haven’t necessarily been highlighted in most discussions of the law.

First, let’s look at why FOSTA was already so problematic — and why many in Congress have raised concerns about the damage done by FOSTA or called for the outright repeal of FOSTA. FOSTA “worked” by creating a carveout from Section 230 for anything related to “sex trafficking.” As we’ve explained repeatedly, the false premise of the bill is that if Section 230 “doesn’t protect” certain types of content, that will magically force companies to “stop” the underlying activity.

Except, that’s wrong. What Section 230 does is provide immunity not just for the hosting of content, but for the decisions a company takes to deal with that content. By increasing the liability, you actually disincentivize websites from taking action against such content, because any action to deal with “sex trafficking” content on your platform can be turned around and used against you in court to show you had “knowledge” that your site was used for trafficking. The end result, then, is that many sites either shut down entirely or just put blanket bans on perfectly legal activity to avoid having to carefully review anything.

And, as we’ve seen, the impact of FOSTA was putting women in very real danger, especially sex workers. Whereas in the past they were able to take control of their own business via websites, FOSTA made that untenable and risky for the websites. This actually increased the amount of sex trafficking, because it opened up more opportunity for traffickers to step in and provide the services that sex workers had formerly relied on those websites for to control their own lives. This put them at much greater risk of abuse and death. And, as some experts have highlighted, these were not unintended consequences. They were consequences that were widely known and expected from the bill.

On top of that, even though the DOJ warned Congress before the law was passed that it would make it more difficult to catch sex traffickers, Congress passed it anyway and patted each other on the back, claiming that they had successfully “fought sex trafficking.” Except, since then, every single report has said the opposite is true. Multiple police departments have explained that FOSTA has made it harder for law enforcement to track down sex traffickers, even as it has made it easier for traffickers to operate.

Last year, the (required, but delivered late) analysis of FOSTA by the Government Accountability Office found that the law made it more difficult to track down sex traffickers and did not seem to enable the DOJ to do anything it couldn’t (but didn’t!) do before. The DOJ just didn’t seem to need this law that Congress insisted it needed, and has basically not used it. Instead, what FOSTA has enabled in court is not an end to sex trafficking, but ambulance-chasing lawyers suing companies over nonsense — companies like Salesforce and MailChimp, which are not engaging in sex trafficking, have had to fight FOSTA cases in court.

So, FOSTA is already a complete disaster by almost any measure. It has put women at risk. It has helped sex traffickers. It has made the job of law enforcement more difficult in trying to find and apprehend sex traffickers.

Already you should be wondering why anyone in Congress would be looking to repeat that mess all over again.

But, instead of just repeating it, they’re making it significantly worse. EARN IT has a few slight differences from FOSTA, each of which make the law much more dangerous. And, incredibly, it’s doing this without being able to point to a single case in which Section 230 got in the way of prosecution of CSAM.

The state law land mine:

Section 230 already exempts federal criminal law violations. With FOSTA there was a push to also exempt state criminal law. This has been a pointed desire of state Attorneys General going back at least a decade and in some cases further (notably: when EARN IT lead sponsor Richard Blumenthal was Attorney General of Connecticut he was among the AGs who asked for Section 230 to exempt state criminal law).

Some people argue that since federal criminal law is already exempt, a state law exemption would be no big deal — which only highlights their ignorance of the nature of state criminal laws. Let’s just say that states have a habit of passing some incredibly ridiculous laws — and those laws can be impossible to parse (and can even be contradictory). As you may have noticed, many states have become less laboratories of democracy and much more the testing ground for totalitarianism.

Making internet companies potentially criminally liable based on a patchwork of 50+ state laws opens them up to all sorts of incredible mischief, especially when you’re dealing with state AGs whose incentives are, well, suspect.

CDT has detailed examples of conflicting state laws and how they would make it nearly impossible to comply:

For instance, in Arkansas it is illegal for an “owner, operator or employee” of online services to “knowingly fail” to report instances of child pornography on their network to “a law enforcement official.” Because this law has apparently never been enforced (it was passed in 2001, five years after Section 230, which preempts it) it is not clear what “knowingly” means. Does the offender have to know that a specific subscriber transmitted a specific piece of CSAM? Or is it a much broader concept of “knowledge,” for example that some CSAM is present somewhere on their network? To whom, exactly, do these providers report CSAM? How would this law apply to service providers located outside of Arkansas, but which may have users in Arkansas?

Maryland enables law enforcement to request online services take down alleged CSAM, and if the service provider doesn’t comply, law enforcement can obtain a court order to have it taken down without the court confirming the content is actually CSAM. Some states simply have incredibly broad statutes criminalizing the transmission of CSAM, such as Florida: “any person in this state who knew or reasonably should have known that he or she was transmitting child pornography . . . to another person in this state or in another jurisdiction commits a felony of the third degree.”

Finally, some states have laws that prohibit the distribution of “obscene” materials to minors without requiring knowledge of the character of the material or to whom the material is transmitted. For example, Georgia makes it illegal “to make available [obscene material] by allowing access to information stored in a computer” if the defendant has a “good reason to know the character of the material” and “should have known” the user is a minor. State prosecutors could argue that these laws are “regarding” the “solicitation” of CSAM on the theory that many abusers send obscene material to their child victims as part of their abuse.

Some early versions had a similar carve-out for state criminal laws, but after similar concerns were raised with Congress, it was modified so that it only applied to state criminal laws if the conduct was also a violation of federal law. EARN IT has no such condition. In other words, EARN IT opens up the opportunity for significantly more mischief for both state legislatures and state Attorneys General: states can modify their laws in dangerous ways, and state AGs can then go after companies for criminal violations. Given the current power of the “techlash” to attract grandstanding AGs who wish to abuse their power to shake down internet companies for headlines, all sorts of nonsense is likely to be unleashed by this unbounded state law clause.

The encryption decoy:

I discussed this a bit in my original post, but it’s worth spending some time on this as well. When EARN IT was first introduced, the entire tech industry realized that it was clearly designed to try to completely undermine end-to-end encryption (a goal of law enforcement for quite a while). Realizing that those concerns were getting too much negative attention for the bill, a “deal” was worked out to add Senator Pat Leahy’s amendment which appeared to say that the use of encryption shouldn’t be used as evidence of a violation of the law. However, in a House companion bill that came out a few months later, that language was modified in ways that looked slight, but actually undermined the encryption carve out entirely. From Riana Pfefferkorn, who called out this nonsense two years ago:

To recap, Leahy’s amendment attempts (albeit imperfectly) to foreclose tech providers from liability for online child sexual exploitation offenses “because the provider”: (1) uses strong encryption, (2) can’t decrypt data, or (3) doesn’t take an action that would weaken its encryption. It specifies that providers “shall not be deemed to be in violation of [federal law]” and “shall not otherwise be subject to any [state criminal charge] … or any [civil] claim” due to any of those three grounds. Again, I explained here why that’s not super robust language: for one thing, it would prompt litigation over whether potential liability is “because of” the provider’s use of encryption (if so, the case is barred) or “because of” some other reason (if so, no bar).

That’s a problem in the House version too (found at pp. 16-17), which waters Leahy’s language down to even weaker sauce. For one thing, it takes out Leahy’s section header, “Cybersecurity protections do not give rise to liability,” and changes it to the more anodyne “Encryption technologies.” True, section headers don’t actually have any legal force, but still, this makes it clear that the House bill does not intend to bar liability for using strong encryption, as Leahy’s version ostensibly was supposed to do. Instead, it merely says those three grounds shall not “serve as an independent basis for liability.” The House version also adds language not found in the Leahy amendment that expressly clarifies that courts can consider otherwise-admissible evidence of those three grounds.

What does this mean? It means that a provider’s encryption functionality can still be used to hold the provider liable for child sexual exploitation offenses that occur on the encrypted service — just not as a stand-alone claim. As an example, WhatsApp messages are end-to-end encrypted (E2EE), and WhatsApp lacks the information needed to decrypt them. Under the House EARN IT bill, those features could be used as evidence to support a court finding that WhatsApp was negligent or reckless in transmitting child sex abuse material (CSAM) on its service in violation of state law (both of which are a lower mens rea requirement than the “actual knowledge” standard under federal law). Plus, I also read this House language to mean that if WhatsApp got convicted in a criminal CSAM case, the court could potentially consider WhatsApp’s encryption when evaluating aggravating factors at sentencing (depending on the applicable sentencing laws or guidelines in the jurisdiction).

In short, so long as the criminal charge or civil claim against WhatsApp has some “independent basis” besides its encryption design (i.e., its use of E2EE, its inability to decrypt messages, and its choice not to backdoor its own encryption), that design is otherwise fair game to use against WhatsApp in the case. That was also a problem with the Leahy amendment, as said. The House version just makes it even clearer that EARN IT doesn’t really protect encryption at all. And, as with the Leahy amendment, the foreseeable result is that EARN IT will discourage encryption, not protect it. The specter of protracted litigation under federal law and/or potentially dozens of state CSAM laws with variable mens rea requirements could scare providers into changing, weakening, or removing their encryption in order to avoid liability. That, of course, would do a grave disservice to cybersecurity — which is probably just one more reason why the House version did away with the phrase “cybersecurity protections” in that section header.

So, take a wild guess which version is in this new EARN IT? Yup. It’s the House version. Which, as Riana describes, means that if this bill becomes law, encryption becomes a liability for every website.

FOSTA was bad, but at least it didn’t also undermine the most important technology for protecting our data and communications.

The “voluntary” best practices committee tripwire:

Another difference between FOSTA and EARN IT is that EARN IT includes this very, very strange best practices committee, called the “National Commission on Online Child Sexual Exploitation Prevention” or NCOSEP. I’m going to assume the similarity in acronym to the organization NCOSE (The National Center on Sexual Exploitation — formerly Morality in Media — which has been beating the drum for this law as part of a plan to outlaw all pornography) is on purpose.

In the original version of EARN IT, this commission wouldn’t just come up with “best practices”: Section 230 protections would then only be available to companies that followed those best practices. That puts a tremendous amount of power in the hands of the 19 Commissioners, many of whose seats are designated for law enforcement officials, who don’t have the greatest history of caring one bit about the public’s rights or privacy. The Commission is also heavily weighted against those who understand content moderation and technology. It would include five law enforcement members (the Attorney General, plus four others, including at least two prosecutors) and four “survivors of online child sexual exploitation,” but only two civil liberties experts and only two computer science or encryption experts.

In other words, the commission is heavily biased towards moral panic, and away from respecting privacy rights or the limits of technology.

Defenders of the bill note that this Commission is effectively powerless: in theory, the best practices it comes up with don’t carry any additional legal weight. But the reality is that we know such a set of best practices, coming from a government commission, will undoubtedly be used over and over again in court to argue that this or that company — by not following every such best practice — is somehow “negligent” or otherwise malicious in intent. And judges buy that kind of argument all the time (even when best practices come from private organizations, not the government).

So the best practices are likely to be legally meaningful in reality, even as the law’s backers insist they’re not. Of course, this raises a separate question: if the Commission’s best practices are meaningless, why are they even in the bill? But since they’ll certainly be used in court, they will carry great power — and the majority of the Commission will be made up of people who have no experience with the challenges and impossibility of content moderation at scale, no experience with encryption, and no experience with the dynamic and rapidly evolving nature of fighting content like CSAM. They are going to come up with “best practices” while the actual experts in technology and content moderation are in the minority on the panel.

That is yet another recipe for disaster that goes way beyond FOSTA.

The surveillance mousetrap:

Undermining encryption would already be a disaster for privacy and security, but this bill goes even further in its attack on privacy. While it’s not explicitly laid out in the bill, the myths and facts document that Blumenthal & Graham are sending around reveals — repeatedly — that they think that the way to protect yourself against the liability regime this bill imposes is to scan everything. That is, this is really a surveillance bill in disguise.

Repeatedly in the document, the Senators claim that surveillance scanning tools are “simple [and] readily accessible” and suggest, over and over again, that it’s only companies that don’t spy on every bit of data that would have anything to worry about under this bill.

It’s kind of incredible that this comes just a few months after there was a huge public uproar about Apple’s plans to scan people’s private data. Experts highlighted how such automated scanning was extremely dangerous and open to abuse and serious privacy concerns. Apple eventually backed down.

But it’s clear from Senators Blumenthal & Graham’s “myths and facts” document that they think any company that doesn’t try to surveil everything should face criminal liability.

And that becomes an even bigger threat when you realize how much of our private lives and data have now moved into the cloud. Whereas it wasn’t that long ago that we’d store our digital secrets on local machines, these days, more and more people store more and more of their information in the cloud or on devices with continuous internet access. And Blumenthal and Graham have laid bare that if companies do not scan their cloud storage and devices they have access to, they should face liability under this bill.

So, beyond the threat of crazy state laws, beyond the threat to encryption, beyond the threat from the wacky biased Commission, this bill also suggests the only way to avoid criminal liability is to spy on every user.

So, yes, more people have now recognized that FOSTA was a dangerous disaster that literally has gotten people killed. But EARN IT is way, way worse. This isn’t just a new version of FOSTA. This is a much bigger, much more dangerous, much more problematic bill that should never be allowed to become law — but has tremendous momentum to become law in a very short period of time.

Filed Under: best practices, earn it, encryption, fosta, lindsey graham, richard blumenthal, scanning, section 230, state criminal law, surveillance

Beware Of Facebook CEOs Bearing Section 230 Reform Proposals

from the good-for-facebook,-not-good-for-the-world dept

As you may know, tomorrow Congress is having yet another hearing with the CEOs of Google, Facebook, and Twitter, in which various grandstanding politicians will seek to rake Mark Zuckerberg, Jack Dorsey, and Sundar Pichai over the coals regarding things that those grandstanding politicians think Facebook, Twitter, and Google “got wrong” in their moderation practices. Some of the politicians will argue that these sites left up too much content, while others will argue they took down too much — and either way they will demand to know “why” individual content moderation decisions were made differently than they, the grandstanding politicians, wanted them to be made. We’ve already highlighted one approach that the CEOs could take in their testimony, though that is unlikely to actually happen. This whole dog and pony show seems all about no one being able to recognize one simple fact: that it’s literally impossible to have a perfectly moderated platform at the scale of humankind.

That said, one thing to note about these hearings is that each time, Facebook’s CEO Mark Zuckerberg inches closer to pushing Facebook’s vision for rethinking internet regulations around Section 230. Facebook, somewhat famously, was the company that caved on FOSTA, and bit by bit, Facebook has effectively led the charge in undermining Section 230 (even as so many very wrong people keep insisting we need to change 230 to “punish” Facebook). That framing isn’t true: Facebook is now perhaps the leading voice for changing 230, because the company knows that it can survive without it. Others? Not so much. Last February, Zuckerberg made it clear that Facebook was on board with the plan to undermine 230. Last fall, during another of these Congressional hearings, he more emphatically supported reforms to 230.

And, for tomorrow’s hearing, he’s driving the knife further into 230’s back by outlining a plan to further cut away at 230. The relevant bit from his testimony is here:

One area that I hope Congress will take on is thoughtful reform of Section 230 of the Communications Decency Act.

Over the past quarter-century, Section 230 has created the conditions for the Internet to thrive, for platforms to empower billions of people to express themselves online, and for the United States to become a global leader in innovation. The principles of Section 230 are as relevant today as they were in 1996, but the Internet has changed dramatically. I believe that Section 230 would benefit from thoughtful changes to make it work better for people, but identifying a way forward is challenging given the chorus of people arguing — sometimes for contradictory reasons — that the law is doing more harm than good.

Although they may have very different reasons for wanting reform, people of all political persuasions want to know that companies are taking responsibility for combatting unlawful content and activity on their platforms. And they want to know that when platforms remove harmful content, they are doing so fairly and transparently.

We believe Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content. Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades its detection — that would be impractical for platforms with billions of posts per day — but they should be required to have adequate systems in place to address unlawful content.

Definitions of an adequate system could be proportionate to platform size and set by a third party. That body should work to ensure that the practices are fair and clear for companies to understand and implement, and that best practices don’t include unrelated issues like encryption or privacy changes that deserve a full debate in their own right.

In addition to concerns about unlawful content, Congress should act to bring more transparency, accountability, and oversight to the processes by which companies make and enforce their rules about content that is harmful but legal. While this approach would not provide a clear answer to where to draw the line on difficult questions of harmful content, it would improve trust in and accountability of the systems and address concerns about the opacity of process and decision-making within companies.

As reform ideas go, this is certainly less ridiculous and braindead than nearly every bill introduced so far. It attempts to deal with the largest concerns that most people have — what happens when illegal, or even “lawful but awful,” activity is happening on websites and those websites have “no incentive” to do anything about it (or, worse, incentive to leave it up). It also responds to some of the concerns about a lack of transparency. Finally, to some extent it makes a nod at the idea that the largest companies can handle some of this burden, while other companies cannot — and it makes it clear that it does not support anything that would weaken encryption.

But that doesn’t mean it’s a good idea. In some ways, this is the flip side of the discussion that Mark Zuckerberg had many years ago regarding how “open” Facebook should be regarding third party apps built on the back of Facebook’s social graph. In a now infamous email, Mark told someone that one particular plan “may be good for the world, but it’s not good for us.” I’d argue that this 230 reform plan that Zuckerberg lays out “may be good for Facebook, but not good for the world.”

But understanding why requires some thought, nuance, and predictions about how this will actually play out.

First, let’s go back to the simple question of what problem we are actually trying to solve. Based on the framing of the panel — and of Zuckerberg’s testimony — it certainly sounds like there’s a huge problem of companies not having any incentive to clean up the garbage on the internet. We’ve certainly heard many people claim that, but it’s just not true. It’s only true if you think that the only incentives in the world are the laws of the land you’re in. But that’s not true and has never been true. Websites do a ton of moderation/trust & safety work not because of what legal structure is in place but because (1) it’s good for business, and (2) very few people want to be enabling cesspools of hate and garbage.

If you don’t clean up garbage on your website, your users get mad and go away. Or, in other cases, your advertisers go away. There are plenty of market incentives to make companies take charge. And of course, not every website is great at it, but that’s always been a market opportunity — and lots of new sites and services pop up to create “friendlier” places on the internet in an attempt to deal with those kinds of failures. And, indeed, lots of companies have to keep changing and iterating in their moderation practices to deal with the fact that the world keeps changing.

Indeed, if you read through the rest of Zuckerberg’s testimony, it’s one example after another of things that the company has already done to clean up messes on the platform. And each one describes putting huge resources in terms of money, technology, and people to combat some form of disinformation or other problematic content. Four separate times, Zuckerberg describes programs that Facebook has created to deal with those kinds of things as “industry-leading.” But those programs are incredibly costly. He talks about how Facebook now has 35,000 people working in “safety and security,” which is more than triple the 10,000 people in that role five years ago.

So, these proposals to create a “best practices” framework, judged by some third party, in which you only get to keep your 230 protections if you meet those best practices, won’t change anything for Facebook. Facebook will argue that its practices are the best practices. That’s effectively what Zuckerberg is saying in this testimony. But that will harm everyone else who can’t match that. Most companies aren’t going to be able to do this, for example:

Four years ago, we developed automated techniques to detect content related to terrorist organizations such as ISIS, al Qaeda, and their affiliates. We’ve since expanded these techniques to detect and remove content related to other terrorist and hate groups. We are now able to detect and review text embedded in images and videos, and we’ve built media-matching technology to find content that’s identical or near-identical to photos, videos, text, and audio that we’ve already removed. Our work on hate groups focused initially on those that posed the greatest threat of violence at the time; we’ve now expanded this to detect more groups tied to different hate-based and violent extremist ideologies. In addition to building new tools, we’ve also adapted strategies from our counterterrorism work, such as leveraging off-platform signals to identify dangerous content on Facebook and implementing procedures to audit the accuracy of our AI’s decisions over time.

And, yes, he talks about making those rules “proportionate to platform size,” but there’s a whole lot of trickiness in making that work in practice. Size of what, exactly? Userbase? Revenue? How do you determine and where do you set the limits? As we wrote recently in describing our “test suite” of internet companies for any new internet regulation, there are so many different types of companies, dealing with so many different markets, that it wouldn’t make any sense to apply a single set of rules or best practices across each one. Because each one is very, very different. How do you apply similar “best practices” to a site like Wikipedia — where the users themselves do the moderation — and to a site like Notion, where people are setting up their own database/project management setups, some of which may be shared with others? Or how do you set up best practices that will work for fan fiction communities and also apply to something like Cameo?

And, even the “size” part can be problematic. In practice, it creates so many wacky incentives. The classic example of this is in France, where stringent labor laws kick in only for companies at 50 employees. So, in practice, there are a huge number of French companies that have 49 employees. If you create thresholds, you get weird incentives. Companies will seek to limit their own growth in unnatural ways just to avoid the burden, or if they’re going to face the burden, they may make a bunch of awkward decisions in figuring out how to “comply.”

And the end result is just going to be a lot of awkwardness and silly, wasteful lawsuits arguing that companies somehow fail to meet “best practices.” At worst, you end up with an incredible level of homogenization. Platforms will feel the need to simply adopt content moderation policies identical to ones that have already been adjudicated. It may also create market opportunities for extractive third-party “compliance” companies that promise to run your content moderation practices exactly the way Facebook does, since those will be deemed “industry-leading,” of course.

The politics of this obviously make sense for Facebook. It’s not difficult to understand how Zuckerberg gets to this point. Congress is putting tremendous pressure on him and continually attacking the company’s perceived (and certainly, sometimes real) failings. So, for him, the framing is clear: set up some rules to deal with the fake problem that so many insist is real, of there being “no incentive” for companies to do anything to deal with disinformation and other garbage, knowing full well that (1) Facebook’s own practices will likely define “best practices” or (2) that Facebook will have enough political clout to make sure that any third party body that determines these “best practices” is thoroughly captured so as to make sure that Facebook skates by. But all those other platforms? Good luck. It will create a huge mess as everyone tries to sort out what “tier” they’re in, and what they have to do to avoid legal liability — when they’re all already trying all sorts of different approaches to deal with disinformation online.

Indeed, one final problem with this “solution” is that you don’t deal with disinformation by homogenization. Disinformation and disinformation practices continually evolve and change over time. The amazing and wonderful thing that we’re seeing in the space right now is that tons of companies are trying very different approaches to dealing with it, and learning from those different approaches. That experimentation and variety is how everyone learns and adapts and gets to better results in the long run, rather than saying that a single “best practices” setup will work. Indeed, zeroing in on a single best practices approach, if anything, could make disinformation worse by helping those with bad intent figure out how to best game the system. The bad actors can adapt, while this approach could tie the hands of those trying to fight back.

Indeed, that alone is the very brilliance of Section 230’s own structure. It recognizes that the combination of market forces (users and advertisers getting upset about garbage on the websites) and the ability to experiment with a wide variety of approaches, is how best to fight back against the garbage. By letting each website figure out what works best for their own community.

As I started writing this piece, Sundar Pichai’s testimony for tomorrow was also released. And it makes this key point about how 230, as is, is how to best deal with misinformation and extremism online. In many ways, Pichai’s testimony is similar to Zuckerberg’s. It details all these different (often expensive and resource intensive) steps Google has taken to fight disinformation. But when it gets to the part about 230, Pichai’s stance is the polar opposite of Zuckerberg’s. Pichai notes that they were able to do all of these things because of 230, and changing that would put many of these efforts at risk:

These are just some of the tangible steps we’ve taken to support high quality journalism and protect our users online, while preserving people’s right to express themselves freely. Our ability to provide access to a wide range of information and viewpoints, while also being able to remove harmful content like misinformation, is made possible because of legal frameworks like Section 230 of the Communications Decency Act.

Section 230 is foundational to the open web: it allows platforms and websites, big and small, across the entire internet, to responsibly manage content to keep users safe and promote access to information and free expression. Without Section 230, platforms would either over-filter content or not be able to filter content at all. In the fight against misinformation, Section 230 allows companies to take decisive action on harmful misinformation and keep up with bad actors who work hard to circumvent their policies.

Thanks to Section 230, consumers and businesses of all kinds benefit from unprecedented access to information and a vibrant digital economy. Today, more people have the opportunity to create content, start a business online, and have a voice than ever before. At the same time, it is clear that there is so much more work to be done to address harmful content and behavior, both online and offline.

Regulation has an important role to play in ensuring that we protect what is great about the open web, while addressing harm and improving accountability. We are, however, concerned that many recent proposals to change Section 230 — including calls to repeal it altogether — would not serve that objective well. In fact, they would have unintended consequences — harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.

We might better achieve our shared objectives by focusing on ensuring transparent, fair, and effective processes for addressing harmful content and behavior. Solutions might include developing content policies that are clear and accessible, notifying people when their content is removed and giving them ways to appeal content decisions, and sharing how systems designed for addressing harmful content are working over time. With this in mind, we are committed not only to doing our part on our services, but also to improving transparency across our industry.

That’s standing up for the law that helped enable the open internet, not tossing it under the bus because it’s politically convenient. It won’t make politicians happy. But it’s the right thing to say — because it’s true.

Filed Under: adaptability, best practices, content moderation, mark zuckerberg, section 230, sundar pichai, transparency
Companies: facebook, google

Content Moderation Best Practices for Startups

from the framework-for-the-internet-of-tomorrow dept

To say content moderation has become a hot topic over the past few years would be an understatement. The conversation has quickly shifted from how best to deal with pesky trolls and spammers straight into the world of intensely serious topics like genocide and the destabilization of democracies.

While this discussion often centers around global platforms like Facebook and Twitter, even the smallest of communities can struggle with content moderation. Just a limited number of toxic members can have an outsize effect on a community’s behavioral norms.

That’s why the issue of content moderation needs to be treated as a priority for all digital communities, large and small. As evidenced by its leap from lower-order concern to front-page news, content moderation is deserving of more attention and care than most are giving it today. As I see it, it’s a first-class engineering problem that calls for a first-class solution. In practical terms, that means providing:

  1. accessible, flexible policies and procedures that account for the shades of gray moderators see day to day; and
  2. technology that makes those policies and procedures feasible, affordable, and effective.

Fortunately, this doesn’t have to be a daunting task. I’ve spent years having conversations with platforms that are homes to tens to hundreds of millions of monthly active users, along with advisors spanning the commercial, academic, and non-profit sectors. From these conversations, I’ve created this collection of content moderation and community building best practices for platforms of all sizes.

Content policies

  1. Use understandable policies.

This applies to both the policies you publish externally and the more detailed, execution-focused version of these policies that help your moderators make informed and consistent decisions. While the decisions and trade-offs underlying these policies are likely complex, once resolution is reached the policies themselves need to be expressed in simple terms so that users can easily understand community guidelines and moderators can more easily recognize violations.

When the rules aren’t clear, two problems arise: (i) moderators may have to rely on gut instincts rather than process, which can lead to inconsistency; and (ii) users lose trust because policies appear arbitrary. Consider providing examples of acceptable and unacceptable behaviors to help both users and moderators see your policies in action (many examples will be more clarifying than just a few). Again, this is not to say that creating policies is an easy process; there will be many edge cases that make it challenging. We touch more on this below.

  1. Publicize policies and changes.

Don’t pull the rug out from under your users. Post policies in an easy-to-find place, and notify users when they change. How to accomplish the latter will depend on your audience, but you should make a good faith effort to reach them. For some, this may mean emailing; for others, a post pinned to the top of a message board will suffice.

  1. Build policies on top of data.

When your policies are called into question, you want to be able to present a thoughtful approach to their creation and maintenance. Policies based on intuition or haphazard responses to problems will likely cause more issues in the long run. Grounding your content policies on solid facts will make your community a healthier, more equitable place for users.

  1. Iterate.

Times change, and what works when you start your community won’t necessarily work as it grows. For instance, new vocabulary may come into play, and slurs can be reappropriated by marginalized groups as counterspeech. This can be a great opportunity to solicit feedback from your community to both inform changes and more deeply engage users. Keep in mind that change need not be disruptive — communities can absorb lots of small, incremental changes or clarifications to policies.

Harassment and abuse detection

  1. Be proactive.

Addressing abusive content after it’s been posted generally only serves to highlight flaws and omissions in your policies, and puts the onus of moderation on users. Proactive moderation can make use of automated initial detection and human moderators working in concert. Automated systems can flag potentially abusive content, after which human moderators with a more nuanced understanding of your community can jump in to make a final call.
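To make that division of labor concrete, here is a minimal sketch (in Python, with made-up names and placeholder thresholds) of how a classifier score might route a post straight to publication, into a human review queue, or into an automatic hold. It is illustrative only, not a description of any particular platform’s pipeline.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Post:
    post_id: str
    author: str
    text: str

@dataclass
class ReviewQueue:
    """Holds posts an automated model flagged for a human decision."""
    items: List[Post] = field(default_factory=list)

    def enqueue(self, post: Post, score: float) -> None:
        print(f"queued {post.post_id} for human review (score={score:.2f})")
        self.items.append(post)

def route(post: Post, score_fn: Callable[[str], float], queue: ReviewQueue,
          review_threshold: float = 0.5, remove_threshold: float = 0.95) -> str:
    """Route a post based on an abuse-likelihood score in [0, 1].

    Scores above remove_threshold are held automatically; scores in the
    middle band go to a human moderator; everything else is published.
    The thresholds here are placeholders to be tuned against real data.
    """
    score = score_fn(post.text)
    if score >= remove_threshold:
        return "held_for_removal"
    if score >= review_threshold:
        queue.enqueue(post, score)
        return "pending_human_review"
    return "published"

# Toy scoring function standing in for a real classifier.
def toy_score(text: str) -> float:
    return 0.9 if "spam" in text.lower() else 0.1

queue = ReviewQueue()
print(route(Post("p1", "alice", "Check out my totally-not-spam link"), toy_score, queue))
print(route(Post("p2", "bob", "Nice photo!"), toy_score, queue))
```

The key design point is that automation narrows the firehose down to the items worth a human’s attention, while the final, nuanced call stays with a moderator.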

  1. Factor in context.

Words or phrases that are harmful in one setting may not be in another. Simple mechanisms like word filters and pattern matching are inadequate for this task, as they tend to under-censor harmful content and over-censor non-abusive content. Having policies and systems that can negotiate these kinds of nuances is critical to maintaining a platform’s health.
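A tiny, hypothetical illustration of why that is — the classic “Scunthorpe problem” — using a deliberately crude blocklist; the terms and behavior shown here are purely illustrative:

```python
import re

# A naive blocklist filter: matches banned strings anywhere in the text.
BANNED = ["ass"]  # a deliberately crude, hypothetical entry

def naive_filter(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in BANNED)

# Over-censors: flags an innocuous word that merely contains the substring.
print(naive_filter("The class assignment is due Friday"))    # True (false positive)

# Under-censors: misses trivial obfuscation of genuinely abusive phrasing.
print(naive_filter("you a$$hole"))                            # False (false negative)

# A word-boundary regex helps a little, but still cannot judge intent or context.
def boundary_filter(text: str) -> bool:
    return any(re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE) for term in BANNED)

print(boundary_filter("The class assignment is due Friday"))  # False
print(boundary_filter("you a$$hole"))                         # still False
```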

  1. Create a scalable foundation.

Relying on human moderation and sparse policies may work when your goal is to get up and running, but can create problems down the road. As communities grow, the complexity of expression and behavior grows. Establishing policies that can handle increased scale and complexity over time can save time and money — and prevent harassment — in the long term.

  1. Brace for abuse.

There’s always the danger of persistent bad actors poisoning the well for an entire community. They may repeatedly test keyword dictionaries to find gaps, or manipulate naive machine learning-based systems to “pollute the well.” Investing in industrial-grade detection tooling early on is the most effective way to head off these kinds of attacks.

  1. Assess effectiveness.

No system is infallible, so you’ll need to build regular evaluations of your moderation system into your processes. Doing so will help you understand whether a given type of content is being identified correctly or incorrectly — or missed entirely. That last part is perhaps the biggest problem you’ll face. I recommend using production data to build evaluation sets, allowing you to track performance over time.
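As a rough sketch (not a prescription), here is one way to score a moderation system against a hand-labeled evaluation set sampled from production; the labels and numbers below are made up:

```python
from typing import List, Tuple

def evaluate(predictions: List[bool], labels: List[bool]) -> Tuple[float, float, float]:
    """Compare system flags against human-reviewed ground truth.

    Returns (precision, recall, miss_rate). `True` means "abusive".
    The miss rate captures the hardest problem called out above:
    abusive content the system never surfaced at all.
    """
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum((not p) and l for p, l in zip(predictions, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    miss_rate = fn / (tp + fn) if tp + fn else 0.0
    return precision, recall, miss_rate

# Hypothetical evaluation set sampled from production and labeled by moderators.
flags = [True, True, False, False, True, False]
truth = [True, False, False, True, True, False]
p, r, m = evaluate(flags, truth)
print(f"precision={p:.2f} recall={r:.2f} miss_rate={m:.2f}")
```

Re-running the same evaluation on fresh production samples every month (or after every policy change) is what turns this from a one-off audit into genuine performance tracking.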

Moderation actions

  1. Act swiftly.

Time is of the essence. The longer an offensive post remains, the more harm can come to your users and your community’s reputation. Inaction or delayed response can create the perception that your platform tolerates hateful or harassing content, which can lead to a deterioration of user trust.

  1. Give the benefit of the doubt.

From time to time, even “good” community members may unintentionally post hurtful content. That’s why it’s important to provide ample notice of disciplinary actions like suspensions. Doing so will allow well-intentioned users to course-correct, and, in the case of malicious users, provide a solid basis for more aggressive measures in the future.

  1. Embrace transparency.

One of the biggest risks in taking action against a community member is the chance you’ll come across as capricious or unjustified. Regularly reporting anonymized, aggregated moderation actions will foster a feeling of safety among your user base.

  1. Prepare for edge cases.

Just as you can’t always anticipate new terminology, there will likely be incidents your policies don’t clearly cover. One way to handle these cases is a process that escalates them to an arbiter who holds final authority.

Another method is to imagine the content or behavior to be 10,000 times as common as it is today. The action you would take in that scenario can inform the action you take today. Regardless of the system you develop, be sure to document all conversations, debates, and decisions. And once you’ve reached a decision, formalize it by updating your content policy.

  1. Respond appropriately.

Typically, only a small portion of toxic content comes from persistent, determined bad actors. The majority of incidents are due to regular users having an off-day. That’s why it’s important to not apply draconian measures like permanent bans at the drop of a hat. Lighter measures like email or in-app warnings, content removal, and temporary bans send a clear signal about unacceptable behavior while allowing users to learn from their mistakes.
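One possible shape for such a graduated response, sketched in Python with entirely hypothetical cutoffs and categories (a real policy would tune both):

```python
from enum import Enum

class Action(Enum):
    WARNING = "in-app warning"
    REMOVE = "content removal"
    TEMP_BAN = "temporary ban"
    PERMANENT_BAN = "permanent ban"

def choose_action(prior_violations: int, severe: bool) -> Action:
    """Pick an escalating response rather than banning on a first offense.

    `severe` marks the rare cases (e.g. credible threats, CSAM) where
    skipping straight to the strongest remedy is warranted. The exact
    ladder and cutoffs here are placeholders for a real policy.
    """
    if severe:
        return Action.PERMANENT_BAN
    if prior_violations == 0:
        return Action.WARNING
    if prior_violations == 1:
        return Action.REMOVE
    if prior_violations <= 3:
        return Action.TEMP_BAN
    return Action.PERMANENT_BAN

print(choose_action(0, severe=False).value)  # in-app warning
print(choose_action(2, severe=False).value)  # temporary ban
print(choose_action(0, severe=True).value)   # permanent ban
```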

  1. Target remedies.

Depending on the depth of your community, a violation may be limited to a subgroup within a larger group. Be sure to focus on the problematic subgroup to avoid disrupting the higher-level group.

  1. Create an appeals process.

In order to establish and build trust, it’s important to create an equitable structure that allows users to appeal when they believe they’ve been wrongly moderated. As with other parts of your policies, transparency plays a big role. The more effort you put into explaining and publicizing your appeals policy up front, the safer and stronger your community will be in the long run.

  1. Protect moderators.

While online moderation is a relatively new field, the stresses it causes are very real. Focusing on the worst parts of a platform can be taxing psychologically and emotionally. Support for your moderators in the form of removing daily quotas, enforcing break times, and providing counseling is good for your community — and the ethical thing to do.

And if you’re considering opening a direct channel for users to communicate with Trust & Safety agents, be aware of the risks. While it can help dissipate heightened user reactions, protecting moderators here is also critical. Use shared, monitored inboxes for inbound messages and anonymized handles for employee accounts. Use data to understand which moderators are exposed to certain categories or critical levels of abusive content. Lastly, provide employees with personal online privacy-protecting solutions such as DeleteMe.

Measurement

  1. Maintain logs.

Paper trails serve as invaluable reference material. Be sure to keep complete records of flagged content including the content under consideration, associated user or forum data, justification for the flag, moderation decisions, and post mortem notes, when available. This information can help inform future moderation debates and identify inconsistencies in the application of your policies.
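As a sketch of what such a record might contain (the field names here are illustrative, not a standard), appending one JSON line per decision keeps the trail easy to query later:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ModerationRecord:
    """One entry in the moderation audit trail (field names are illustrative)."""
    content_id: str
    content_snapshot: str          # the content as it appeared when flagged
    author_id: str
    forum: str
    flag_reason: str               # why it was flagged (user report, classifier, etc.)
    decision: str                  # e.g. "removed", "no_action", "escalated"
    decided_by: str                # moderator or system identifier
    decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    postmortem_notes: Optional[str] = None

def append_log(path: str, record: ModerationRecord) -> None:
    """Append one JSON line per decision so the history stays queryable."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

append_log("moderation_log.jsonl", ModerationRecord(
    content_id="c-123",
    content_snapshot="example of a reported comment",
    author_id="u-456",
    forum="general",
    flag_reason="user_report:harassment",
    decision="removed",
    decided_by="mod-7",
    postmortem_notes="clear policy violation; no appeal filed",
))
```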

  1. Use metrics.

Moderation is possibly the single most impactful determinant of a community user’s experience. Measurement of its effectiveness should be subject to the same rigor you’d apply to any other part of your product. By evaluating your moderation process with both quantitative and qualitative data, you’ll gain insight into user engagement, community health, and the impact of toxic behavior.

  1. Use feedback loops.

A final decision on a content incident need not be the end of the line. Don’t let the data you’ve collected through the process go to waste. Make it a part of regular re-evaluations and updates of content policies to not only save effort on similar incidents, but also to reinforce consistency.

Most importantly, though, your number one content moderation concern should be strategic in nature. As important as all of these recommendations are for maintaining a healthy community, they’re nothing without an overarching vision. Before you define your policies, think through what your community is, who it serves, and how you’d like it to grow. A strong sense of purpose will help guide you through the decisions that don’t have obvious answers — and, of course, help attract the audience you want.

This collection of best practices is by no means the be-all and end-all of content moderation, but rather a starting point. This industry is constantly evolving and we’ll all need to work together to keep best practices at the frontier. If you have any comments or suggestions, feel free to share on this Gitlab repo.

Let’s help make the internet a safer, more respectful place for everyone.

Taylor Rhyne is co-founder and Chief Operating Officer of Sentropy, an internet security company building machine learning products to detect and fight online abuse. Rhyne was previously an Engineering Project Manager at Apple on the Siri team where he helped develop and deploy advanced Natural Language Understanding initiatives.

Filed Under: best practices, content moderation, transparency

Content Moderation Knowledge Sharing Shouldn't Be A Backdoor To Cross-Platform Censorship

from the too-big-of-a-problem-to-tackle-alone dept

Ten thousand moderators at YouTube. Fifteen thousand moderators at Facebook. Billions of users, millions of decisions a day. These are the kinds of numbers that dominate most discussions of content moderation today. But we should also be talking about 10, 5, or even 1: the numbers of moderators at sites like Automattic (WordPress), Pinterest, Medium, and JustPasteIt—sites that host millions of user-generated posts but have far fewer resources than the social media giants.

There are a plethora of smaller services on the web that host videos, images, blogs, discussion fora, product reviews, comments sections, and private file storage. And they face many of the same difficult decisions about the user-generated content (UGC) they host, be it removing child sexual abuse material (CSAM), fighting terrorist abuse of their services, addressing hate speech and harassment, or responding to allegations of copyright infringement. While they may not see the same scale of abuse that Facebook or YouTube does, they also have vastly smaller teams. Even Twitter, often spoken of in the same breath as a “social media giant,” has an order of magnitude fewer moderators at around 1,500.

One response to this resource disparity has been to focus on knowledge and technology sharing across different sites. Smaller sites, the theory goes, can benefit from the lessons learned (and the R&D dollars spent) by the biggest companies as they’ve tried to tackle the practical challenges of content moderation. These challenges include both responding to illegal material and enforcing content policies that govern lawful-but-awful (and mere lawful-but-off-topic) posts.

Some of the earliest efforts at cross-platform information-sharing tackled spam and malware — such as the Mail Abuse Prevention System (MAPS), which maintains blacklists of IP addresses associated with sending spam. Employees at different companies have also informally shared information about emerging trends and threats, and the recently launched Trust & Safety Professional Association is intended to provide people working in content moderation with access to “best practices” and “knowledge sharing” across the field.
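For what it’s worth, DNS-based blocklists of this kind are typically queried by reversing an IP address’s octets and looking the result up under the list’s zone. The sketch below shows that general mechanism against a placeholder zone, not MAPS itself:

```python
import socket

def is_listed(ip: str, dnsbl_zone: str = "dnsbl.example.org") -> bool:
    """Check an IPv4 address against a DNS-based blocklist.

    DNSBLs are queried by reversing the address's octets and looking the
    result up as a hostname under the list's zone; an answer means the
    address is listed, NXDOMAIN means it is not. The zone name here is a
    placeholder, not a real list.
    """
    reversed_ip = ".".join(reversed(ip.split(".")))
    query = f"{reversed_ip}.{dnsbl_zone}"
    try:
        socket.gethostbyname(query)
        return True          # any A record back means "listed"
    except socket.gaierror:
        return False         # NXDOMAIN (or lookup failure) means "not listed"

print(is_listed("192.0.2.1"))  # almost certainly False for the placeholder zone
```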

There have also been organized efforts to share specific technical approaches to blocking content across different services, namely, hash-matching tools that enable an operator to compare uploaded files to a pre-existing list of content. Microsoft, for example, made its PhotoDNA tool freely available to other sites to use in detecting previously reported images of CSAM. Facebook adopted the tool in May 2011, and by 2016 it was being used by over 50 companies.
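PhotoDNA itself is a proprietary perceptual hash, but the general hash-matching flow looks roughly like the sketch below. The SHA-256 stand-in only illustrates the lookup against a known-hash list; a real deployment would use a robust perceptual hash (so near-duplicates still match) and an authoritative hash database:

```python
import hashlib
from typing import Set

# Stand-in for a list of hashes of previously confirmed, prohibited images.
# Real systems use perceptual hashes (e.g. PhotoDNA) so near-duplicates still
# match; SHA-256 here only illustrates the lookup flow, not the matching math.
KNOWN_HASHES: Set[str] = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # sha256(b"test")
}

def file_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def check_upload(data: bytes) -> str:
    """Compare an uploaded file's hash against the known-bad list."""
    if file_hash(data) in KNOWN_HASHES:
        return "blocked_and_reported"
    return "allowed"

print(check_upload(b"test"))           # blocked_and_reported
print(check_upload(b"holiday photo"))  # allowed
```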

Hash-sharing also sits at the center of the Global Internet Forum to Counter Terrorism (GIFCT), an industry-led initiative that includes knowledge-sharing and capacity-building across the industry as one of its four main goals. GIFCT works with Tech Against Terrorism, a public-private partnership launched by the UN Counter-Terrorism Executive Directorate, to “shar[e] best practices and tools between the GIFCT companies and small tech companies and startups.” Thirteen companies (including GIFCT founding companies Facebook, Google, Microsoft, and Twitter) now participate in the hash-sharing consortium.

There are many potential upsides to sharing tools, techniques, and information about threats across different sites. Content moderation is still a relatively new field, and it requires content hosts to consider an enormous range of issues, from the unimaginably atrocious to the benignly absurd. Smaller sites face resource constraints in the number of staff they can devote to moderation, and thus in the range of language fluency, subject matter expertise, and cultural backgrounds that they can apply to the task. They may not have access to — or the resources to develop — technology that can facilitate moderation.

When people who work in moderation share their best practices, and especially their failures, it can help small moderation teams avoid pitfalls and prevent abuse on their sites. And cross-site information-sharing is likely essential to combating cross-site abuse. As scholar evelyn douek discusses (with a strong note of caution) in her Content Cartels paper, there’s currently a focus among major services in sharing information about “coordinated inauthentic behavior” and election interference.

There are also potential downsides to sites coordinating their approaches to content moderation. If sites are sharing their practices for defining prohibited content, it risks creating a de facto standard of acceptable speech across the Internet. This undermines site operators’ ability to set the specific content standards that best enable their communities to thrive — one of the key ways that the Internet can support people’s freedom of expression. And company-to-company technology transfer can give smaller players a leg up, but if that technology comes with a specific definition of “acceptable speech” baked in, it can end up homogenizing the speech available online.

Cross-site knowledge-sharing could also suppress the diversity of approaches to content moderation, especially if knowledge-sharing is viewed as a one-way street, from giant companies to small ones. Smaller services can and do experiment with different ways of grappling with UGC that don’t necessarily rely on a centralized content moderation team, such as Reddit’s moderation powers for subreddits, Wikipedia’s extensive community-run moderation system, or Periscope’s use of “juries” of users to help moderate comments on live video streams. And differences in the business model and core functionality of a site can significantly affect the kind of moderation that actually works for them.

There’s also the risk that policymakers will take nascent “industry best practices” and convert them into new legal mandates. That risk is especially high in the current legislative environment, as policymakers on both sides of the Atlantic are actively debating all sorts of revisions and additions to intermediary liability frameworks.

Early versions of the EU’s Terrorist Content Regulation, for example, would have required intermediaries to adopt “proactive measures” to detect and remove terrorist propaganda, and pointed to the GIFCT’s hash database as an example of what that could look like (CDT recently joined a coalition of 16 human rights organizations in highlighting a number of concerns about the structure of GIFCT and the opacity of the hash database). And the EARN IT Act in the US is aimed at effectively requiring intermediaries to use tools like PhotoDNA, and not to implement end-to-end encryption.

Potential policymaker overreach is not a reason for content moderators to stop talking to and learning from each other. But it does mean that knowledge-sharing initiatives, especially formalized ones like the GIFCT, need to be attuned to the risks of cross-site censorship and eliminating diversity among online fora. These initiatives should proceed with a clear articulation of what they are able to accomplish (useful exchange of problem-solving strategies, issue-spotting, and instructive failures) and also what they aren’t (creating one standard for prohibited, much less illegal, speech that can be operationalized across the entire Internet).

Crucially, this information exchange needs to be a two-way street. The resource constraints faced by smaller platforms can also lead to innovative ways to tackle abuse and specific techniques that work well for specific communities and use cases. Different approaches should be explored and examined for their merit, not viewed with suspicion as a deviation from the “standard” way of moderating. Any recommendations and best practices should be flexible enough to be incorporated into different services’ unique approaches to content moderation, rather than act as a forcing function to standardize towards one top-down, centralized model. As much as there is to be gained from sharing knowledge, insights, and technology across different services, there’s no one-size-fits-all approach to content moderation.

Emma Llansó is the Director of CDT’s Free Expression Project, which works to promote law and policy that support Internet users’ free expression rights in the United States and around the world. Emma also serves on the Board of the Global Network Initiative, a multistakeholder organization that works to advance individuals’ privacy and free expression rights in the ICT sector around the world. She is also a member of the multistakeholder Freedom Online Coalition Advisory Network, which provides advice to FOC member governments aimed at advancing human rights online.

Filed Under: best practices, censorship, content moderation, cross-platform, gifct, hashes, knowledge sharing, maps

Senate Waters Down EARN IT At The Last Minute; Gives Civil Liberties Groups No Time To Point Out The Many Remaining Problems

from the this-is-still-a-bad-bill dept

As expected, the EARN IT Act is set to be marked up this week, and today (a day before the markup) Senators Graham and Blumenthal announced a “manager’s amendment” that basically rewrites the entire bill. It has some resemblance to the original bill, in that this bill will also create a giant “national commission on online child sexual exploitation prevention” to “develop recommended best practices” that various websites can use to “prevent, reduce, and respond to the online sexual exploitation of children.” But it has removed the whole “earn it” part of the “EARN IT” Act, in that there seem to be no legal consequences for any site that doesn’t follow these “best practices” (yet). In the original bill, not following the best practices would have cost sites their Section 230 protections. Now… not following them is just… not following them. The Commission just gets to shout into the wind.

Of course, we’ve seen mission creep on things like this before, where “best practices” later get encoded into law, so there remain significant concerns about how this all plays out in the long run, even if they’ve removed some of the bite from this version.

Instead, the major “change” with this version of EARN IT is that it basically replicates FOSTA in creating a specific “carve out” for child sexual abuse material (CSAM, or the artist formerly known as “child porn”). It’s almost an exact replica of FOSTA, except instead of “sex trafficking and prostitution,” it says the same thing about 230 not impacting laws regarding CSAM. This is… weird? And pointless? It’s not like there is some long list of cases regarding CSAM where Section 230 got in the way. There are no sites anyone can point to as “hiding behind Section 230” in order to encourage such content. This is all… performative. And, if anything, we’re already seeing people realize that FOSTA did nothing to stop sex trafficking, but did have massive unintended consequences.

That said, there are still massive problems with this bill, and that includes significant constitutional concerns. First off, it remains unclear why the government needs to set up this commission. The companies have spent years working with various stakeholders to build out a set of voluntary best practices that have been implemented and have been effective in finding and stopping a huge amount of CSAM. Of course, there remains a lot more out there, and users get ever sneakier in trying to produce and share such content — but a big part of the problem seems to be that the government is so focused on blaming tech platforms for CSAM that it does little to nothing to stop the people who are actually creating and sharing the material. That’s why Senator Wyden tried to call law enforcement’s bluff over all of this by putting out a competing bill that basically pushes law enforcement to do its job, which it has mostly been ignoring.

On the encryption front: much of the early concern was that this commission (with Attorney General Bill Barr’s hand heavily leaning on the scales) would say that offering end-to-end encryption was not a “best practice,” which could lead to sites that offered such communication tools losing 230 protections for other parts of their site. This version of EARN IT removes that specific concern… but it’s still a threat to encryption, though in a roundabout way. Specifically, in that FOSTA-like carve out, the bill would allow states to enforce federal criminal laws regarding CSAM, and would allow states to set their own standards for what is necessary to show that a site “knowingly” aided in the “advertisement, promotion, presentation, distribution or solicitation” of CSAM.

And… you could certainly see some states move (perhaps with a nudge from Bill Barr or some other law enforcement) to say that offering end-to-end encryption trips the knowledge standard on something like “distribution.” It’s roundabout, but it remains a threat to encryption.

Then there are the constitutional concerns. A bunch of people had raised significant 4th Amendment concerns that if the government was determining the standards for fighting CSAM, that would turn the platforms into “state actors” for the purpose of fighting CSAM — meaning that 4th Amendment standards would apply to what the companies themselves could do to hunt down and stop those passing around CSAM. That would make it significantly harder to actually track down the stuff. With the rewritten bill, this again is not as clear, and there remain concerns about the interaction with state law. Under this law, a site can be held liable for CSAM if it was “reckless,” and there are reasons to believe that state laws might suggest it’s reckless not to monitor for CSAM — which could put us right back into that state actor 4th Amendment issue.

These are not all of the problems with the bill, but frankly, the new version is just… weird? It’s like they had that original “earn” 230 idea worked out, became convinced that it couldn’t actually work, but were too wedded to the general idea to craft a law that would. So they just kinda chucked it all and said “recreate FOSTA,” despite that not making any sense.

Oh, and they spring this on everybody the day before they mark it up, giving most experts almost no time to review and analyze it. This is not how good lawmaking is done. But what do you expect these days?

Filed Under: 4th amendment, best practices, child porn, commission, csam, earn it, encryption, section 230, states

The EARN IT Act Creates A New Moderator's Dilemma

from the moderate-perfectly-or-else dept

Last month, a bipartisan group of U.S. senators unveiled the much discussed EARN IT Act, which would require tech platforms to comply with recommended best practices designed to combat the spread of child sexual abuse material (CSAM) or no longer avail themselves of Section 230 protections. While these efforts are commendable, the bill would cause significant problems.

Most notably, the legislation would create a Commission led by the Attorney General with the authority to draw up a list of recommended best practices. Many have rightly explained that AG Barr will likely use this new authority to effectively prohibit end-to-end encryption through those best practices. However, less discussed is the recklessness standard the bill adopts. This bill would drastically reduce free speech online because it eliminates the traditional moderator’s dilemma and instead creates a new one: either comply with the recommended best practices, or open the legal floodgates.

Prior to the passage of the Communications Decency Act in 1996, under common law intermediary liability, platforms could be held liable only if they had knowledge of the infringing content. This meant that if a platform couldn’t survive litigation costs, it could simply choose not to moderate at all. While not always a desirable outcome, this did provide legal certainty for smaller companies and start-ups that they wouldn’t be litigated into bankruptcy. This dilemma was eventually resolved thanks to Section 230 protections, which prevent companies from having to make that choice.

However, the EARN IT Act changes that equation in two key ways. First, it amends Section 230 by allowing civil and state criminal suits against companies that do not adhere to the recommended best practices. Second, for the underlying federal crime (which Section 230 doesn’t affect), the bill would change the scienter requirement from actual knowledge to recklessness. What does this mean in practice? Under existing federal law, platforms must have actual knowledge of CSAM on their service before any legal requirement goes into effect. So if, for example, a user posts material that could be considered CSAM but the platform is not aware of it, then the platform can’t be guilty of illegally transporting CSAM. Platforms must remove and report content when it is identified to them, but they are not held liable for any and all content on the website. However, a recklessness standard turns this dynamic on its head.

Which actions are “reckless” is ultimately up to the jurisdiction, but the Model Penal Code can provide a general idea of what it entails: a person acts recklessly when he or she “consciously disregards a substantial and unjustifiable risk that the material element exists or will result from his conduct.” Worse, the bill opens the platform’s actions to civil cases. Federal criminal enforcement normally targets the really bad actors, and companies that comply with reporting requirements will generally be immune from liability. However, with these changes, if a user posts material that could potentially be considered CSAM, despite no knowledge on the part of the platform, civil litigants could argue that the moderation and detection practices of the company, or lack thereof, constituted a conscious disregard of the risk that CSAM would be shared by users.

When the law introduces ambiguity into liability, companies tend to err on the side of caution. In this case, that means the removal of potentially infringing content to ensure they cannot be brought before a court. For example, in the copyright context, a Digital Millennium Copyright Act safe-harbor exists for internet service providers (ISPs) who “reasonably implement” policies for terminating repeat infringers on their service in “appropriate circumstances.” However, courts have refused to apply that safe-harbor when a company didn’t terminate enough subscribers. This uncertainty about whether a safe-harbor applies will undoubtedly lead ISPs to act on more complaints, ensuring they cannot be liable for the infringement. Is it “reckless” for a company not to investigate postings from an IP address if other postings from that IP address were CSAM? What if the IP address belongs to a public library with hundreds of daily users?

This ambiguity will likely force platforms to moderate user content and over-remove legitimate content to ensure they cannot be held liable. Large firms that have the resources to moderate more heavily and that can survive an increase in lawsuits may start to invest the majority of their moderation resources into CSAM out of an abundance of caution. As a result, this would leave fewer resources to target and remove other problematic content such as terrorist recruitment or hate speech. Mid-sized firms may end up over-removing user content that in any way features a child, or limiting posting to trusted sources, insulating them from potential lawsuits that could cripple the business. And small firms, which likely can’t survive an increase in litigation, could ban user content entirely, ensuring that nothing appears on the website without vetting. These consequences, and the general burden on the First Amendment, are exactly the type of harms that drove courts to adopt a knowledge standard for online intermediary liability, ensuring that the free flow of information was not unduly limited.

Yet, the EARN IT Act ignores this. Instead, the bill assumes that companies will simply adhere to the best practices and therefore retain Section 230 immunity, avoiding these bad outcomes. After all, who wouldn’t want to comply with best practices? In reality, this could force companies to choose between vital privacy protections like end-to-end encryption and litigation. The fact is there are better ways to combat the spread of CSAM online that don’t require platforms to remove key privacy features for users.

As it stands now, the EARN IT Act solves the moderator’s dilemma by creating a new one: comply, or else.

Jeffrey Westling is a technology and innovation policy fellow at the R Street Institute, a free-market think tank based in Washington, D.C.

Filed Under: attorney general, best practices, cda 230, earn it, earn it act, encryption, moderator's dilemma, recklessness, section 230

The MPAA's Plan To Piss Off Young Moviegoers And Make Them Less Interested In Going To Theaters

from the do-these-guys-never-think-anything-through? dept

Given how important teenagers and those in their 20s are to the movie industry, you’d think one day they’d learn to stop being complete assholes to that demographic. For example, you’d think they’d realize that young folks today really, really like their smartphones, and one of the main things they do with those smartphones is snap pictures or videos of just about anything and everything and share them with their friends via whichever platform they prefer, be it Snapchat, WhatsApp, Instagram, Vine, Facebook, Twitter or whatever else they might be using. It’s just what they do — and they seem to be doing it more and more often. Yet, the MPAA wants to make sure that if kids do this, theaters call the police to have them arrested as quickly as possible.

The thing is, the MPAA should know that this is a recipe for disaster. In 2007, Jhannet Sejas went to see Transformers and filmed 20 seconds of it to send to her brother to get him excited to go see the movie. The result? Police were called, and she was arrested and threatened with jail time. She was eventually pressured into pleading guilty to avoid jail. Samantha Tumpach wasn’t quite so lucky. She, along with her sister and her friends, went out to the movies in 2009 to celebrate her sister’s birthday. Since they were all having fun, she decided to film some of the group while they were watching the movie. Once again, police were called, and she was arrested and spent two nights in jail. After widespread public outcry, prosecutors dropped the charges.

Given those high profile cases, combined with the fact that smartphones have become more ubiquitous, and the pastime of taking photos and videos has become ever more popular, you’d think that maybe, just maybe, someone at the MPAA would think to teach theater owners to be a bit more lenient about the kid just taking a photo or filming a couple seconds of a video. But that’s not how the MPAA operates. Its goal in life seems to be to think up ways of how it must have been wronged, and its weird and stupid obsession with movies captured by people filming in the theaters is really quite ridiculous.

The MPAA has now released its latest “best practices” for theaters, and it’s basically exactly what you should do if you want to piss off the demographic of folks who actually go to theaters. You can see the whole thing here if you want to see exactly what not to do.

And the MPAA is Obnoxious

The MPAA recommends theaters institute a “zero tolerance” policy, which appears to mean calling in the police if anyone so much as raises a smartphone. Here are a few snippets:

The MPAA recommends that theaters adopt a Zero Tolerance policy that prohibits the video or audio recording and the taking of photographs of any portion of a movie.

Theater managers should immediately alert law enforcement authorities whenever they suspect prohibited activity is taking place. Do not assume that a cell phone or digital camera is being used to take still photographs and not a full-length video recording. Let the proper authorities determine what laws may have been violated and what enforcement action should be taken.

Theater management should determine whether a theater employee or any other competent authority is empowered to confiscate recording devices, interrupt or interfere with the camcording, and/or ask the patron to leave the auditorium.

Even better, the MPAA reminds theaters that they should tell employees about its “TAKE ACTION! REWARD,” in which employees who capture an evil pirate in action get a whopping $500. In order to get the reward, one of the requirements is “immediate notification to the police.” The theaters have to have posters, like the one above, on display if they want their employees to get the cash, so expect to see that kind of crap in theaters everywhere. And expect employees seeking to cash in on that TAKE ACTION! REWARD to be calling the cops all the freaking time, because some kid raises his iPhone to take a quick picture of his buddies or something cool on screen.

Could the MPAA really be so out of touch and so completely oblivious that they think this is a good idea? Do they not employ anyone who has spent any time around teens and folks in their 20s? Do they honestly think that most police officers don’t have better things to do than rush to the local theater every 15 minutes because some employee is trying to get his $500 and the way to do that is to turn in the kids having fun and trying to share the experience (not the movie itself)? And, most importantly, does no one at the MPAA think that maybe, just maybe, turning theater employees into complete assholes will make fewer people want to go see movies?

Of course they don’t. That’s because the MPAA is made up of lawyers, like this guy, who are obsessed with one thing, and one thing only: “evil pirates who must be stopped.” It really seems like when the movie industry does well, it’s in spite of the MPAA. What a disastrous organization, working against the industry’s actual interests.

Filed Under: anti-piracy, best practices, camcording, mpaa, theaters
Companies: mpaa

Are Industry Best Practices Enough To Protect Net Neutrality?

from the no-sticks-anymore,-just-carrots dept

For the supporters of net neutrality, an Obama White House, Genachowski FCC and Democratic Congress seemed to be the magic combination to ensure an open, non-discriminatory Internet. However, one of the key proponents, Representative Boucher, has recently suggested that he is switching tactics, “scrapping the idea of pursuing legislation mandating an openly accessible Internet in favor of negotiations with stakeholders aimed at reaching a comprehensive accord.” An agreement upon industry best practices could, in theory, be a good way to protect net neutrality, but there are causes for concern.

As Techdirt contributor Tim Lee pointed out in his paper on net neutrality, the unintended consequences of legislation may be costly and inefficient. So, voluntary agreements could create a flexible, realistic approach to protecting an important principle. Something similar happened with the Global Network Initiative that brought together Google, Microsoft and Yahoo!, along with academics and human rights organizations, to agree to a set of principles and enforcement mechanisms to protect and promote free expression and privacy around the globe. But the motivating factor of this agreement was the threat of legislation following very humiliating Congressional hearings on American Internet companies’ dealings in China. By creating a voluntary set of best practices, the Global Network Initiative sidestepped the unintended consequences of poorly drafted legislation. The ISPs could do similarly, but by publicly stating his change of tactics, Boucher may have removed the motivating factor.

Another key to any agreement would be competition in the ISP marketplace. In Norway, where ISPs and consumer protection agencies recently created a similar agreement to mandate non-discrimination of networks and endpoints, the ISPs operate in a competitive sector. Because ISPs there recognize the competitive advantage of staying neutral, there is a force pushing them in that direction. In the United States, the driving force was largely the threat of legislation, and hopefully that is still there as Boucher guides the ISPs towards his comprehensive accord.

Filed Under: best practices, legislation, net neutrality, norway, rick boucher