Biden's Top Tech Advisor Trots Out Dangerous Ideas For 'Reforming' Section 230
from the this-is-a-problem dept
It is now broadly recognized that Joe Biden doesn’t like Section 230 and has repeatedly shown he doesn’t understand what it does. Multiple people keep insisting to me, however, that once he becomes president, his actual tech policy experts will understand the law better, and move Biden away from his nonsensical claim that he wishes to “repeal” the law.
In a move that is not very encouraging, Biden's top tech policy advisor, Bruce Reed, along with Common Sense Media's Jim Steyer, has published a bizarre and misleading "but think of the children!" attack on Section 230 that misunderstands the law, misunderstands how it impacts kids, and suggests incredibly dangerous changes to Section 230. If this is the kind of policy recommendation we're to expect over the next four years, the need to defend Section 230 is going to remain pretty much the same as it's been over the last few years.
Let’s break down the piece and its myriad problems.
Mark Zuckerberg makes no apology for being one of the least-responsible chief executives of our time. Yet at the risk of defending the indefensible, as Zuckerberg is wont to do, we must concede that given the way federal courts have interpreted telecommunications law, some of Facebook’s highest crimes are now considered legal.
Uh, wait. No. There's a very sketchy sleight-of-word right here in the opening, claiming that "Facebook's highest crimes are now considered legal." That is wrong. If Facebook itself violates a law, it can still be held liable. The point of Section 230 is that Facebook (and any website) should not be held liable for laws that its users violate. Reed and Steyer seek to elide this very important distinction in a pure "blame the messenger" way.
It may not have been against the law to livestream the massacre of 51 people at mosques in Christchurch, New Zealand or the suicide of a 12-year-old girl in the state of Georgia. Courts have cleared the company of any legal responsibility for violent attacks spawned by Facebook accounts tied to Hamas. It’s not illegal for Facebook posts to foment attacks on refugees in Europe or try to end democracy as we know it in America.
This is more of the same. The Hamas claim is particularly bogus. The lawsuit in that case involved plaintiffs who were harmed by Hamas… and who decided that the right legal remedy was to sue Facebook because some Hamas members used Facebook. There was no attempt to even show that the injuries the plaintiffs suffered had anything to do with Hamas using Facebook. The cases were tossed because Section 230 did exactly what it's supposed to do: put the legal liability on the parties actually responsible. We don't blame AT&T when a terrorist makes a phone call. We don't blame Ford because a terrorist drives a Ford car. We shouldn't blame Facebook just because a terrorist uses Facebook.
This is fairly basic stuff, and it is shameful for Reed and Steyer to misrepresent things in a way designed to obfuscate the actual details of the legal issues at play while pulling purely at heartstrings. But the heartstring-pulling was just beginning, because this whole piece shifts into the typical "but think of the children!" pandering quite quickly.
Since Section 230 of the 1996 Communications Decency Act was passed, it has been a get-out-of-jail-free card for companies like Facebook and executives like Zuckerberg. That 26-word provision hurts our kids and is doing possibly irreparable damage to our democracy. Unless we change it, the internet will become an even more dangerous place for young people, while Facebook and other tech platforms will reap ever-greater profits from the blanket immunity that their industry enjoys.
Of course, it hasn't been a get-out-of-jail-free card for any of those companies. The law has never barred federal criminal prosecutions, as federal crimes are exempt from the statute. Almost every Section 230 case has been about civil disputes. It's also shameful that Reed and Steyer seem to mix up the difference between civil and criminal law.
Also, I'd contest the argument that it's Section 230 that has made the internet a dangerous place for kids or democracy. Section 230 has enabled many, many forums and spaces where young people can congregate and communicate — many of which have been incredibly important. It's where many LGBTQ+ kids have found like-minded people and discovered they're not alone. It's where kids interested in niche areas or specific communities have found others with similar views. All of that is possible because of Section 230.
Yes, there is bullying online, and that's a problem, but Section 230 has also enabled tremendous variation and competition in how different websites respond to it, with many coming up with quite clever ways to deal with the downsides of purely open communication. Changing Section 230 would likely eliminate that freedom to experiment.
It wasn’t supposed to be this way. According to former California Rep. Chris Cox, who wrote Section 230 with Oregon’s Sen. Ron Wyden, “The original purpose of this law was to help clean up the internet, not to facilitate people doing bad things on the internet.” In the 1990s, after a New York court ruled that the online service provider Prodigy could be held liable in the same way as a newspaper publisher because it had established standards for allowable content, Cox and Wyden wrote Section 230 to protect “Good Samaritan” companies like Prodigy that tried to do the right thing by removing content that violated their guidelines.
But through subsequent court rulings, the provision has turned into a bulletproof shield for social media platforms that do little or nothing to enforce established standards.
This is just flat-out wrong, and it's embarrassing that Reed and Steyer are repeating this out-and-out myth. You will find no sites out there, least of all Facebook (the main bogeyman named in this article), "that do little or nothing to enforce established standards." Facebook employs tens of thousands of content moderators and has a truly elaborate system for reviewing and modifying its ever-changing standards, which it tries to enforce.
We can agree that the companies may fail to catch everything, but that's not because they're not trying. It's because it's impossible. That was the very basis of 230: recognizing that an open platform is literally impossible to fully police, and that 230 would enable sites to try different systems for policing it. What Reed and Steyer are really saying is that they don't like how Facebook has chosen to police its platform. That's a reasonable argument to make, but it's not about 230. It seems, instead, that Steyer and Reed are ignorant of what Facebook has actually done.
Facebook and other platforms have saved countless billions thanks to this free pass. But kids and society are paying the price. Silicon Valley has succeeded in turning the internet into an online Wild West — nasty, brutal, and lawless — where the innocent are most at risk.
Bullshit. Again, Facebook employs tens of thousands of moderators and actually takes a fairly heavy hand in its moderation practices. To call this a "Wild West" is to express near-total ignorance of how content moderation actually works at Facebook. Facebook spends more on moderation than Twitter makes in revenue. To say that it's "saving billions" thanks to this "free pass" is to basically say that you don't know what you're talking about.
The smartphone and the internet are revolutionary inventions, but in the absence of rules and responsibilities, they threaten the greatest invention of the modern world: a protected childhood.
This is "but think of the children!" moral panicking. Yes, we should be concerned about how children use social media, but Facebook, like most other sites, doesn't allow users under 13 to have accounts, and the problem being discussed is not about 230, but about teaching children to be more discerning digital citizens when they're online. And this is important, because it's a skill they'll need to learn. Trying to shield them from absolutely everything — rather than giving them the skills to navigate it — is a dangerous approach that will leave kids unprepared for life on the internet.
But Reed and Steyer are all in on the "think of the children" moral panic… so much so that they (and I only wish I were joking) compare children using social media… to child labor and child trafficking:
Since the 19th century, economic and technological progress enabled societies to ban child labor and child trafficking, eliminate deadly and debilitating childhood diseases, guarantee universal education and better safeguard young children from exposure to violence and other damaging behaviors. Technology has tremendous potential to continue that progress. But through shrewd use of the irresponsibility cloak of Section 230, some in Big Tech have turned the social media revolution into a decidedly mixed blessing.
Oh, come on. Those things are not the same. This entire piece is a masterclass in extrapolating from a few worst-case scenarios and insisting they're happening far more frequently than they really are. Eventually, the piece gets to its suggestion for "what to do about it." And the answer is… destroy Section 230 in a way that won't actually help.
But treating platforms as publishers doesn’t undermine the First Amendment. On the contrary, publishers have flourished under the First Amendment. They have centuries of experience in moderating content, and the free press was doing just fine until Facebook came along.
That… completely misses the point. Publishers can handle liability because they review every bit of content that goes out in their publications. The reason 230 treats sites that host third-party content differently from publishers publishing their own content is that the two things are not the same. And if websites had to review every bit of user content, as publishers do, then… we'd have many fewer spaces online where people can communicate. It would massively stifle speech online.
The tech industry’s right to do whatever it wants without consequence is its soft underbelly, not its secret sauce.
But it's NOT a "right to do whatever it wants without consequence." Not even remotely. The sites themselves cannot break the law. And the sites have very, very strong motivations to moderate — including pressure from their own users (who will go elsewhere if a site doesn't do the right thing), from the press, and (especially) from advertisers. We've seen just in the past few months that advertisers pulling their ads from Facebook has been an effective tool in getting Facebook to rethink its policies.
The idea that, because 230 is there, Facebook and other sites do nothing is a myth. It's a myth that Reed and Steyer are exploiting to make you think you have to "save the children." It's bullshit, and they should be ashamed to peddle it. But they lean hard into these myths:
Instead of acknowledging Facebook’s role in the 2016 election debacle, he slow-walked and covered it up. Instead of putting up real guardrails against hate speech, violence, and conspiracy videos, he has hired low-wage content moderators by the thousands as human crash dummies to monitor the flow. Without that all-purpose Section 230 shield, Facebook and other platforms would have to take responsibility for the havoc they unleash and learn to fix things, not just break them.
This is… not an accurate portrayal of anything. It's true that Zuckerberg was initially reluctant to believe that Facebook had a role in the 2016 election (and there are still legitimate questions about how much of an impact Facebook actually had, or whether it was just a convenient scapegoat for a poorly run Hillary Clinton campaign). But by 2017, Facebook had found religion and completely revamped its moderation processes around election content. Yes, it did hire thousands of content moderators. But it's bizarre that Reed and Steyer finally admit this way down in the article, after paragraphs upon paragraphs insisting that Facebook does no moderation, doesn't care, and doesn't need to do anything.
But more to the point: if they don't want Facebook to hire all those content moderators, but do want Facebook to stop all the bad stuff online… how the hell do they think Facebook can do that? Their answer amounts to "wave a magic wand." They say to take away Facebook's 230 protections, as if that would magically solve things. It won't.
It would mean far more takedowns of content, including content from marginalized voices. It would mean Facebook would likely have to hire many more of those content moderators to review much more content. And, most importantly, it means that no competitor could ever be built to compete with Facebook, because Facebook would be the only company that could afford such compliance costs.
And the article gets worse. Reed and Steyer point to FOSTA as an example of how to reform 230. Really.
So the simplest way to address unlimited liability is to start limiting it. In 2018, Congress took a small step in that direction by passing the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act. Those laws amended Section 230 to take away safe harbor protection from providers that knowingly facilitated sex trafficking.
Right, and what was the result? It certainly didn't do what the people promoting it expected. Craigslist shut down its personals section, clearing the field for Facebook to launch its own dating service. In other words, it gave more power to Facebook.
More importantly, it has been used to harm sex workers, putting many lives at risk, and to shut down places where adults could discuss sex, all while making it harder for police to find sex traffickers. The end result has actually been an increase, rather than a decrease, in online ads for sex.
In other words, citing FOSTA as a “good example” of how to amend Section 230 suggests whoever is citing it doesn’t know what they’re talking about.
Congress could continue to chip away by denying platform immunity for other specific wrongs like revenge porn. Better yet, it could make platform responsibility a prerequisite for any limits on liability. Boston University law professor Danielle Citron and Brookings Institution scholar Benjamin Wittes have proposed conditioning immunity on whether a platform has taken reasonable efforts to moderate content.
We’ve debunked this silly, silly proposal before. There are almost no sites that don’t do moderation. They all have “taken reasonable efforts” to moderate, except for perhaps the most extreme. Yet this whole article was about Facebook and YouTube — both of which could easily show that they’ve “taken reasonable efforts” to moderate content online.
So, if this is their suggestion… it would literally do nothing to help the "problems" they insisted exist on YouTube and Facebook. Instead, smaller sites would never get a chance to exist, because Facebook and YouTube would set the "standard" for how you deal with content moderation — just as the EU has now set YouTube's expensive Content ID as "the standard" for any site dealing with copyright-covered content.
So this proposal does nothing to change Facebook or YouTube’s policies, but locks them in as the dominant players. How is that a good idea?
But Reed and Steyer suggest maybe going further:
Washington would be better off throwing out Section 230 and starting over. The Wild West wasn’t tamed by hiring a sheriff and gathering a posse. The internet won’t be either. It will take a sweeping change in ethics and culture, enforced by providers and regulators. Instead of defaulting to shield those who most profit, the United States should shield those most vulnerable to harm, starting with kids. The “polluter pays” principle that we use to mitigate environmental damage can help achieve the same in the online environment. Simply put, platforms should be held accountable for any content that generates revenue. If they sell ads that run alongside harmful content, they should be considered complicit in the harm. Likewise, if their algorithms promote harmful content, they should be held accountable for helping redress the harm. In the long run, the only real way to moderate content is to moderate the business model.
Um. That would kill the open internet. Completely. Dead. And it's a stupid fucking suggestion. The "pollution" they're discussing here is First Amendment-protected speech. That is why thinking of it as analogous to pollution is so dangerous. They are advocating for government rules that would stifle free speech. Massively. And, again, the only companies that could comply are the biggest ones already out there. It would destroy smaller sites. And it would destroy the ability for you or me to talk online.
There’s more in the article, but it’s all bad. That this is coming from Biden’s top tech advisor is downright scary. It is as destructive as it is ignorant.
Filed Under: bruce reed, content moderation, intermediary liability, jim steyer, joe biden, liability, moral panic, responsibility, section 230, think of the children
Companies: facebook, youtube