mandated transparency – Techdirt
ExTwitter Unfortunately Loses Round One In Challenging Problematic Content Moderation Law
from the well,-that's-unfortunate dept
Back in September we praised Elon Musk for deciding to challenge California’s new social media transparency law, AB 587. As we discussed while the bill was being debated, while it’s framed as a transparency bill, it has all sorts of problems. It would (1) enable California government officials (including local officials) to effectively put pressure on social media companies regarding how they moderate, by enabling litigation for somehow failing to live up to their terms of service, (2) make it way more difficult for social media companies to deal with bad actors by limiting how often they can change their terms of service, and (3) hand bad and malicious actors a road map for claiming they’re respecting the rules while clearly abusing them.
Yet, the largest social media companies (including Meta and Google) apparently are happy with the law, because they know it creates another moat for themselves. They can deal with the compliance requirements of the law, but they know that smaller competitors cannot. And, because of that, it wasn’t clear if anyone would actually challenge the law.
A few Twitter users sued last year, but with a very silly lawyer, and had the case thrown out because none of the plaintiffs had standing. But in the fall, ExTwitter filed suit to block the law from going into effect, using esteemed 1st Amendment lawyer Floyd Abrams (though, Abrams has had a series of really bad takes on the 1st Amendment and tech over the past decade or so).
The complaint still seemed solid, and Elon deserved kudos for standing up for the 1st Amendment here, especially given the larger tech companies’ unwillingness to challenge the law.
Unfortunately, though, the initial part of the lawsuit — seeking a preliminary injunction barring the law from going into effect — has failed. Judge William Shubb has sided with California against ExTwitter, saying that Elon’s company has failed to show a likelihood of success in the case.
The ruling relies heavily on a near total misreading of the Zauderer case, regarding whether compelled commercial speech is allowed under the 1st Amendment. As we discussed with Professor Eric Goldman a while back, if you read Zauderer, you see that the case was decided on narrow grounds: you could mandate transparency if the mandate applied to the text of advertisements, required disclosure of purely factual information, the disclosed information was uncontroversial, and the disclosure concerned the terms of the advertiser’s services. Even if all those conditions are met, the law might still be found unconstitutional if the disclosure requirements are not related to preventing consumer deception or if they are unduly burdensome.
As Professor Goldman has compellingly argued, laws requiring social media companies to reveal their moderation policies to government officials meet basically none of the Zauderer conditions. It’s not about advertising. It’s not purely factual information. The disclosures can be extremely controversial. The disclosures are not about any advertiser’s services. And, on top of that, it has nothing to do with preventing consumer deception, and the requirements can be unduly burdensome.
A New York court threw out a similar law, recognizing that Zauderer shouldn’t be stretched this far.
Unfortunately, Shubb goes the other way and argues that Zauderer makes this kind of mandatory disclosure compatible with the 1st Amendment. He does so by rewriting the Zauderer test, leaving out some of its important conditions, and then misapplying it:
Considered as such, the terms of service requirement appears to satisfy the test set forth by the Supreme Court in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985), for determining whether governmentally compelled commercial disclosure is constitutionally permissible under the First Amendment. The information required to be contained in the terms of service appears to be (1) “purely factual and uncontroversial,” (2) “not unjustified or unduly burdensome,” and (3) “reasonably related to a substantial government interest.”
The court admits that the compelled speech here is different, but seems to think it’s okay, citing both the 5th and 11th Circuits in the NetChoice cases (both of which also applied the Zauderer test incorrectly — which is why we pointed out that this part of the otherwise strong 11th Circuit decision was going to be a problem):
The reports to the Attorney General compelled by AB 587 do not so easily fit the traditional definition of commercial speech, however. The compelled disclosures are not advertisements, and social media companies have no particular economic motivation to provide them. Nevertheless, the Fifth and Eleventh Circuits recently applied Zauderer in analyzing the constitutionality of strikingly similar statutory provisions requiring social media companies to disclose information going well beyond what is typically considered “terms of service.”
Even so, this application of the facts to the misconstrued Zauderer test… just seems wrong?
Following the lead of the Fifth and Eleventh Circuits, and applying Zauderer to AB 587’s reporting requirement as well, the court concludes that the Attorney General has met his burden of establishing that that the reporting requirement also satisfies Zauderer. The reports required by AB 587 are purely factual. The reporting requirement merely requires social media companies to identify their existing content moderation policies, if any, related to the specified categories. See Cal. Bus. & Prof. Code § 22677. The statistics required if a company does choose to utilize the listed categories are factual, as they constitute objective data concerning the company’s actions. The required disclosures are also uncontroversial. The mere fact that the reports may be “tied in some way to a controversial issue” does not make the reports themselves controversial.
But… that’s not even remotely accurate on multiple counts. It is not “purely factual information” that is required to be disclosed. The disclosure is about the highly subjective and constantly changing processes by which social media sites choose to moderate. Beyond covering way more than merely factual information, it’s also extraordinarily controversial.
And that’s not just because they’re often tied to controversial issues, but rather because users of social media are constantly “rules litigating” moderation decisions, and insisting that websites should or should not moderate in certain ways. The entire point of this law is to try to pressure websites to moderate in a certain way (which alone should show the Constitutional infirmities in the law). In this case, it’s California trying to force websites to remove “hate speech” by demanding they reveal their hate speech policies.
Now, assuming most of you don’t like hate speech, you might not see this as all that controversial. But if that’s allowed, what’s to stop other states from requiring the same thing regarding how companies deal with other issues, like LGBTQ content or criticism of the police?
But, the court here insists that this is all uncontroversial.
And worse, it ignores that the Zauderer test is limited only to issues of consumer deception.
The California bill has fuck all to do with consumer deception. It is entirely about pressuring websites in how they moderate.
Also, Shubb shrugs off the idea that this law might be unduly burdensome:
While the reporting requirement does appear to place a substantial compliance burden on social medial companies, it does not appear that the requirement is unjustified or unduly burdensome within the context of First Amendment law.
The court also (again, incorrectly in my opinion) rejects ExTwitter’s reasonable argument that Section 230 pre-empts this. Section 230 explicitly pre-empts any state law that seeks to limit a website’s independence in making moderation decisions, and thus this law should be pre-empted. Not so, says the court:
AB 587 is not preempted. Plaintiff argues that “[i]f X Corp. takes actions in good faith to moderate content that is ‘obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,’ without making the disclosures required by AB 587, it will be subject to liability,” thereby contravening section 230. (Pl.’s Mem. (Docket No. 20) at 72.) This interpretation is unsupported by the plain language of the statute. AB 587 only contemplates liability for failing to make the required disclosures about a company’s terms of service and statistics about content moderation activities, or materially omitting or misrepresenting the required information. See Cal. Bus. & Prof. Code § 22678(2). It does not provide for any potential liability stemming from a company’s content moderation activities per se. The law therefore is not inconsistent with section 230(c) and does not interfere with companies’ ability to “self-regulate offensive third party content without fear of liability.” See Doe, 824 F.3d at 852. Accordingly, section 230 does not preempt AB 587.
Again, this strikes me as fundamentally wrong. The whole point of the law is to force websites to moderate in a certain way, and to limit how they can moderate in many scenarios, thus creating liability for moderation decisions based on whether or not those decisions match the policies disclosed to government officials under the law. That seems squarely within the pre-emption provisions of Section 230.
This is a disappointing ruling, though it comes only at the first stage of the case. One hopes that Elon will appeal the decision, and that the 9th Circuit will have a better take on the matter.
Indeed, I’d almost hope this case is the one that makes it to the Supreme Court, given the current makeup of the Justices and the (false, but whatever) belief that Elon has enabled “more free speech” on ExTwitter. It seems like this might be a case where the conservative Justices finally understand why these kinds of transparency laws are problematic, by seeing how California is using them (as opposed to the Florida and Texas laws the Court is currently reviewing, where that wing of the Supreme Court seems more willing to side with those states and their goals).
Filed Under: 1st amendment, ab 587, california, elon musk, mandated transparency, rob bonta, section 230, terms of service, transparency, william shubb, zauderer
Companies: twitter, x
Senators Warren & Graham Want To Create New Online Speech Police Commission
from the this-bill-causes-me-psychological-harm-and-emotional-distress dept
The regulation will continue until internet freedom improves, apparently. Last year we wrote about Senator Michael Bennet pushing a terrible “Digital Platform Commission” to be the new internet speech police, and now we have the bipartisan free speech hating duo of Senators Elizabeth Warren and Lindsey Graham with their proposal for a Digital Consumer Protection Commission.
The full bill is 158 pages, which allows it to pack in a very long list of terrible, terrible ideas. There are a few good ideas in there (banning noncompetes and no-poach agreements). But there are an awful lot of terrible-to-unconstitutional ideas in there.
I’m not going through the whole thing, or this post would take a month, but I will highlight some of the lowlights.
A brand new federal commission!
First, it’s setting up an entirely new commission with five commissioners, handling work that… is mostly already the purview of the FTC and (to a lesser extent) the DOJ. Oddly, the bill also seeks to give more power to both the FTC and the DOJ, pretty much guaranteeing more bureaucratic clashing.
The areas in which the Commission would have new authority, though, beyond what the FTC and DOJ can already do, are almost entirely around policing speech. It would get to designate some websites as “dominant platforms” if they’re big enough: basically 50 million US users (or 100k business users) and a market cap or revenue over $550 billion. Bizarrely, the bill makes it a violation to take any action “to intentionally avoid having the platform meet the qualifications for designation as a dominant platform,” which basically means that if you TRY NOT TO BE DOMINANT, you could be seen as violating the law. Great drafting, guys.
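To make that designation trigger concrete, here’s a minimal sketch in Python. The 50 million / 100k / $550 billion thresholds come from the summary above; the function name, field names, and the assumption that both the user and financial tests must be met are mine, not the bill’s:

```python
# Illustrative sketch only. The thresholds are taken from the article's summary
# of the bill; everything else (names, the "and" between the two tests) is an
# assumption for illustration.

def is_dominant_platform(us_users: int, business_users: int,
                         market_cap_usd: float, annual_revenue_usd: float) -> bool:
    big_user_base = us_users >= 50_000_000 or business_users >= 100_000
    big_company = market_cap_usd >= 550e9 or annual_revenue_usd >= 550e9
    return big_user_base and big_company

# The bill's twist: deliberately engineering your numbers to stay below these
# lines is itself treated as a violation, so "return False" is not a safe harbor.
```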
Literally, much of this law is screaming “we don’t want dominant platforms,” but then it says it will punish you for not becoming a dominant platform. That’s because the real goal of this law is not stopping dominant platforms, but allowing the government to control the platforms that people use.
Editorial pressure & control pretending to be about “transparency” and “due process.”
In terms of the actual requirements under the law: it would require that “dominant” platforms publicly reveal their editorial decision-making process, which seems like a clear 1st Amendment violation. Just imagine if a Democratic administration demanded that Fox News publicly declare its editorial guidelines, or a GOP administration required the same of MSNBC. In both cases, people would be rightly outraged at this clear intrusion on 1st Amendment editorial practices. But, for whatever reason, people think it’s fine when it’s social media companies. Here it says such companies have to:
make publicly available, through clear and conspicuous disclosure, the dominant platform’s terms of service, which shall include the criteria the operator employs in content moderation practices.
We’ve talked about this before. Transparency is good, and companies should be more transparent, but when you mandate it under law it creates real problems, and likely less transparency. First off, it turns a useful feature into a “compliance” feature, which means that every bit of transparency has to go through careful legal review, and companies will seek to only do exactly what is required under the law, rather than providing more useful details.
But, again, “content moderation practices” and “criteria” are the kind of thing that need to change rapidly, because malicious actors adapt rapidly. Yet this law is written under the ignorant and foolish belief that content moderation criteria are static things that never change. And, since such changes will now need to go through legal and compliance reviews, it gives bad actors much more time and room to operate, while websites have to check with their lawyers on whether they can actually change a policy to stop a bad actor.
It also requires a clear “notice” and appeals process for any content moderation decision. This is another thing that makes sense in some circumstances, but not in many others, most notably spam. Now, this new rule might not matter too much to the companies likely to be declared “dominant” given that the DSA has similar requirements, but at least the DSA exempts spam. Warren and Graham are so clueless that they’re literally requiring websites to inform spammers that they’re onto them.
By my reading, the bill will require Google to inform spam websites when it downranks them for spam. The definition of a platform explicitly includes search engines, and the bill then requires a “notice” and “appeal” process for any effort to “limit the reach” of content or to “deprioritize” it. The exceptions are cases of “imminent harm,” content related to terrorism or criminal activity, or situations where law enforcement says not to send the notice. So, search engine spammers, rejoice. Google would now have to give you a notice if it downranks you, with clear instructions on how it makes those decisions, so you can try to get around them.
And, it gets dumber: if you’re a user of a “dominant platform” and you believe the platform is not following its own terms of service, you can (1) demand that it comply, (2) request a mandated appeal, which requires under the law that the platform tell you within 7 days “with particularity the reasonable factual basis for the decision,” and (3) file a legal complaint with the Commission itself to try to have the Commission force the website to change its mind.
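To get a feel for how much process a single moderation decision could generate, here’s a rough sketch of that escalation path in Python. Only the 7-day deadline and the quoted “with particularity” language come from the bill as described above; the class, field, and function names are all hypothetical:

```python
# Hypothetical sketch of the user-initiated escalation path described above.
# Only the 7-day deadline is from the bill (as quoted); everything else is invented.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ModerationAppeal:
    user_id: str
    decision_id: str
    filed_at: datetime

    @property
    def response_deadline(self) -> datetime:
        # The platform must explain "with particularity the reasonable
        # factual basis for the decision" within 7 days.
        return self.filed_at + timedelta(days=7)

def next_step(appeal: ModerationAppeal, platform_responded: bool) -> str:
    """What the complaining user can do next under the bill's scheme."""
    if platform_responded:
        # Step 3 is still available: a formal complaint asking the Commission
        # to force the platform to reverse its decision.
        return "file_commission_complaint_if_still_unhappy"
    if datetime.now() > appeal.response_deadline:
        return "platform_missed_deadline_file_commission_complaint"
    return "await_platform_response"
```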
This bill was clearly written by very, very, very ignorant people who have never spent a day handling trust & safety complaints. The vast, vast majority of trust & safety issues are around spam or other malicious users trying to game your system. Having to go through all of these steps, and giving the malicious actors the ability to completely tie up your moderation systems handling reviews and (even worse) having to respond to investigations from the Commission itself, is going to be abused so badly.
Also, this section says that the Commission “shall establish a standardized policy that operators of dominant platforms can adopt regarding content moderation and appeals of content-moderation decisions.” This is, to put it mildly, stupid beyond all belief.
Seriously. Take whichever staffer wrote this nonsense and put them on a trust & safety team for a month.
Trust & safety content moderation practices are constantly changing. They need to constantly change. If you establish official under-the-law “best practices” then every website is going to adopt exactly those practices, because that’s how you avoid legal liability. But that means that websites are much slower to react to malicious users who get around those rules, because going beyond best practices is now a legal liability.
On top of that, it means an end to experimentation. The trust & safety field is constantly evolving. Different sites take different approaches, as it’s often necessary based on the community they cater to, or the kinds of content they show. What kinds of “best practices” could anyone come up with that apply equally to Twitter and Reddit and Wikipedia and GitHub? The answer is you can’t. Each one is really different.
But this bill ignores that and assumes, ignorantly, that they’re all the same, and there’s one magic set of “best practices” that a government bureaucracy, which will likely be made up of people with very little trust & safety experience, gets to determine.
Data portability & interop, but without dealing with any of the tradeoffs
There’s a section on data portability and interoperability — which is of great interest to me — but it makes the same silly mistakes nearly every other regulatory mandate for those two things does. Again, I want there to be more data portability and interoperability, but when you mandate them, you create a variety of other problems, such as questions regarding privacy. I mean, the same people (Elizabeth Warren) complaining about how we need to mandate interoperability also complained about Cambridge Analytica getting access to user data.
So, under this bill, you’re likely to get a lot more Cambridge Analytica situations, because denying access to such companies might violate the portability and interop requirements. The bill “solves” this by saying “but don’t undermine end-user data protection.” Basically the bill is “you must unlock all your doors, but don’t read this to mean that you should let burglars rob your house.”
And then leaves it up to the platforms to figure out how you stop the burglars without having locks at your disposal.
There are ways to do interoperability right, but a government mandate from on high, without taking into account a whole wide variety of tradeoffs… is not it. But it is what we get from the clueless duo of Warren and Graham.
Speech suppression masquerading as ‘privacy’ reform.
We’ve said it before, and I’m sure we’ll say it many more times in the future: we need a federal privacy bill. And, in theory, this bill has privacy reform. But this is not the privacy reform we need. At all. As with way too many privacy reform bills, the nature of this one is to give the very companies we constantly accuse of being bad about privacy more power and more control.
It includes a “duty of loyalty” such that any service has to be designed so that it does not “conflict” with “the best interests of a person” regarding their data. This is another one of those things, like so much in this bill, that sounds good in theory until you get to the actual details, where the bill (as in the section above) handwaves away all the problematic questions and tradeoffs.
Who determines what’s in the “best interests” of the person in this situation? The person themselves? The company? The government? Each of those has very real problems, none of which the bill weighs, because that would take actual work and actual understanding, which seems like something none of the staffers who wrote this bill cared to do.
Then, there’s a separate “duty of care” that is even worse. We’ve explained for years that this European concept of a “duty of care” has always been a “friendly sounding” way to attack free speech. And, that’s quite clear in this bill. The duty of care requires that a website not use algorithms or user data in a way that “is likely to cause… psychological injuries that would be highly offensive to a reasonable person.”
What?
This bill causes me psychological injuries. And most “highly offensive” speech is also highly… protected by the 1st Amendment. But this bill says that a website has to magically stop such “psychological injuries.”
This is basically “stop all bad content from flowing online, but we won’t define bad content; we’ll just blame you after anything bad happens.” It’s literally the same mechanism that the Great Firewall of China used in its earliest versions, in which the state would tell ISPs “we’ll fine you if any bad content is spread via your network” but didn’t tell them what counted as bad.
The end result, of course, is vast over-suppression of speech to avoid any possibility of liability.
This is not a privacy bill. It’s a speech suppression bill.
And it gets worse. There’s a “duty of mitigation” as well, which requires any website to “mitigate the heightened risk of physical, emotional, developmental, or material harms posed by materials on, or engagement with, any platform…”
So, you have to “mitigate” the potential “emotional” harms. But, of course, there are all sorts of “emotional” harms that are perfectly legal. Under this bill, apparently, not any more.
This is yet another attempt at Disneyfying the internet, pretending that if we just don’t let anyone talk about controversial topics, they will magically go away. It’s head-in-the-sand regulation from lawmakers who don’t want to solve hard problems. They just want to hide those problems, and then blame social media companies whenever the platforms reveal the real social problems.
If all of this wasn’t already an attack on the 1st Amendment… the bill includes a “right to be forgotten.”
“A person shall have the right to… delete all personal data of the user that is stored by a covered entity.”
Once again, if you were talking about basic data in other contexts, this could make sense. But we already know exactly how this works in practice because the EU has such a right to be forgotten and it’s a fucking disaster. It is regularly used to hide news stories. Or, by Russian oligarchs to try to silence reporters detailing the sources of their wealth.
Do Warren or Graham include exceptions for journalism or other protected speech? Of course not. That would make sense, and remember the goal of this bill is not to make sense, but to grandstand and pretend they’re “cracking down on big tech.”
Giving China the moral high ground when it forces foreign companies to give up data to local Chinese firms.
For years, China has implemented a program by which non-Chinese companies that want to operate in China must partner with a Chinese-owned company, which effectively gives China much more access to data and the ability to control and punish those foreign companies. It’s the kind of thing the US should be condemning.
Instead, Elizabeth Warren and Lindsey Graham apparently see it as an admirable idea worth copying.
The bill requires that any “dominant platform” operating in the US must be based in the US, or own a subsidiary in the US. It’s basically the “no foreign internet company may be successful” act. And, of course, this will justify such actions everywhere else as well, not just in China. Tons of countries are going to point to the US and require that US companies set up local subsidiaries too, putting data and employees at much greater risk.
As part of this section, it also says that if more than 10% of the owners or operators of a dominant platform are “citizens of a foreign adversary” then the operator has to keep a bunch of information in the US. This is basically the “TikTok provision,” without recognizing that it will also be used to justify countries like India, Turkey, Brazil, Russia and more when they demand that US companies keep data in their countries, where the government will then demand access to it.
Please apply for your 1st Amendment license
I wish I were joking, but the bill requires designated “dominant” platforms to get a license to operate, meaning that the government can choose to suspend that license. You know, like Donald Trump threatened to do (even though this was not a thing) to TV stations that made fun of him.
That line alone should suggest why this is problematic. Requiring a special license (with the inherent threat of having that license removed) for businesses primarily engaged in speech is a massive 1st Amendment red flag. Licensing has been allowed for broadcasters solely because of spectrum scarcity: only one TV or radio station can operate on a given frequency.
But, that makes no sense on the internet.
But, Warren and Graham love it. Especially because it lets the licensing authority created by this bill “rescind” or “revoke” the license if they feel that the platform has “engaged in… egregious… misconduct.” I’m sure that won’t be abused at all [insert rolling eye emoji].
If your social media platform license gets revoked, then you “shall not be treated as a corporation” and you “may not operate in the United States.”
This is… authoritarian dictator level bullshit.
Anyway, that’s just some of the many problems with the bill. Amazingly, a bunch of organizations are eagerly endorsing the bill. I’m not convinced any of them actually read it.
The Digital Consumer Protection Commission Act is endorsed by Accountable Tech, the American Economic Liberties Project, the Center for American Progress, Color of Change, Common Sense Media, the Open Markets Institute, Public Citizen, and Raven.
I find Public Citizen’s endorsement of the bill particularly problematic, given how hard Public Citizen’s litigation group has fought to protect free speech online. The others are just disappointing, but not as surprising, as nearly all of them have gone off the deep end in believing any regulation of “big tech” must be good because “big tech is bad.”
The failure of all of these organizations to actually consider the real impact of what they’re endorsing says a lot about the leadership of all of those orgs, and none of it good.
Filed Under: 1st amendment, competition, content moderation, data portability, digital consumer protection commission, duty of care, elizabeth warren, interoperability, lindsey graham, mandated transparency, privacy, speech police, transparency
Transparency Is Important; Mandated Transparency Is Dangerous And Will Stifle Innovation And Competition
from the a-distinction dept
While much of yesterday’s Senate Commerce Committee hearing was focused on the pointless grievances and grandstanding of sitting Senators, there was a bit of actual news made by Mark Zuckerberg and Jack Dorsey. As we discussed earlier this week, Zuckerberg agreed for the first time that he was in support of Section 230 reform, though he declined in his opening remarks to specify the nature of the reforms he supported. And while the original draft of Jack Dorsey’s opening testimony suggested full support of 230, in his delivered remarks he also suggested that Twitter would support changes to Section 230 focused on getting companies to be more transparent. Later in the hearing, during one of the extraordinarily rare moments when a Senator actually asked the CEOs how they would change 230, Zuckerberg also focused on transparency reports, before immediately noting that Facebook already issued transparency reports.
In other words, it appears that the “compromise” the internet companies are looking to throw to a greedy Congress regarding Section 230 reform is “transparency.” I’ve heard from a variety of policymakers over the last few months who also seem focused on this transparency issue as a “narrow” way to reform 230 without mucking up everything else, so it seems like mandating content moderation transparency may become “a thing.”
Mandating transparency, however, would be a dangerous move that would stifle both innovation and competition.
Cathy Gellis has covered this in detail in the past, and I addressed it in my comments to the FCC about Section 230. But it seems like we should be a little clearer:
Transparency is important. Mandated transparency is dangerous.
We’ve been celebrating lots of internet companies and their transparency reports going back to Google’s decision nearly a decade ago to start releasing such reports. Over time, every large internet company (and many medium ones) has joined the bandwagon. Indeed, after significant public pressure, even the notoriously secretive giant telcos started issuing transparency reports as well (though they often did so in a secretive manner that actually hid important details).
So, at the very least, it certainly looks like public pressure, good business practices, and pressure from peers in the industry have already pushed the companies into releasing such reports. On top of that, many of the internet companies seem to try to outdo each other in being more transparent than their peers on these reports — which again is a good thing. The transparency reports are coming and we should celebrate that.
At the very least, though, this suggests that Congress doesn’t need to mandate this, as it’s already happening.
But, you might say, then why should we worry about mandates for transparency reports? Many, many reasons. First off, while transparency reports are valuable, in some cases we’ve seen governments and government officials use them as tools to celebrate censorship. Governments are not using them to better understand the challenges of content moderation, but rather as tools to see where more censorship should be targeted. That’s a problem.
Furthermore, creating a “baseline” for transparency reports creates two very large issues that could damage competition and innovation. First, it creates a clear compliance cost, which can be quite burdensome for new and smaller websites. Facebook, Google and Twitter can devote people to creating transparency reports. Smaller sites cannot. And while you could, in theory, craft a mandate that has some size thresholds, historically that leads to gaming and other tricks.
Perhaps more importantly, though, a mandate with baseline transparency thresholds locks in certain “rules” for content moderation and creates real harm to innovative and different ideas. While most people seem to think of content moderation along the lines of how Facebook, YouTube, and Twitter handle it — with large (often outsourced) content moderation teams and giant sets of policies — there are many, many other models out there as well. Reddit is a decently large company. Yet it handles content moderation by pushing it out to volunteer moderators who run each subreddit and get to make their own content moderation rules. Would each subreddit have to release its own report? Would Reddit itself have to track how each individual subreddit is moderated and include all of that in its report?
Or how about Wikipedia? That’s one of the largest sites on the internet, and all of its content moderation practices are already incredibly transparent, since every single edit shows in each page’s history — often including a note about the reasoning. And, again, rather than being done by staff, every Wikipedia edit is done by volunteers. But should Wikipedia have to file a “standardized” report as well about how and why each of those moderation decisions were made?
And those are just two examples of large sites with different models. The more you look, the more alternative moderation models you can find — and many of them would not fit neatly into any “standards” for a transparency report. Instead, what you’d get is a hamfisted setup that more or less forces all different sites into a single (Facebook/YouTube/Twitter) style of content moderation and transparency. And that’s very bad for innovation in the space.
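To make the mismatch concrete, here’s a purely hypothetical sketch of what a “standardized” report schema might look like (it is not drawn from any actual proposal or bill). Notice how every field quietly assumes a single, centralized policy and a paid moderation staff:

```python
# A hypothetical "standardized" transparency report schema -- not from any real
# mandate. The point is what the fields assume: one policy, one moderation team.

from dataclasses import dataclass, field

@dataclass
class StandardizedTransparencyReport:
    reporting_period: str                         # e.g. "2020-Q3"
    policy_version: str                           # assumes one versioned, site-wide policy
    items_actioned_by_category: dict[str, int] = field(default_factory=dict)
    appeals_received: int = 0
    appeals_granted: int = 0
    moderation_staff_count: int = 0               # assumes centralized, paid staff

# Where do a volunteer-run subreddit's local rules fit? Which "policy_version"
# covers a Wikipedia edit reverted by another volunteer? The schema has no answer.
```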
Indeed, as someone who is quite hopeful for a future where the content moderation layer is entirely separated from the corporate layer of various social media sites, I worry that mandated transparency rules would make that much, much more difficult to implement. Many of the proposals I’ve seen to build more distributed/decentralized protocol-based solutions for social media would not (and often could not) be fit into a “standardized” model of content moderation.
And thus, creating rules that mandate such transparency reporting for companies based on the manner in which those three large companies currently release transparency reports would only serve to push others into that same model, creating significant compliance costs for those smaller entities, while greatly limiting their ability to experiment with new and different styles of moderation.
Filed Under: competition, innovation, jack dorsey, mandated transparency, mark zuckerberg, section 230, transparency, transparency report
Companies: facebook, twitter