free expression – Techdirt

Hey Ricky Schroder: Porn Is Protected By The 1st Amendment

from the only-in-the-Fifth-Circuit dept

Do you all remember Ricky Schroder? He is a former child actor who became prominent thanks to TV series like Silver Spoons and NYPD Blue. While I could reminisce about old TV for hours, it is worth noting that Ricky Schroder has since become a darling of the far-right. You might remember his greatest hits, like yelling at a Costco worker in Los Angeles in 2021 after being refused entry to the store for not wearing a facemask, per the local mask requirements created to protect against the spread of COVID-19. In recent months, Schroder entered right-wing political advocacy and lobbying by establishing the so-called ‘Council on Pornography Reform.’ Conservative news outlet The Western Journal was the most recent publication to cover Schroder’s work to “protect the kids” and push legal textualism.

Without my noticing at the time (my bad), the Council on Pornography Reform and a coalition of other terrible far-right and anti-pornography groups filed an amicus brief with the Fifth Circuit Court of Appeals at the end of September. Currently, adult industry companies and the industry’s advocacy organization, the Free Speech Coalition, are fighting an appeal brought by the state of Texas over the state’s age verification and porn labeling law, House Bill (HB) 1181. The plaintiffs, made up of the parent companies of the largest pornography brands in the world, convinced Senior U.S. District Judge David Alan Ezra of the Western District of Texas to block HB 1181 before it entered into force on September 1. Mike Masnick, Ari Cohn, Corbin K. Barthold, others, and I have written for Techdirt and other outlets (see Cohn’s column on the Ezra ruling at The Daily Beast), all noting that Judge Ezra’s decision corresponds to existing case law and to Justice William J. Brennan Jr.’s interpretation of the First Amendment as covering a broad “freedom of expression.” Pornography is included. However, the Fifth Circuit, the appeals court equivalent of the short bus, issued an administrative stay of the preliminary injunction Ezra produced, allowing House Bill 1181 to be enforced while litigation plays out. The stay was granted after then-acting Texas Attorney General Angela Colmenero asked the Fifth Circuit to hear arguments on overturning the Ezra injunction.

The appeal was granted, and oral arguments were heard on October 4. A decision from the court has yet to be handed down. According to PACER, two amicus briefs were filed. The brief I first noticed was, naturally, the amicus submitted by famed First Amendment counsel Robert Corn-Revere on behalf of the American Civil Liberties Union, TechFreedom, Electronic Frontier Foundation, Foundation for Individual Rights & Expression, Center for Democracy & Technology, and Media Coalition Foundation. These groups urged the three-judge panel to uphold the lower court’s injunction and to affirm that House Bill 1181 violates the First Amendment rights of adult entertainment websites and of adult users logging on to these sites from Texas IP addresses. But the second amicus brief was filed by counsel representing Schroder’s group, the Council on Pornography Reform, and a slate of other far-right astroturfing groups. These include Michael Flynn’s America’s Future, the Public Advocate of the United States, and a few conservative textualist groups with terrible or no websites. Note that Public Advocate, founded by Eugene Delgaudio, has been classified as a hate group by the nonprofit Southern Poverty Law Center for general and anti-LGBTQ+ hate.

The Western Journal reported on October 16 that the amicus brief is a step to “protect children” from pornography and from what these groups view as obscene. Central to their argument is the claim that the U.S. Constitution, let alone the First Amendment, protects only “political speech.” This is an obnoxious argument conservatives have used for decades to try to limit the First Amendment to the forms of expression literally enumerated in its text, excluding what they would argue are merely implied rights of expression. To wit, the amicus brief refers to Justice Brennan’s definition of “freedom of expression” as “devoid of meaning.” They also ask the Fifth Circuit to apply an antiquated definition of obscenity that predates the Miller test. Counsel for the anti-porn groups literally makes up terms for a less-nuanced textualist review of the law, calling for the Fifth Circuit to overturn Ezra’s ruling and apply a so-called “textually faithful” analysis. But what’s more disconcerting is their dismissal of case law and previous decisions affirming a right to expression, implied or not, as a protected element of the First Amendment. As long as no law has been broken, the standard for classifying something as illegal or even “obscene” is entirely subjective. Obscenity case law is by no means simple or consistent. Nevertheless, U.S. courts in recent decades have decided that obscenity applies primarily to content classified as child sexual abuse material, other content depicting the exploitation of real-life children, non-consensual intimate images (NCII; also known as ‘revenge porn’), and other depictions of sexual violence and exploitation.

The Miller test is the standard used in most obscenity cases, in part because it requires a judge or jury to determine whether specific material meets the threshold of being obscene. Obscenity is in the eye of the beholder. Since the First Amendment concerns of the vast majority of people supersede the concerns of a tiny minority, obscenity under any interpretation of the First Amendment has to be legitimately illegal, heinous content. Why do you think the obscenity provisions built into laws like the Communications Decency Act, the Child Online Protection Act, and the Child Pornography Prevention Act were all struck down by the Supreme Court for First Amendment violations? These statutes infringed on the First Amendment rights of the vast majority of internet users. On that note, it is worth reminding you all that the court found CPPA broadly violated the First Amendment rights of adult performers and producers who create virtual taboo and fetish material. The Supreme Court affirmed this view in Ashcroft v. Free Speech Coalition.

Beyond the obvious ideological undertones of the anti-pornography groups (they reference the Holy Bible in the amicus brief), the brief shows a willful ignorance of First Amendment case law, with arguments that anyone – an 8th-grade civics student, a first-year law student, a doctor, a bus driver – could beat in a court of law. You’d hope that an actor who once played a cop on TV could figure that out. I implore the Fifth Circuit Court of Appeals not to fall for these flawed arguments.

Michael McGrady covers the tech side of the online porn business, among other things. He is the contributing editor for politics and legal at AVN.com.

Filed Under: 1st amendment, 5th circuit, adult content, free expression, free speech, porn, ricky schroder
Companies: council on pornography reform, free speech coalition

OnlyFans Throws The Open Internet Under The Bus

from the only-regulatory-moats dept

It’s always disappointing when an internet company that should know better decides to throw the open internet it relies on under the bus.

You would think that a site like OnlyFans would know better. You expect this sorta thing from Meta or Google or Netflix, which have reached a size where they’re more willing to compromise on open internet principles in order to build themselves a politically convenient moat: a compliance nightmare for smaller competitors.

But you would have thought OnlyFans was still new enough that it wouldn’t join those pulling up the ladder behind them. After all, it’s run into its own struggles with what happens when moralizing politicians try to stifle the open internet.

Apparently, though, the company doesn’t care much to support the open internet.

The Economist recently had a big story about attempts to regulate speech online. The piece is not a bad summary of how politicians everywhere are trying to become the speech police. There’s some talk of Section 230, the various dumb state laws about content moderation, the DSA in the EU, attempts in Turkey and Brazil to clamp down on online speech, and much more.

However, what caught my eye was the discussion about the UK’s Online Safety Bill, a very problematic bill that we’ve spoken about plenty of times. And, the Economist actually got a quote from OnlyFans seeming to endorse the age verification aspects of the bill:

The most controversial part of Britain’s bill, a requirement that platforms identify content that is “legal but harmful” (eg, material that encourages eating disorders) has been dropped where adults are concerned. But there remains a duty to limit its availability to children, which in turn implies the need for widespread age checks. Tech firms say they can guess users’ ages from things like their search history and mouse movements, but that a strict duty to verify users’ age would threaten anonymity.

Some suspect that their real objection is the price. “I don’t think ‘It costs money and is hard’ is an excuse,” says Keily Blair, chief operations officer of OnlyFans, a porn-centric platform which checks the age of its users and doesn’t see why others shouldn’t do the same. Yet some platforms are adamant: the Wikimedia Foundation, which runs Wikipedia, says it has no intention of verifying users’ age.

Look, if you want to do age verification, that’s on you, but making it mandatory is a nightmare for the open internet. First, as noted, it destroys anonymity. Second, it puts more user data at risk, for no good reason (to verify ages you have to collect sensitive data). Third, even if it is about the expense, tons of websites can’t afford that nonsense, which will serve no purpose and won’t actually keep anyone safe.

The fact that OnlyFans voluntarily decides to verify ages has a lot more to do with OnlyFans’ business model, content, and target audience. But it’s no excuse for saying that everyone else should have to deal with the same compliance nightmare despite very different products and audiences.

Apparently, this willingness to throw the open internet under the bus isn’t new. That quote seemed so out of place that I went looking, and apparently the company came out fully in favor of the Online Safety Bill last fall.

Blair hopes the Online Safety Bill, which imposes a “duty of care” on social media platforms, will bring her rivals up to the same standard the company believes it upholds.

“We want everyone to be as safe as we are. Anything that pushes people in that direction is a good thing for society,” she says. But now the bill has been pushed back, companies may be slower to act. “I’m disappointed because some people need a stick to make changes. Unfortunately, the law often is that stick.”

There’s an astounding lack of understanding about basic policy issues here, ones that seem likely to come back to bite OnlyFans. What a “duty of care” actually means is the requirement to litigate any time anything bad happens to anyone on your site. Because each time something bad happens someone will sue, and sites will have to spend a ridiculous amount of time, money, and resources to explain why they were appropriate in their “care.” Even if a site thinks it will win, it still creates a massive mess of nonsense and wasted time and money.

Later in that same article, Blair also made it clear that she has no clue how freedom of expression actually works, which is quite stunning given the content that OnlyFans regularly hosts on its own site:

What did she make of those accusations that the legislation would suppress freedom of speech? “Freedom of expression and online safety aren’t a binary choice,” she says. “The reality is that freedom of expression has always been curtailed by the law. There’s always been boundaries in place from a legal standpoint to protect around what we think is acceptable in a modern society to say and not. That’s why we have rules around hate speech.

“People often say things and do things on the internet that they would only do behind a keyboard,” she adds. “People feel emboldened to behave in certain ways sometimes. It’s right to have the same protection online as you do walking down the street.”

It’s unclear here if OnlyFans’ execs are just ignorant, foolish, believe that they can withstand the litigation onslaught while others can’t… or some combination of all three. Or maybe they see themselves as a regulatory target and think they’ll get a better deal by playing nice with regulators. But, nonetheless, it’s still disappointing that a site that has benefited so much from the open internet and freedom of expression has decided to support throwing it all away.

Filed Under: age verification, free expression, keily blair, online safety bill, online speech, open internet, uk
Companies: onlyfans

from the be-smarter-than-SCOTUS dept

Technically we’ve posted this analysis before, when we posted our entire amicus brief submitted to the Supreme Court in the Andy Warhol Foundation v. Goldsmith case, along with a summary of what we had written in it. But that summary also included other arguments, and only a very condensed version of this one: that the First Amendment requires copyright law to be interpreted in a way that doesn’t harm future free expression. It is an idea important enough to be worth more attention, especially given that the Supreme Court itself seems to have overlooked it.

So we are unpacking what we had submitted to the Court here for everyone to be able to easily read. It is written in a style particularly palatable for jurists (in particular, the cites to cases are handled differently: the case name follows the statement it supports, and the specific language from the case appears in a trailing parenthetical rather than a separate blockquote; just skip over them if it feels too kludgy to read), but this brief section is really no different from any post we write here, where we make a point, explain it, and cite authority that supports it. And even though it was written with the aim of reversing the Second Circuit’s decision, the same analysis will remain applicable to every future case where an interpretation of copyright law threatens to say no to future free expression.

Over the decades and centuries copyright law in America has often changed form, sometimes dramatically and in raw statutory substance, such as in the shift from the 1909 copyright law to the 1976 version, and sometimes via seminal interpretations by the Supreme Court or other courts. But in any of its many forms copyright law has always had to comply with two Constitutional requirements.

First, Congress’s power to legislate is inherently limited to areas articulated by the Constitution as places where it is appropriate for it to act. See United States v. Morrison, 529 U.S. 598, 607 (2000) (citing Marbury v. Madison, 1 Cranch 137, 176 (1803) (“Every law enacted by Congress must be based on one or more of its powers enumerated in the Constitution. ‘The powers of the legislature are defined and limited; and that those limits may not be mistaken, or forgotten, the constitution is written.’ ”)). If Congress acts in a way that is not consistent with that grant of legislative authority, then its legislation is unconstitutional. Id. at 602.

The federal authority to implement a system of copyrights is found in the Progress Clause, which empowers Congress to further the progress of sciences and useful arts through systems of limited monopolies, such as copyright. U.S. CONST. art. I, § 8, cl. 8 (“To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.”). But if Congress produces a law that does not further this Constitutional objective to promote the progress of science and useful arts, then that resulting law cannot be rooted in this authority, even if it may bear on those systems of limited monopolies. The condition for this particular grant of legislative power is that exercising it will promote progress, and it is an important predicate for the exercise of it. Were it not, then that language conditioning that power would not have needed to be included in this Constitutional clause otherwise empowering Congress. See Montclair v. Ramsdell, 107 U.S. 147, 152 (1883) (“It is, however, a cardinal principle of statutory construction that we must ‘give effect, if possible, to every clause and word of a statute.’ ”); Marbury v. Madison, 1 Cranch 137, 174 (1803) (“It cannot be presumed that any clause in the constitution is intended to be without effect; and, therefore, such a construction is inadmissible, unless the words require it.”).

But because it is an important predicate constraining Congress’s ability to implement a copyright law, it means that Congress cannot simply label anything it wants to do legislatively as copyright-related to automatically make it a product of this grant of legislative authority. If it could then Congress could easily pass a “copyright” law with all sorts of random provisions not even tangentially related to promoting the progress of sciences and useful arts, including those affecting areas of regulation left to the states by the Tenth Amendment. U.S. CONST. amend. X (“The powers not delegated to the United States by the Constitution [ . . . ] are reserved to the States respectively, or to the people.”). See Gonzales v. Raich, 545 U.S. 1, 52 (2005) (O’Connor, J., dissenting) (“[Congress’s] authority must be used in a manner consistent with the notion of enumerated powers—a structural principle that is as much part of the Constitution as the Tenth Amendment’s explicit textual command.”). While this Court has found Congress to have wide latitude to decide how best to promote the progress of science and useful arts in its legislation, Eldred v. Ashcroft, 537 U.S. 186, 211-13 (2003), it did not and could not grant Congress the power to do the exact opposite of promoting progress with its legislation. Thus, statutory terms that do not advance the progress of sciences and the useful arts are inherently unsound Constitutionally, because it is beyond Congress’s authority to do something ostensibly involving copyright law that does not meet that objective, or, worse, directly undermines it.

Congress’s hands are also further tied by the First Amendment, which prohibits making a law that impinges on free expression. U.S. CONST. amend. I (“Congress shall make no law [ . . . ] abridging the freedom of speech”). So, again, if the effect of legislation that Congress passes is that free expression has been impinged, then that legislation would be unconstitutional on that basis as well.

Crucially, however, in this case no issue is taken with Congress’s legislative drafting, which incorporated in the 1976 Copyright Act that is still in effect language expressly protecting fair use. 17 U.S.C. § 107. As this Court has found, fair use helps vindicate the First Amendment values promoting discourse within copyright law. Golan v. Holder, 132 S.Ct. 873, 890 (2012); Eldred, 537 U.S. at 219-20. It also helps vindicate the goals and purposes of the Progress Clause itself, given how it helps promote the creation of future new works. Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 577 (1994) (“The fair use doctrine thus permits [and requires] courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity which that law is designed to foster.”).

The issue with this case is that the decision by the Second Circuit […] has interpreted this statutory language in a way that now deprives it of its inherent constitutionality. See Eldred, 537 U.S. at 212 (“We have also stressed, however, that it is generally for Congress, not the courts, to decide how best to pursue the Copyright Clause’s objectives.”). Rather than fostering more expression, this interpretation outright chills it by imposing liability upon subsequent expression that follows on an earlier work, as nearly all works do, one way or another. Campbell, 510 U.S. at 575 (“Every book in literature, science and art, borrows, and must necessarily borrow, and use much which was well known and used before.”) (citing Emerson v. Davies, 8 F. Cas. 615, 619 (No. 4,436) (C.C.D. Mass. 1845)). Such an interpretation puts the statute in conflict with both the First Amendment and the goals and purposes of copyright law articulated in the Constitution. Campbell, 510 U.S. at 575 (“From the infancy of copyright protection, some opportunity for fair use of copyrighted materials has been thought necessary to fulfill copyright’s very purpose, ‘[t]o promote the Progress of Science and useful Arts.’”). Only here it is the Second Circuit that has rendered the current copyright statute unconstitutional, and not Congress.

Courts, however, cannot unilaterally change the effective meaning of statutory text. Bostock v. Clayton County, Georgia, 140 S.Ct. 1731, 1738 (2020) (“If judges could add to, remodel, update, or detract from old statutory terms inspired only by extratextual sources and our own imaginations, we would risk amending statutes outside the legislative process reserved for the people’s representatives.”). And they especially cannot be permitted to change it in a way that alters its constitutionality. See id. (“[W]e would deny the people the right to continue relying on the original meaning of the law they have counted on to settle their rights and obligations.”). See also id. at 1753 (“[T]he same judicial humility that requires us to refrain from adding to statutes requires us to refrain from diminishing them.”).

For this reason, the decision must be overturned, and in a way that makes clear the measure of constitutionality for copyright law in any form, whether its parameters were crafted by Congress or by the courts: that it does not chill expression, as this decision, by its reasoning, measurably does.

Filed Under: 1st amendment, andy warhol, copyright, free expression, lynn goldsmith, supreme court
Companies: andy warhol foundation

Getting Kicked Off Social Media For Breaking Its Rules Is Nothing Like Being Sent To A Prison Camp For Retweeting Criticism Of A Dictator

from the push-back,-don't-emulate dept

It’s become frustrating how often people insist that losing this or that social media account is “censorship” and an “attack on free speech.” Not only is it not that, it makes a mockery of those who face real censorship and real attacks on free speech. The Washington Post recently put out an amazing feature about people who have been jailed or sent away to re-education camps for simply reposting something on social media. It’s titled “They clicked once. Then came the dark prisons.”

The authoritarian rulers were not idle. They planned to take back the public square, and now they are doing it. According to Freedom on the Net 2022, published by Freedom House, between June 2021 and May 2022, authorities in 40 countries blocked social, political or religious content online, an all-time high. Social media has made people feel as though they can speak openly, but technological tools also allow autocrats to target individuals. Social media users leave traces: words, locations, contacts, network links. Protesters are betrayed by the phones in their pockets. Regimes criminalized free speech and expression on social media, prohibiting “insulting the president” (Belarus), “picking quarrels and provoking trouble” (China), “discrediting the military” (Russia) or “public disorder” (Cuba).

Ms. Perednya’s case is chilling. She was an honors student at Belarus’s Mogilev State University. Three days after Russia’s invasion of Ukraine, she reposted, in a chat on Telegram, another person’s harsh criticism of Mr. Putin and Mr. Lukashenko, calling for street protests and saying Belarus’s army should not enter the conflict.

She was arrested the next day while getting off a bus to attend classes. Judges have twice upheld her 6½-year sentence on charges of “causing damage to the national interests of Belarus” and “insulting the president.”

That is chilling free speech. That is censorship. You losing your account for harassing someone is not.

There are a bunch of stories in the piece, each more harrowing than the last.

After a wave of protest against covid-19 restrictions in late November, Doa, a 28-year-old tech worker in Beijing, told The Post that she and a friend were at a night demonstration briefly, keeping away from police and people filming with their phones. “I worked before in the social media industry. … I know how those things can be used by police,” she said. “They still found me. I’m still wondering how that is possible.” She added: “All I can think of is that they knew my phone’s location.” Two days later, police called her mother, claiming Doa had participated in “illegal riots” and would soon be detained. “I don’t know why they did it that way. I think it creates fear,” Doa said. A few hours later, the police called her directly, and she was summoned to a police station in northern Beijing, where her phone was confiscated and she underwent a series of interrogations over roughly nine hours. The group Chinese Human Rights Defenders estimates that more than 100 people have been detained for the November protests.

The piece calls on democratic nations to do something about all of this.

But as authoritarian regimes evolve and adapt to such measures, protesters will require new methods and tools to help them keep their causes alive — before the prison door clangs shut. It is a job not only for democratic governments, but for citizens, universities, nongovernmental organizations, civic groups and, especially, technology companies to figure out how to help in places such as Belarus and Hong Kong, where a powerful state has thrown hundreds of demonstrators into prison without a second thought, or to find new ways to keep protest alive in surveillance-heavy dystopias such as China.

Free nations should also use whatever diplomatic leverage they have. When the United States and other democracies have contact with these regimes, they should raise political prisoners’ cases, making the autocrats squirm by giving them lists and names — and imposing penalties. The Global Magnitsky Act offers a mechanism for singling out the perpetrators, going beyond broad sanctions on countries and aiming visa bans and asset freezes at individuals who control the systems that seize so many innocent prisoners. The dictators should hear, loud and clear, that brutish behavior will not be excused or ignored.

Except, what the piece leaves out is that, rather than doing any of that, the political class in many of these “free nations” seems to be looking on in envy. We’ve pointed out how various nations, such as the UK with its Online Safety Bill, and the US with a wide variety of bills, are actually taking pages directly from these authoritarian regimes, claiming that there can be new laws requiring censorship in the name of “public health” or “protecting the children.” From pretty much all political parties, we’re seeing an embrace of using the power of regulation to make citizens less free to use the internet.

The many, many stories in the WaPo feature are worth thinking about, but the suggestion that the US government or other governments in so-called “free” nations aren’t moving in the same direction is naïve. We keep hearing talk about the need to “verify” everyone online, or to end anonymity. But that’s exactly what these authoritarian countries are doing to track and identify those saying what they don’t like.

And then we see the UK trying to require sites to take down “legal, but harmful” content, or US Senators proposing bills that would make social media companies liable for anything the government declares to be “medical misinfo,” and you realize we’re putting in place the identical infrastructure, enabling a future leader to treat the citizens of these supposedly “free” nations just as the regimes called out in the WaPo piece treat theirs.

If anything, reading that piece should make it clear that these supposedly free nations should be pushing back against those types of laws, highlighting how similar laws are being abused to silence dissent. Fight for those locked up in other countries, but don’t hand those dictators and authoritarians the ammunition to point right back at our own laws, allowing them to claim they’re just doing the same things we are.

Filed Under: authoritarian, censorship, dictators, free expression, free speech, internet

I Explained To A Court How California’s ‘Kid’s Code’ Is Both Impossible To Comply With & An Attack On Our Expression

from the the-wrong-approach dept

Last year, Techdirt was one of only a very few sites where you could find out information on California’s AB 2273, officially the “California Age Appropriate Design Code” or “Kid’s code.” As with so many bills that talk about “protecting the children,” everyone we talked to said they were afraid to speak up, because they worried that they’d be branded as being against child safety. Indeed, I even had some people within some larger tech companies reach out to me suggesting it was dangerous to speak out against the bill.

But the law is ridiculous. Last August, I explained how it was literally impossible to comply with the bill, questioned why California lawmakers were willing to pass a law written by a British Baroness (who is also a Hollywood filmmaker) with little to no understanding of how any of this actually works, and highlighted how the age verification requirements would be a privacy nightmare putting more kids at risk, rather than protecting them. Eric Goldman also pointed out the dark irony, that while the Kid’s Code claims that it was put in place to prevent internet companies from conducting radical experiments on children, the bill itself is an incredibly radical experiment in trying to reshape the internet. Of course, the bill was signed into law last fall.

In December, NetChoice, which brought the challenges to Texas and Florida’s bad internet laws, sued to block the law. Last week, they filed for a preliminary injunction to block the law from going into effect. Even though the law doesn’t officially take effect until the summer of 2024, any website would need to start doing a ton of work to get ready. With the filing, there were a series of declarations filed from various website owners to highlight the many, many problems this law will create for sites (especially smaller sites). Among those declarations was the one I filed highlighting how this law is impossible to comply with, would invade the privacy of the Techdirt community, and would act as an unconstitutional restriction on speech. But we’ll get to that.

First up, the motion for the injunction. It’s worth reading the whole thing, as it details the myriad ways in which this law is unconstitutional. It violates the 1st Amendment by creating prior restraint in multiple ways. The law is both extremely vague and overly broad. It regulates speech based on its content (again violating the 1st Amendment). It also violates the Commerce Clause as a California law that would impact those well outside the state. Finally, existing federal law, both COPPA and Section 230, preempts the law. I won’t go through it all, but all of those are clearly laid out in the motion.

But what I appreciate most is that it opens up with a hypothetical that should illustrate just how obviously unconstitutional the law is:

Imagine a law that required bookstores, before offering books and services to the public, to assess whether those books and services could “potentially harm” their youngest patrons; develop plans to “mitigate or eliminate” any such risks; and provide those assessments to the state on demand. Under this law, bookstores could only carry books the state deemed “appropriate” for young children unless they verified the age of each patron at the door. Absent such age verification, employees could not ask customers about the types of books they preferred or whether they had enjoyed specific titles—let alone recommend a book based on customers’ expressed interests—without a “compelling” reason that doing so was in the “best interests” of children. And the law would require bookstores to enforce their store rules and content standards to the state’s satisfaction, eliminating the bookstores’ discretion as to how those rules should be applied. Penalties for violations could easily bankrupt even large bookstores. Such a scheme would plainly violate fundamental constitutional protections.

California has enacted just such a measure: The California Age Appropriate Design Code Act (AB 2273). Although billed as a “data protection” regulation to protect minors, AB 2273 is the most extensive attempt by any state to censor speech since the birth of the internet. It does this even though the State has conceded that an open, vibrant internet is indispensable to American life. AB 2273 enacts a system of prior restraint over protected speech using undefined, vague terms, and creates a regime of proxy censorship, forcing online services to restrict speech in ways the State could never do directly. The law violates the First Amendment and the Commerce Clause, and is preempted by the Children’s Online Privacy Protection Act (COPPA), 15 U.S.C. §§ 6501 et seq., and Section 230 of the Communications Decency Act, 47 U.S.C. § 230. Because AB 2273 forces online providers to act now to redesign services, irrespective of its formal effective date, it will cause imminent irreparable harm. The Court should enjoin the statute.

As for my own filing, it was important for me to make clear that a law like AB 2273 is a direct attack on Techdirt and its users’ expression.

Techdirt understands that AB 2273 will require covered businesses to evaluate and mitigate the risk that “potentially harmful content” will reach children, with children defined to equally cover every age from 0 to 18 despite the substantial differences in developmental readiness and ability to engage in the world around them throughout that nearly two-decade age range. This entire endeavor results in the State directly interfering with my company’s and my expressive rights by limiting to whom and how we can communicate to others. I publish Techdirt with the deliberate intention to share my views (and those of other authors) with the public. This law will inhibit my ability to do so in concrete and measurable ways.

In addition to its overreaching impact, the law’s prohibitions also create chilling ambiguity, such as in its use of the word “harm.” In the context of the issues that Techdirt covers on a daily basis, there is no feasible way that Techdirt can determine whether any number of its articles could, in one way or another, expose a child to “potentially harmful” content, however the State defines that phrase according to the political climate of the moment. For example, Techdirt covers a broad array of hot-button topics, including reporting on combating police brutality (sometimes with accompanying images and videos), online child sexual abuse, bullying, digital sexual harassment, and law enforcement interrogations of minors—all of which could theoretically be deemed by the State to be “potentially harmful” to children. Moreover, Techdirt’s articles are known for their irreverent and snarky tone, and frequently use curse words in their content and taglines. It would be impossible to know whether this choice of language constitutes “potentially harmful content” given the absence of any clear definition of the term in AB 2273. Screening Techdirt’s forum for “potentially harmful” content—and requiring Techdirt to self-report the ways its content and operations could hypothetically “harm” children—will thus cause Techdirt to avoid publishing or hosting content that could even remotely invite controversy, undermining Techdirt’s ability to foster lively and uninhibited debate on a wide range of topics of its choosing. Moreover, not only would Techdirt’s prospective expression be chilled, but the retroactive application of AB 2273 would result in Techdirt needing to censor its previous expression, and to an enormous degree. The sheer number of posts and comments published on Techdirt makes the self-assessment needed to comply with the law’s ill-defined rules functionally impossible, requiring an enormous allocation of resources that Techdirt is unable to dedicate.

Also, the age verification requirements would fundamentally put the privacy of all of our readers at risk by forcing us to collect data we do not want about our users, and which we’ve gone to great lengths to make sure is not collected.

Redesigning our publication to verify the ages of our readers would also compromise our deliberate practice to minimize how much data we collect and retain about our readers, which we do both to limit the obligations that would arise from the handling of such data and to preserve trust with our readers, and it would undermine our relationship with our readers of any age, including teenagers, by subjecting them to technologies that are at best unreliable and at worst highly privacy-intrusive (such as facial recognition). Moreover, because a sizeable portion of Techdirt’s readership consists of casual readers who access the site for information and news, any requirement that forces users to submit extensive personal information simply to access Techdirt’s content risks driving away these readers and shrinking Techdirt’s audience.

I have no idea how the courts are going to treat this law. Again, it does feel like many in the industry have decided to embrace and support this kind of regulation. I’ve heard from too many people inside the industry who have said not to speak up about it. But it’s such a fundamentally dangerous bill, with an approach that we’re starting to see show up in other states, that it was too important not to speak up.

Filed Under: 1st amendment, aadc, ab 2273, age appropriate design code, age verification, facial scanning, free expression, kids code, privacy
Companies: netchoice

One chapter of my Walled Culture book (free download available in various formats) looks at how the bad ideas embodied in the EU’s appalling Copyright Directive – the worst copyright law so far – are being taken up elsewhere. One I didn’t include, because its story is still unfolding, is Canada’s Bill C-18: “An Act respecting online communications platforms that make news content available to persons in Canada”. Here’s the key idea, which will be familiar enough to readers of this blog:

The Bill introduces a new bargaining framework intended to support news businesses to secure fair compensation when their news content is made available by dominant digital news intermediaries and generates economic gain.

In other words, it’s a link tax, designed to make big digital platforms like Google and Facebook pay for the privilege of sending traffic to newspaper publishers. The full depressing story of the copyright industry’s greed is retold in Walled Culture. But a fresh perspective on this latest link tax comes from one of Canada’s top copyright experts, Professor Michael Geist. He has been writing about Bill C-18 and another terrible proposed copyright law, Bill C-11, on his blog for a while. Those posts are well worth reading for anyone who wants to follow what is going on in Canada and in copyright generally. Geist recently published a great post about Bill C-18, entitled “Why Bill C-18’s Mandated Payment for Links is a Threat to Freedom of Expression in Canada”:

The study into the Online News Act continues this week as the government and Bill C-18 supporters continue to insist that the bill does not involve payment for links. These claims are deceptive and plainly wrong from even a cursory reading of the bill. Simply put, there is no bigger concern with this bill. This post explains why link payments are in, why the government knows they are in, and why the approach creates serious risks to the free flow of information online and freedom of expression in Canada.

Geist explains how the Canadian government is being dishonest by trying to suggest the bill is not really about forcing platforms to pay for links, just forcing them to compensate news publishers in some way for using those links. Geist also points out how C-18 would require links to news material from big publishers to be paid for, but not those from small media outlets. That in itself reveals this bill is about rewarding a few corporations at the expense of smaller publishers. Also troubling is the fact that “the bill effectively says that whether compensation is due also depends on where the expression occurs since it mandates that certain venues pay to allow their users to express themselves.” Geist rightly points out that this would set a terrible precedent:

Once government decides that some platforms must pay to permit their users to engage in certain expression, the same principle can be applied to other policy objectives. For example, the Canadian organization Journalists for Human Rights has argued that misinformation is akin to information pollution and that platforms should pay a fee for hosting such expression much like the Bill C-18 model. The same policies can also be expanded to other areas deemed worthy of government support. Think health information or educational materials are important and that those sectors could use some additional support? Why not require payments for those links from platforms. Indeed, once the principle is established that links may require payment, the entire foundation for sharing information online is placed at risk and the essential equality of freedom of expression compromised.

That sums up neatly why the whole link tax idea is so pernicious. It seeks to privilege certain material over other kinds, and would turn the fundamentally egalitarian glue of the World Wide Web – links – into something that must be paid for in many cases, destroying much of its power.

Follow me @glynmoody on Twitter, or Mastodon. Reposted from the Walled Culture blog.

Filed Under: c-18, canada, copyright, eu, free expression, free speech, link tax

from the because-what-else-would-they-do? dept

Like other EU Member States, Finland is grappling with the problem of how to implement the EU Copyright Directive’s Article 17 (upload filters) in national legislation. A fascinating post by Samuli Melart in the Journal of Intellectual Property Law & Practice reveals yet another attempt by the copyright industry to make a bad law even worse. As Melart explains, the Finnish Ministry of Education and Culture has come up with not one, but two attempts at transposition, with diametrically opposing approaches. The first version:

sought to transpose Article 17 by entirely rewriting its provisions. This was meant to rectify conceptual ambiguities and to mitigate fundamental right risks to the users of these [online content sharing service providers].

This version was an honest effort to deal with the contradictions at the heart of Article 17 – which demands that online platforms block infringing material but not legal material, without specifying how that might be done at scale. This attempt to produce a balanced law seems to have been met with howls of anger from the copyright industry, which apparently got to work lobbying the Finnish government:

the responsible minister led two round table meetings with stakeholders concerning the feedback on the first draft. Apparently, participants mostly comprised of representatives of the rightholder side.

This led to the second version of Article 17, which:

retracted from rewriting Article 17 and instead switched to transposing it closer to its original wording following Danish and Swedish models. The freedom of expression emphasis and user right considerations of the first draft were largely removed and replaced with hollow reiterations of the Directive recitals.

According to Melart’s article, the first version was strongly influenced by the view of Advocate General Henrik Saugmandsgaard Øe, who suggested that “sharing service providers must only detect and block content that is ‘identical’ or ‘equivalent’ to the protected subject matter identified by the rightholders”. The second version rejected this approach.

The copyright industry is not content with helping to push through the worst copyright law in recent memory, but even at this late stage is trying to make it more unbalanced. Also notable is the almost complete absence of any input from members of the public during this process, or any serious attempt to protect their fundamental rights – a selfishness that is so typical of the copyright world.

The hope now must be that in the light of this week’s CJEU ruling on upload filters, the Finnish legislative process will come up with a text that is much closer to the first version produced by the Ministry than to the second, if the country wants to comply with the top EU court’s judgment.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

Originally posted to the Walled Culture blog.

Filed Under: article 17, copyright, copyright directive, finland, free expression, free speech, upload filters

Kazakhstan Cops Protect Citizens' Free Speech Rights By Arresting A Protester Holding A Blank Sign

from the i'm-sorry-i-cannot-help-you-make-sense-of-this dept

Kazakhstan police unintentionally helped a protester prove his point. To protest the lack of free speech protections in the country, Aslan Sagutdinov engaged in a physical representation of a thought experiment.

To test the limits of his right to peacefully demonstrate in Kazakhstan, Aslan Sagutdinov, 22, stood in a public square holding a blank sign, predicting he would be detained.

He was right.

Sometimes it sucks to be right. Sagutdinov hoped to point out the “idiocy” of his country and its laws. Protesting nothing in particular, he was arrested by police and taken to the station. So far, there’s been nothing reported as to which charges, if any, he’ll be facing. But it’s too late for the cops and his idiotic country. The point has already been made.

The police argued — via an official statement — that order must be maintained or something. According to the police, officers had “received a report” of an “unknown male” holding a blank placard and drawing a small crowd of curious onlookers. Rather than align themselves with the content of Sagutdinov’s placard and do nothing, officers chose to do something. And that “something” was to drive their irony-proof squad car to the scene and detain the protester.

The official explanation does not make the country look any less idiotic.

The police statement maintained that the authorities “were acting within the boundaries of the law.”

And then, because it couldn’t possibly drill Sagutdinov’s point home any harder, the police released another statement asserting the protester was wrong because he was right.

Bolatbek Beldibekov, the head of the local police department’s press service, told the newspaper Uralskaya Nedelya that the offense was not that he demonstrated with the blank placard. Rather, he said, Mr. Sagutdinov ran afoul of the law by making the political statement that “there is no democracy and free speech in Kazakhstan” in a public place.

Feel free to take as much time as you need to wrap your head around that statement.

Apparently, protesters in Kazakhstan have a Constitutional right to “peacefully” engage in “rallies, demonstrations, street processions, and pickets.” But those rights are more like privileges and come with several caveats attached. Multiple court decisions and amendments have watered down this right from a guarantee to a theoretical possibility by allowing individual government agencies to decide whether or not they’ll allow protests in front of their buildings, or whether “peaceful processions” will actually be allowed to proceed from one place to another.

The protections are also badly written, allowing the government to deem almost anything citizens believe would be protected expression to be unprotected and subject to criminal charges. Here are just a few of the many, many problems with this so-called right, as explained by Kazakhstan human rights activists.

The Law, together with decisions of local representative agencies limits the places for holding assemblies of citizens and public associations. In a series of cities, are established strictly out-of-the way places, as a rule, located on the outskirts of the city. Higher officials and local authorities, and also some political organizations, for the holding of assemblies, have the unfounded exclusive right to use squares in the city center, in comparison with citizens and their associations, which is discrimination. In addition to the element of discrimination, this is a violation of the essence of freedom of assembly. In fact, there can be no reasonable substantiation, from the viewpoint of international standards, to bind the realization of freedom of assembly to one location. Moreover, not all forms of assembly can be held in such conditions, since pickets, demonstrations or processions virtually cannot be contained to one place in the city.

[…]

The Law does not give clear definitions of types of peaceful assembly, which violates the principles of legal predictability and specificity. Any cluster of people in such a situation could be potentially termed an assembly in the sense of the Law, and correspondingly, illegal, if there was no permission given by an executive agency of the government. In other words, citizens seeking to lay flowers on a memorial or carrying a petition to the authorities, participants of flash mobs, courtyard meetings of apartment residents, etc may be held to administrative accountability. In addition, the Law does not contain a distinction between who is considered a participant in an assembly and who is not. This makes it possible to hold accountable anyone found in the location where an assembly is held.

This is why someone holding a blank sign can be arrested for protesting nothing. The sanctity of the whatever-the-fuck must be maintained by the immediate subduing of dissenting voices, even when it isn’t immediately clear what they’re dissenting from. The government has made an ass of itself and confirmed what many citizens already feared: their right to protest isn’t being protected by their government.

Filed Under: free expression, free speech, kazakhstan, protests

Facebook's 'Please Regulate Us' Tour Heads To France

from the we'll-see-how-that-goes dept

On Friday, Mark Zuckerberg went to France, just in time for the French government to release a vague and broad proposal to regulate social media networks. Similar to Zuckerberg’s pleas to Congress to ramp up its regulation of the company (and because he knows that any pushback on regulations will likely be slammed by the world of Facebook-haters), Zuckerberg tried to embrace the plans.

“It’s going to be hard for us, there are going to be things in there we disagree with, that’s natural,” Zuckerberg said. “But in order for people to trust the internet overall and over time, there needs to be the right regulation put in place.”

He also said that he was “encouraged and optimistic about the regulatory framework that will be put in place.”

What is that regulatory framework? Well, it’s pretty vague. It also has PowerPoint artwork that looks like it was designed decades ago by someone who has no business being anywhere near PowerPoint:

To its credit, the plan does recognize that “freedom of expression” is a key value that needs to be protected, as well as freedom for innovation, but then also says those need to be balanced with a protection from harm. The key issue, as we’ve seen in other such plans, is that it creates what people are referring to as a “duty of care” for social media — requiring companies to “protect” users and allowing regulators to somehow step in if they feel a company isn’t succeeding (as if that won’t be abused).

The plan also sets up a regulator who will be tasked with overseeing how social media platforms operate. There is also some hand-waving, suggesting that these rules will only apply to platforms of a certain size, which lets them argue that it won’t impact or discourage startups, without recognizing how it might alter the overall market as companies seek to avoid whatever threshold rules put them into the “regulated” category. Also, much of the plan does focus on increasing transparency, which is a good thing, but how that gets worked out in practice is a really big question.

The issue in all of this is the same as we’ve discussed before: Facebook can deal with these rules. It’s not clear if other companies can. In effect, the rules might lock in Facebook and this particular paradigm of centralized, siloed social media as what must exist going forward. And that’s a problem. Also, trusting regulators to handle these issues in a reasonable way should raise some eyebrows. For people who hate Donald Trump, how would you feel if he were in charge of regulating what sort of “duty of care” Facebook had to take concerning allowing or disallowing certain speech? Or if you like Trump, then how would you feel if, say, Hillary Clinton or AOC were in charge of such things?

In short, who the regulator is can have a pretty massive impact here, and there seems to be little in these proposals to consider that. It’s not surprising that Facebook seems resigned to “support” these kinds of proposals. The company is such a target right now that any pushback would probably lead to even worse rules. And, as mentioned, the company is well aware that it can probably weather any such rules, while any potential competitors will probably be hit much harder by them.

Filed Under: duty of care, france, free expression, free speech, innovation, intermediary liability, mark zuckerberg, regulations, responsibility, social media
Companies: facebook

Censorship By Weaponizing Free Speech: Rethinking How The Marketplace Of Ideas Works

from the challenges-of-our-time dept

It should be no surprise that I’m an unabashed supporter of free speech. Usually, essays that start that way are then followed with a “but…”, and that “but…” undermines everything in the opening sentence. This is not such an essay. However, I am going to talk about some interesting challenges that have been facing our concepts of free speech over the past few years — often in regards to how free speech and the internet interact. Back in 2015, at our Copia Summit, we had a panel that tried to lay out some of these challenges, acknowledging that our traditional concepts of free speech don’t fully work in the internet age.

There are those who argue that internet platforms should never do any moderation at all, and that they should just let all content flow. And while that may be compelling at a first pass, thinking beyond that proves that’s unworkable for a very basic reason: spam. Almost everyone (outside of spammers, I guess) would argue that it makes sense to filter out/moderate/delete spam. It serves no useful purpose. It clutters inboxes/comments/forums with off-topic and annoying messages. So, as Dave Willner mentions in that talk back in 2015, once you’ve admitted that spam can be filtered, you’ve admitted that some moderation is appropriate for any functioning forum to exist. Then you get to the actual challenges of when and how that moderation should occur. And that’s where things get really tricky. Because I think we all agree that when platforms do try to moderate speech… they tend to be really bad at it. And that leads to all sorts of stories that we like to cover of social media companies banning people for dumb reasons. But sometimes it crosses over into the absurd or dangerous — like YouTube deleting channels that were documenting war crimes, because it’s difficult to distinguish war crimes from terrorist propaganda (and, sometimes, they can be one and the same).

An even worse situation, obviously, is when governments take it upon themselves to mandate moderation. Such regimes are almost exclusively used in ways to censor speech that should be protected — as Germany is now learning with its terrible and ridiculous new social media censorship law.

But it’s not that difficult to understand why people have been increasingly clamoring for these kinds of solutions — either having platforms moderate more aggressively or demanding regulations that require them to do so. And it’s because there’s a ton of really, really crappy things happening on these platforms. And, as you know, there’s always the xkcd free speech point that the concept of free speech is about protecting people from government action, not requiring everyone to suffer through whatever nonsense someone wants to scream.

But, it is becoming clear that we need to think carefully about how we truly encourage free speech. Beyond the spam point above, another argument that has resonated with me over the years is that some platforms have enabled such levels of trolling (or, perhaps to be kinder, “vehement arguing”) that they actually lead to less free speech, in that they scare off or silence those who also have valuable contributions to add to various discussions. And that, in turn, raises at least some questions about the “marketplace of ideas” model of understanding free speech. I’ve long been a supporter of this viewpoint — that the best way to combat so-called “bad speech” is with “more speech.” The belief is that the best/smartest/most important ideas rise to the top and stomp out the bad ones. But what if the good ideas don’t even have a chance? What if they’re silenced before they are even spoken by the way these things are set up? That, too, would be an unfortunate result for free speech and the “marketplace of ideas”.

In the past couple of months, two very interesting pieces have been written on this that are pushing my thinking much further as well. The first is a Yale Law Journal piece by Nabiha Syed entitled Real Talk About Fake News: Towards a Better Theory for Platform Governance. Next week, we’ll have Syed on our podcast to talk about this paper, but in it she points out that there are limitations and problems with the idea of the “marketplace of ideas” working the way many of us have assumed it should work. She also notes that other frameworks for thinking about free speech appear to have similar deficiencies when we are in an online world. In particular, the nature of the internet — in which the scale and speed and ability to amplify a message are so incredibly different than basically at any other time in history — is that it enables a sort of “weaponizing” of these concepts.

That is, those who wish to abuse the concept of the marketplace of ideas by aggressively pushing misleading or deliberately misguided concepts are able to do so in a manner that short-circuits our concept of the marketplace of ideas — all while claiming to support it.

The second piece, which is absolutely worth reading and thinking about carefully, is Zeynep Tufekci’s Wired piece entitled It’s the (Democracy-Poisoning) Golden Age of Free Speech. I was worried — from the title — that this might be the standard rant, tragically popular over the past few years, about free speech somehow being “dangerous.” But (and this is not surprising, given Tufekci’s careful consideration of these issues over the years) it’s a truly thought-provoking piece, in some ways building upon the framework that Syed laid out in her piece, noting how some factions are, in effect, weaponizing the very concept of the “marketplace of ideas” to insist they support it, while undermining the very premise behind it (that “good” speech outweighs the bad).

In particular, she notes that while the previous scarcity was the ability to amplify speech, the current scarcity is attention — and thus, the ability to flood the zone with bad/wrong/dangerous speech can literally act as a denial of service on the supposedly corrective “good speech.” Censorship used to work by stifling the message: traditional censorship blocked the ability to get the message out at all. But modern censorship actually leverages the platforms of free speech to drown out other messages.

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

There’s a truth to that which needs to be reckoned with. As someone who has regularly talked about the marketplace of ideas and how “more speech” is the best way to respond to “bad speech,” Tufekci highlights where those concepts break down:

Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

As she notes, this is “not a call for nostalgia.” It is quite clear that these platforms also have tremendous and incredibly important benefits. They have given voice to the formerly voiceless. There are, certainly, areas where the marketplace of ideas functions, and the ability to debate and have discourse actually does work. Indeed, I’d argue that it probably happens much more often than people realize. But it’s difficult to deny that some have weaponized these concepts in a manner designed to flood the marketplace of ideas and drown out the good ideas, or to strategically use the “more speech” response to actually amplify and reinforce the “bad speech” rather than correct it.

And that’s something we need to reckon with.

It’s also an area where I don’t think there are necessarily easy solutions — but having this discussion is important. I still think that companies will be bad at moderation. And I still think government mandates will make the problems significantly worse, not better. And I very much worry that solutions may actually do more harm than good in some cases — especially in dragging down or silencing important, but marginalized, voices. I also think it’s dangerous that many people immediately jump to the platforms as the obvious place to put all responsibility here. There needs to be responsibility as well on the parts of the end users — to be more critical, to have more media literacy.

And, of course, I think that there is a space for technology to potentially help solve some of these issues as well. As I’ve discussed in the past, greater transparency can help, as would putting more control into the hands of end users, rather than relying on the platforms to make these decisions.

But it is an area that raises some very real — and very different — challenges, especially for those of us who find free speech and free expression to be an essential and core value. What do we do when that free speech is being weaponized against free speech itself? How do you respond? Do you need to weaponize in response and flood back the “bad speech” or does that just create an arms race? What other ways are there to deal with this?

This is a discussion that was started a while back, but is increasingly important — and I expect that we’ll be writing a lot more about it in the near future.

Filed Under: attention, censorship, denial of service, free expression, free speech, marketplace of ideas, nabiha syed, zeynep tufekci