pornography – Techdirt (original) (raw)

Protect Yourself From Sen. Mike Lee’s Anti-Porn PROTECT Act

If you work for a living, do you feel coerced into doing your job? According to Senator Mike Lee, if you have anything to do with pornography, and need to earn money in the industry, it must be coercion at play.

While the world continues to be fooled by the Kids Online Safety Act’s false promises of a child-proof internet made entirely out of Roblox gift cards, Sen. Mike Lee of Utah is pimping out his latest proposal: the Preventing Rampant Online Technological and Criminal Trafficking (PROTECT) Act.

According to Lee, the act is meant to hold large technology companies accountable for rampant cases of image-based sexual abuse on the internet. While the intentions may sound reasonable, the actual act is an unenforceable hodgepodge of bad ideas.

This isn’t surprising. Mike Lee is known for his idealistic, do-nothing internet safety bills. Lee has, for example, tried pushing his so-called Interstate Obscenity Definition Act, which would define a national standard for obscenity, without the Miller test, in the spirit of the antiquated, unconstitutional Comstock laws.

He also introduced the SCREEN Act, which is his attempt to implement national age verification requirements. He’s a bleeding heart for the “protect the kids” crowds that are essentially anti-porn, pro-censorship advocates.

The PROTECT Act takes some of the worst elements of Lee’s previous bills and wraps them in a new censorship package.

The bill requires web platforms to verify the ages of individuals who appear in sexually explicit imagery. This is presented as a measure to counter child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) online.

The U.S. Department of Justice already enforces strict federal obscenity laws. In the adult entertainment industry, producers of consensual, legal pornography must verify the age of participants and retain those records through a custodian of records. That custodian is usually a lawyer, senior executive, or c-suite member, like a CEO. Not keeping or falsifying the records could lead to civil and criminal penalties, including violations of child sexual abuse laws.

If Sen. Lee gets his way with the PROTECT Act, this legal standard would apply to virtually every web platform.

This includes platforms owned by Meta, like Facebook and Instagram. A press release from Sen. Lee’s office on January 31 features an excerpt of a hearing between Meta CEO Mark Zuckerberg and himself to illustrate big tech’s shortcomings.

But one issue in the bill that drew my attention is the section on “coerced consent.”

This term defines consent to engage in sexual activity due to coercion, but with a wildly broad definition of “coercion.” To put it simply, if the act becomes law, the act of paying a porn performer or adult content creator is a crime within certain parameters and conditions.

As worded, the bill would invalidate consensual sex work:

“[C]oerced consent” means purported consent obtained from a person— (A) through fraud, duress, misrepresentation, undue influence, or nondisclosure; (B) who lacks capacity; or (C) though exploiting or leveraging the person’s—(i) immigration status;(ii) pregnancy;(iii) disability;(iv) addiction;(v) juvenile status; or (vi) economic circumstances.

Under this language, “economic circumstances” could legally invalidate consent to appear in a legal porn scene. If a performer needed the money from an adult content production for paying for rent, groceries, health care coverage, or child care fees, under Lee’s law, that could mean they could not give consent. Any consent due to such “economic circumstances” could be deemed coercion.

This definition completely outlaws consensual and legal pornography production, which is otherwise protected under the First Amendment.

The bill also invalidates consent based on immigration status, pregnancy, disability, addiction, or juvenile status. Current law already bans those under 18 from appearing in commercial pornography. Depicting an underage individual is CSAM and considered a sex crime. Minors already cannot legally consent, especially in imagery.

Coerced consent doctrine complicates the already clear standard of coercion versus consent, including non-consensual imagery. This is clearly anti-pornography lawmaking, pretending to be about coercion.

This is obvious in just looking over who supports the PROTECT Act, including the socially conservative American Principles Project, affiliated with the Heritage Foundation’s Project 2025 coalition and the far-right campaign to outlaw legal porn completely.

The other group that endorsed the bill is the National Center on Sexual Exploitation. The center sells itself as non-partisan and non-religious, but is notorious for backing Christian nationalists and anti-porn policies, labeling magazines like Cosmopolitan as “pornographic.” The PROTECT Act is a pipe dream.

Michael McGrady covers the legal and tech side of the online porn business, among other topics.

Filed Under: adult content, coerced consent, consent, mike lee, obscenity, pornography, protect act

A Reagan Judge, The First Amendment, And The Eternal War Against Pornography

from the age-verification-and-free-speech dept

Using “Protect the children!” as their rallying cry, red states are enacting digital pornography restrictions. Texas’s effort, H.B. 1181, requires commercial pornographic websites—and others, as we’ll see shortly—to verify that their users are adults, and to display state-drafted warnings about pornography’s alleged health dangers. In late August, a federal district judge blocked the law from taking effect. The U.S. Court of Appeals for the Fifth Circuit expedited Texas’s appeal, and it just held oral argument. This law, or one of the others like it, seems destined for the Supreme Court.

So continues what the Washington Post, in the headline of a 1989 op-ed by the columnist Nat Henthoff, once called “the eternal war against pornography.”

It’s true that the First Amendment does not protect obscenity—which the Supreme Court defines as “prurient” and “patently offensive” material devoid of “serious literary, artistic, political, or scientific value.” Like many past anti-porn crusaders, however, Texas’s legislators blew past those confines. H.B. 1181 targets material that is obscene to minors. Because “virtually all salacious material” is “prurient, offensive, and without value” to young children, the district judge observed, H.B. 1181 covers “sex education [content] for high school seniors,” “prurient R-rated movies,” and much else besides. Texas’s attorneys claim that the state is going after “teen bondage gangbang” films, but the law they’re defending sweeps in paintings like Manet’s Olympia (1863):

Incidentally, this portrait appears—along with other nudes—in a recent Supreme Court opinion. And now, of course, it appears on this website. Time to verify users’ ages (with government IDs or face scans) and post the state’s ridiculous “warnings”? Not quite: the site does not satisfy H.B. 1181’s “one-third . . . sexual material” content threshold. Still, that standard is vague. (What about a website that displays a collection of such paintings?) And in any event, that this webpage is not now governed by H.B. 1181 only confirms the law’s arbitrary scope.

H.B. 1181 flouts Supreme Court decisions on obscenity, internet freedom, and online age verification. This fact was not lost on the district judge, who noted that Texas had raised several of its arguments “largely for the purposes” of setting up “Supreme Court review.” If this case reaches it, the Supreme Court can strike down H.B. 1181 simply by faithfully applying any or all of several precedents.

But the Court should go further, by elaborating on the threat these badly crafted laws pose to free expression.

When it next considers an anti-porn law, the Court will hear a lot about its own rulings. But other opinions grapple with such laws—and one of them, in particular, is worth remembering. Authored by Frank Easterbrook, perhaps the greatest jurist appointed by Ronald Reagan, American Booksellers Association v. Hudnut (7th Cir. 1985) addresses pornography and the First Amendment head on.

At issue was an Indianapolis ordinance that banned the “graphic sexually explicit subordination of women.” Interestingly, this law was inspired by two intellectuals of the left, Catharine MacKinnon and Andrea Dworkin. They maintained (as Easterbrook put it) that “pornography influences attitudes”—that “depictions of subordination tend to perpetuate subordination,” including “affront and lower pay at work, insult and injury at home, battery and rape on the streets.” (You can hear, in today’s debates about kids and social media, echoes of this dire rhetoric.)

Although he quibbled with the empirical studies behind this claim, Easterbrook accepted the premise for the sake of argument. Indeed, he leaned into it. For him, the harms the city alleged “simply demonstrate[d] the power of pornography as speech.” That pornography affects attitudes, which in turn affect conduct, does not distinguish it from other forms of expression. Hitler’s speeches polluted minds and inspired horrific actions. Religions deeply shape people’s lifestyles and worldviews. Television leads (many worry) “to intellectual laziness, to a penchant for violence, to many other ills.” The strong effects of speech are an inherent part of speech—not a ground for regulation. “Any other answer leaves the government in control of all of the institutions of culture, the great censor and director of which thoughts are good for us.”

Like Texas today, Indianapolis targeted not obscenity alone, but adult content more broadly. And like Texas, the city sought to excuse this move by blending the two concepts together. Pornography is “low value” speech, it argued, akin to obscenity and therefore open to special restriction. There were several problems with this claim. But as Easterbrook explained, it also failed on its own terms. Indianapolis asserted that pornography shapes attitudes in the home and at the workplace. It believed, in other words, that the speech at issue influenced politics and society “on a grand scale.” True, Easterbrook acknowledged, “pornography and obscenity have sex in common.” Like Texas today, though, Indianapolis failed to carve out of its ordinance material with literary, artistic, political, or scientific value to adults.

“Exposure to sex is not,” Easterbrook declared, “something the government may prevent.” This is not an exceptional conclusion. “Much speech is dangerous.” Under the First Amendment, however, “the government must leave to the people the evaluation of ideas.” Otherwise free speech dies. Almost everyone would, if operating in a vacuum, happily outlaw certain kinds of noxious speech. Some would bar racial slurs (or disrespect), others religious fundamentalism (or atheism). Some would banish political radicalism (of some stripe or other), others misinformation (defined one way or another). Many of the lawmakers who claim merely to hate porn would, if given the chance, eagerly police all erotic film, literature, and art. (Another pathbreaking Manet painting, Luncheon on the Grass, would plainly have fallen afoul of the Indianapolis ordinance.) The First Amendment stops this downward spiral before it begins. It “removes the government from the role of censor.”

Indianapolis “paint[ed] pornography as part of the culture of power.” Maybe so. But in the end, Easterbrook responded, the First Amendment is a tool of the powerless:

Free speech has been on balance an ally of those seeking change. Governments that want stasis start by restricting speech. . . . Change in any complex system ultimately depends on the ability of outsiders to challenge accepted views and the reigning institutions. Without a strong guarantee of freedom of speech, there is no effective right to challenge what is.

Earlier this year, the Supreme Court’s conservative justices sang a similar tune. It is “not the role of the State or its officials,” they declared in 303 Creative v. Elenis, “to prescribe what shall be offensive.” On the contrary, the Constitution “protect[s] the speech rights of all comers, no matter how controversial—or even repugnant—many may find the message at hand.” Here’s hoping that, when they’re dragged back into the eternal war against pornography, those justices give these words their proper sweep.

Corbin K. Barthold is internet policy counsel at TechFreedom.

Filed Under: 1st amendment, 5th circuit, adult content, age verification, frank easterbrook, free speech, hb 1181, pornography, texas

You Can’t Wish Away The 1st Amendment To Mandate Age Verification

from the the-1st-amendment-is-not-that-flexible dept

So, we’ve been talking a lot about age verification of late, as governments around the world have all (with the exception of Australia?!?) seemed to settle on that as a solution to “the problem” of the internet (exactly what that problem is they cannot quite identify, but they’re pretty sure there is one). Of course, as we’ve explained time and time again, age verification creates all sorts of problems, including undermining both privacy and speech rights.

That’s why it was little surprise to us (though we warned the politicians pushing these bills) that a series of age verification bills have recently been found to be easily and clearly unconstitutional under the 1st Amendment. And it seems likely that other such bills will soon meet a similar fate.

David French, who recently became a NY Times columnist and is a long term free speech defender/constitutional litigator, has apparently chosen as his weird hill to die on, that the 1st Amendment should not stop age verification laws. While there are many, many things that I disagree with French on, historically, he’s been pretty good on internet speech issues. So it’s a little weird that he’s so focused on undermining the 1st Amendment over his own views regarding adult content.

Before the recent set of rulings reaffirming that these laws violate the 1st Amendment, French had suggested that age verification laws around adult content should be found to be constitutional. But, now that multiple courts have ruled otherwise, French took to the pages of the NY Times to argue that courts are misreading longstanding 1st Amendment precedents and that we should be able to mandate age verification and legally block kids from seeing adult content online.

So why not bring our offline doctrines to the online world? If we can impose age limits and age verification offline, we can online as well. If we can zone adult establishments away from kids offline, we can online as well. And if we do these things, we can improve the virtual world for our children without violating the fundamental rights of adults.

The underlying argument is that the precedential rulings in Reno v. ACLU and Ashcroft v. ACLU (the cases that killed as unconstitutional two earlier attempts to lock up the internet for kids: the Communications Decency Act and the Child Online Protection Act) were narrower than everyone believes, and were based on the state of technology at the time, rather than where it is today:

Our nation tried this before. In 1996, Congress passed the Communications Decency Act, which — among other things — criminalized the “knowing” transmission of “obscene or indecent” material online to minors. In 1997, however, the Supreme Court struck down the act’s age limits in Reno v. A.C.L.U., relying in part on a lower court finding that there “is no effective way to determine the identity or the age of a user who is accessing material through email, mail exploders, newsgroups or chat rooms.”

The entire opinion is like opening an internet time capsule. The virtual world was so new that the court spent a considerable amount of time explaining what the World Wide Web — when was the last time you heard that phrase? — even was. The internet was so new and the technology so comparatively primitive that the high court, citing a U.S. District Court finding, observed in its opinion that “credit card verification was ‘effectively unavailable to a substantial number of internet content providers.’”

Indeed, critical to the Supreme Court’s opinion was the lower court’s finding “that at the time of trial existing technology did not include any effective method for a sender to prevent minors from obtaining access to its communications on the internet without also denying access to adults.”

In 1998, Congress tried again, passing the Child Online Protection Act, but in 2004 a closely divided Supreme Court blocked enforcement. Its decision was based in part on the naïve belief that blocking and filtering technologies were “less restrictive alternatives” to the law. But time has demonstrated that blocking and filtering aren’t “less restrictive”; they’re wholly inadequate.

As French concludes, with more modern age verification technology, those precedents suggest that if the tech is better, than the concerns in those cases no longer apply, and the laws can be constitutional:

Thus, our nation’s challenge is more technical than constitutional. The best way to understand the court’s old precedents regarding online age verification to get access to pornography is not that it said “no” but rather that it said “not yet.” But now is the time, the need is clear, and the technology is ready. Congress should try once again to clean up the internet the way cities cleaned up their red-light districts. The law must do what it can to restrict access to pornography for children online.

There’s just one of (actually) many problems with this. It’s not true. It’s not true that the earlier precedents were that limited, and it’s not true that today’s “technology is ready.”

Thankfully, 1st Amendment lawyer Ari Cohn has a pretty thorough response to French that is mandatory reading if you found French’s argument compelling.

The 1st Amendment, and the key precedents around it, are not nearly as malleable as French believes, Cohn notes.

French’s thesis can be distilled to two basic arguments: first, there is no constitutional right to “convenient pornography, and second, that established precedent declaring government-mandated age verification unconstitutional is “outdated.” And so, he concludes, the problem is “more technical than constitutional.” But those arguments, and his conclusion, couldn’t be farther from the truth.

Reducing the controversy to one about “convenient pornography” grossly minimizes the First Amendment issues at stake. Like it or not, pornography—and adults’ ability to access it—is constitutionally protected. So despite this attempt to otherize it, what we are talking about is speech. And speech does not become any less speech merely because some people find it “icky” or morally questionable.

The key bit, and perhaps the most important part, is that French’s claim of moving “offline doctrines” into the “online world,” seems to involve him misunderstanding “offline doctrines.”

French points to “ID requirements for strip clubs and other adult establishments,” arguing that we already require some loss of anonymity to access adult materials offline. Maybe so. But first, few if any laws explicitly require checking IDs—establishments do so voluntarily to avoid potential liability from providing entrance or materials to minors.

More importantly, there is a world of difference between a quick glance at an ID to check date of birth, and uploading identity documents to the internet that create a record of a user’s access.

Online data about us is collected, stored, shared, sold, and used at a galactic level. If anything, the chilling effect of age verification is significantly worse than it was 20 years ago. The effect of creating that kind of digital trail is several orders of magnitude greater than handing over an ID to a bouncer or store clerk—who likely could not remember your name seconds after handing it back.

Comparison of those two drastically different scenarios is reminiscent of the government’s argument in the door-to-door canvassing case: that canvassers necessarily reveal part of their identity by simply showing up at someone’s doorstep, perhaps someone who already knows them. The Supreme Court forcefully rejected that argument, finding that it did not mitigate the constitutional concerns.

Furthermore, in the few cases French can point to where “offline doctrine” has limited children’s access to adult content, as Cohn notes, those laws don’t chill 1st Amendment rights:

… having to travel a little farther to reach a business does not chill a patron’s First Amendment rights; compelling adults to sacrifice their anonymity before accessing disfavored content plainly does.

As for the ruling in Reno that French suggests is obsolete? Cohn points out that French is really annoyed about the facts, not the legal standards:

The principles laid out in Reno remain sound: the First Amendment protects online speech the same as offline speech, and any content-based restrictions must satisfy strict scrutiny—that is, the law must be narrowly tailored to serve a compelling government interest, and must be the least restrictive means of accomplishing the government’s goal. Far from being outdated, this remains the analytical approach the court uses to assess any content-based speech regulation.

French’s real issue is with the facts and evidence presented in Reno. But Reno has never precluded arguing that new facts and circumstances militate a different outcome; it simply held that on the record before the court, the law was unconstitutional. The question is not whether Reno should be revisited, but rather whether these new laws, under new facts, can satisfy the relatively routine constitutional analysis that the Reno court applied.

As for the idea that modern credit card technology changes the ballgame by making age verification effective, Cohn breaks that down as well:

French argues that because “secure credit card use and age verification are practically ubiquitous,” we have evolved past Reno’s assessment that credit card verification is “effectively unavailable.” In doing so, he misses the true meaning of “effectively unavailable.” Reno, and thecases that followed, found that credit card age verification failed to render the law “narrowly tailored” because it doesn’t actually verify age.

And nothing has changed in that respect. Neither entering a credit card nor uploading a picture of an ID actually verifies that it is that person who has provided the identity information. It’s just as easy to borrow an older sibling’s ID as it is to borrow a parent’s credit card. And while there are new forms of age verification that utilize selfies or video, a quick Google search turns up countless pages on fooling such systems using free, easy-to-use software. Whatever the advances in technology since 2008, they have not yet solved this fatal problem.

Reno and its progeny also held that parental controls and content filtering were less restrictive alternatives than age verification. French argues that we have now learned they are “wholly inadequate.”

But is that so? French doesn’t provide a basis for this claim.

And, in fact, Judge Ezra noted that Texas’ own studies tended to show that content filtering and parental controls would be more effective, and better tailored, than age verification.

Perhaps French believes such measures are inadequate because parents lack the knowledge and ability to implement them. But that does not allow the government to sidestep them as a less restrictive means: “A court should not assume a plausible, less restrictive alternative would be ineffective; and a court should not presume parents, given full information, will fail to act.”

There’s a lot more in Cohn’s analysis, but it saves me from having to do a similar breakdown myself.

The 1st Amendment still applies, and as courts in Texas and Arkansas (and hopefully soon in California) have rightly found, these laws do not get anywhere close to passing the standards required to get around the 1st Amendment.

Filed Under: 1st amendment, adult content, age verification, ari cohn, david french, free speech, pornography, protect the children

Court Says Texas’ Adult Content Age Verification Law Clearly Violates The 1st Amendment

from the 1st-amendment-wins-again dept

One down, many more to go.

We’ve been talking a lot by the rush of states to push for age verification laws all over the world, despite basically every expert noting that age verification technology is inherently a problem for privacy and security, and the laws mandating it are terrible. So far, it seems that only the Australian government has decided to buck the trend and push back on implementing such laws. But, much of the rest of the world is moving forward with them, while a bunch of censorial prudes cheer these laws on despite the many concerns about them.

The Free Speech Coalition, the trade group representing the adult content industry, has sued to block the age verification laws in the US that specifically target their websites. We reported on how their case in Utah was dismissed on procedural grounds, because that law is a bounty-type law with a private right of action, so there was no one in the government that could be sued. However, the similar law in Texas did not include that setup (even as Texas really popularized that method with its anti-abortion law). The Free Speech Coalition sued over the law to block it from going into effect.

Judge David Alan Ezra (who is technically a federal judge in Hawaii, but is hearing Texas cases because the Texas courts are overwhelmed) has issued a pretty sweeping smackdown of these kinds of laws, noting that they violate the 1st Amendment and that they’re barred by Section 230.

Given the rushed nature of the proceedings (the case was filed a few weeks ago, and the judge needed to decide before the law was scheduled to go into effect on Friday), it’s impressive that the ruling is 81 pages of detailed analysis. We’ll have a separate post soon regarding the judge’s discussion on the “health warnings” part of the opinion, but I wanted to cover the rest of the legal analysis, mostly regarding the 1st Amendment and Section 230.

However, it is worth mentioning Texas’ ridiculous argument that there was no standing for the Free Speech Coalition in this case. They tried to argue that there was no standing because FSC didn’t name a particular association member impacted by the law, but we’ve been over this in other cases in which trade associations (see: NetChoice and CCIA) are able to bring challenges on behalf of their member companies. The more bizarre standing challenge was that some of the websites that are members of the Free Speech Coalition are not American companies.

But, the judge notes (1) many of the members are US companies and (2) even the non-US companies are seeking to distribute content in the US, where the 1st Amendment still protects them:

Defendant repeatedly emphasizes that the foreign website Plaintiffs “have no valid constitutional claims” because they reside outside the United States. (Def.’s Resp., Dkt. # 27, at 6–7). First, it is worth noting that this argument, even if successful, would not bar the remaining Plaintiffs within the United States from bringing their claims. Several website companies, including Midus Holdings, Inc., Neptune Media, LLC, and Paper Street Media, LLC, along with Jane Doe and Free Speech Coalition (with U.S. member Paper Street Media, LLC), are United States residents. Defendant, of course, does not contest that these websites and Doe are entitled to assert rights under the U.S. Constitution. Regardless of the foreign websites, the domestic Plaintiffs have standing.

As to the foreign websites, Defendant cites Agency for Intl. Dev. v. All. for Open Socy. Intl., Inc., 140 S. Ct. 2082 (2020) (“AOSI”), which reaffirmed the principle that “foreign citizens outside U.S. territory do not possess rights under the U.S. Constitution.” Id. at 2086. AOSI’s denial of standing is distinguishable from the instant case. That case involved foreign nongovernmental organizations (“NGOs”) that received aid—outside the United States—to distribute outside the United States. These NGOs operated abroad and challenged USAID’s ability to condition aid based on whether an NGO had a policy against prostitution and sex trafficking. The foreign NGOs had no domestic operations and did not plan to convey their relevant speech into the United States. Under these circumstances, the Supreme Court held that the foreign NGOs could not claim First Amendment protection. Id.

AOSI differs from the instant litigation in two critical ways. First, Plaintiffs do not seek to challenge rule or policymaking with extraterritorial effect, as the foreign plaintiffs did in AOSI. By contrast, the foreign Plaintiffs here seek to exercise their First Amendment rights only as applied to their conduct inside the United States and as a preemptive defense to civil prosecution. Indeed, courts have typically awarded First Amendment protections to foreign companies with operations in the United States with little thought. See, e.g., Manzari v. Associated Newspapers Ltd., 830 F.3d 881 (9th Cir. 2016) (in a case against British newspaper, noting that defamation claims “are significantly cabined by the First Amendment”); Mireskandari v. Daily Mail and Gen. Tr. PLC, CV1202943MMMSSX, 2013 WL 12114762 (C.D. Cal. Oct. 8, 2013) (explicitly noting that the First Amendment applied to foreign news organization); Times Newspapers Ltd. v. McDonnell Douglas Corp., 387 F. Supp. 189, 192 (C.D. Cal. 1974) (same); Goldfarb v. Channel One Russia, 18 CIV. 8128 (JPC), 2023 WL 2586142 (S.D.N.Y. Mar. 21, 2023) (applying First Amendment limits on defamation to Russian television broadcast in United States); Nygård, Inc. v. UusiKerttula, 159 Cal. App. 4th 1027, 1042 (2008) (granting First Amendment protections to Finnish magazine); United States v. James, 663 F. Supp. 2d 1018, 1020 (W.D. Wash. 2009) (granting foreign media access to court documents under the First Amendment). It would make little sense to allow Plaintiffs to exercise First Amendment rights as a defense in litigation but deny them the ability to raise a pre-enforcement challenge to imminent civil liability on the same grounds.

Moving on. The judge does a fantastic job detailing how Texas’ age verification law is barred by the 1st Amendment. First, the decision notes that the law is subject to strict scrutiny, the highest level of scrutiny in 1st Amendment cases. As the court rightly notes, in the landmark Reno v. ACLU case (the case that found everything except Section 230 of the Communications Decency Act unconstitutional), the Supreme Court said governments can’t just scream “for the children” and use that as a shield against 1st Amendment strict scrutiny:

However, beginning in the 1990s, use of the “for minors” language came under more skepticism as applied to internet regulations. In Reno v. ACLU, the Supreme Court held parts of the CDA unconstitutional under strict scrutiny. 521 U.S. 844, 850 (1997). The Court noted that the CDA was a content-based regulation that extended far beyond obscene materials and into First Amendment protected speech, especially because the statute contained no exemption for socially important materials for minors. Id. at 865. The Court noted that accessing sexual content online requires “affirmative steps” and “some sophistication,” noting that the internet was a unique medium of communication, different from both television broadcast and physical sales.

It also points to ACLU vs. Ashcroft, which found the Child Online Protection Act unconstitutional on similar grounds, and notes that Texas’ law is pretty similar to COPA.

Just like COPA, H.B. 1181 regulates beyond obscene materials. As a result, the regulation is based on whether content contains sexual material. Because the law restricts access to speech based on the material’s content, it is subject to strict scrutiny

Texas also tried to argue that there should be no 1st Amendment protections for adult content because it’s “obscene.” But the judge noted that’s not at all how the system works:

In a similar vein, Defendant argues that Plaintiffs’ content is “obscene” and therefore undeserving of First Amendment coverage. (Id. at 6). Again, this is precedent that the Supreme Court may opt to revisit, but we are bound by the current Miller framework. Miller v. California, 413 U.S. 15, 24 (1973). 3 Moreover, even if we were to abandon Miller, the law would still cover First Amendmentprotected speech. H.B. 1181 does not regulate obscene content, it regulates all content that is prurient, offensive, and without value to minors. Because most sexual content is offensive to young minors, the law covers virtually all salacious material. This includes sexual, but non-pornographic, content posted or created by Plaintiffs. See (Craveiro-Romão Decl., Dkt. # 28-6, at 2; Seifert Decl., Dkt. # 28-7, at 2; Andreou Decl., Dkt. # 28-8, at 2). And it includes Plaintiffs’ content that is sexually explicit and arousing, but that a jury would not consider “patently offensive” to adults, using community standards and in the context of online webpages. (Id.); see also United States v. Williams, 553 U.S. 285, 288 (2008); Ashcroft v. Free Speech Coal., 535 U.S. 234, 252 (2002). Unlike Ginsberg, the regulation applies regardless of whether the content is being knowingly distributed to minors. 390 U.S. at 639. Even if the Court accepted that many of Plaintiffs’ videos are obscene to adults—a question of fact typically reserved for juries—the law would still regulate the substantial portion of Plaintiffs’ content that is not “patently offensive” to adults. Because H.B. 1181 targets protected speech, Plaintiffs can challenge its discrimination against sexual material.

And under strict scrutiny, the law… fails. Badly. The key part of strict scrutiny is whether or not the law is tailored specifically to address a compelling state interest, and not go beyond that. While the court says that protecting children is a compelling state interest, the law is not even remotely narrowly tailored to that interest:

Although the state defends H.B. 1181 as protecting minors, it is not tailored to this purpose. Rather, the law is severely underinclusive. When a statute is dramatically underinclusive, that is a red flag that it pursues forbidden viewpoint discrimination under false auspices, or at a minimum simply does not serve its purported purpose….

H.B. 1181 will regulate adult video companies that post sexual material to their website. But it will do little else to prevent children from accessing pornography. Search engines, for example, do not need to implement age verification, even when they are aware that someone is using their services to view pornography. H.B. 1181 § 129B.005(b). Defendant argues that the Act still protects children because they will be directed to links that require age verification. (Def.’s Resp., Dkt. # 27, at 12). This argument ignores visual search, much of which is sexually explicit or pornographic, and can be extracted from Plaintiffs’ websites regardless of age verification. (Sonnier Decl., Dkt. # 31-1, at 1–2). Defendant’s own expert suggests that exposure to online pornography often begins with “misspelled searches[.]”…

So, the law doesn’t stop most access to adult content. The judge highlights that, by the state’s own argument, it doesn’t apply to foreign websites, which host a ton of adult content. And it also doesn’t apply to social media, since most of their content is not adult content.

In addition, social media companies are de facto exempted, because they likely do not distribute at least one-third sexual material. This means that certain social media sites, such as Reddit, can maintain entire communities and forums (i.e., subreddits), dedicated to posting online pornography with no regulation under H.B. 1181. (Sonnier Decl., Dkt. # 31-1, at 5). The same is true for blogs posted to Tumblr, including subdomains that only display sexually explicit content. (Id.) Likewise, Instagram and Facebook pages can show material which is sexually explicit for minors without compelled age verification. (Cole Decl., Dkt. # 5-1, at 37–40). The problem, in short, is that the law targets websites as a whole, rather than at the level of the individual page or subdomain. The result is that the law will likely have a greatly diminished effect because it fails to reduce the online pornography that is most readily available to minors.

In short, if the argument is that we need to stop kids from seeing pornography, the law should target pornography, rather than a few sites which focus on pornography.

Also, the law is hella vague, in part because it does not consider that 17-year-olds are kinda different from 5-year-olds.

The statute’s tailoring is also problematic because of several key ambiguities in H.B. 1181’s language. Although the Court declines to rest its holding on a vagueness challenge, those vagueness issues still speak to the statute’s broad tailoring. First, the law is problematic because it refers to “minors” as a broad category, but material that is patently offensive to young minors is not necessarily offensive to 17-year-olds. As previously stated, H.B. 1181 lifts its language from the Supreme Court’s holdings in Ginsberg and Miller, which remains the test for obscenity. H.B. 1181 § 129B.001; Miller, 413 U.S. at 24; Ginsberg, 390 U.S. at 633. As the Third Circuit held, “The type of material that might be considered harmful to a younger minor is vastly different—and encompasses a much greater universe of speech—than material that is harmful to a minor just shy of seventeen years old. . . .” ACLU v. Ashcroft, 322 F.3d at 268. 7 H.B. 1181 provides no guidance as to what age group should be considered for “patently offensive” material. Nor does the statute define when material may have educational, cultural, or scientific value “for minors,” which will likewise vary greatly between 5-yearolds and 17-year-olds.

And even the “age verification” requirements are vague because it’s not clear what counts.

Third, H.B. 1181 similarly fails to define proper age verification with sufficient meaning. The law requires sites to use “any commercially reasonable method that relies on public or private transactional data” but fails to define what “commercially reasonable” means. Id. § 129B.03(b)(2)(B). “Digital verification” is defined as “information stored on a digital network that may be accessed by a commercial entity and that serves as proof of the identify of an individual.” Id. § 129B.003(a). As Plaintiffs argue, this definition is circular. In effect, the law defines “identity verification” as information that can verify an identity. Likewise, the law requires “14-point font,” but text size on webpages is typically measured by pixels, not points. See Erik D. Kennedy, The Responsive Website Font Size Guidelines, Learn UI Design Blog (Aug. 7, 2021) (describing font sizes by pixels) (Dkt. # 5-1 at 52–58). Overall, because the Court finds the law unconstitutional on other grounds, it does not reach a determination on the vagueness question. But the failure to define key terms in a comprehensible way in the digital age speaks to the lack of care to ensure that this law is narrowly tailored. See Reno, 521 U.S. at 870 (“Regardless of whether the CDA is so vague that it violates the Fifth Amendment, the many ambiguities concerning the scope of its coverage render it problematic for purposes of the First Amendment.”).

So the law is underinclusive and vague. But it’s also overinclusive by covering way more than is acceptable under the 1st Amendment.

Even if the Court were to adopt narrow constructions of the statute, it would overburden the protected speech of both sexual websites and their visitors. Indeed, Courts have routinely struck down restrictions on sexual content as improperly tailored when they impermissibly restrict adult’s access to sexual materials in the name of protecting minors.

The judge notes (incredibly!) that parts of HB 1181 are so close to COPA (the law the Supreme Court found unconstitutional in the ACLU v. Ashcroft case) that he seems almost surprised Texas even bothered.

The statutes are identical, save for Texas’s inclusion of specific sexual offenses. Unsurprisingly, then, H.B. 1181 runs into the same narrow tailoring and overbreadth issues as COPA….

[….]

Despite this decades-long precedent, Texas includes the exact same drafting language previously held unconstitutional.

Nice job, Texas legislature.

The court also recognizes the chilling effects of age verification laws, highlighting how, despite the ruling in Lawrence v. Texas saying anti-gay laws were unconstitutional, Texas has still kept the law in question on the books.

Privacy is an especially important concern under H.B. 1181, because the government is not required to delete data regarding access, and one of the two permissible mechanisms of age-verification is through government ID. People will be particularly concerned about accessing controversial speech when the state government can log and track that access. By verifying information through government identification, the law will allow the government to peer into the most intimate and personal aspects of people’s lives. It runs the risk that the state can monitor when an adult views sexually explicit materials and what kind of websites they visit. In effect, the law risks forcing individuals to divulge specific details of their sexuality to the state government to gain access to certain speech. Such restrictions have a substantial chilling effect. See Denver Area Educ. Telecomm. Consortium, Inc., 518 U.S. at 754 (“[T]he written notice requirement will further restrict viewing by subscribers who fear for their reputations should the operator, advertently or inadvertently, disclose the list of those who wish to watch the patently offensive channel.”).

The deterrence is particularly acute because access to sexual material can reveal intimate desires and preferences. No more than two decades ago, Texas sought to criminalize two men seeking to have sex in the privacy of a bedroom. Lawrence v. Texas, 539 U.S. 558 (2003). To this date, Texas has not repealed its law criminalizing sodomy. Given Texas’s ongoing criminalization of homosexual intercourse, it is apparent that people who wish to view homosexual material will be profoundly chilled from doing so if they must first affirmatively identify themselves to the state.

Texas argued that the age verification data will be deleted, but that doesn’t cut it, which is an important point in many other states passing similar laws:

Defendant contests this, arguing that the chilling effect will be limited by age verification’s ease and deletion of information. This argument, however, assumes that consumers will (1) know that their data is required to be deleted and (2) trust that companies will actually delete it. Both premises are dubious, and so the speech will be chilled whether or not the deletion occurs. In short, it is the deterrence that creates the injury, not the actual retention. Moreover, while the commercial entities (e.g., Plaintiffs) are required to delete the data, that is not true for the data in transmission. In short, any intermediary between the commercial websites and the third-party verifiers will not be required to delete the identifying data.

The judge also notes that leaks and data breaches are a real risk, even if the law requires deletion of data! And that the mere risk of such a leak is a speech deterrent.

Even beyond the capacity for state monitoring, the First Amendment injury is exacerbated by the risk of inadvertent disclosures, leaks, or hacks. Indeed, the State of Louisiana passed a highly similar bill to H.B. 1181 shortly before a vendor for its Office of Motor Vehicles was breached by a cyberattack. In a related challenge to a similar law, Louisiana argues that age-verification users were not identified, but this misses the point. See Free Speech Coalition v. Leblanc, No. 2:23-cv-2123 (E.D. La. filed June 20, 2023) (Defs.’ Resp., Dkt. # 18, at 10). The First Amendment injury does not just occur if the Texas or Louisiana DMV (or a third-party site) is breached. Rather, the injury occurs because individuals know the information is at risk. Private information, including online sexual activity, can be particularly valuable because users may be more willing to pay to keep that information private, compared to other identifying information. (Compl. Dkt. # 1, at 17); Kim Zetter, Hackers Finally Post Stolen Ashley Madison Data, Wired, Aug. 18, 2015, https://www.wired.com/2015/08/happened-hackers-posted-stolen-ashleymadison-data (discussing Ashley Madison data breach and hackers’ threat to “release all customer records, including profiles with all the customers’ secret sexual fantasies and matching credit card transactions, real names and addresses.”). It is the threat of a leak that causes the First Amendment injury, regardless of whether a leak ends up occurring.

Hilariously, Texas’ own “expert” (who works on age verification tech and is on the committee that runs the trade association of age verification companies) basically undermined Texas’ argument:

Defendant’s own expert shows how H.B. 1181 is unreasonably intrusive in its use of age verification. Tony Allen, a digital technology expert who submitted a declaration on behalf of Defendant, suggests several ways that age-verification can be less restrictive and costly than other measures. (Allen Decl., Dkt. # 26-6). For example, he notes that age verification can be easy because websites can track if someone is already verified, so that they do not have to constantly prove verification when someone visits the page. But H.B. 1181 contains no such exception, and on its face, appears to require age verification for each visit.

Given all that, the age verification alone violates the 1st Amendment.

With that, there isn’t even a need to do a Section 230 analysis, but the court does so anyway. It doesn’t go particularly deep, other than to note that Section 230’s coverage is considered broad (even in the 5th Circuit):

Defendant seeks to differentiate MySpace because the case dealt with a negligence claim, which she characterizes as an “individualized harm.” (Def.’s Resp., Dkt. # 27, at 19). MySpace makes no such distinction. The case dealt with a claim for individualized harm but did not limit its holding to those sorts of harms. Nor does it make sense that Congress’s goal of “[paving] the way for a robust new forum for public speech” would be served by treating individual tort claims differently than state regulatory violations. Bennett v. Google, LLC, 882 F.3d 1163, 1166 (D.C. Cir. 2018) (cleaned up). The text of the CDA is clear: “No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. § 230(e)(3). “[A]ny” state law necessarily includes those brought by state governments, so Defendant’s distinction between individual vs. regulatory claims is without merit.

The Fifth Circuit “and other circuits have consistently given [Section 230(c)] a wide scope.” Google, Inc. v. Hood, 822 F.3d 212, 220-21 (5th Cir. 2016) (quoting MySpace, 528 F.3d at 418). “The expansive scope of CDA immunity has been found to encompass state tort claims, alleged violations of state statutory law, requests for injunctive relief, and purported violations of federal statutes not specifically excepted by § 230(e).” Hinton v. Amazon.com.dedc, LLC, 72 F. Supp. 3d 685, 689 (S.D. Miss. 2014) (citing cases).

And while the court says 230 preemption might not apply to adult content websites that create and host their own content, it absolutely does apply to those that host 3rd party user-uploaded content.

Those Plaintiffs that develop and post their own content are not entitled to an injunction on Section 230 grounds. Still, other Plaintiffs, such as WebGroup, which operates XVideos, only hosts third-party content, and therefore is entitled to Section 230 protection.

Given all that it’s not difficult for the court to issue the injunction, noting that a violation of 1st Amendment rights is irreparable harm.

In short, Plaintiffs have shown that their First Amendment rights will likely be violated if the statute takes effect, and that they will suffer irreparable harm absent an injunction. Defendant suggests this injury is speculative and notimminent, (Def.’s Resp., Dkt. # 27, at 21–23), but this is doubtful. H.B. 1181 takes effect on September 1—mere days from today. That is imminent. Nor is the harm speculative. The Attorney General has not disavowed enforcement. To the contrary, her brief suggests a genuine belief that the law should be vigorously enforced because of the severe harms purportedly associated with what is legal pornography. (Id. at 1–5). It is not credible for the Attorney General to state that “[p]orn is absolutely terrible for our kids” but simultaneously claim that they will not enforce a law ostensibly aimed at preventing that very harm. Because the threat of enforcement is real and imminent, Plaintiffs’ harm is non-speculative. It is axiomatic that a plaintiff need not wait for actual prosecution to seek a preenforcement challenge. See Babbitt v. United Farm Workers Nat. Union, 442 U.S. 289, 298 (1979). In short, Plaintiffs have more than met their burden of irreparable harm.

All in all this is a very good, very clear, very strong ruling, highlighting how age verification mandates for adult content violate the 1st Amendment. It’s likely Texas will appeal, and the 5th Circuit has a history of ignoring 1st Amendment precedent, but for now this is a win for free speech and against mandatory age verification.

Filed Under: 1st amendment, adult content, age verification, chilling effects, hb 1181, pornography, preemption, privacy, section 230, standing, state laws, texas
Companies: free speech coalition

As Prudes Drive Social Media Takedowns, Museums Embrace… OnlyFans?

from the didn't-see-that-one-coming dept

Over the last few years, we’ve seen more and more focus on using content moderation efforts to stamp out anything even remotely upsetting to certain loud interest groups. In particular, we’ve seen NCOSE, formerly “Morality in Media,” spending the past few years whipping up a frenzy about “pornography” online. They were one of the key campaigners for FOSTA, which they flat out admitted was step one in their plan to ban all pornography online. Recently, we’ve discussed how MasterCard had put in place ridiculous new rules that were making life difficult for tons of websites. Some of the websites noted that Mastercard told them it was taking direction from… NCOSE. Perhaps not surprisingly, just recently, NCOSE gave MasterCard its “Corporate Leadership Award” and praised the company for cracking down on pornography (which NCOSE considers the same as sex trafficking or child sexual abuse).

Of course, all of this has some real world impact. We’ve talked about how eBay, pressured to remove such content because of FOSTA and its payment processors, has been erasing LGBTQ history (something, it seems, NCOSE is happy about). And, of course, just recently, OnlyFans came close to prohibiting all sexually explicit material following threats from its financial partners — only to eventually work out a deal to make sure it could continue hosting adult content.

But all of this online prudishness has other consequences. Scott Nover, over at Quartz, has an amazing story about how museums in Vienna are finding that images of classic paintings are being removed from all over the internet. Though, they’ve come up with a somewhat creative (and surprising) solution: the museums are setting up OnlyFans accounts, since the company is one of the remaining few which is able to post nude images without running afoul of content moderation rules. Incredibly, the effort is being run by Vienna’s Tourist Board.

The Vienna Tourist Board said its museums have faced a litany of online challenges. After the Natural History Museum Vienna posted images of the Venus of Willendorf, a 25,000-year-old Paleolithic limestone figurine, Facebook deleted the images and called them pornographic. The Albertina Museum had its TikTok account suspended in July for showing nudes from the Japanese artist and photographer ??Nobuyoshi Araki, CNN reported. And the Leopold Museum, which houses modern Austrian art, has struggled to advertise on social media because of the bans on nudity.

Even advertising the new OnlyFans account on other social media proved difficult, the board said. Twitter rejected links to the board?s website because it linked out to the OnlyFans account. (Twitter allows nudity on its platform as long as the account and images are labeled as such.) Facebook and Instagram only allowed ads featuring the Venus of Willendorf and a nude painting by Amedeo Modigliani after the tourist board explained the context to the platforms, but other images by artists Egon Schiele and Peter Paul Rubens were rejected.

This is all kind of ridiculous, but certainly falls into the Masnick’s Impossibility Theorem collection of the impossibility of content moderation at scale. Of course, it also recalls the case in France where Facebook took down an classic 1866 oil painting by Gustave Courbet, in which the court initially ruled that Facebook could not take down the image. Facebook has (for many years now) had exceptions to its nudity rule for “art,” but figuring out how to enforce that kind of thing is notoriously difficult.

And when you have prudish, moralizing busybodies like NCOSE pressuring companies to wipe out any and all nudity, it’s no surprise that this kind of thing is the result. But, really, all of this seems likely to backfire in the end. Cordoning off even artistic nudity into sites like OnlyFans… also means that more and more people may be introduced to OnlyFans “for the paintings,” only to discover what else is available there.

Filed Under: content moderation, museums, nudity, paintings, pornography, prudes, social media, vienna, vienna tourist bouard
Companies: onlyfans

Prudish Mastercard About To Make Life Difficult For Tons Of Websites

from the content-moderation-at-the-financial-layer dept

For all the attention that OnlyFans got for its shortlived plan to ban sexually explicit content in response to “pressures” from financial partners, as we’ve discussed, it was hardly the only website to face such moderation pressures from financial intermediaries. You can easily find articles from years back highlighting how payment processors were getting deeply involved in forcing website to moderate content.

And the OnlyFans situation wasn’t entirely out of nowhere either. Back in April we noted that Mastercard had announced its new rules for streaming sites, and other sites, such as Patreon, have already adjusted their policies to comply with Mastercard’s somewhat prudish values.

However, as those new rules that were announced months ago are set to become official in a few days, the practical realities of what Mastercard requires are becoming clear, and it’s a total mess. Websites have received “compliance packages” in which they have to set up a page to allow reports for potential abuse. In theory, this sounds reasonable — if there really is dangerous or illegal activity happening on a site, making it easier for people to report it makes sense. But some of it is highly questionable:

The form features a checklist of clickable boxes that anyone visiting an adult site is encouraged to use to report what they believe to be ?exposed personally identifiable information,? ?impersonation,? ?underage material,? ?copyright/trademark infringement? and ?spam” as well as ?prostitution or trafficking,? ?weapons,? ?drugs? and ?other.?

First off “prostitution” and “trafficking” are different things, and lumping them together is already somewhat problematic. As a webmaster explained to Xbiz, this seems to have come from “Morality in Media” — a horrifically repressed group of prudish busybodies who renamed themselves the “National Center on Sexual Exploitation” (NCOSE) and who were a major force behind FOSTA, which they admitted was part of their plan to outlaw all pornography. Last year, we noted that the group had put a major focus on demanding credit card companies stop working with porn sites, and some of Mastercard’s new rules are clearly designed to appease them.

“Groups like NCOSE are convinced that all adult content falls under ‘prostitution or trafficking,’” the webmaster noted. “This form is just encouraging them to bury us in paperwork that won't accomplish anything.”

Not only that, but every such report is cc’d back to Mastercard, which seems bizarrely stupid. Of course, as we’ve seen with things like copyright takedowns, having the mechanism means that it will get abused. A lot. And then campaigners like NCOSE will try to use the number of “reports” (not proof of anything actually illegal) as proof of “illegal activity” and push for new regulations.
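To make the mechanics concrete, here's a rough sketch, in Python with Flask, of the sort of report endpoint these compliance packages seem to demand. To be clear, this is my own illustrative guess, not Mastercard's actual spec: the endpoint name, field names, and relay stubs are all assumptions.

```python
# A hypothetical sketch of the report page the "compliance packages"
# describe. Endpoint name, categories, and the relay stubs are all
# assumptions for illustration, not Mastercard's actual requirements.
from flask import Flask, request

app = Flask(__name__)

CATEGORIES = [
    "exposed_pii", "impersonation", "underage_material",
    "copyright_trademark", "spam", "prostitution_or_trafficking",
    "weapons", "drugs", "other",
]

@app.post("/report")
def report():
    # Collect whichever checkboxes the visitor ticked.
    chosen = [c for c in CATEGORIES if request.form.get(c)]
    queue_for_site_review(chosen, request.form.get("details", ""))
    cc_to_card_network(chosen)  # every report, substantiated or not
    return "Report received", 200

def queue_for_site_review(categories, details):
    # Stand-in for the site's internal moderation queue.
    print("internal review:", categories, details)

def cc_to_card_network(categories):
    # Stand-in for the cc back to Mastercard described above.
    print("cc'd to card network:", categories)
```

Note that nothing in a flow like this distinguishes a good-faith report from a bad-faith one; the raw count is all that gets tallied, which is exactly what makes it useful to a pressure campaign.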

Also, the requirement that the form be linked from every page is likely to have much wider consequences as well:

The webmaster also noted that the form essentially forces all adult sites to add the words “underage material,” “prostitution or trafficking,” “weapons” and “drugs” to their metadata, which then puts them at risk of AI shadowbans or even state surveillance.

“I don't want that metadata associated with my brands,” they protested.
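And the shadowban worry isn't far-fetched. As a rough illustration, here's a naive keyword scanner of my own invention (not any platform's actual classifier, though keyword matching is a common first pass) showing how a page forced to embed the mandated form would score:

```python
# A naive keyword scanner, invented for illustration; real classifiers
# are more sophisticated, but keyword matching is a common first pass.
FLAGGED_TERMS = [
    "underage material",
    "prostitution or trafficking",
    "weapons",
    "drugs",
]

def risk_score(page_text: str) -> int:
    """Count how many flagged terms appear anywhere in the page."""
    text = page_text.lower()
    return sum(term in text for term in FLAGGED_TERMS)

# The mandated report form puts every term on every page of the site.
report_form = ("Report: underage material / prostitution or trafficking / "
               "weapons / drugs / other")
print(risk_score(report_form))  # -> 4, the maximum possible score
```

By this crude measure, a site hosting nothing but consensual, legal content scores exactly as badly as one that actually hosts what the form names.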

As we've said in other situations, one of the big questions and concerns that comes up when infrastructure-layer partners get into the content moderation game is how much competition there is in the market. If websites could simply drop Mastercard, maybe it wouldn't be such a big deal. But, unfortunately, right now it's hard for a site that wants to accept payments to not work with Mastercard. Both it and Visa (and, to a lesser extent, American Express) are basically required if you want to accept payments for anything. Perhaps that will change over time (and things like this might help drive that change). But in the meantime, it certainly appears that a disingenuous and dishonest campaign by a prudish group that hates pornography has convinced Mastercard to make life difficult for lots of websites.

Filed Under: adult content, content moderation, infrastructure, infrastructure moderation, payment processors, pornography, sex trafficking
Companies: mastercard

Content Moderation Case Study: Social Media Upstart Parler Struggles To Moderate Pornography (2020)

from the not-so-easy-to-be-clean dept

Summary: Upstart social network Parler (which is currently offline, but attempting to come back) has received plenty of attention for trying to take on Twitter — mainly focusing on attracting many of the users who have been removed from Twitter or who are frustrated by how Twitter's content moderation policies are applied. The site may only boast a fraction of the users that the social media giants have, but its influence can't be denied.

Parler promised to be the free speech playground Twitter never was. It claimed it would never “censor” speech that hadn’t been found illegal by the nation’s courts. When complaints about alleged bias against conservatives became mainstream news (and the subject of legislation), Parler began to gain traction.

But the company soon realized that moderating content (or not doing so) wasn't as easy as it hoped it would be. The problems began with Parler's own description of its moderation philosophy, which cited authorities that have no control over its content: the FCC, and the Supreme Court, whose First Amendment rulings govern what speech the government may regulate, not what private websites may host.

Once it became clear Parler was becoming the destination for users banned from other platforms, Parler began to tighten up its moderation efforts, resulting in some backlash from users. CEO John Matze issued a statement, hoping to clarify Parler’s moderation decisions.

Here are the very few basic rules we need you to follow on Parler. If these are not to your liking, we apologize, but we will enforce:

– When you disagree with someone, posting pictures of your fecal matter in the comment section WILL NOT BE TOLERATED
– Your Username cannot be obscene like "CumDumpster"
– No pornography. Doesn't matter who, what, where,

Parler's hardline stance on certain content appeared to be more extreme than that of the platforms (Twitter especially) that Parler's early adopters decried as too restrictive. In addition to banning content allowed by other platforms, Parler claimed to pull the plug on the sharing of porn, even though it had no Supreme Court/FCC precedent justifying this act.

Parler appears to be unable — at least at this point — to moderate pornographic content. Despite its clarification of its content limitations, Parler does not appear to have the expertise or the manpower to dedicate to removing porn from its service.

A report by the Houston Chronicle (which builds on reporting by the Washington Post) notes that Parler has rolled back some of its anti-porn policies. But it still wishes to be seen as a cleaner version of Twitter — one that caters to “conservative” users who feel other platforms engage in too much moderation.

According to this report, Parler outsources its anti-porn efforts to volunteers who wade through user reports to find content forbidden by the site's policies. Despite its desire to limit the spread of pornography, Parler has become a destination for porn seekers.

The Post’s review found that searches for sexually explicit terms surfaced extensive troves of graphic content, including videos of sex acts that began playing automatically without any label or warning. Terms such as #porn, #naked and #sex each had hundreds or thousands of posts on Parler, many of them graphic. Some pornographic images and videos had been delivered to the feeds of users tens of thousands of times on the platform, according to totals listed on the Parler posts.

Parler continues to struggle with the tension of upholding its interpretation of the First Amendment and ensuring its site isn’t overrun by content it would rather not host.

Decisions to be made by Parler:

Questions and policy implications to consider:

Resolution: Parler's Chief Operating Officer responded to these stories after they were published by insisting that its hands-off approach to pornography made sense, but also claiming that he did not want pornographic “spam.”

After this story was published online, Parler Chief Operating Officer Jeffrey Wernick, who had not responded to repeated pre-publication requests seeking comment on the proliferation of pornography on the site, said he had little knowledge regarding the extent or nature of the nudity or sexual images that appeared on his site but would investigate the issue.

“I don't look for that content, so why should I know it exists?” Wernick said, but he added that some types of behavior would present a problem for Parler. “We don't want to be spammed with pornographic content.”

Given how Parler's stance on content moderation of pornographic material has already changed significantly in the short time the site has been around, it is likely to continue to evolve.

Originally posted to the Trust & Safety Foundation website.

Filed Under: content moderation, free speech, pornography, social media
Companies: parler, twitter

Bonkers, Unconstitutional Rhode Island Porn Tax Law Faces Backlash From Elizabeth Smart Over Use Of Her Name

from the revictimizing-again-and-again dept

It may be time to test Rhode Island's water for heavy metals, as the state is experiencing a spasm of stupid when it comes to lawmaking. You will recall that there have been two recent proposals for new taxes in Rhode Island: one that would target video games rated "Mature" or higher, and one taxing the removal of porn-blocking software from any internet-connected device sold in the state. If both sound almost hilariously unconstitutional to you, don't worry, they are. These laws likely won't pass and, if they do, the Supreme Court will certainly look upon them the same way a professional golfer looks at a two-inch putt. That the anti-porn law is largely the work of Chris Sevier (or Mark Sevier, when the mood strikes him), a man who once tried to marry his own computer in protest of gay marriage and has twice been charged with stalking, gives rise to one question: why are legislators in several states paying any of this any attention at all?

Sadly, it's an open question. Mostly unreported until now is that Sevier is pitching this law, formally the Human Trafficking and Child Exploitation Prevention Act, by slapping Elizabeth Smart's name all over it and promoting it as the Elizabeth Smart Law. Smart, should you not know, was kidnapped when she was a teenager and forced by her captor to endure all sorts of inhuman things, including being made to watch pornography. Smart now often speaks about the harm some pornography can do to some people in some situations. What she has not done, apparently, is consent to having her name used to push this particular bill in Rhode Island.

Smart, who was kidnapped from her Utah home as a teenager in 2002, sent a cease-and-desist letter to demand her name be removed from it. And the National Center on Sexual Exploitation, an anti-pornography advocacy group, demanded last year that the man behind the legislation, Chris Sevier, stop claiming it supported his work.

Sevier said he chose Smart’s name because she has spoken about the negative effects of pornography, including saying that pornography during her captivity “made my living hell worse.”

After being told by AP earlier this month that Smart’s lawyer was sending a cease-and-desist letter, Sevier said the name “Elizabeth Smart Law” was an “offhand name” that had been given to the legislation by lawmakers. The bill is also being promoted as the Human Trafficking and Child Exploitation Prevention Act.

Cute, but as of this writing, Sevier still has Elizabeth Smart's name slapped across the top of the website he's using to push the bill. Regardless of who came up with the idea to use her name, Sevier has used it, is using it, and by all accounts isn't intending to stop using it anytime soon.

Asked if he would take her name off the site, Sevier wouldn’t say.

“It’s not that we will take it down or won’t take it down,” he said. “It’s irrelevant.”

And, yet, it is hardly irrelevant to the person whose name Sevier is using so brazenly. Let's not forget that Smart is herself a victim of horrible, horrible crimes. She has since made a career of advocating for child safety and also contributes to news organizations. Whatever you might think of her stances, she is a smart, courageous woman who has tried to make something meaningful out of the absolutely awful deck of cards she was dealt. This pernicious, continued use of her name should certainly qualify as re-victimization.

So, again, why are legislators working with this clown?

Filed Under: chris sevier, elizabeth smart, laws, porn filter, pornography, rhode island, tax

Pakistan Orders ISPs To Block 429,343 Websites Completely, Because There's Porn On The Internet

from the i'm-sure-all-have-been-carefully-reviewed dept

It appears that efforts to censor the internet globally continue to spread, with the latest being a report out of Pakistan that the Pakistan Telecommunication Authority (PTA) has told ISPs that they need to start blocking an astounding 429,343 websites at the domain level as quickly as possible, following a Supreme Court order to the PTA about the evils of porn online.

The move apparently follows a recent order by the Supreme Court wherein the telecom sector's regulatory body had been asked to “take remedial steps to quantify the nefarious phenomenon of obscenity and pornography that has an imminent role to corrupt and vitiate the youth of Pakistan”.

PTA said it has decided to take pre-emptive measures to block such websites at the domain level to control dissemination of pornographic content through the internet as it provided ISPs with a list of 429,343 domains to be blocked on their respective networks.

The order apparently was issued just a few weeks ago, which raises the question of how the PTA put together a list of so many domains so quickly… and how carefully that list has been vetted. The answer, of course, is that it hasn’t been vetted. And that means that tons of perfectly legitimate content is about to get blocked in Pakistan. Remember, this is the same country that once blocked all of YouTube, and did so in a way that basically knocked Pakistan off the internet, while also blocking YouTube throughout many countries across Asia. Let’s hope mistakes of that nature aren’t made again.

Even so, it's pretty obvious that mistakes will be made. First, that list is going to include tons of sites that aren't pornography. Is there a way to appeal? Who knows! Second, it's likely that in the process of blocking "at the domain level," some ISPs may choose to block the IP addresses of certain sites, not realizing that many IP addresses are shared among multiple domains, meaning that lots of other sites may get sucked up as well. And then there's the question of what good this will do anyway. People who really want to access porn on the internet won't have trouble finding it. I'm pretty sure there are more than 429,343 websites with porn on the internet, and even if there weren't, I'm guessing that VPNs and proxies work just as well in Pakistan as they do elsewhere.
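To see why IP-level blocking is such a blunt instrument, here's a quick Python sketch that groups domains by the address they resolve to. The domain names are placeholders, but shared hosting and CDNs make the pattern extremely common:

```python
# Sketch of shared-IP collateral damage: the domains are placeholders,
# but any shared host or CDN will show the same pattern.
import socket
from collections import defaultdict

def group_by_ip(domains):
    """Resolve each domain and bucket the ones that share an address."""
    buckets = defaultdict(list)
    for domain in domains:
        try:
            buckets[socket.gethostbyname(domain)].append(domain)
        except socket.gaierror:
            pass  # unresolvable; skip it
    return buckets

# Null-route one "bad" domain's IP and everything else in its bucket
# goes dark with it, porn or not.
for ip, names in group_by_ip(["example.com", "example.net", "example.org"]).items():
    if len(names) > 1:
        print(f"{ip} is shared by: {', '.join(names)}")
```

The YouTube incident mentioned above was this same failure mode at the routing layer: an overbroad block that leaked far past its intended target.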

Filed Under: censorship, dns blocking, free speech, pakistan, pornography, site blocking

China's Internet Giant Sina.com Loses Publication License For Publishing Pornography — 20 Articles And Four Videos

from the red-line-of-law dept

One of the shrewder moves of the Chinese government was to allow home-grown startups like Alibaba, Baidu, Sina and Tencent to stand in for US Internet companies that were blocked in China. Sina is best-known for its Weibo service, the leading microblogging platform in China, and has featured several times on Techdirt as the Chinese authorities have tried to rein in the discussions there when they started straying into forbidden areas. Surprisingly, it’s another division of Sina, its online publishing arm, that has just been hit by a serious punishment from the Chinese government:

> China's Internet giant Sina.com will be stripped of its online publication license, a penalty that might partially ban its operations, after articles and videos on the site fell prey to the country's high-profile anti-porn movement.
>
> According to a statement released on Thursday by the National Office Against Pornographic and Illegal Publications, 20 articles and four videos posted on Sina.com were confirmed to have contained lewd and pornographic content following “a huge amount” of public tip-offs.
>
> As a result, the State Administration of Press, Publication, Radio, Film and Television decided to revoke the company's two crucial licenses on Internet publication and audio and video dissemination and impose “a large number of fines.”
>
> People suspected of criminal offenses in the case have been transferred to police organs for further investigation, the statement said.

That comes from an article published by Xinhua.net, the Chinese government’s official news service, which therefore lends the following comment extra weight:

> Last year, Sina.com received administrative punishments twice for spreading online publications with banned contents, and its latest offense seems to have pushed authorities over the edge, with the statement describing the website as “having not learned a lesson at all and turning a cold shoulder on social responsibility.”
>
> “[The website] overstepped the red line of law… and it must be punished in accordance with laws and regulations,” it said.

Well, that may be, but it does seem curious that such a high-profile and popular Internet company should be so severely slapped down in public over just “20 articles and four videos” — a tiny proportion of its total holdings. It’s hard not to see this as a warning to all China’s Internet companies to be careful. That interpretation is bolstered by another comment reported by Xinhua.net:

> Meanwhile, the office warned other Internet service providers against similar errors, telling them to set up a comprehensive online info management system and check themselves for banned content.
>
> Earlier this week, the country's “Cleaning the Web 2014” campaign saw 110 websites shut down and some 3,300 accounts on China-based social networking services as well as online forums deleted.
>
> The office vowed to maintain a persistent crackdown on online pornography and hand down whatever punishments violators deserve, whether it be fines, license removals or pursuit of criminal liabilities.

This makes it clear that there is a crucial quid pro quo for China’s giant Internet companies, no matter how big they have now become (in 2012, Alibaba’s sales were bigger than those of Amazon and eBay combined): feel free to make big capitalist profits serving the huge demand for online services in China, but just remember never to overstep the state’s “red line of law”.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: censorship, china, great firewall, pornography, publication license, weibo
Companies: sina