anonymity – Techdirt

In YOLO Ruling, Ninth Circuit Cracks Open Pandora’s Box For Section 230

from the you-only-destroy-the-internet-once dept

The Ninth Circuit appeals court seems to have figured out the best way to “reform” Section 230: by pretending it doesn’t apply to some stuff that the judges there just randomly decide it doesn’t apply to anymore. At least that’s my reading of the recent ruling against YOLO Technologies.

Now, let’s start by making something clear: YOLO Technologies appears to be a horrible company, making a horrible service, run by horrible people. We’ll get more into the details of that below. I completely understand the instinctual desire that YOLO should lose. That said, there are elements of this ruling that could lead to dangerous results for other services that aren’t horrible. And that’s what always worries me.

First, a quick history lesson: over fifteen years ago, we wrote about the Ninth Circuit’s ruling in Barnes v. Yahoo. At the time, and in the years since Barnes, that ruling seemed potentially problematic. The case revolved around another horrible situation, where an ex-boyfriend posted fake profiles. Barnes contacted Yahoo and reached a Director of Communications who promised to “take care of” the fake profiles.

However, the profiles remained up. Barnes sued, and Yahoo used 230 to try to get out of it. Much of the Barnes decision is very good. It’s an early decision that makes it clear Section 230 protects websites for their publishing activity of third-party content. It clearly debunks the completely backwards notion that you are “either a platform or a publisher” and only “platforms” get 230 protections. In Barnes, the court is quite clear that what Yahoo is doing is publishing activity, but since it is an interactive computer service and the underlying content is from a third party, it cannot be held liable as the publisher for that publishing activity under Section 230.

And yet, the court still sided with Barnes, noting that the direct promise from the employee at Yahoo to take care of the content went outside of traditional publishing activity and created a promise, and therefore a duty to live up to that promise.

In the fifteen years since that ruling, there have been various attempts to use Barnes to get around Section 230, but most have failed, as they didn’t have that clear promise like Barnes had. However, in the last couple of months, it seems the Ninth Circuit has decided that the “promise” part of Barnes can be used more broadly, and that could create a mess.

In YOLO, the company makes an add-on to Snapchat that lets users post questions and polls on the app. Other users could respond anonymously (they also had the option to reveal who they were). The app was very popular, but it shouldn't be a huge surprise that some users used it to harass and abuse others.

However, YOLO claimed publicly, and in how it represented the service to users who signed up, that one way it would deal with harassment and abuse would be to reveal those users. As the Ninth Circuit explains:

As a hedge against these potential problems, YOLO added two “statements” to its application: a notification to new users promising that they would be “banned for any inappropriate usage,” and another promising to unmask the identity of any user who “sen[t] harassing messages” to others.

But YOLO either never actually intended to live up to this or simply became overwhelmed, because it appears not to have done it.

Now, this is always a bit tricky, because what some users consider abuse and harassment, a service (or other users!) might not consider to be abuse and harassment. But, in this case, it seems pretty clear that whatever trust & safety practices YOLO had were not living up to the notification it gave to users:

All four were inundated with harassing, obscene, and bullying messages including “physical threats, obscene sexual messages and propositions, and other humiliating comments.” Users messaged A.C. suggesting that she kill herself, just as her brother had done. A.O. was sent a sexual message, and her friend was told she was a “whore” and “boy-obsessed.” A.K. received death threats, was falsely accused of drug use, mocked for donating her hair to a cancer charity, and exhorted to “go kill [her]self,” which she seriously considered. She suffered for years thereafter. Carson Bride was subjected to constant humiliating messages, many sexually explicit and highly disturbing.

These users, and their families, sought to unmask the abusers. Considering that YOLO told users that’s how abuse and harassment would be dealt with, it wasn’t crazy for them to think that might work. But it did not. At all.

A.K. attempted to utilize YOLO’s promised unmasking feature but received no response. Carson searched the internet diligently for ways to unmask the individuals sending him harassing messages, with no success. Carson’s parents continued his efforts after his death, first using YOLO’s “Contact Us” form on its Customer Support page approximately two weeks after his death. There was no answer. Approximately three months later, his mother Kristin Bride sent another message, this time to YOLO’s law enforcement email, detailing what happened to Carson and the messages he received in the days before his death. The email message bounced back as undeliverable because the email address was invalid. She sent the same to the customer service email and received an automated response promising an answer that never came. Approximately three months later, Kristin reached out to a professional friend who contacted YOLO’s CEO on LinkedIn, a professional networking site, with no success. She also reached out again to YOLO’s law enforcement email, with the same result as before.

So, uh, yeah. Not great! Pretty terrible. And so there’s every reason to want YOLO to be in trouble here. The court determines that YOLO’s statements about unmasking harassers meant that it had made a promise, a la Barnes, and therefore had effectively violated an obligation which was separate from its publishing activities that were protected by Section 230.

Turning first to Plaintiffs’ misrepresentation claims, we find that Barnes controls. YOLO’s representation to its users that it would unmask and ban abusive users is sufficiently analogous to Yahoo’s promise to remove an offensive profile. Plaintiffs seek to hold YOLO accountable for a promise or representation, and not for failure to take certain moderation actions. Specifically, Plaintiffs allege that YOLO represented to anyone who downloaded its app that it would not tolerate “objectionable content or abusive users” and would reveal the identities of anyone violating these terms. They further allege that all Plaintiffs relied on this statement when they elected to use YOLO’s app, but that YOLO never took any action, even when directly requested to by A.K. In fact, considering YOLO’s staff size compared to its user body, it is doubtful that YOLO ever intended to act on its own representation.

And, again, given all the details, this feels understandable. But I still worry about where the boundaries are here. We’ve seen plenty of other cases. For example, six years ago, when the white supremacist Jared Taylor sued Twitter for banning him, he argued that it could not ban users because Twitter had said that it “believe[s] in free expression and believe[s] every voice has the power to impact the world.”

So it seems like there needs to be some clear line. In Barnes, there was a direct communication between the person and the company where an executive at the company directly made a promise to Barnes. That’s not the case in the YOLO ruling.

And when we combine the YOLO ruling with the Ninth Circuit’s ruling in the Calise case back in June, things get a little more worrisome. I didn’t get a chance to cover that ruling when it came out, but Eric Goldman did a deep dive on it and why it’s scary. That case also uses Barnes’ idea of a “promise” by the company to mean a “duty” to act that is outside of Section 230.

In that case, it was regarding scammy ads from Chinese advertisers. The court held that Meta had a "duty," based on public comments, to somehow police advertisements, and that this duty fell outside of its Section 230 protections. That ruling also contained a separate concurrence (oddly written by the same judge who wrote the opinion, but which apparently he couldn't get others to agree to) that just out and out trashed Section 230 and basically made it clear he hated it.

And thus, as Eric Goldman eloquently puts it, you have the Ninth Circuit “swiss-cheesing” Section 230 by punching all kinds of holes in it, enabling more questionable lawsuits to be brought, arguing that this or that statement by a company or a company employee represented some form of a promise under Barnes, and therefore a “duty” outside of Section 230.

In summary, Barnes is on all fours with Plaintiffs’ misrepresentation claims here. YOLO repeatedly informed users that it would unmask and ban users who violated the terms of service. Yet it never did so, and may have never intended to. Plaintiffs seek to enforce that promise—made multiple times to them and upon which they relied—to unmask their tormentors. While yes, online content is involved in these facts, and content moderation is one possible solution for YOLO to fulfill its promise, the underlying duty being invoked by the Plaintiffs, according to Calise, is the promise itself. See Barnes, 570 F.3d at 1106–09. Therefore, the misrepresentation claims survive.

And maybe that feels right in this case, where YOLO's behavior is so egregious. But it's unclear where this theory ends, and that leaves it wide open for abuse. For example, how would this case have turned out if the messages sent to the kids weren't actually "abusive" or "harassing"? I'm not saying that happened here, as it seems pretty clear that they were. But imagine a hypothetical where many people did not feel that the behavior was actually abusive, but the user claimed that it was. Perhaps they even made that claim as a way to be abusive right back.

Under this ruling, would YOLO still need to reveal who the anonymous user was to avoid liability?

That seems… problematic?

However, the real lesson here is that anyone who runs a website now needs to be way more careful about what they say regarding how they moderate or do anything. Because anything they say could be used in court as an argument for why Section 230 doesn't apply. Indeed, I could see this conflicting with other laws that require websites to be more transparent about their moderation practices, since that very transparency could now strip away 230 protections.

And I really worry about how this plays out in situations where a platform changes trust & safety policies mid-stream. I have no idea how that works out. What if, when you signed up, the platform had a policy that said it would remove certain kinds of content, but later decided to change that policy because it was ineffective? Would someone who signed up under the old policy regime now claim that the new policy regime violates the original promise that got them to sign up?

On top of that, I fear that this will lead companies to be way less transparent about their moderation policies and practices. Because now, being transparent about moderation policies means that anyone who thinks you didn’t enforce them properly might be able to sue and get around Section 230 by arguing you didn’t fulfill the duty you promised.

All that said, there is some other good language in this decision. The plaintiffs also tried a “product liability” claim, which has become a hipster legal strategy for many plaintiffs’ lawyers to try to get around Section 230. It has worked in some cases, but it fails here.

At root, all Plaintiffs’ product liability theories attempt to hold YOLO responsible for users’ speech or YOLO’s decision to publish it. For example, the negligent design claim faults YOLO for creating an app with an “unreasonable risk of harm.” What is that harm but the harassing and bullying posts of others? Similarly, the failure to warn claim faults YOLO for not mitigating, in some way, the harmful effects of the harassing and bullying content. This is essentially faulting YOLO for not moderating content in some way, whether through deletion, change, or suppression.

The court also makes clear, contrary to the claims we keep hearing, that an app having anonymous messaging as a feature isn't an obvious liability. We've seen people make this claim in many cases, but the court clearly rejects it:

Here, Plaintiffs allege that anonymity itself creates an unreasonable risk of harm. But we refuse to endorse a theory that would classify anonymity as a per se inherently unreasonable risk to sustain a theory of product liability. First, unlike in Lemmon, where the dangerous activity the alleged defective design incentivized was the dangerous behavior of speeding, here, the activity encouraged is the sharing of messages between users. See id. Second, anonymity is not only a cornerstone of much internet speech, but it is also easily achieved. After all, verification of a user’s information through government-issued ID is rare on the internet. Thus we cannot say that this feature was uniquely or unreasonably dangerous.

So, this decision is not the worst in the world, and it does seem targeted at a truly awful company. But poking a hole like this in Section 230 so frequently leads to others piling through that hole and widening it.

And one legitimate fear of a ruling like this is that it will actually harm efforts to get transparency in moderation practices, because the more companies say, the more liability they may face.

Filed Under: 9th circuit, anonymity, duty, duty of care, promises, promissory estoppel, section 230, terms of service
Companies: yolo

FTC Oversteps Authority, Demands Unconstitutional Age Verification & Moderation Rules

from the is-that-allowed? dept

Call me crazy, but I don't think it's okay to go beyond what the law allows, even in pursuit of "good" intentions. It is consistently frustrating how this FTC continues to push the boundaries of its own authority, even when the underlying intentions may be good. The latest example: in its order against a sketchy messaging app, the FTC has demanded things it's not clear it can order.

This has been a frustrating trend with this FTC. Making sure the market is competitive is a good thing, but bringing weak and misguided cases makes a mockery of its antitrust power. Getting rid of non-competes is a good thing, but the FTC doesn’t have the authority to do so.

Smacking down sketchy anonymous messaging apps preying on kids is also a good thing, but once again, the FTC seems to go too far. A few weeks ago, the FTC announced an order against NGL Labs, a very sketchy anonymous messaging app that was targeting kids and leading to bullying.

It certainly appears that the app was violating some COPPA rules on data collection for sites targeting kids. And it also appears that the app's founders were publicly misrepresenting aspects of the app, as well as hiding that users who were charged were actually being signed up for a weekly subscription. So I have no issues with the FTC going after the company for those things. Those are the kinds of actions the FTC should be taking.

The FTC’s description highlights at least some of the sketchiness behind the app:

After consumers downloaded the NGL app, they could share a link on their social media accounts urging their social media followers to respond to prompts such as “If you could change anything about me, what would it be?” Followers who clicked on this link were then taken to the NGL app, where they could write an anonymous message that would be sent to the consumer.

After failing to generate much interest in its app, NGL in 2022 began automatically sending consumers fake computer-generated messages that appeared to be from real people. When a consumer posted a prompt inviting anonymous messages, they would receive computer-generated fake messages such as “are you straight?” or “I know what you did.” NGL used fake, computer-generated messages like these or others—such as messages regarding stalking—in an effort to trick consumers into believing that their friends and social media contacts were engaging with them through the NGL App.

When a user would receive a reply to a prompt—whether it was from a real consumer or a fake message—consumers saw advertising encouraging them to buy the NGL Pro service to find out the identity of the sender. The complaint alleges, however, that consumers who signed up for the service, which cost as much as $9.99 a week, did not receive the name of the sender. Instead, paying users only received useless “hints” such as the time the message was sent, whether the sender had an Android or iPhone device, and the sender’s general location. NGL’s bait-and-switch tactic prompted many consumers to complain, which NGL executives laughed off, dismissing such users as “suckers.”

In addition, the complaint alleges that NGL violated the Restore Online Shoppers’ Confidence Act by failing to adequately disclose and obtain consumers’ consent for such recurring charges. Many users who signed up for NGL Pro were unaware that it was a recurring weekly charge, according to the complaint.

But just because the app was awful, the founders behind it were awful, and it seems clear they violated some laws, does not mean any and all remedies are open and appropriate.

And here, the FTC is pushing for some remedies that are likely unconstitutional. First off, it requires age verification and blocking all kids under the age of 18.

But, again, most courts have repeatedly made clear that government-mandated age verification or age-gating is unconstitutional on the internet. The Supreme Court just agreed to hear yet another case on this point, but it’s still a weird choice for the FTC to demand this here, knowing that the issue could end up before a hostile Supreme Court.

On top of that, as Elizabeth Nolan Brown points out at Reason, it appears that some of the other things the FTC is mad about regarding NGL boil down to the idea that offering anonymous communications tools to kids is somehow inherently harmful behavior that shouldn't be allowed:

“The anonymity provided by the app can facilitate rampant cyberbullying among teens, causing untold harm to our young people,” Los Angeles District Attorney George Gascón said in a statement.

“NGL and its operators aggressively marketed its service to children and teens even though they were aware of the dangers of cyberbullying on anonymous messaging apps,” the FTC said.

Of course, plenty of apps allow for anonymity. That this has the potential to lead to bullying can’t be grounds for government action.

So, yes, I think the FTC can call out violating COPPA and take action based on that, but I don't see how it can legitimately force the app to age gate at a time when multiple courts have already said the government cannot mandate such a thing. And it shouldn't be able to claim that anonymity itself is somehow obviously problematic, especially at a time when studies often suggest the opposite for some kids who need their privacy.

The other problematic bit is that the FTC is mad that NGL may have overstated their content moderation abilities. The FTC seems to think that it can legally punish the company for not living up to the FTC’s interpretation of NGL’s moderation promises. From the complaint itself:

Defendants represent to the public the NGL App is safe for children and teens to use because Defendants utilize “world class AI content moderation” including “deep learning and rule-based character pattern-matching algorithms” in order to “filter out harmful language and bullying.” Defendants further represent that they “can detect the semantic meaning of emojis, and [] pull[] specific examples of contextual emoji use” allowing them to “stay on trend, [] understand lingo, and [] know how to filter out harmful messages.”

In reality however, Defendants’ representations are not true. Harmful language and bullying, including through the use of emojis, are commonplace in the NGL App—a fact of which Defendants have been made aware through numerous complaints from users and their parents. Media outlets have reported on these issues as well. For example, one media outlet found in its testing of the NGL App that the App’s “language filters allowed messages with more routine bullying terms . . . including the phrases ‘You’re fat,’ ‘Everyone hates you,’ ‘You’re a loser’ and ‘You’re ugly.’” Another media outlet reported that it had found that “[t]hreatening messages with emojis that could be considered harmful like the knife and dagger icon were not blocked.” Defendants reviewed several of these media articles, yet have continued to represent that the NGL App is “safe” for children and teens to use given the “world class AI content moderation” that they allegedly employ.

I recognize that some people may be sympathetic to the FTC here. It definitely looks like NGL misrepresented the power of their moderation efforts. But there have been many efforts by governments or angry users to sue companies whenever they feel that they have not fully lived up to public marketing statements regarding their moderation.

People have sued companies like Facebook and Twitter for being banned, arguing that public statements about "free speech" by those companies meant that they shouldn't have been banned. How is this any different from that?

And the FTC's claim here, that promising your app is "safe" means you've violated the law whenever someone can find "harmful language and bullying" on the platform, flies in the face of everything we just heard from the Supreme Court in the Moody case.

The FTC doesn’t get to be the final arbiter of whether or not a company successfully moderates away unsafe content. If so, it will be subject to widespread abuse. Just think of whichever presidential candidate you dislike the most, and what would happen if they could have their FTC investigate any platform they dislike for not fairly living up to their public promises on moderation.

It would be a dangerous, free speech-attacking mess.

Yes, NGL seems like a horrible company, run by horrible people. But go after them on the basics: the data collection in violation of COPPA and the sneaky subscription charging. Not things like age verification and content moderation.

Filed Under: age gating, anonymity, content moderation, coppa, ftc
Companies: ngl

California State Senator Pushes Bill To Remove Anonymity From Anyone Who Is Influential Online

from the someone-buy-padilla-a-'constitutional-lawmaking-for-dummies'-book dept

What the fuck is wrong with state lawmakers?

It seems that across the country, they cannot help but to introduce the absolute craziest, obviously unconstitutional bullshit, and seem shocked when people suggest the bills are bad.

The latest comes from California state Senator Steve Padilla, who recently proposed a ridiculous bill, SB 1228, to end anonymity for “influential” accounts on social media. (I saw some people online confusing him with Alex Padilla, who is the US Senator from California, but they’re different people.)

This bill would require a large online platform, as defined, to seek to verify the name, telephone number, and email address of an influential user, as defined, by a means chosen by the large online platform and would require the platform to seek to verify the identity of a highly influential user, as defined, by asking to review the highly influential user’s government-issued identification.

This bill would require a large online platform to note on the profile page of an influential or highly influential user, in type at least as large and as visible as the user’s name, whether the user has been authenticated pursuant to those provisions, as prescribed, and would require the platform to attach to any post of an influential or highly influential user a notation that would be understood by a reasonable person as indicating that the user is authenticated or unauthenticated, as prescribed.

First off, this is unconstitutional. The First Amendment has been (rightly) read to protect anonymity in most cases — especially regarding election-related information. That's the whole point of McIntyre v. Ohio. It's difficult to know what Padilla is thinking, especially given his blatant admission that this bill seeks to target speech regarding elections. There are exceptions to the right to be anonymous, but they are limited to pretty specific scenarios. Cases like Dendrite lay out a pretty strict test for de-anonymizing a person (limited as precedent, though adopted by other courts), and unmasking can come only after a plaintiff demonstrates to a court that the underlying speech is actionable under the law. And not, as in this bill, because the speech is "influential."

Padilla’s bill recognizes none of that, and almost gleefully makes it clear that he is either ignorant of the legal precedents here, or he doesn’t care. As he lays out in his own press release about the bill, he wants platforms to “authenticate” users because he’s worried about misinformation online about elections (again, that’s exactly what the McIntyre case said you can’t target this way).

“Foreign adversaries hope to harness new and powerful technology to misinform and divide America this election cycle,” said Senator Steve Padilla. “Bad actors and foreign bots now have the ability to create fake videos and images and spread lies to millions at the touch of a button. We need to ensure our content platforms protect against the kind of malicious interference that we know is possible. Verifying the identities of accounts with large followings allows us to weed out those that seek to corrupt our information stream.”

That’s an understandable concern, but an unconstitutional remedy. Anonymous speech, especially political speech, is a hallmark of American freedom. Hell, the very Constitution that this law violates was adopted, in part, due to “influential” anonymous pamphlets.

The bill is weird in other ways as well. It seems to be trying to attack both anonymous influential users and AI-generated content in the same bill, and does so sloppily. It defines an "influential user" as someone where:

“Content authored, created, or shared by the user has been seen by more than 25,000 users over the lifetime of the accounts that they control or administer on the platform.”

This is odd on multiple levels. First, “over the lifetime of the account,” would mean a ridiculously large number of accounts will, at some point in the future, reach that threshold. Basically, you make ONE SINGLE viral post, and the social media site has to get your data and you can no longer be anonymous. Second, does Senator Padilla really think it’s wise to require social media sites to have to track “lifetime” views of content? Because that could be a bit of a privacy nightmare.

And then it adds in a weird AI component. This also counts as an “influential user”:

Accounts controlled or administered by the user have posted or sent more than 1,000 pieces of content, whether text, images, audio, or video, that are found to be 90 percent or more likely to contain content generated by artificial intelligence, as assessed by the platform using state-of-the-art tools and techniques for detecting AI-generated content.

So, first, posting 1,000 pieces of AI-generated content hardly makes an account “influential.” There are plenty of AI-posting bots that have little to no followings. Why should they have to be “verified” by platforms? Second, I have a real problem with the whole “if ‘state-of-the-art tools’ identify your content as mostly AI, then you lose your rights to anonymity,” when there’s zero explanation of why, or whether or not these “state-of-the-art tools” are even reliable (hint: they’re not!). Has Padilla run an analysis of these tools?

There are higher thresholds that designate someone as “highly influential”: 100,000 lifetime user views and 5,000 potentially AI-created pieces of content. Under these terms, I would be legally designated “highly influential” on a few platforms (my parents will be so proud). But then, “large online platforms” would be required to “verify” the “influential users’” identity, including the user’s name, phone number, and email, and would be required to “seek” government-issued IDs from “highly influential” users.

There is no fucking way I’m giving ExTwitter my government ID, but under the bill, Elon Musk would be required to ask me for it. No offense, Senator Padilla, but I’m taking the state of California to court for violating my rights long before I ever hand my driver’s license over to Elon Musk at your demand.

While the bill only says that the platforms “shall seek” this info, it would then require them to add a tag “at least as large and as visible as the user’s name” to their profile designating them “authenticated” or “unauthenticated.”

It would then further require that any site allow users to block all content from “unauthenticated influential or highly influential” users.

It even gets down to the level of product management, in that it tells "large online platforms" how they have to handle showing content from "unauthenticated" influential users:

(1) A large online platform shall attach to any post of an influential or highly influential user a notation that would be understood by a reasonable person as indicating that the user is authenticated or unauthenticated.

(2) For a post from an unauthenticated influential or highly influential user, the notation required by paragraph (1) shall be visible for at least two seconds before the rest of the post is visible and then shall remain visible with the post.

Again, there is so much that is problematic about this bill. Anyone who knows anything about anonymity would know this is so far beyond what the Constitution allows that it should be an embarrassment for Senator Padilla, who should pull this bill.

And, on top of anything else, this would become a massive target for anyone who wants to identify anonymous users. Companies are going to get hit with a ton of subpoenas or other legal demands for information on people, which they’ll have collected, because someone had a post go viral.

Senator Padilla should be required to read Jeff Kosseff’s excellent book, “The United States of Anonymous,” as penance, and to publish a book report that details the many ways in which his bill is an unconstitutional attack on free speech and anonymity.

Yes, it’s reasonable to be concerned about manipulation and a flood of AI content. But, we don’t throw out basic constitutional principles based on such concerns. Tragically, Senator Padilla failed at this basic test of constitutional civics.

Filed Under: 1st amendment, ai, anonymity, california, elections, influencers, steve padilla

How To Bell The AI Cat?

from the are-you-a-bot? dept

The mice finally agreed how they wanted the cat to behave, and congratulated each other on the difficult consensus. They celebrated in lavish cheese island retreats and especially feted those brave heroes who promised to place the bells and controls. The heroes received generous funding, with which they first built a safe fortress in which to build and test the amazing bells they had promised. Experimenting in safety without actually touching any real cats, the heroes happily whiled away many years.

As wild cats ran rampant, the wealthy and wise hero mice looked out from their well-financed fortresses watching the vicious beasts pouncing and polishing off the last scurrying ordinaries. Congratulating each other on their wisdom of testing the controls only on tame simulated cats, they mused over the power of evolution to choose those worthy of survival…

Deciding how we want AIs to behave may be useful as an aspirational goal, but it tempts us to spend all our time on the easy part, and perhaps cede too much power up front to those who claim to have the answers.

To enforce rules, one must have the ability to deliver consequences – which presumes some long-lived entity that will receive them, and possibly change its behavior. The fight with organized human scammers and spammers is already a difficult battle, and even though many of them are engaged in behaviors that are actually illegal, the delivery of consequences is not easy. Most platforms settle for keeping out the bulk of the attackers, with the only consequence being a blocked transaction or a ban. This is done with predictive models (yes, AI, though not the generative kind) that make features out of "assets" such as identifiers, logins, and device IDs, which are at least somewhat long-lived. The longer such an "asset" behaves well, the more it is trusted. Sometimes attackers intentionally create "sleeper" logins that they later burn.
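To make that concrete, here is a minimal, hypothetical sketch of the kind of feature such a model might derive from a long-lived "asset." The field names, weights, and formula are all illustrative, not any real platform's system:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Asset:
    """A long-lived identifier the platform can observe over time."""
    identifier: str           # e.g. a login, device ID, or payment token
    first_seen: datetime      # when the asset first appeared
    good_actions: int         # actions that completed without complaint
    flagged_actions: int      # actions reported or blocked

def trust_score(asset: Asset, now: datetime) -> float:
    """Toy trust score: older assets with a clean history score higher.

    A 'sleeper' account (old but with little activity) gets less credit
    than its age alone would suggest, because age only counts insofar as
    there is observed good behavior backing it up.
    """
    age_days = max((now - asset.first_seen).days, 0)
    total = asset.good_actions + asset.flagged_actions
    clean_ratio = asset.good_actions / total if total else 0.0
    effective_age = min(age_days, asset.good_actions)
    return clean_ratio * (1 + effective_age / 365)

now = datetime.now(timezone.utc)
veteran = Asset("login:alice", datetime(2020, 1, 1, tzinfo=timezone.utc), 900, 3)
sleeper = Asset("login:mallory", datetime(2020, 1, 1, tzinfo=timezone.utc), 5, 0)
print(trust_score(veteran, now))   # high: years of clean, active history
print(trust_score(sleeper, now))   # low: old but with almost no track record
```

The particular formula doesn't matter; the point is that the asset has to persist long enough for these signals to accumulate at all, which is exactly the kind of history a patient attacker (or a patient AI) can manufacture and then burn.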

Add generative AI to the mix, and the playing field tilts more towards the bad actors. AI driven accounts might more credibly follow “normal” patterns, creating more trust over time before burning it. They may also be able to enter walled gardens that have barriers of social interaction over time, damaging trust in previously safe smaller spaces.

What generative AI does is lower the value of observing “normal” interactions, because malicious code can now act like a normal human much more effectively than before. Regardless of how we want AIs to behave, we have to assume that many of them will be put to bad uses, or even that they may be released like viruses before long. Even without any new rules, how can we detect and counteract the proliferation of AIs who are scamming, spamming, behaving inauthentically, and otherwise doing what malicious humans already do?

Anyone familiar with game theory (see Nicky Case’s classic Evolution of Trust for a very accessible intro) knows that behavior is “better” — more honest and cooperative — in a repeated game with long-lived entities. If AIs can somehow be held responsible for their behavior, if we can recognize “who” we are dealing with, perhaps that will enable all the rules we might later agree we want to enforce on them.

However, upfront we don't know when we are dealing with an AI as opposed to a human — which is kind of the point. Humans need to be pseudonymous, and sometimes anonymous, so we can't always demand that the humans do the work of demonstrating who they are. The best we can do in such scenarios is to have some long-lived identifier for each entity, without knowing its nature. That identifier is something it can take with it for establishing its credibility in a new location.

“Why, that’s a DID!” I can hear the decentralized tech folx exclaim — a decentralized identifier, with exactly this purpose, to create long-lived but possibly pseudonymous identifiers for entities that can then be talked about by other entities who might express more or less trust in them. The difference between a DID and a Twitter handle, say, is that a DID is portable — the controller has the key which allows them to prove they are the owner of the DID, by signing a statement cryptographically (the DID is essentially the public key half of the pair) — so that the owner can assert who they are on any platform or context.
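As a rough sketch of the mechanism being described, here is how proving control of such an identifier works, with an ordinary Ed25519 key pair standing in for a DID's verification key. The challenge text and the identifier are made up for illustration, and a real DID method wraps this in a resolvable DID document with key rotation on top:

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The "DID controller" holds the private key; the public key is what the
# DID document exposes to the world.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# A platform (or another user) issues a challenge...
challenge = b"prove you control did:example:123 at 2024-08-01T12:00:00Z"

# ...and the controller signs it with the private half of the pair.
signature = private_key.sign(challenge)

# Anyone holding the public half can verify the claim, on any platform,
# without the controller revealing anything else about themselves.
try:
    public_key.verify(signature, challenge)
    print("signature valid: this entity controls the identifier")
except InvalidSignature:
    print("signature invalid: claim rejected")
```

That portable prove-you-hold-the-key step is what makes the identifier usable across platforms, rather than being locked to one company's account database.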

Once we have a long-lived identity in place, the next question is how do you set up rules — and how would those rules apply to generative AI?

We could require that AIs always answer the question "Who are you?" by signing a message with their private key and proving their ownership of a DID, even when interacting from a platform that does not normally expose this. Perhaps anyone who cannot or does not wish to prove their humanity to a trusted zero-knowledge ("zk trust") provider must always be willing to answer this challenge, or be banned from many spaces.

What we are proposing is essentially a dog license, that each entity (whether human or AI) interacting must identify who it is in some long term way, so that both public attestations about it and private or semi-private ones can be made. Various accreditors can spring up, and each maintainer of a space can decide how high (or low) to put the bar. The key is we must make it easy for spaces to gauge the trust of new participants, independent of their words.

Without the expectation of a DID, essentially all we have to lean on is the domain name service of where the entity is representing itself, or the policy of the centralized provider which may be completely opaque. But this means that new creators of spaces have no way to screen participants — so we would ossify even further into the tech giants we have now. Having long-lived identifiers that cross platforms enables the development of trust services, including privacy-preserving zero-knowledge trust services, that any new platform creator could lean on to create useful, engaging spaces (relatively) safe from spammers, scammers, and manipulators.

Identifiers are not a guarantee of good behavior, of course — a human or AI can behave deceptively, run scams, spread disinformation and so on even if we know exactly who they are. They do, however, allow others to respond in kind. In game theory, a generous tit-for-tat strategy winds up generally being successful in out-competing bad actors, allowing cooperators who behave fairly with others to thrive. Without the ability to identify the other players, however, the cheaters will win every round.
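A tiny simulation makes the point. This is just a generic iterated prisoner's dilemma with textbook payoffs, not anything specific to platform moderation:

```python
import random

# Standard prisoner's dilemma payoffs: (my points, their points)
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I'm exploited
    ("D", "C"): (5, 0),  # I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

def always_defect(_my_history, _their_history):
    return "D"

def generous_tit_for_tat(_my_history, their_history):
    # Cooperate first; copy the opponent's last move, but forgive a
    # defection about a third of the time.
    if not their_history:
        return "C"
    if their_history[-1] == "D" and random.random() > 1 / 3:
        return "D"
    return "C"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

random.seed(0)
# Cooperators who can recognize each other across rounds do well...
print(play(generous_tit_for_tat, generous_tit_for_tat))
# ...while relentless defectors drag each other down to the worst outcome.
print(play(always_defect, always_defect))
# Strip away identity (make every round a one-shot game against a
# stranger with no history) and defection becomes the winning move again.
```

The reciprocity only works because each player can see the other's history, which is precisely what a long-lived identifier provides.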

With long term identifiers, the game is not over — but it does become much deeper and more complex, and opens an avenue for the “honest” cooperators to win, that is, for those who reliably communicate their intentions. Having identifiers enables a social graph, where one entity can “stake” their own credibility to vouch for another. It also enables false reporting and manipulation, even coercion! The game is anything but static. Smaller walled gardens of long-trusted actors may have more predictable behavior, while more open spaces provide opportunity for newcomers.

This brings us to the point where consensus expectations have value. Once we can track and evaluate the behavior, we can set standards for the spaces we occupy. Creating the expectation of an identifier is perhaps the first and most critical standard to set.

Generative AI can come play with us, but it should do so in an honest, above board way, and play by the same rules we expect from each other. We may have to adapt our tools for everyone in order to accomplish it — and must be careful we don’t lose our own freedoms in the process.

Filed Under: ai, anonymity, dids, identifiers, trust

Main Chinese Social Media Platforms Now Require Top Influencers To Display Their Real Names Online

from the we-know-who-you-are dept

Back in 2015, Techdirt wrote about one of China’s many attempts to control the online world, in this case by requiring everyone to use real names when they register for online services. As that post noted, the fact that the Chinese authorities had announced similar initiatives several times since 2003 suggests that implementing the policy was proving hard. Twenty years after those first attempts to root out anonymity online, China is still trying to tighten its grip. A post on the Rest of the World site reports:

On October 31, Weibo, as well as several other major Chinese social media platforms including WeChat, Douyin, Zhihu, Xiaohongshu, and Kuaishou, announced that they now required popular users’ legal names to be made visible to the public. Weibo stated in a public post that the new rule would first apply to all users with over 1 million followers, then to those with over 500,000.

As that indicates, there’s a new wrinkle in the fight against anonymity: real names are only required for top influencers on the main social media sites. That’s obviously much easier to police than trying to force hundreds of millions of users to comply. Here’s why the Chinese government is concentrating on the smaller group:

Min Jiang, a professor of communication studies at the University of North Carolina at Charlotte, told Rest of World the real-name rule would limit the influence of key opinion leaders, who still wield a lot of power on the Chinese internet. “Outspoken individuals have been conditioned to navigate the red line with ingenuity and creativity, steering public opinions even under heavy censorship,” she said.

The new targeted approach seems to be working. Several high-profile influencers who use pseudonyms online have announced that they will give up posting altogether. Others are actively “purging” their fans to get the total below the one million threshold for the new policy:

Tianjin Stock King, who posts finance content, removed over 6 million followers overnight, cutting his following from 7 million to just over 900,000. Ken, another Weibo “Big V,” told Rest of World he used the extension Cyber Zombie Cleaner to remove about 20,000 followers over the past month. The software, developed by software engineer Xiao Gu, enables users to remove inactive followers in large numbers, and has accumulated over 100,000 views on China’s code-sharing forum, CSDN.

Interestingly, the Rest of the World post says that it is not government repression that those with big followings fear under the new rules. Previous policies regulating anonymity already require Weibo users to register with their real name, and to show their IP location next to their user name. But mandating real names online means that influencers will be subject to the scrutiny of other users, who will be able to compare a person’s online activity with their offline identity. Conveniently for the Chinese authorities, that will make it more difficult to express controversial opinions. One group who are likely to be particularly affected by this requirement are influencers working at state-affiliated organizations, who may be accused of disloyalty or lack of patriotism once their identity is known to the wider public.

Follow me @glynmoody on Mastodon.

Filed Under: anonymity, china, influencers, purging, real names, social media
Companies: douyin, kuaishou, tiktok, weibo, xiaohongshu, zhihu

Jordan’s King Approves Bill That Criminalizes Online Anonymity, Publication Of Police Officers Names/Photos

from the King-Control-Freak dept

Jordan will never be mistaken for a human rights haven. The State Department’s assessment of the kingdom of Jordan’s human rights environment is, at best, extremely dismal.

Significant human rights issues included credible reports of: torture and other cruel, inhuman, and degrading treatment or punishment by government authorities; arbitrary arrest and detention; political prisoners or detainees; arbitrary or unlawful interference with privacy; serious restrictions on freedom of expression and media, including harassment and intimidation of journalists, unjustified arrests or prosecutions of journalists, censorship, and enforcement of and threat to enforce criminal libel laws; serious restrictions on internet freedom; substantial interference with the freedom of peaceful assembly and freedom of association, including overly restrictive laws on the organization, funding, or operation of nongovernmental organizations and civil society organizations; inability of citizens to elect their executive branch of government or upper house of parliament; lack of investigation of and accountability for gender-based violence, including but not limited to domestic or intimate partner violence, sexual violence, and other harmful practices; violence or threats of violence targeting lesbian, gay, bisexual, transgender, queer, or intersex persons; and significant restrictions on workers’ freedom of association, including threats against labor activists.

That definitely explains why the ruler of Jordan, King Abdullah II, would sign a bill making this hideous environment even worse. (It also possibly explains why NSO Group chose to sell its spyware to this country. The Israeli-based malware firm has definitely shown a predilection for hawking surveillance tech to human rights abusers.)

King Abdullah II is adding to residents' misery with the passage of a new law that's supposed to somehow unify the nation by giving the government even more ways to punish people for saying or doing things the government doesn't like.

The King of Jordan approved a bill Saturday to punish online speech deemed harmful to national unity, according to the Jordanian state news agency, legislation that has drawn accusations from human rights groups of a crackdown on free expression in a country where censorship is on the rise.

The measure makes certain online posts punishable with months of prison time and fines. These include comments “promoting, instigating, aiding, or inciting immorality,” demonstrating ”contempt for religion” or “undermining national unity.”

There’s nothing quite like tying a chosen religion to a non-representative form of government. When you do that, you can start writing laws that define “morality” or “unity” in self-serving ways without having to worry about getting your legislation rejected by people actually willing to serve their constituents or rejected by courts as blatantly illegal violations of guaranteed rights.

The country’s government apparently assumes the humans it presides over have no rights. So, they’ll be subject to arrest and possible imprisonment for saying things the government doesn’t like. On top of that, they can expect to be punished for attempting to protect themselves from this punishment, or for being so bold as to point out wrongdoing by law enforcement.

It also punishes those who publish names or pictures of police officers online and outlaws certain methods of maintaining online anonymity.

The king and his most immediate subservients want to be able to easily identify people in need of punishment for violating these new draconian measures. And they don’t want anyone pointing out who’s being tasked with handling arrests for this new list of speech crimes.

As so many censorial laws are these days, it's an amendment to an existing "cybercrime" bill — the sort of handy foundational material autocrats can use to justify increased domestic surveillance and widespread silencing/punishing of dissent.

Then there’s this, which makes you wonder why the State Department ever bothered taking a look at the human rights situation in Jordan in the first place.

The measure is the latest in a series of crackdowns on freedom of expression in Jordan, a key U.S. ally seen as an important source of stability in the volatile Middle East.

Come on, America. Make better friends. Buddying up with someone more closely aligned to the religion-based dictators surrounding him than the ideals that turned this country into the leader of the free world is never going to work out well.

Filed Under: anonymity, free speech, harmful to national unity, hate speech, jordan

Reddit Defeats Film Studios’ Attempt To Reveal Identities Of Anonymous Users Over RCN Trial

from the nice-try dept

Back in March, we discussed a fairly silly request, made by several film producers who are suing RCN for not being their copyright police, that the court subpoena Reddit to unmask 9 users of that site. There were several aspects of the request that made it all very dumb: half the Reddit users never mentioned RCN, most referenced Comcast being their ISP, most of the remaining users never mentioned anything about piracy, and the one user who did mention RCN and piracy in context together had done so nearly a decade prior to the lawsuit. Given the First Amendment implications and hurdles involved in a request like this, the desire for the subpoena seemed doomed to fail.

And fail it did. The court voided the subpoena entirely, stating that the request was immaterial to the trial brought against RCN.

Reddit doesn’t have to identify eight anonymous users who wrote comments in piracy-related threads, a judge in the US District Court for the Northern District of California ruled on Friday. US Magistrate Judge Laurel Beeler quashed a subpoena issued by film studios in an order that agrees with Reddit that the First Amendment protects the users’ right to speak anonymously online.

Reddit has no involvement in the underlying case, which is a copyright lawsuit in a different federal court against cable Internet service provider RCN. Bodyguard Productions, Millennium Media, and other film companies sued RCN in the US District Court in New Jersey over RCN customers’ alleged downloads of 34 movies such as Hellboy, Rambo: Last Blood, Tesla, and The Hitman’s Bodyguard.

It's the right decision, to be sure. While the studios' assertions were questionable generally, the standard the court applied here essentially weighed whether the anonymous comments, and by extension the commenters, served as the primary or only source of the information the studios sought for the RCN trial. The court then goes through, on a user-by-user basis, to analyze whether that was the case, finding in all instances that it was not. Below is one example.

The user “compypaq” said that RCN would sometimes remotely reset his modem. The plaintiffs contend that this comment helps show that RCN can monitor and control its customers’ conduct, because the ability to reset a modem implies the ability to turn off a modem. This argument only reinforces that the plaintiffs can obtain the information they seek from RCN. It isn’t necessary to subpoena the identities of RCN customers from a third party to determine whether RCN can disable its customers’ internet access.

In other words, the request only makes sense as a fishing expedition, in which the plaintiffs aren’t actually after the information they claim to be. And because of that, the court quashed the subpoena.

If those plaintiffs want the actual information they sought to enter into evidence from these Reddit users, they will have to get it through the normal discovery process at the RCN trial.

Filed Under: 1st amendment, anonymity, copyright, dmca subpoena, privacy, subpoena
Companies: rcn, reddit

Nintendo Wants Discord Subpoenaed To Reveal Leaker Of Unreleased ‘Zelda’ Artbook

from the gone-fishing dept

Readers of this site will know by now that Nintendo polices its intellectual property in an extremely draconian fashion. However, there are still differences in the instances in which the company does so. In many cases, Nintendo goes after people or groups in a fashion that stretches, if not breaks, any legitimate intellectual property concerns. Other times, Nintendo's actions are well within its rights, but those actions oftentimes appear to do far more harm to the company than whatever harm the IP concern is doing to it. This is probably one of those latter stories.

There’s a new Zelda game coming out in a few weeks on the Switch: The Legend of Zelda: Tears of the Kingdom. As with any rabid fanbase, fans of the series have been gobbling up literally any information they can find about the unreleased game. It was therefore unsurprising that there was a ton of interest in a leaked art book that would accompany its release. It is also not a shock that Nintendo DMCA’d the leaks and discussion of the leaks that occurred on Discord, even though that almost certainly brought even more attention to the leaks in a classic Streisand Effect.

The posts include images from the 204-page artbook that will come with the collector’s edition of the game. They quickly spread to other Discord servers, various subreddits, and beyond. While a ton of original art for the game was in the leak, it didn’t end up revealing much about the mysteries surrounding Tears of the Kingdom players have spent months speculating about. There was no real developer commentary in the leak, and barely any spoilers outside of some minor enemy reveals.

But now Nintendo is also seeking a subpoena to unmask the leaker, ostensibly to "protect its rights," which will almost certainly involve going after the leaker with every legal tactic the company can muster. This despite all of the context above about what was and was not included in the leak.

Now, I can certainly understand why Nintendo is upset about the leak. It has a book to sell and scans from that book showing up on the internet is irritating. I would argue that those scans in no way replace a 204 page physical artbook, and frankly might serve to actually generate more interest in the book and drive sales, but I can understand why the company might not see it that way.

In which case seeking to bury the links and content via the DMCA is the proper move, even if I think that only serves to generate more interest in the leaks themselves. The only real point of unmasking the leaker is to go after that individual. While Nintendo may still be within its rights to do so, that certainly feels like overkill to say the least.

Referencing the notices sent to Discord in respect of the “copyright-protected and unreleased special edition art book for The Legend of Zelda: Tears of the Kingdom” the company highlights a Discord channel and a specific user.

“[Nintendo of America] is requesting the attached proposed subpoena that would order Discord Inc. …to disclose the identity, including the name(s), address(es), telephone number(s), and e-mail addresses(es) of the user Julien#2743, who is responsible for posting infringing content that appeared at the following channel Discord channel Zelda: Tears of the Kingdom..[..].

As we’ve said in the past, unmasking anonymous speakers on the internet ought to come with a very high bar over which the requester should need to jump. Do some scans from an artbook temporarily appearing on the internet really warrant this unmasking? Is there real, demonstrable harm here? Especially when this appears to be something of a fishing expedition?

Information available on other platforms, Reddit in particular, suggests that the person Nintendo is hoping to identify is the operator of the Discord channel and, at least potentially, the person who leaked the original content.

A two-month-old comment on the origin of the leak suggests the source was “a long time friend.” A comment in response questioned why someone would get a friend “fired for internet brownie points?”

There are an awful lot of qualifiers in there. And if this is just Nintendo fishing for a leaker for which it has no other evidence, then the request for the subpoena should be declined by the court.

Filed Under: anonymity, artbook, copyright, dmca, leaks, subpoena, zelda
Companies: discord, nintendo

Reddit Pushes Back On Idiotic Unmasking Fishing Expedition By Movie Studios

from the hook-line-and-stinker dept

Bear with me here, because this is going to take some explaining as a matter of throat-clearing for this post, which is actually the entire problem.

Back in 2021, several film studios filed a lawsuit against ISP RCN, accusing it of ignoring piracy conducted by its customers. That suit mostly followed the same bullshit playbook used by studios in the past: copyright infringement was occurring by RCN customers, RCN didn't do enough to play copyright police, therefore give us a whole bunch of money. But where this gets weird is that the studios wanted to present evidence of RCN's blind eye towards piracy and, for some reason, decided that comments on Reddit forums going back over a decade were just the evidence they needed. As part of that, the studios demanded that Reddit unmask 9 users they claimed were involved in the piracy. Reddit complied as to only 1 individual and pushed back on the other 8.

As a result of that, those studios filed a motion in court to try and force Reddit to comply.

The film companies last week filed a motion to compel Reddit to respond to the subpoena in US District Court for the Northern District of California. The latest filing and the ongoing dispute over the subpoena were detailed in a TorrentFreak article published Saturday.

“The evidence Plaintiff requests from Reddit in the Rule 45 subpoena is clearly relevant and proportional to the needs of the case,” the film studios’ motion said. The Reddit users’ comments allegedly “establish that RCN has not reasonably implemented a policy for terminating repeat infringers,” that “RCN controls the conduct of its subscribers and monitors its subscribers’ access,” and “establish that the ability to freely pirate without consequence was a draw to becoming a subscriber of RCN.”

The reason for some of the specific language the studios used is that requests like this come with a very high bar they must clear. That is because there are rather severe First Amendment implications involved in unmasking an anonymous internet user as a result of their speech.

And based on the response filed by Reddit, the studios are completely full of shit, or were massively incompetent in their requests.

Reddit’s new motion said the film studios “cannot overcome the Reddit users’ First Amendment rights because the users’ posts Plaintiffs have identified as the basis for this subpoena are completely irrelevant to Plaintiffs’ lawsuit.” Reddit continued:

Four of the seven users at issue do not appear to have ever even mentioned RCN, based on the evidence offered by Plaintiffs. They merely refer to “my provider” or “our ISP.” And those references are all made in a discussion about Comcast, not RCN. Plaintiffs’ argument that the users are “very likely” referring to RCN should be rejected as speculative. Two of the three remaining users did mention RCN, but were discussing issues (such as their customer service experience) unrelated to copyright infringement or Plaintiffs’ allegations. And the final user vaguely mentioned RCN arguably in the context of copyright infringement once nine years ago, well beyond any arguably relevant timeframe for Plaintiffs’ allegations.

Which is precisely Reddit’s point, as it has repeatedly described this whole effort as a fishing expedition by the studios. They want to find evidence that somehow ties Reddit users to discussions about how they could get away with piracy using RCN as their ISP. But if this is the best they can do, then perhaps it would be better to simply drop the original lawsuit, because as far as good evidence goes, this ain’t it. In most cases, RCN isn’t even the ISP in question, and for several of the users the studios are seeking to unmask, it wasn’t even the subject of the Reddit thread.

The February 2022 thread was started by a user “explaining that they had received a copyright infringement email from Comcast and expressing that they were ‘kinda worried,'” Reddit wrote. “In the year since, there have been over 240 replies in that discussion. Among those hundreds of comments about Comcast’s copyright practices, one mentions RCN.”

Reddit said it provided identifying information for that one user to the plaintiffs. “But the remaining four Comcast Users are now being targeted merely because they happened to post in the Comcast Thread, despite the fact that none of the users were responding or referring to any discussion of RCN, and none mention RCN themselves,” Reddit wrote.

Reddit’s filing goes on with more details. For starters, several of the unmasking targets who actually were RCN customers… never discussed piracy. Like, at all. The studios also claimed that because one user talked about how RCN had reset their home router, this somehow means that RCN “monitors and controls” the actual behavior of the customer while on the internet. Which is pretty fucking stupid, because maintaining infrastructure and monitoring web activity are two completely separate things. And then there’s this…

The studios argued that the 2009 post “establishes that RCN has the technical ability [to monitor users]. If RCN had the ability 13 years ago, it certainly still has the ability now.” The post in question said RCN replaced an error page with branded search results. Reddit told the court that the post doesn’t prove what the film studios claim:

This practice is known as NXDOMAIN DNS hijacking, and many ISPs have engaged in it to display advertisements to their customers. It has absolutely nothing to do with copyright infringement or piracy… DNS hijacking does not demonstrate ever-present surveillance or control by an ISP over its users. It instead reflects an ISP’s global policy of routing certain DNS calls to an IP address of their choosing.
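For anyone curious whether their own ISP engages in the NXDOMAIN redirection Reddit describes, the check is simple: ask the resolver for a name that cannot possibly exist and see whether it comes back with an answer anyway. Below is a minimal sketch in Python using only the standard library; the random hostname it generates is purely illustrative and has nothing to do with this case. A well-behaved resolver should fail with NXDOMAIN, while a hijacking one will return an IP address pointing at the ISP’s search or ad page.

    # Minimal sketch: detect NXDOMAIN hijacking by resolving a hostname that
    # should not exist. A well-behaved resolver raises an error (NXDOMAIN);
    # a hijacking resolver returns an address for its own search/ad server.
    import socket
    import uuid

    # A random, effectively-guaranteed-nonexistent hostname (purely illustrative).
    bogus_host = f"{uuid.uuid4().hex}.nxdomain-test-{uuid.uuid4().hex[:8]}.com"

    try:
        answers = socket.getaddrinfo(bogus_host, 80)
        print("Resolver answered for a nonexistent name -- likely NXDOMAIN hijacking:")
        for *_, sockaddr in answers:
            print("  ", sockaddr[0])
    except socket.gaierror:
        print("Name did not resolve (NXDOMAIN passed through) -- no hijacking observed.")

Note that this says nothing about what websites a subscriber visits; it only reveals how the ISP’s resolver handles lookups for names that don’t exist, which is exactly Reddit’s point.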

You know, it sure would be nice if these film studios, prior to pumping out this motion before the court, could be bothered to actually understand what the hell they were talking about. Almost none of this makes any sense as part of the studios’ lawsuit against RCN, unmasking these users would absolutely violate core First Amendment principles, and the whole thing is obviously the fishing expedition that Reddit has been saying it is.

You can read the entire filing embedded below, but hopefully these studios get laughed out of this particular court with speed.

Filed Under: 1st amendment, anonymity, anonymous comments, commenters, copyright, dmca, fishing expedition, free speech, subpoena
Companies: rcn, reddit

Actual Free Speech Matters, Elon Musk’s Understanding Of It Puts Free Speech At Risk

from the not-the-free-speech-you're-looking-for dept

So, look, it’s been quite clear for a while now that Elon Musk has no clue what “free speech” actually means. We’ve covered this from so many different angles that, at this point, anyone claiming that Elon Musk “supports free speech” is either ignorant or stupid.

Constitutional scholar Steve Vladeck has a short but useful article over at MSNBC highlighting that Musk’s “misunderstanding of free speech is a problem for us all.” And it raises an issue that deserves some discussion. Because, before this, I’d basically thought his misunderstanding of free speech just meant he was making a mess of things for himself. But I’m now realizing it’s a much bigger problem for everyone else.

Much of the article just covers ground that we’ve covered before, on how little Musk actually understands about free speech. And only at the end does it get into why this is a larger problem:

It’s not exactly news that an eccentric billionaire with no formal legal training has no idea how the First Amendment works. But there are two problems in this case that make it newsworthy. First, Musk himself has claimed that one of his goals for Twitter is to increase “free speech” on the platform. If his definition of “free speech” is radically different from the current state of general public (and constitutional) discourse, it sure would behoove him to explain how — and why. And second, because of his impact and influence, Musk’s patent misunderstandings of what the First Amendment does and doesn’t protect (and to whom it does and doesn’t apply), and the actions he wrongly takes (or doesn’t take) in response, perpetuate misunderstandings among those who believe he has some special claim to expertise on the subject. As the Twitter Files document dumps continue, it’s clear that Musk not only lacks such expertise, he also seems wholly uninterested in developing one.

I think this is mostly correct. The fact that Musk has such a completely fucked up understanding of free speech means that no one can really say what he’s planning to do with the platform he now owns. To date, he’s shown no real inclination to define free speech.

But I’d argue the problems go beyond what Vladeck has laid out here. As I’ve detailed in the past, Twitter was one of the strongest defenders of internet free speech in courts and in discussions with regulators and policymakers. The company was more willing to stand on principle than almost any other large internet company I can think of, and it would keep fighting for free speech where others caved.

And, the company famously would go above and beyond in going to court to fight for the free speech rights of its users, where other companies would often cave in and hand over information.

It is not at all clear if Musk’s very confused definition of free speech still includes any of that.

So it’s not just that Musk is confused and contributing to the miseducation of the public, but that those of us who have spent decades fighting for actual free speech online have lost a very important ally in that fight.

Filed Under: anonymity, disinformation, elon musk, free speech, steve vladeck
Companies: twitter