discord – Techdirt

Ctrl-Alt-Speech: The Bell Tolls For TikTok

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: ai, artificial intelligence, augmented reality, content moderation, csam, ncmec, virtual reality
Companies: discord, meta, tiktok, wattpad

California Court: Passwords Are Communications, Protected By The Stored Communications Act

from the only-so-far-you-can-take-a-subpoena dept

The Stored Communications Act — enacted in 1986 — is not only outdated, it’s also pretty weird. An amendment to the ECPA (Electronic Communications Privacy Act), the SCA added privacy protections to some communications and subtracted them from others.

It’s the subtractions that are bothersome. Law enforcement wasn’t too happy that a lot of electronic communications were now subject to warrant requirements. They much preferred the abundant use/misuse of subpoenas to force third parties into handing over stuff they didn’t have the probable cause to demand directly from criminal suspects.

Private parties — especially those engaged in civil litigation — also preferred to see fewer communications protected by the ECPA. So, this law — which declared every unopened email more than 180 days old fair game — was welcomed by plenty of people who didn’t have the general public’s best interests in mind.

The government tends to make the most use of the ECPA and SCA’s privacy protection limitations, using the law and legal interpretations to access communications most people logically assumed the government would need warrants to obtain.

But the SCA also factors into civil litigation. In some cases, the arguments revolve around who exactly is protected by the law when it comes to unexpected intrusion by private parties. This case — one highlighted by FourthAmendment.com (even as the site owner notes it’s not really a Fourth Amendment case) — involves international litigation touching US service providers. It deals directly with the Stored Communications Act and what it does or does not protect.

This lawsuit was brought by Path, an Arizona corporation, and its subsidiary, Tempest. Central to the litigation is Canadian citizen Curtis Gervais, who apparently was hired as an independent contractor by Tempest, which promoted him to the position of CEO in February 2022. A few months later, Gervais allegedly hacked into a competitor’s (Game Server Kings [“GSK”]) computers, leading to Tempest demoting (lol) Gervais to COO (Chief Operating Officer).

This demotion apparently didn’t sit well with Gervais, who allegedly began sharing confidential Tempest information with GSK, utilizing communications platform Discord to hand over this information to GSK employees.

So, it’s three American companies and one Canadian individual wrapped up in a dispute over ex parte demands to disclose information to the plaintiffs (Path/Tempest). Discord challenged the subpoenas, which asked for — among other things — any passwords used by Gervais to log into its services.

That’s where it gets interesting. Very few courts have considered what’s explicitly covered by the SCA and/or what can be obtained with subpoenas issued under this authority.

As is implied by the names of both laws in play here (Electronic Communications Privacy Act, Stored Communications Act), the protections (or lack thereof) apply to communications. Path argued that its subpoenas did not exceed the grasp of these laws, despite demanding Discord hand over Gervais’ passwords. According to the plaintiffs, passwords aren’t communications.

But that’s a very reductive view of passwords, something Discord pointed out in its challenge of the subpoenas:

Applicants argue passwords are not afforded protection under the SCA because passwords should not be considered “content.” Discord argues passwords are implicitly included within the SCA’s prohibitions because passwords implicate communications. In other words, Discord argues that passwords are “content” under the SCA because they are “information concerning the substance, purport, or meaning” of a communication.

The court [PDF] says Discord is correct. But only after a lot of discussion because, as the court notes, this is an issue of “first impression.” It has never been asked to make this determination prior to this unique set of circumstances. But, despite the lack of precedent, the court still delivers a ruling that sets a baseline for future cases involving SCA subpoenas.

It begins by saying that even if the language of the SCA doesn’t specifically include passwords in its definition of “content,” it’s clear Congress meant to add protections to electronic communications with this amendment, rather than lower barriers for access.

The legislative history agrees with a broad interpretation of “content.” Congress explained that the purpose of enacting the SCA was to protect individuals on the shortcomings of the Fourth Amendment. Specifically, Congress enacted the SCA due to the “tremendous advances in telecommunications and computer technologies” with the “comparable technological advances in surveillance devices and techniques.” The SCA was further meant to help “Americans [who] have lost the ability to lock away a great deal of personal and business information.”

With this analysis of the scope of the term “content” under the SCA in mind, the Court now turns to determine if passwords are afforded protection under the SCA under that understanding of the definition of the term “content.” Passwords are undoubtedly a form of “information.” And passwords broadly “relate to” (or are “concerning”) the “substance, purport, or meaning of [a] communication” even if passwords are not themselves the content of a communication. Passwords further relate to a person’s intended message to another; while a password is not the content of the intended message, a password controls a user’s access to the content or services that require the user to prove their identity. As a matter of technological access to an electronic message, a password thus “relates to” the intended message because without a password, the author cannot access their account to draft and send the message (and the user cannot access their account to receive and read the message).

When a person uses a password to access their account to draft and send a message, that author inherently communicates to the recipient at least one piece of information that is essential to complete the communication process: namely, that the author has completed the process of authentication. The password is information or knowledge which is intended to convey a person’s claim of identity not just to the messaging system but also implicitly to the recipient. As such, within the context of electronic communication systems, passwords are a critical element because they convey an “essential part” of the communication with respect to access and security protocols.

The dispute at issue here demonstrates the inherency of communicating about passwords when using a messaging platform such as Discord: when the user of the “Archetype” sent messages demanding ransom for the stolen source code, those messages conveyed to the recipients that the author is or was an authentic or authorized user of the “Archetype” account who used and had access to the password for that account. That password for that account thus is information concerning that communication, even if the password is not itself written out in the content directly.

In addition to all of that, there’s the undeniable fact that if you’re able to obtain login info (including passwords) with a subpoena, it doesn’t matter if courts limit the reach of demands for communications. If you have the keys to the accounts, you have full access to any stored communications, whether or not this access has been explicitly approved by a court.

With this password in hand, a litigant (or their ediscovery consultants) would have unfettered access to all communications within the account holder’s electronic storage, without regard to relevance, privilege, or other appropriate bounds of permissible discovery. In other words, litigants could circumvent the very purpose of the SCA by simply requesting that a service provider disclose the password for a user account, ultimately vitiating the protections of the SCA.

No court would allow the government to claim this is acceptable under the SCA and/or the Constitution. And no court should allow it just because it’s litigation involving only private parties. This particular demand cannot be honored without violating the law. And the companies behind the subpoenas know this, because they obviously have no interest in Gervais’ login info for its own sake.

The only conceivable use for the passwords here is for Applicants to access the requested accounts (such as “Archetype”) and view the contents of all electronically stored communications in those requested accounts.

That’s clearly the litigants’ intent. And it doesn’t mesh with the legislative intent, which was to create a few new protections for then-newfangled electronic communications. This particular demand is rejected. The subpoenas are still alive, but they’re no longer intact. If the suing entities want access to the defendant’s communications, they’ll have to do it the old-fashioned way: by making discovery requests that remain on the right side of the law.

Filed Under: california, communications, curtis gervais, ecpa, passwords, sca, stored communications act
Companies: discord, path, tempest

Nintendo Wants Discord Subpoenaed To Reveal Leaker Of Unreleased ‘Zelda’ Artbook

from the gone-fishing dept

Readers of this site will know by now that Nintendo polices its intellectual property in an extremely draconian fashion. However, there are still differences in the instances in which the company does so. In many cases, Nintendo goes after people or groups in a fashion that stretches, if not breaks, any legitimate intellectual property concerns. Other times, Nintendo’s actions are well within its rights, but those actions oftentimes appear to do far more harm to the company than whatever harm the IP concern is doing to it. This is probably one of those latter stories.

There’s a new Zelda game coming out in a few weeks on the Switch: The Legend of Zelda: Tears of the Kingdom. As with any rabid fanbase, fans of the series have been gobbling up literally any information they can find about the unreleased game. It was therefore unsurprising that there was a ton of interest in a leaked art book that would accompany its release. It is also not a shock that Nintendo DMCA’d the leaks and discussion of the leaks that occurred on Discord, even though that almost certainly brought even more attention to the leaks in a classic Streisand Effect.

The posts include images from the 204-page artbook that will come with the collector’s edition of the game. They quickly spread to other Discord servers, various subreddits, and beyond. While a ton of original art for the game was in the leak, it didn’t end up revealing much about the mysteries surrounding Tears of the Kingdom that players have spent months speculating about. There was no real developer commentary in the leak, and barely any spoilers outside of some minor enemy reveals.

But now Nintendo is also seeking to get a subpoena to unmask the leaker, ostensibly to “protect its rights”, which will almost certainly involve going after the leaker with every legal tactic the company can muster. This despite all of the context above about what was and was not included in the leak.

Now, I can certainly understand why Nintendo is upset about the leak. It has a book to sell, and having scans from that book show up on the internet is irritating. I would argue that those scans in no way replace a 204-page physical artbook, and frankly might serve to actually generate more interest in the book and drive sales, but I can understand why the company might not see it that way.

In that case, seeking to bury the links and content via the DMCA is the proper move, even if I think that only serves to generate more interest in the leaks themselves. The only real point of unmasking the leaker is to go after that individual. While Nintendo may still be within its rights to do so, that certainly feels like overkill to say the least.

Referencing the notices sent to Discord in respect of the “copyright-protected and unreleased special edition art book for The Legend of Zelda: Tears of the Kingdom” the company highlights a Discord channel and a specific user.

“[Nintendo of America] is requesting the attached proposed subpoena that would order Discord Inc. …to disclose the identity, including the name(s), address(es), telephone number(s), and e-mail address(es) of the user Julien#2743, who is responsible for posting infringing content that appeared at the following Discord channel: Zelda: Tears of the Kingdom […]”

As we’ve said in the past, unmasking anonymous speakers on the internet ought to come with a very high bar over which the requester should need to jump. Do some scans from an artbook temporarily appearing on the internet really warrant this unmasking? Is there real, demonstrable harm here? Especially when this appears to be something of a fishing expedition?

Information available on other platforms, Reddit in particular, suggests that the person Nintendo is hoping to identify is the operator of the Discord channel and, at least potentially, the person who leaked the original content.

A two-month-old comment on the origin of the leak suggests the source was “a long time friend.” A comment in response questioned why someone would get a friend “fired for internet brownie points?”

There are an awful lot of qualifiers in there. And if this is just Nintendo fishing for a leaker for which it has no other evidence, then the request for the subpoena should be declined by the court.

Filed Under: anonymity, artbook, copyright, dmca, leaks, subpoena, zelda
Companies: discord, nintendo

National Guardsman Arrested For Leaking Top Secret Ukraine War Documents On Discord

from the who-could-have-possibly-seen-this-coming dept

So, we’re just handing out top secret security clearance to everyone, I guess. It was clear from the documents posted to Discord (before they spread everywhere) that the person behind them would soon be located.

The folded security briefings were obviously smuggled out of secure rooms in someone’s pocket and then photographed carelessly, in one case on top of a hunting magazine. I mean, that narrows it down to people who still buy stuff printed on physical media, a number that shrinks exponentially by the day.

On top of that, the entry point for the leaked info — much of it related to the current invasion of Ukraine by Russia — was Discord, which no one has ever considered to be the equivalent of Signal or any other secure site for the dissemination of sensitive material.

The DOJ and Pentagon obliquely admitted that, despite some obvious clues, this hunt for the leak source might take some time. By its own estimate, the Defense Department said “thousands” of government employees might have access to these briefings and other national security documents. But for it to end up here (if, in fact, the government has actually gotten its man) is both surprising and a bit depressing.

Jack Teixeira, a 21-year-old member of the Massachusetts Air National Guard, was arrested by federal authorities Thursday in connection to the investigation of classified documents that were leaked on the internet.

FBI agents took Teixeira into custody earlier Thursday afternoon “without incident,” Attorney General Merrick Garland announced in brief remarks at the Department of Justice, which has been conducting a criminal investigation into the matter.

We’re apparently letting an army of weekend contributors — a division of the military best known for sandbag deployment and shooting college students — access sensitive information pertaining to a war taking place halfway around the world that they’re in no danger of being deployed to.

Perhaps this is the unintended consequence of de-siloing intel after investigations showed the government’s ability to keep secrets from itself contributed to its inability to prevent the 9/11 attacks. Or perhaps this is the government taking a lackadaisical approach to operational security, assuming it can absorb any exposure and/or adequately punish anyone taking advantage of the government’s willingness to grant security clearance to nearly anyone remotely involved in national security.

These are still criminal allegations. But whoever was behind the leaks wasn’t doing this to serve the public good, at least not if other members of the Discord server these documents first appeared in are to be believed. Teixeira apparently dumped classified docs there because it was easy to do and he hoped these multiple federal law violations would secure him the friendship of other server members.

The Washington Post’s long report on the origin of these leaks paints a pretty disturbing picture of the person behind them.

The young member was impressed by OG’s seemingly prophetic ability to forecast major events before they became headline news, things “only someone with this kind of high clearance” would know. He was by his own account enthralled with OG, who he said was in his early to mid-20s.

“He’s fit. He’s strong. He’s armed. He’s trained. Just about everything you can expect out of some sort of crazy movie,” the member said.

In a video seen by The Post, the man who the member said is OG stands at a shooting range, wearing safety glasses and ear coverings and holding a large rifle. He yells a series of racial and antisemitic slurs into the camera, then fires several rounds at a target.

While “OG” periodically made claims he wanted other server members to “see” how the US government “really works,” he also espoused conspiracy theories and often expressed his anger that members weren’t showing enough interest in his posts. One member of this server (Thug Shaker Central, itself a bit of a racial slur) decided to post these to another Discord server. It spread from there, finally surfacing on social media sites where anyone could view them, rather than just server members.

That an air guardsman would have this access is a bit of a shock, as is the lack of internal controls at whatever base employed him. More shocking is the fact that the government didn’t discover this leak until after thousands of people had seen the documents, after they spread from Discord to Telegram to Twitter. The DOJ will definitely try to make Teixeira’s head roll, but the Pentagon has to be doing some headhunting of its own.

Whatever happens, this isn’t someone leaking documents as a service to the public. From all appearances, these leaks were motivated by a desire to win respect from online peers in a closed group. Not that it matters. An espionage prosecution doesn’t allow defendants to present public service arguments in their defense. And this case, unlike most we have covered here, doesn’t seem to have that crucial element that might justify the exposure of extremely sensitive information — especially information related to an invasion that has the possibility to result in nuclear weapon deployment and/or a Third World War. This wasn’t a selfless act. This was self-promotion.

Filed Under: clout, jack teixeira, leaks
Companies: discord

New York Wants To Destroy Free Speech Online To Cover Up For Their Own Failings Regarding The Buffalo Shooting

from the elect-better-people dept

Back in May, it seemed fairly obvious how all of this was going to go down. Following on the horrific mass murder carried out at a supermarket in Buffalo, we saw NY’s top politicians all agree that the real blame… should fall on the internet and Section 230. It had quickly become clear that NY’s own government officials had screwed up royally multiple times in the leadup to the massacre. The suspect had made previous threats, which law enforcement mostly brushed off. And then, most egregiously, the 911 dispatcher who answered the call about the shooting hung up on the caller. And we won’t even get into a variety of other societal failings that resulted in all of this. No, the powers that be have decided to pin all the blame on the internet and Section 230.

To push this narrative, and to avoid taking any responsibility themselves, NY’s governor Kathy Hochul had NY Attorney General Letitia James kick off a highly questionable “investigation” into how much blame they could pin on social media. The results of that “investigation” are now in, and would you believe it? AG James is pretty sure that social media and Section 230 are to blame for the shooting! Considering the entire point of this silly exercise was to deflect blame and put it towards everyone’s favorite target, it’s little surprise that this is what the investigation concluded.

Hochul and James are taking victory laps over this. Here’s Hochul:

“For too long, hate and division have been spreading rampant on online platforms — and as we saw in my hometown of Buffalo, the consequences are devastating,” Governor Hochul said. “In the wake of the horrific white supremacist shooting this year, I issued a referral asking the Office of the Attorney General to study the role online platforms played in this massacre. This report offers a chilling account of factors that contributed to this incident and, importantly, a road map toward greater accountability.”

Hochul is not concerned about the failings of law enforcement officials, nor the failings of mental health efforts. Nor the failings of efforts to keep unwell people from accessing weapons for mass murder. Nope. It’s the internet that’s to blame.

James goes even further in her statement, flat out blaming freedom of speech for mass murder.

“The tragic shooting in Buffalo exposed the real dangers of unmoderated online platforms that have become breeding grounds for white supremacy,” said Attorney General James.

The full 49-page report is full of hyperbole, insisting that the use of forums by people doing bad things is somehow proof that the forums themselves caused the people to be bad. The report puts tremendous weight on the claims of the shooter himself, an obviously troubled individual, who insists that he was “radicalized” online. The report’s authors simply assume that this is accurate, and that it wasn’t just the shooter trying to push off responsibility for his own actions.

Incredibly, the report has an entire section that highlights how residents of Buffalo feel that social media should be held responsible. But, that belief that social media is to blame is… mostly driven by misleading information provided by the very same people creating this report in order to offload their own blame. Like, sure, if you keep telling people that social media is to blame, don’t be surprised when they parrot back your talking points. But that doesn’t mean those are meaningful or accurate.

There are many other oddities in the report. The shooter apparently set up a Discord server, with himself as the only member, where he wrote out a sort of “diary” of his plans and thinking. The report seems to blame Discord for this, even though this is no different than opening a local notepad and keeping notes there, or writing them down by hand on a literal notepad. I mean, what is this nonsense:

By restricting access to the Discord server only to himself until shortly before the attack, he ensured to near certainty that his ability to write would not be impeded by Discord’s content moderation.

Discord’s content moderation operates dually at the individual user and server level, and generally across the platform. The Buffalo shooter had no incentive to operate any server-level moderation tools to moderate his own writing. But the platform’s scalable moderation tools also did not stop him from continuing to plan his mass violence down to every last detail.

[….]

But without users or moderators apart from the shooter himself to view his writings, there could be no reports to the platform’s Trust and Safety Team. In practice, he mocked the Community Guidelines, writing in January 2022, “Looks like this server may be in violation of some Discord guidelines,” quoting the policy prohibiting the use of the platform for the organization, promotion, or support of violent extremism, and commenting with evident sarcasm, “uh oh.” He continued to write for more than three and a half more months in the Discord server, filling its virtual pages with specific strategies for carrying out his murderous actions.

He used it as a scratchpad. How do you blame Discord for that?!? If he’d done the same thing in a physical notebook, would AG James be blaming Moleskine for selling him a notebook? This just all seems wholly disconnected from reality.

The report also blames YouTube, because the shooter watched a video on how to comply with NY gun laws. As if that can lead to blame?

One of the videos actually demonstrates the use of an attachment to convert a rifle to use only a fixed magazine in order to comply with New York and other states’ assault weapons bans. The presenter just happens to mention that the product box itself notes that the device can be removed with a drill.

The more you read in the report, the more it becomes obvious just how flimsy James’/Hochul’s argument is that social media is to blame. Here’s the report admitting that he didn’t do anything obviously bad on Reddit:

Like the available Discord comments, the content of most of these Reddit posts is largely exchanging information about the pros and cons of certain brands and types of body armor and ammunition. They generally lack context from which it could have been apparent to a reader that the writer was planning a murderous rampage. One comment, posted about a year ago, is chilling in retrospect; he asks with respect to dark-colored tactical gear, “in low light situations such as before dusk after dawn and at nighttime it would provide good camouflage, also maybe it would be also good for blending in in a city?” It is difficult to say, however, that this comment should have been flagged at the time it was made

The report also notes how all these social media sites sprang into action after the shooting — something helped along because of Section 230 — and acts as if this is a reason to reform 230. Indeed, while the report complains that they were still able to find a few images and video clips from the attack, the numbers were tiny and clearly suggest that barely any slipped through. But, this report — again prepared by a NY state gov’t which had law enforcement check on the shooter and do nothing about it — suggests that not being perfect in their moderation is a cause for alarm:

For the period May 20, 2022 to June 20, 2022, OAG investigators searched a number of mainstream social networks and related sites for the manifesto and video of the shooting. Despite the efforts these platforms made at moderating this content, we repeatedly found copies of the video and manifesto, and links to both, on some of the platforms even weeks after the shooting. The OAG’s findings most likely represent a mere fraction of the graphic content actually posted, or attempted to be posted, to these platforms. For example, during the course of nine weeks immediately following the attacks, Meta automatically detected and removed approximately 1 million pieces of content related to the Buffalo shooting across its Facebook and Instagram platforms. Similarly, Twitter took action on approximately 5,500 Tweets in the two weeks following the attacks that included still images or videos of the Buffalo shooting, links to still images and videos, or the shooter’s manifesto. Of those, Twitter took action on more than 4,600 Tweets within the first 48 hours of the attack

When we found graphic content as part of these efforts, we reported it through user reporting tools as a violation of the platform’s policy. Among large, mainstream platforms, we found the most content containing video of the shooting, or links to video of the shooting, on Reddit (17 instances), followed by Instagram (7 instances) and Twitter (2 instances) during our review period. We also found links to the manifesto on Reddit (19 instances), the video sharing site Rumble (14 instances), Facebook (5 instances), YouTube (3 instances), TikTok (1 instance), and Twitter (1 instance). Response time varied from a maximum of eight days for Reddit to take down violative content to a minimum of one day for Facebook and YouTube to do so.

We did not find any of this content on the other popular online platforms we examined for such content, which included Pinterest, Quora, Twitch, Discord, Snapchat, and Telegram, during our review period. That is not to say, however, that it does not exist on those platforms.

In other words, sites like Twitter and Facebook took down thousands to millions of reposts of this content, while only a single-digit number of reposts may have slipped through the content moderation systems… and NY’s top politicians think this is a cause for concern?

I mean, honestly, it is difficult to read this report and think that social media is a problem. What the report actually shows is that social media was, at best, tangential to all of this, and when the shooter and his supporters tried to share and repost content associated with the attack, the sites were pretty good (if not absolutely perfect) about getting most of it off the platform. So it’s absolutely bizarre to read all of that and then jump to the “recommendations” section, where they act as if the report showed that social media is the main cause of the shooting, and just isn’t taking responsibility.

It’s almost as if the “recommendations” section was written prior to the actual investigation.

The report summary from Hochul leaves out how flimsy the actual report is, and insists it proves four things the report absolutely does not prove:

  1. Fringe platforms fuel radicalization: this is entirely based on the claims of the shooter himself, who has every reason to blame others for his action. The report provides no other support for this.
  2. Livestreaming has become a tool for mass shooters: again, the “evidence” here is that this guy did it… and so did the Christchurch shooter in 2019. Of course (tragically, and unfortunately) there have been a bunch of mass shootings since then, and the vast, vast majority of them do not involve livestreaming. To argue that there’s any evidence that livestreaming is somehow connected to mass shootings is beyond flimsy.
  3. Mainstream platforms’ moderation policies are inconsistent and opaque. Again, the actual report suggests otherwise. It shows (as we highlighted above) that the mainstream platforms are pretty aggressive in taking down content associated with a mass shooting, and relatively quick at doing so.
  4. Online platforms lack accountability. What does accountability even mean here? This prong is used to attack Section 230, ignoring that it’s Section 230 that enabled these companies to build up tools and processes in their trust & safety departments to react to tragedies like this one.

The actual recommendations bounce back and forth between “obviously unconstitutional restrictions on speech” and “confused and nonsensical” (some are both). Let’s go through each of them:

  1. Create Liability for the Creation and Distribution of Videos of Homicides: This is almost certainly problematic under the 1st Amendment. You may recall that law enforcement types have been calling for this sort of thing for ages, going back over a decade. Hell, we have a story from 2008 with NY officials calling for this very same thing. It’s all nonsense. Videos of homicides are… actual evidence. Criminalizing the creation and distribution of evidence of a crime seems like a weird thing for law enforcement to be advocating for. It’s almost as if they don’t want to take responsibility. Relatedly, this would also criminalize taking videos of police shooting people. Which, you know, probably is not such a good idea.
  2. Add Restrictions to Livestreaming: I remind you that the report mentions exactly two cases of livestreamed mass murders: this one in Buffalo and the one in 2019 in Christchurch, New Zealand. That is not exactly proof that livestreaming is deeply connected with mass murder. The suggestion is completely infeasible, demanding “tape delays” on livestreaming, so that… it is no longer livestreaming. They also demand ways to “identify first-person violence before it can be widely disseminated.” And I’d like a pony too.
  3. Reform Section 230: Again, the actual report shows how the various platforms did a ton to get rid of content glorifying the shooter. Yes, a few tiny things slipped through… just as the shooter slipped through New York police review when he was previously reported for threatening violence. But, Hochul and James are sure that 230 is a problem. They demand that “an online platform has the initial burden of establishing that its policies and practices were reasonably designed.” This is effectively a repeal of 230 (as I’ll explain below).
  4. Increase Transparency and Strengthen Moderation: As we’ve discussed at length, many of these transparency mandates are actually censorship demands in disguise. Also, reforming Section 230 as they want would not strengthen moderation, it would weaken it by making it that much more difficult to actually adapt to bad actors on the site. The same is likely true of most transparency mandates, which make it more difficult to adapt to changing threats, because the transparency requirements slow everyone down.

I want to call out, again, why the “reasonably designed” bit of the “reform 230” issue is so problematic. Again, this requires people to actually understand how Section 230 works. Section 230’s main benefit is the procedural benefit of getting frivolous, vexatious cases tossed out early. If you condition 230 protections on proving “reasonableness,” you literally take away the entire benefit of 230. Because, now, every time there’s a lawsuit, you first have to go through the expensive, and time consuming process of proving your policies are reasonable. And, thus, you lose all of the procedural benefits of 230 and are left fighting nuisance lawsuits constantly. The idea makes no sense at all.

Worse, it again greatly limits the ability of sites to adapt and improve their moderation efforts, because now every single change that they make needs to go through a careful legal review before it will get approved, and then every single change will open them up to a new legal challenge that these new policies are somehow “unreasonable.” The entire “reasonableness” scheme incentivizes companies to not fix moderation and to not adapt and strengthen moderation, because any change to your policies creates the risk of liability, and the need to fight long and expensive lawsuits.

So, to sum all this up: we have real evidence that NY state failed in major ways with regards to the Buffalo shooter. Instead of owning that, NY leadership decided to blame social media, initiating this “investigation.” The actual details of the investigation show that social media had very, very little to do with this shooting at all, and where it was used, it was used in very limited ways. It also shows that social media sites were actually extremely fast and on the ball in removing content regarding the shooting; while a very, very, very tiny bit of content may have slipped through, the filtering process was hugely successful.

And yet… the report still blames social media, insists a bunch of false things are true, and then makes a bunch of questionable (unconstitutional) recommendations, along with recommendations to effectively take away all of Section 230’s benefits… which would actually make it that much more difficult for websites to respond to future events and future malicious actors.

It’s all garbage. But, of course, it’s just politicians grandstanding and deflecting from their own failings. Social media and Section 230 are a convenient scapegoat, so that’s what we get.

Filed Under: 1st amendment, blame, buffalo, content moderation, kathy hochul, letitia james, livestreaming, mass murder, new york, section 230
Companies: discord, reddit, twitch, youtube

Did The 5th Circuit Just Make It So That Wikipedia Can No Longer Be Edited In Texas?

from the bang-up-job,-andy dept

I wrote up an initial analysis of the 5th Circuit’s batshit crazy ruling reinstating Texas’s social media content moderation law last week. I have another analysis of it coming out shortly in another publication (I’ll then write about it here). A few days ago, Prof. Eric Goldman did his own analysis as well, which is well worth reading. It breaks out a long list of just flat-out errors made by Judge Andy Oldham. It’s kind of embarrassing.

But there is one point in the piece that seemed worth calling out and highlighting. There is something of an open question as to what platforms technically fall under Texas’ law. The law defines “social media platform” as follows:

“Social media platform” means an Internet website or application that is open to the public, allows a user to create an account, and enables users to communicate with other users for the primary purpose of posting information, comments, messages, or images. The term does not include:

(A) an Internet service provider as defined by Section 324.055;
(B) electronic mail; or
(C) an online service, application, or website:
    (i) that consists primarily of news, sports, entertainment, or other information or content that is not user generated but is preselected by the provider; and
    (ii) for which any chat, comments, or interactive functionality is incidental to, directly related to, or dependent on the provision of the content described by Subparagraph (i).

The operative “anti-censorship” provision only applies to such social media platforms that have “more than 50 million active users in the United States in a calendar month.” Leaving aside that no one really knows how many active users they truly have, the definition above sweeps in a lot more companies than people realize.

In its filings in the case, Texas had claimed that the only companies covered by the law were Facebook, Twitter, and YouTube. Judge Andy Oldham, in his ridiculous ruling, stated that “the plaintiff trade associations represent all the Platforms covered by HB 20.”

But, from the definition above, that’s clearly false. First off, it’s not even clear if Twitter actually qualifies. As we’ve learned (oh so painfully), Twitter no longer even reports its “monthly active users,” but instead chooses to release its “monetizable daily active users” which is not even close to the same thing. When it last did post info on its monthly active users, apparently it was only 38 million — meaning it might not even be subject to the anti-censorship provisions of the law!

But also, there are other platforms which are not members of either trade association, and yet still qualify under the definition above. Law professor Daphne Keller put together a list of public information on internet company sizes for Senate testimony earlier this year, and it’s a useful guide.

One name that stands out: Wikipedia. According to Keller’s estimate, it has more than 97 million monthly active users on the site. It meets the definition under the law. It’s a website that is open to the public, allows a user to create an account and enables users to communicate with other users for the primary purpose of posting information, comments, messages, or images.

It doesn’t meet any of the exceptions. It’s not an ISP. It does not provide email. It does not consist “primarily” of news, sports, entertainment or “other information or content that is not user generated.” Wikipedia is all user generated. And the interactive nature of the site is not incidental to the service. It’s the whole point.

So… Wikipedia qualifies.

Now… how does Wikipedia comply?

Under the law, Wikipedia cannot “censor” based on “the viewpoint of the user.” But, Wikipedia is constantly edited by users. Even if you were to claim that a user chose to edit an entry because of the “viewpoint” of the content, how would Wikipedia even prevent that?

Wikipedia must also create (I’m already laughing) an email address where users can send complaints and a whole “complaint system.”

I don’t see how that can happen.

Anyway, it’s possible this means that Wikipedia can no longer stop people from adding more and more content (true or not) to Judge Andy Oldham’s profile, because having users take it down would potentially violate the law (but don’t do that: vandalizing Wikipedia is always bad, even if you’re trying to make a point).

The entire law is based on the idea that all moderation takes place by the company itself, and not by users.

It’s also possible that Reddit is swept up under the law (it’s unclear if they have enough US users, but it’s close), and again, I don’t see how it can comply. Moderation there is multi-layered, but there is user voting, which certainly might be based on viewpoints. There are admin level moderation decisions (so, under this law, Reddit might not have been able to ban a bunch of abusive subreddits). But, each subreddit has its own rules and its own moderators. Will individual subreddit moderation run afoul of this law? Can subreddits even still operate?

¯\_(ツ)_/¯

No one knows!

Discord might also be close to the trigger line and again, I don’t see how it could comply, since each Discord server has its own administrators and moderators.

On Twitter, someone noted that the job board Indeed.com claims to have over 250 million unique visitors every month. That was as of 2020, and some more recent numbers show it much higher, with the latest monthly numbers (from May of this year) showing over 650 million visits. Visits and users are not the same, but it’s not difficult to see how that turns into over 50 million active users in the US.

And… that creates more problems. As a lawyer noted to me on Twitter, if someone now posts a job opening on Indeed that violates the EEOC by saying certain races shouldn’t apply, well, under the Texas law, Indeed would have to leave that ad up (though, under the EEOC, it would have to take it down).

This is just part of the reason we have a dormant commerce clause in the Constitution that should have gotten this law tossed even earlier, but alas…

Anyway, if the law does actually go into effect, we’re going to discover lots of nonsense like this. But that’s because the Texas legislature, the Texas executive branch, and foolish judges like Andy Oldham don’t actually understand any of this. They’re just real angry that Donald Trump got banned from Twitter for being an ass.

Filed Under: 5th circuit, andy oldham, content moderation, hb 20, wikipedia
Companies: discord, indeed, reddit, wikipedia

Kids Use Discord Chat To Track Predator Teacher’s Actions; Under California’s Kids Code, They’d Be Blocked

from the be-careful-how-you-"protect"-those-children dept

It’s often kind of amazing how much moral panics by adults treat kids as if they’re completely stupid, and unable to do anything themselves. It’s a common theme in all sorts of moral panics, where adults insist that because some bad things could happen, they must be prevented entirely without ever considering that maybe a large percentage of kids are capable enough to deal with the risks and dangers themselves.

The Boston Globe recently had an interesting article about how a group of middle school boys were able to use Discord to successfully track the creepy, disgusting, and inappropriate shit one of their teachers/coaches did towards their female classmates, and how that data is now being used in an investigation of the teacher, who has been put on leave.

In an exclusive interview with The Boston Globe, one of the boys described how in January 2021, he and his friends decided to start their “Pedo Database,” to track the teacher’s words and actions.

There’s even a (redacted) screenshot of the start of the channel.

The kids self-organized and used Discord as a useful tool for tracking the problematic interactions.

During COVID, as they attended class online, they’d open the Discord channel on a split-screen and document the teacher’s comments in real time:

“You all love me so choose love.”

“You gotta stand up and dance now.”

Everyone “in bathing suits tomorrow.”

Once they were back in class in person, the boys jotted down notes to add to the channel later: Flirting with one girl. Teasing another. Calling the girls “sweetheart” and “sunshine.” Asking one girl to take off her shoes and try wiggling her toes without moving her pinkies.

“I felt bad for [the girls] because sometimes it just seems like it was a humiliating thing,” the boy told the Globe. “He’d play a song and he’d make one of them get up and dance.”

When the school year ended, the boys told incoming students about the Discord channel and encouraged them to keep tabs on the teacher. All in all, eight boys were involved, he said.

Eventually, the teacher was removed from the school and put on leave, after the administration began an investigation following claims that “the teacher had stalked a pre-teen girl at the middle school while he was her coach, and had been inappropriate with other girls.”

The article notes that there had been multiple claims in the past against the teacher, but that other teachers and administrators long protected the teacher. Indeed, apparently the teacher bragged about how he’d survived such complaints for decades. And that’s when the kids stepped up and realized they needed to start doing something themselves.

“I don’t think there was a single adult who would ever — like their parents, my mom, like anybody in the school — who had ever really taken the whole thing seriously before,” he added.

The boy’s mother contacted Conlon, and now the “Pedo Database” is in the hands of the US attorney’s Office, the state Department of Children, Youth, and Families, the state Department of Education, and with lawyer Matthew Oliverio, who is conducting the school’s internal investigation.

“I did not ever think this would actually be used as evidence, but we always had it as if it was,” said the boy, who is now 15 and a student at North Kingstown High School. “So I’m glad that we did, even though it might have seemed like slightly stupid at times.”

So, here we have kids who used the internet to keep track of a teacher accused of preying on children. Seems like a good example of helping to protect children.

Yet, it seems worth noting that under various “protect the children” laws, this kind of activity would likely be blocked. Already, under COPPA, it’s questionable if the kids should even be allowed on Discord. Discord, like many websites, limits usage in its terms of service to those 13 years or older. That’s likely in an attempt to comply with COPPA. But, the article notes that the kids started keeping this database as 6th graders, when they were likely 11 years old.

Also, under California’s AB 2273, Discord likely would have been more aggressive in banning them, as it would have had to employ much more stringent age verification tools that likely would have barred them from the service entirely. Also, given the other requirements of the “Age Appropriate Design Code,” it seems likely that Discord would be doing things like barring a chat channel described as a “pedo database.” A bunch of kids discussing possible pedophilia? Clearly that should be blocked as potentially harmful.

So, once again, the law, rather than protecting kids, might have actually put them more at risk, and done more to actually protect adults who were putting kids’ safety at risk.

Filed Under: ab 2273, age appropriate design code, kids, kids code, teachers
Companies: discord

NY Launches Ridiculous, Blatantly Unconstitutional ‘Investigations’ Into Twitch, Discord; Deflecting Blame From NY’s Own Failings

from the that's-not-how-any-of-this-works dept

I recognize that lots of people are angry and frustrated over the mass murdering jackass who killed ten people at a Buffalo grocery store last weekend. I’m angry and frustrated about it as well. But, the problem with anger and frustration is that it often leads people to lash out in irrational ways, and to “do something” even if that “something” is counterproductive and destructive. In this case, we’ve already seen politicians and the media trying to drive the conversation away from larger issues around racism, mental health, law enforcement, social safety nets and more… and look for something to blame.

While they seem to recognize that — because of the 1st Amendment — they can’t actually blame news outlets that have fanned the flames of divisiveness and bigotry and hatred, for whatever reason they refuse to apply that basic recognition to newer media, such as video games and the internet.

We already discussed how NY’s governor, Kathy Hochul, seemed really focused on blaming internet companies for her own state’s failures to stop the shooter, and now her Attorney General, Letitia James, has made it official: she’s opening investigations into Twitch, 4chan, 8chan, and Discord, claiming that those were the platforms used by the murderer. James notes that she’s doing this directly in response to a request from Hochul.

“The terror attack in Buffalo has once again revealed the depths and danger of the online forums that spread and promote hate,” said Attorney General James. “The fact that an individual can post detailed plans to commit such an act of hate without consequence, and then stream it for the world to see is bone-chilling and unfathomable. As we continue to mourn and honor the lives that were stolen, we are taking serious action to investigate these companies for their roles in this attack. Time and time again, we have seen the real-world devastation that is borne of these dangerous and hateful platforms, and we are doing everything in our power to shine a spotlight on this alarming behavior and take action to ensure it never happens again.”

It has been reported that the shooter posted online for months about his hatred for specific groups, promoted white supremacist theories, and even discussed potential plans to terrorize an elementary school, church, and other locations he believed would have a considerable community of Black people to attack. Those postings included detailed information about plans to carry out an attack in a predominantly Black neighborhood in Buffalo and his visits to the site of the shooting in the weeks prior. The shooter also streamed the attack on another social media platform, which was accessible to the public, and posted a 180-page manifesto online about his bigoted views.

She claims that these investigations are authorized by the very broad law granting the AG the powers to investigate issues related to “the public peace, public safety and public justice.” Except, the 1st Amendment does not allow regulation of speech, and that’s what this investigation actually is.

Imagine the (quite reasonable) outrage if James announced she was opening an investigation into Fox News. Or, if you’re on the other side of the political aisle, imagine if Texas AG Ken Paxton announced an investigation into MSNBC. You’d immediately argue that those were politically motivated intimidation techniques, designed to suppress the free speech rights of those organizations.

The same is true here.

Or, if you’re going to argue that these websites are somehow different than news channels, let’s try this on for size. If you’re okay with James doing this investigation, are you similarly okay with Paxton investigating Discord, Facebook, Twitter and other such sites for groups forming to help women get an abortion? Or how would you feel if Florida’s Ashley Moody began investigating these sites for helping schoolchildren get access to books that are being banned?

You’d be correctly outraged, as you should be in either case.

Anything that you could possibly “blame” any of these sites for is obviously protected by the 1st Amendment. First off, it’s almost guaranteed that none of these organizations had detailed knowledge of this one terrible person’s screeds and planning. Even in a world absent Section 230, the lack of actual knowledge by these platforms would mean that they could not be held liable, under the 1st Amendment.

Then again, we’re in a world where we do have Section 230, and that further makes this plan for an investigation ridiculous, because it seems quite clear that this investigation is an attempt to hold websites liable for the speech of one of its users. And that’s not allowed under 230.

Of course, you might argue that it’s not an attempt to hold them liable for his speech, but his murderous actions. But you’d still be wrong, because he didn’t use any of these websites to murder people. He may have used them to talk about whatever hateful ideology he has, and his plans, but that’s not (in any way) the same thing.

Meanwhile, it’s difficult to look at this and not think that AG James and Governor Hochul are hyping this all up to deflect from their own government’s failings. It’s now been widely reported that the shooter had made previous threats that law enforcement investigated. It’s also been reported that the weapon he used in the shooting included a high capacity magazine that is illegal in NY. Also, and this may be the most damning of all: there are reports that someone in the grocery store called 911 and the dispatcher HUNG UP ON THEM.

In other words, there appear to be multiple examples of how NY’s own law enforcement failed here. And I guess it’s not surprising that the Governor and the highest law enforcement officer of the state would rather pin the blame elsewhere, than reflect on how they, themselves, failed.

But, that lack of introspection is how we continue failing.

Filed Under: 1st amendment, blame, buffalo, free speech, kathy hochul, letitia james, new york, section 230, shooting
Companies: 4chan, 8chan, discord, twitch

Did Twitch Violate Texas’ Social Media Law By Removing Mass Murderer’s Live Stream Of His Killing Spree?

from the you-asked-for-this-texas dept

As you’ve no doubt heard, on Saturday there was yet another horrific shooting, this one in Buffalo, killing 10 people and wounding more. From all current evidence, the shooter, a teenager, was a brainwashed white nationalist, spewing nonsense and hate in a long manifesto that repeated bigoted propaganda found in darker corners of the internet… and on Fox News’ evening shows. He also streamed the shooting rampage live on Twitch, and apparently communicated some of his plans via Discord and 4chan.

Twitch quickly took down the stream and Discord is apparently investigating. All of this is horrible, of course. But, it seems worth noting that it’s quite possible Twitch’s removal could violate Texas’ ridiculously fucked up social media law. Honestly, the only thing that might save the two companies (beyond the fact that it’s unlikely someone would go to court over this… we think) is that both Twitch and Discord might be just ever so slightly below the 50 million average monthly US users required to trigger the law. But that’s not entirely clear (another reason why this law is stupid: it’s not even clear who is covered by it).

A year ago, Discord reported having 150 million monthly active users, though that’s worldwide. The question is how many of them are in the US. Is it more than a third? Twitch apparently has a very similar 140 million monthly active users globally. At least one report says that approximately 21% of Twitch’s viewership is in the US. That same report says that Twitch’s US MAUs are at 44 million.
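
To make the threshold question concrete, here’s a back-of-the-envelope sketch of that arithmetic. The worldwide figures are the ones cited above; the US-share fractions are purely illustrative assumptions, since neither company publishes a US-only MAU number.

```python
# Rough check of whether Discord and Twitch might cross HB 20's threshold of
# "more than 50 million active users in the United States in a calendar month."
# Worldwide MAU figures come from the reports cited above; the US-share
# fractions are assumptions for illustration, not reported numbers.

HB20_THRESHOLD = 50_000_000

platforms = {
    # platform: (reported worldwide monthly active users, assumed US share)
    "Discord": (150_000_000, 1 / 3),   # US share unknown; one-third is a guess
    "Twitch": (140_000_000, 0.21),     # one report puts ~21% of viewership in the US
}

for name, (worldwide_mau, us_share) in platforms.items():
    est_us_mau = worldwide_mau * us_share
    covered = est_us_mau > HB20_THRESHOLD
    print(f"{name}: ~{est_us_mau / 1e6:.1f}M estimated US MAU -> "
          f"{'could be covered' if covered else 'below the threshold on these assumptions'}")
```

Under those assumptions, Discord lands right around the 50 million line and Twitch well below it, which is exactly why the coverage question is so murky: a small change in the assumed US share flips the answer.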

Of course the Texas law, HB20, defines user quite broadly, and also says once you have over 50 million in a single month you’re covered. So it’s quite possible both companies are covered.

Focusing on Twitch: taking down the streamer’s account might violate the law. Remember that the law says that you cannot “censor” based on viewpoint. And anyone in the state of Texas can bring a lawsuit claiming they were deprived of content based on viewpoint. Some will argue back that a livestream of a killing spree isn’t about viewpoint, but remember, this idiot teenager made it clear he was doing this as part of his political views. At the very least, there’s a strong argument that any effort to take down his manifesto (if not the livestream) could be seen as violating the law.

And just to underline that this is what the Texas legislature wanted, you may recall that we wrote about a series of amendments that were proposed when this law was being debated. And one of the amendments said that the law would not block the removal of content that “directly or indirectly promotes or supports any international or domestic terrorist group or any international or domestic terrorist acts.” AND THE LEGISLATURE VOTED IT DOWN.

So, yes, the Texas legislature made it abundantly clear that this law should block the ability of websites to remove such content.

And, due to the way the law is structured, it’s not just those who were moderated who can sue, but anyone who feels their “ability to receive the expression of another person” was denied over the viewpoint of the speaker. So, it appears that a white nationalist in Texas could (right now) sue Twitch and demand that it reinstate the video, and Twitch would have to defend its reasons for removing the video, and convince a court it wasn’t over “viewpoints” (or that Twitch still has fewer than 50 million average monthly US users, and has never passed that threshold).

Seems kinda messed up either way.

Of course, I should also note that NY’s governor is already suggesting (ridiculously) that Twitch should be held liable for not taking the video down fast enough.

Gov. Hochul said the fact that the live-stream was not taken down sooner demonstrates a responsibility those who provide the platforms have, morally and ethically, to ensure hate cannot exist there. She also said she hopes it will also demonstrate a legal responsibility for those providers.

“The fact that this act of barbarism, this execution of innocent human beings could be live-streamed on social media platforms and not taken down within a second says to me that there is a responsibility out there … to ensure that such hate cannot populate these sites.”

So, it’s possible that Twitch could face legal fights in New York for being too slow to take down the video and in Texas for taking down the video at all.

It would be kind of nice if politicians on both sides of the political aisle remembered how the 1st Amendment actually works, and focused the blame on those actually responsible, not the social media tools that are used to communicate.

Filed Under: buffalo, content moderation, hb20, mass murder, racism, shooting, social media, texas, white nationalist
Companies: discord, twitch

The Latest Moral Panic Focuses On Discord

from the stop-the-moral-panics dept

Techno moral panics are back in fashion, it seems. There have been multiple (misleading) stories about “kids and social media”, and then there are always attempts to dive into specific “new” services. Last fall, it was all about the kids and their TikTok challenges. But, TikTok is so last year. So now CNN is back again, and this time the target of its moral panic is Discord. It has a whole scary article about “the dark side of Discord for teens.”

Except that if you replaced “Discord” with any other way that teens talk to each other, the story wouldn’t be much different. I’m reminded of earlier freakouts about instant messaging. Or the widespread moral panic in the 1980s about kids in day care. Or the moral panic over kids going to raves in the 1980s and ’90s. Everywhere you look, they all follow the same pattern: this new thing is putting kids at risk and something must be done!

The CNN piece does include some harrowing stories of teens who were approached by strangers on Discord. But it makes no effort to examine how widespread this actually is, or if it’s any different or more prevalent than any other situation involving kids. Obviously it’s bad if kids are put at risk via the internet, but the solution to that is not to attack a single tool. Because if it’s not Discord it’s going to be a different app. People talk to other people. And sometimes those people are not good.

In the past people used telephones to talk to others, and I can assure you that in the olden days some adults made inappropriate phone calls to children. We should never think that’s okay, but we similarly shouldn’t blame telephones for that. We should blame the adults and hold them liable.

Indeed, the CNN piece lumps together a wide variety of “harms” as if they are all the same and can be dealt with the same way, even though that’s nonsense:

CNN Business spoke to nearly a dozen parents who shared stories about their teenagers being exposed to self-harm chats, sexually explicit content and sexual predators on the platform, including users they believed were older men seeking inappropriate pictures and videos.

Being exposed to sexual predators is an entirely different category of problem from “self-harm chats” or even “sexually explicit content.” But CNN (conveniently) lumps them all together, and focuses mostly on those predators, the most sensational aspect of the story, rather than figuring out how big of an issue it actually is.

As for “self-harm chats,” that can mean a wide variety of things. Often, teenagers are exploring complicated emotions, and researching things is part of that process. As we wrote in our case study about kids and eating disorders, the research actually shows that allowing kids to read about it often helped them realize they had a problem, rather than driving them towards more harm. Hiding all that content doesn’t change that. As for “sexually explicit content” — again, that can mean a lot of things, and if we’re talking about teenagers, you kind of have to expect that some of them are likely trying to understand and explore their own sexuality. That’s not to say it should be a free-for-all — obviously. But both of those may be cases of teenagers being teenagers and trying to figure out who they are.

That’s quite different from being preyed upon.

Similarly, what is starkly absent from the CNN piece is any sense of parental responsibility. And by that I don’t mean parental surveillance. So much of the CNN piece seems to hint that if only Discord enabled parents to constantly spy on their kids, this wouldn’t be a problem. But that’s not helpful. Kids need to learn how to handle challenging situations. In the same way that parents teach their kids to have some level of street smarts for when they’re walking alone, parents need to teach their kids to be digitally smart: to know how to avoid problems online and how to respond should they come across something they shouldn’t.

But, rather than tell that story, it’s easier to write a whole scare story about how “Discord is dangerous for kids.” It’s lazy reporting and it leads to really bad overreactions by politicians and parents. CNN: do better.

Filed Under: kids, moral panic, online safety, teenagers
Companies: discord