aiding and abetting – Techdirt

Supreme Court Leaves 230 Alone For Now, But Justice Thomas Gives A Pretty Good Explanation For Why It Exists In The First Place

from the breathe dept

Our long national wait for how the Supreme Court would rule regarding Section 230 is over, and the answer is… we need to keep waiting. The headlines note, correctly, that the court punted the matter. But there are other elements of the actual rulings that are kind of interesting and could bode well for the future of the internet and Section 230.

As you’ll likely recall, back in October, the Supreme Court surprised a lot of people by taking two sorta related cases regarding the liability of social media sites, Gonzalez v. Google, and Twitter v. Taamneh. Even though both cases were ruled on by the 9th Circuit in the same opinion, and had nearly identical fact patterns (terrorists carried out an attack overseas, and the family of a victim sued social media companies to try to hold them liable for the attack because those companies allowed terrorist organizations to have accounts), only one (Gonzalez) technically dealt with Section 230. For unclear reasons, even though there was some discussion of 230 in the Taamneh case, that ruling was more specifically about whether or not Twitter was liable for violating JASTA (the Justice Against Sponsors of Terrorism Act).

Both cases sought cert from the Supreme Court, but again in an odd way. The family in Gonzalez challenged the 9th Circuit’s ruling that their case was precluded by Section 230, but kept changing the actual question they were asking the Supreme Court to weigh in on, bouncing around from whether recommendations took you out of 230, to whether algorithms took you out of 230, to (finally) whether the creation of thumbnail images (?!?!?!?) took you out of 230. For Taamneh, Twitter sought conditional cert, basically saying that if the court was going to take Gonzalez, it should also take Taamneh. And that’s what the court did. Though I’m still a bit confused that they held separate oral arguments for both cases (on consecutive days) rather than combining the two cases entirely.

And the end result suggests that the Supreme Court is equally confused why it didn’t combine the cases. And also, why it took these cases in the first place.

Indeed, the fact that these rulings came out in May is almost noteworthy on its own. Most people expected that, like most “big” or “challenging” cases, these would wait until the very end of the term in June.

Either way, the final result is a detailed ruling in Taamneh by Justice Clarence Thomas, which came out 9 to 0, and a per curiam (whole court, no one named) three pager in Gonzalez that basically says “based on our ruling in Taamneh, there’s no underlying cause of action in Gonzalez, and therefore, we don’t have to even touch the Section 230 issue.”

The general tenor of the response from lots of people is…. “phew, Section 230 is saved, at least for now.” And that’s not wrong. But I do think there’s more to this than just that. While the ruling(s) don’t directly address Section 230, I’m somewhat amazed at how much of Thomas’s ruling in Taamneh, talking about common law aiding and abetting, basically lays out all of the reasons why Section 230 exists: to avoid applying secondary liability to third parties who aren’t actively engaged in knowingly trying to help someone violate the law.

Much of the ruling goes through the nature of common law aiding and abetting, and what factors and conditions are necessary to find a third party liable, and basically says the standards are high. It can’t be mere negligence or recklessness. And Justice Thomas recognizes that if you make secondary liability too broad, it will sweep in all sorts of innocent bystanders.

Importantly, the concept of “helping” in the commission of a crime—or a tort—has never been boundless. That is because, if it were, aiding-and-abetting liability could sweep in innocent bystanders as well as those who gave only tangential assistance. For example, assume that any assistance of any kind were sufficient to create liability. If that were the case, then anyone who passively watched a robbery could be said to commit aiding and abetting by failing to call the police. Yet, our legal system generally does not impose liability for mere omissions, inactions, or nonfeasance; although inaction can be culpable in the face of some independent duty to act, the law does not impose a generalized duty to rescue.

The crux then:

For these reasons, courts have long recognized the need to cabin aiding-and-abetting liability to cases of truly culpable conduct. They have cautioned, for example, that not “all those present at the commission of a trespass are liable as principals” merely because they “make no opposition or manifest no disapprobation of the wrongful” acts of another.

Those statements are actually the core of why 230 exists in the first place: so that we put the liability on the party who actively and knowingly participated in the violative activity. Thomas spends multiple pages explaining why this general principle makes a lot of sense, which is nice to hear. Again, Thomas concludes this section by reinforcing this important point:

The phrase “aids and abets” in §2333(d)(2), as elsewhere, refers to a conscious, voluntary, and culpable participation in another’s wrongdoing.

If that language sounds vaguely familiar, that’s because it’s kind of like the language the 9th Circuit used in saying that Reddit didn’t violate FOSTA last fall, because it wasn’t making deliberate actions to aid trafficking.

Having established that basic, sensible framework, Thomas moves on to apply it to the specifics of Taamneh, and finds it clear that there’s no way the plaintiffs have shown that social media did anything that gets anywhere within the same zip code as what’s required for aiding and abetting. Because all they did was create a platform that anyone could use.

None of those allegations suggest that defendants culpably “associate[d themselves] with” the Reina attack, “participate[d] in it as something that [they] wishe[d] to bring about,” or sought “by [their] action to make it succeed.” Nye & Nissen, 336 U. S., at 619 (internal quotation marks omitted). In part, that is because the only affirmative “conduct” defendants allegedly undertook was creating their platforms and setting up their algorithms to display content relevant to user inputs and user history. Plaintiffs never allege that, after defendants established their platforms, they gave ISIS any special treatment or words of encouragement. Nor is there reason to think that defendants selected or took any action at all with respect to ISIS’ content (except, perhaps, blocking some of it).13 Indeed, there is not even reason to think that defendants carefully screened any content before allowing users to upload it onto their platforms. If anything, the opposite is true: By plaintiffs’ own allegations, these platforms appear to transmit most content without inspecting it.

From there, he notes that just because a platform can be used for bad things, it doesn’t make sense to hold the tool liable, again effectively making the argument for why 230 exists:

The mere creation of those platforms, however, is not culpable. To be sure, it might be that bad actors like ISIS are able to use platforms like defendants’ for illegal—and sometimes terrible—ends. But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider’s conference-call or video-call features made the sale easier.

I’ve seen some people raise concerns that the language in the above paragraph opens up an avenue for SCOTUS to pull a “social media is a common carrier, and therefore we can force them to host all speech” but I’m not sure I actually see that in the language at all. Generally speaking, email and “the internet generally” are not seen as common carriers, so I don’t see this statement as being a “social media is a common carrier” argument. Rather it’s a recognition that this principle is clear, obvious, and uncontroversial: you don’t hold a platform liable for the speech of its users.

From there, Thomas also completely shuts down the argument that “algorithmic recommendations” magically change the nature of liability:

To be sure, plaintiffs assert that defendants’ “recommendation” algorithms go beyond passive aid and constitute active, substantial assistance. We disagree. By plaintiffs’ own telling, their claim is based on defendants’ “provision of the infrastructure which provides material support to ISIS.” App. 53. Viewed properly, defendants’ “recommendation” algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting. Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS.

Again, I’ve seen some concerns that this language opens up some potential messiness about AI and “neutrality,” but I’m actually pretty pleased with the language used here, which avoids saying “neutral” (a completely meaningless word in the context of algorithms whose entire purpose is to recommend stuff) and talks about providing general tools that just try to provide any user with results that match their interests.

Basically, my read on this is that the court is effectively saying that if you create algorithms that are just designed to take inputs and provide outputs based on those inputs, you’re in the clear. The only hypothetical where you might face some liability is if you designed an algorithm to deliberately produce violative content, like an AI tool whose sole job is to defame people (defAIMe?) or to take any input and purposefully try to convince you to engage in criminal acts. Those seem unlikely to actually exist in the first place, so the language above actually seems, again, to be pretty useful.

The ruling again doubles down on the fact that there was nothing specific to the social media sites that was deliberately designed to aid terrorists, and that makes the plaintiffs’ argument nonsense:

First, the relationship between defendants and the Reina attack is highly attenuated. As noted above, defendants’ platforms are global in scale and allow hundreds of millions (or billions) of people to upload vast quantities of information on a daily basis. Yet, there are no allegations that defendants treated ISIS any differently from anyone else. Rather, defendants’ relationship with ISIS and its supporters appears to have been the same as their relationship with their billion-plus other users: arm’s length, passive, and largely indifferent. Cf. Halberstam, 705 F. 2d, at 488. And their relationship with the Reina attack is even further removed, given the lack of allegations connecting the Reina attack with ISIS’ use of these platforms.

Second, because of the distance between defendants’ acts (or failures to act) and the Reina attack, plaintiffs would need some other very good reason to think that defendants were consciously trying to help or otherwise “participate in” the Reina attack. Nye & Nissen, 336 U. S., at 619 (internal quotation marks omitted). But they have offered no such reason, let alone a good one. Again, plaintiffs point to no act of encouraging, soliciting, or advising the commission of the Reina attack that would normally support an aiding-and-abetting claim. See 2 LaFave §13.2(a), at 457. Rather, they essentially portray defendants as bystanders, watching passively as ISIS carried out its nefarious schemes. Such allegations do not state a claim for culpable assistance or participation in the Reina attack.

Also important, the court makes it clear that a “failure to act” can’t actually trigger liability here:

Because plaintiffs’ complaint rests so heavily on defendants’ failure to act, their claims might have more purchase if they could identify some independent duty in tort that would have required defendants to remove ISIS’ content. See Woodward, 522 F. 2d, at 97, 100. But plaintiffs identify no duty that would require defendants or other communication-providing services to terminate customers after discovering that the customers were using the service for illicit ends. See Doe, 347 F. 3d, at 659; People v. Brophy, 49 Cal. App. 2d 15, 33–34 (1942).14 To be sure, there may be situations where some such duty exists, and we need not resolve the issue today. Even if there were such a duty here, it would not transform defendants’ distant inaction into knowing and substantial assistance that could establish aiding and abetting the Reina attack.

Is there the possibility of some nonsense sneaking into the second half of that paragraph? Eh… I could see some plaintiffs’ lawyers trying to make cases out of it, but I think the courts would still reject most of them.

Similarly, there is some language around hypothetical ways in which secondary liability could apply, but the Court is pretty clear that there has to be something beyond just providing ordinary services to reach the necessary bar:

To be sure, we cannot rule out the possibility that some set of allegations involving aid to a known terrorist group would justify holding a secondary defendant liable for all of the group’s actions or perhaps some definable subset of terrorist acts. There may be, for example, situations where the provider of routine services does so in an unusual way or provides such dangerous wares that selling those goods to a terrorist group could constitute aiding and abetting a foreseeable terror attack. Cf. Direct Sales Co. v. United States, 319 U. S. 703, 707, 711–712, 714–715 (1943) (registered morphine distributor could be liable as a coconspirator of an illicit operation to which it mailed morphine far in excess of normal amounts). Or, if a platform consciously and selectively chose to promote content provided by a particular terrorist group, perhaps it could be said to have culpably assisted the terrorist group….

In those cases, the defendants would arguably have offered aid that is more direct, active, and substantial than what we review here; in such cases, plaintiffs might be able to establish liability with a lesser showing of scienter. But we need not consider every iteration on this theme. In this case, it is enough that there is no allegation that the platforms here do more than transmit information by billions of people, most of whom use the platforms for interactions that once took place via mail, on the phone, or in public areas.

And from there, the Court makes a key point: just because some bad people use a platform for bad purposes, it doesn’t make the platform liable, and (even better) Justice Thomas highlights that any other holding would be a disaster (basically making the argument for Section 230 without talking about 230).

The fact that some bad actors took advantage of these platforms is insufficient to state a claim that defendants knowingly gave substantial assistance and thereby aided and abetted those wrongdoers’ acts. And that is particularly true because a contrary holding would effectively hold any sort of communication provider liable for any sort of wrongdoing merely for knowing that the wrongdoers were using its services and failing to stop them. That conclusion would run roughshod over the typical limits on tort liability and take aiding and abetting far beyond its essential culpability moorings.

Thus, based on all this, the court says the 9th Circuit ruling that allowed the Taamneh case to move forward was clearly mistaken, and reverses it. Specifically, it dings the 9th for having “misapplied the ‘knowing’ half of ‘knowing and substantial assistance.’”

At the very very end, the ruling does mention questions regarding Google and payments to users, and whether or not that might reach aiding and abetting. But, importantly, that issue isn’t really before the court, because the plaintiffs effectively dropped it. It’s possible that the issue could live on, but again, I don’t see how it becomes problematic.

Overall, this was kind of a weird case and a weird ruling. SCOTUS seems to have recognized they never should have taken the case in the first place, and this ruling effectively allowed them to back out of making a ruling on 230 that they would regret. However, instead, Justice Thomas, of all people, more or less laid out all of the reasons why 230 exists and why we want that in place, to make sure that liability applies to the party actually making something violative, rather than the incidental tools used in the process.

Separately, it does seem at least marginally noteworthy that, while not directly addressing Section 230 (and explicitly saying they wouldn’t rule on the issue today), Thomas didn’t file a concurrence with the Gonzalez ruling begging for more 230 cases. As you may know, Thomas previously seemed to skip no opportunity to file random concurrences in cases unrelated to 230, musing broadly about the law and his views on it. And here, he didn’t. Rather, he wrote a ruling that sounds kinda like it could be a defense of Section 230. Maybe he’s learning?

In the end, this result is probably about as good as we could have hoped for. It leaves 230 in place and doesn’t add any really dangerous dicta that could lead to abuse (as far as I can tell).

It also serves to reinforce a key point: contrary to the belief of many, 230 is not the singular law that protects internet websites from liability. Lots of other things do as well. 230 really only serves as an express lane to get to the same exact result. That’s important, because it saves money, time, and resources from being wasted on cases that are going to fail in the end anyway. But it also means that changing or removing 230 won’t magically make companies liable for things their users do. It won’t.

Finally, speaking about money, time, and resources, a shit ton of all three were spent on briefs from amici for the Gonzalez case, in which dozens were filed (including one from us). And… the end result was a three page per curiam basically saying “we’re not going to deal with this one.” The end result is good, and maybe it wouldn’t have been without all those briefs. However, that was an incredible amount of effort that had to be spent for the Supreme Court to basically say “eh, we’ll deal with this some other time.”

The Supreme Court might not care about all that effort expended for effectively nothing, but it does seem like a wasteful experience for nearly everyone involved.

Filed Under: aiding and abetting, gonzalez v. google, intermediary liability, knowledge, section 230, supreme court, taamneh, terrorist attacks
Companies: google, twitter

Supreme Court Takes Section 230 Cases… Just Not The Ones We Were Expecting

from the well,-this-is-not-great dept

So, plenty of Supreme Court watchers and Section 230 experts all knew that this term was going to be a big one for Section 230… it’s just that we all expected the main issue to be around the Netchoice cases regarding Florida and Texas’s social media laws (those cases will likely still get to SCOTUS later in the term). There were also a few other possible Section 230 cases that I thought SCOTUS might take on, but still, the Court surprised me by agreeing to hear two slightly weird Section 230 cases. The cases are Gonzalez v. Google and Twitter v. Taamneh.

There are a bunch of similar cases, many of which were filed by two law firms together, 1-800-LAW-FIRM (really) and Excolo Law. Those two firms have been trying to claim that anyone injured by a terrorist group should be able to sue internet companies because those terrorist groups happened to use those social media sites. Technically, they’re arguing “material support for terrorism,” but the whole concept seems obviously ridiculous. It’s the equivalent of the family of a victim of ISIS suing Toyota after finding out that some ISIS members drove Toyotas.

Anyway, we’ve been writing about a bunch of these cases, including both of the cases at issue here (which were joined at the hip by the 9th Circuit). Most of them get tossed out pretty quickly, as the court recognizes just how disconnected the social media companies are from the underlying harm. But one of the reasons they seem to have filed so many such cases all around the country was to try to set up some kind of circuit split to interest the Supreme Court.

The first case (Gonzalez) dealt with ISIS terrorist attacks in Paris in 2015. The 9th Circuit rejected the claim that Google provided material support to terrorists because ISIS posted some videos to YouTube. To try to get around the obvious 230 issues, Gonzalez argued that YouTube recommended some of those videos via the algorithm, and those recommendations should not be covered by 230. The second case, Taamneh, was… weird. It has a somewhat similar fact pattern, but dealt with the family of someone who was killed by an ISIS attack at a nightclub in Istanbul in 2017.

The 9th Circuit tossed out the Gonzalez case, saying that 230 made the company immune even for recommended content (which is the correct outcome) but allowed the Taamneh case to move forward, for reasons that had nothing to do with Section 230. In Taamneh, the district court initially dismissed the case entirely without even getting to the Section 230 issue, noting that Taamneh hadn’t even pled a plausible aiding-and-abetting claim. The 9th Circuit disagreed, said that there was enough in the complaint to plead aiding-and-abetting, and sent it back to the district court (which could then, in all likelihood, dismiss under Section 230). Oddly (and unfortunately) some of the judges in that ruling issued concurrences which meandered aimlessly, talking about how Section 230 had gone too far and needed to be trimmed back.

Gonzalez appealed the issue regarding 230 and algorithmic promotion of content, while Twitter appealed the aiding and abetting ruling (noting that every other court to try similar cases found no aiding and abetting).

Either way, the Supreme Court is taking up both cases and… it might get messy. Technically, the question the Supreme Court is asked to answer in the Gonzalez case is:

Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information.

Basically: can we wipe out Section 230’s key liability protections for any content recommended? This would be problematic. The whole point of Section 230 is to put the liability on the proper party: the one actually speaking. Making sites liable for recommendations creates all of the same problems that making them liable for hosting would — specifically, requiring them to take on liability for content they couldn’t possibly thoroughly vet before recommending it. A ruling in favor of Gonzalez would create huge problems for anyone offering search on any website, because a “bad” content recommendation could lead to liability, not for the actual content provider, but for the search engine.

That can’t be the law, because that would make search next to impossible.

For what it’s worth, there were some other dangerously odd parts of the 9th Circuit’s Gonzalez rulings regarding Section 230 that are ripe for problematic future interpretation, but those parts appear not to have been included in the cert petition.

In Taamneh, the question is focused on the aiding and abetting issue, but ties into Section 230, because it asks if you can hold a website liable for aiding and abetting if it tries to remove terrorist content but a plaintiff argues it could have been more aggressive in weeding out such content. There’s also a second question of whether or not you can hold a website liable for an “act of international terrorism” when the actual act of terrorism had nothing whatsoever to do with the website, and was conducted off of the website entirely.

(1) Whether a defendant that provides generic, widely available services to all its numerous users and “regularly” works to detect and prevent terrorists from using those services “knowingly” provided substantial assistance under 18 U.S.C. § 2333 merely because it allegedly could have taken more “meaningful” or “aggressive” action to prevent such use; and (2) whether a defendant whose generic, widely available services were not used in connection with the specific “act of international terrorism” that injured the plaintiff may be liable for aiding and abetting under Section 2333.

These cases should worry everyone, especially if you like things like searching online. My biggest fear, honestly, is that this Supreme Court (as it’s been known to do) tries to split the baby (which, let us remember, kills the baby) and says that Section 230 doesn’t apply to recommended content, but that the websites still win because the things on the website are so far disconnected from the actual terrorist acts.

That really feels like the kind of solution that the Roberts court might like, thinking that it’s super clever when really it’s just dangerously confused. It would open up a huge pandora’s box of problems, leading to all sorts of lawsuits regarding any kind of recommended content, including search, recommendation algorithms, your social media feeds, and more.

A good ruling (if such a thing is possible) would be a clear statement that of course Section 230 protects algorithmically recommended content, because Section 230 is about properly putting liability on the creator of the content and not the intermediary. But we know that Justices Thomas and Alito are just itching to destroy 230, so we’re already down two Justices to start.

Of course, given that this court is also likely to take up the NetChoice cases later this term, it is entirely possible that next year the Supreme Court may rule that (1) websites are liable for failing to remove certain content (in these two cases) and (2) websites can be forced to carry all content.

It’ll be a blast figuring out how to make all that work. Though, some of us will probably have to do that figuring out off the internet, since it’s not clear how the internet will actually work at that point.

Filed Under: aiding and abetting, algorithms, gonzalez, isis, recommendations, section 230, supreme court, taamneh, terrorism, terrorism act
Companies: google, twitter

How The Dobbs Decision Will Lead To Attacks On Free Speech; Or, Why Democrats Need To Stop Undermining Free Speech

from the protect-free-speech-now dept

We’ve talked about the unfortunate bipartisan attacks on free speech, which are best understood as attempts to control the narrative. Republicans have been attacking free speech in multiple ways — from trying to ban books and take away teacher autonomy to trying to compel websites to host content against their will. Democrats, on the flip side, have focused on ridiculous attempts to force websites to monitor and control speech. Both of these are bad in their own ways and both are attacks on free speech. In both cases, they seem to be about trying to force others to view the world the same way they do, and that’s the whole reason that we have the 1st Amendment around: to prevent that sort of nonsense.

This post is mainly targeted at those among you who support the Democrats’ position that we need to hold companies liable for the speech of their users. We’ve seen bills at both the federal and the state levels, trying to force companies to take down certain speech. And when people point out the 1st Amendment problems with these bills, we often hear some nonsense in response about “fire in a crowded theater.”

But, in a post Dobbs world, this shit is a lot more serious, and Democrats providing justification for outright government-backed censorship is a real problem. Senator Ron Wyden highlighted this just before the Dobbs decision came out, noting that Republicans who successfully got Roe v. Wade overturned were absolutely coming for websites and speech next. Here’s what Wyden wrote:

In coming months well-funded anti-choice extremists will launch a coordinated campaign to deluge websites and social media companies with lawsuits over user speech in Republican-led states where just seeking information about an abortion could become illegal. Just as anti-abortion activists worked to attack reproductive rights in statehouses across the nation, these fundamentalists will use the same playbook of coordinated laws and legal actions against the online speech of those they dislike. They’ve already targeted libraries and bookstores over LGBTQ books and classified health care for trans youths as child abuse.

You could say he was prescient. Or you could just say that he was observing the obvious next steps. And, now that Dobbs is here, exactly what he predicted seems likely to be happening. As the NY Times pointed out, one of the next big fights over abortion may be over the 1st Amendment. Specifically, that article highlights that just shortly before the Dobbs decision came down, the National Right to Life Committee released a “model” state law for a post-Roe world that directly aims to criminalize speech online:

A top anti-abortion lobbying group, the National Right to Life Committee, recently proposed model legislation for states that would make it a crime to pass along information “by telephone, the internet or any other medium of communication” that is used to terminate a pregnancy.

Many states essentially did just that before Roe v. Wade was decided in 1973. And it is not clear whether courts will find that the protections afforded to speech in the Constitution still apply to abortion rights supporters as they look to circumvent the raft of new restrictions.

And, as Ashton Lattimore points out at Prism, such laws would put all sorts of free speech concepts at risk:

In the United States, there’s a long history of efforts to silence information concerned with the rights of marginalized people, and that’s always included the work of journalists. In the 19th century, for example, Congress passed a “gag rule” to prevent abolitionists from petitioning against slavery, and southern states passed laws that outlawed anti-slavery speech entirely. Critically, both historically and today, speech suppression laws not only hand bad-faith actors the tools of criminalization and fines to silence those they disagree with, but they can also normalize physical violence. Indeed, violence against journalists was widespread in the 19th century, and—crucially—not confined to the places where such anti-slavery speech was criminalized: In 1837, a pro-slavery mob killed abolitionist newspaperman Elijah Lovejoy and destroyed his printing press in the “free” state of Illinois, while the following year in Philadelphia, a similarly-minded mob burnt down the abolitionist meeting space Pennsylvania Hall, which also housed the offices of abolitionist newspaper The Pennsylvania Freeman. Even after slavery was abolished, journalists faced constant threats to their safety for daring to accurately report on injustices like lynching, foremost among them being Ida B. Wells. And even in the present day, it’s clear that speech-suppressive laws are part of a larger constellation of practices that embolden violence against the groups they target. Witness the spate of anti-gay and anti-trans violence in the wake of Florida’s “Don’t Say Gay” law, and the primarily Black and brown teachers who’ve faced harassment, violence, and even death threats following “anti-CRT” suppression of discussions about racial injustice in American society. 
Now, with a law specifically targeting abortion-related speech, the risks are especially dire since so many of the journalists leading the way on reproductive rights and justice reporting are women of color, who already face disproportionate harassment.

While there are many potential problems with this model law, the attack on speech shows up here, making it against the law to:

knowingly or intentionally hosting or maintaining an internet website, providing access to an internet website, or providing an internet service, purposefully directed to a pregnant woman who is a resident of this state, that provides information on how to obtain an illegal abortion, knowing that the information will be used, or is reasonably likely to be used, for an illegal abortions;

That would be a direct attack on free speech and the 1st Amendment. And, normally I’d say that it’s unlikely that courts would allow this. But, seeing how this particular arrangement of Justices seems willing to bend over backwards to justify absolute nonsense to remove rights — especially around abortion — it’s not difficult to imagine the Supreme Court magically finding the “exception” it needs to make these kinds of laws constitutional.

And, for what it’s worth, South Carolina has already introduced legislation that is modeled on this bill, and which would seek to punish websites. Other states are almost certainly going to follow.

And this is why Democrats need to stop handing Republicans the exact ammunition they need to attack free speech like this. I won't even bother asking the "oh, cancel culture is the biggest threat to free speech" crowd to speak up here, because we all know they won't.

But, Democrats who have been whining about "misinformation" and demanding that social media become a more aggressive censor, or who trot out the "fire in a crowded theater" line, are simply playing into the censors' hands here. They're opening the door to this kind of nonsense and effectively justifying it.

Evan Greer and Lia Holland from Fight for the Future have an excellent companion piece to the Wyden piece above, noting that Section 230 is the last line of defense for abortion speech online. Democrats who are still attacking Section 230 today (including President Biden) are simply handing Republicans the tools they need to enable laws like the NLRC one above to criminalize speech.

Section 230 is the last line of defense keeping reproductive health care support, information, and fundraising online. Under Section 230, internet platforms that host and moderate user-generated content cannot generally be sued for that content. Section 230 is not absolute. It does not provide immunity if the platform develops or creates the content, and it does not provide immunity from the enforcement of federal criminal laws. But, crucially, it does protect against criminal liability from state laws.

This means that as Section 230 exists today, a lawsuit from an anti-abortion group concerning speech about reproductive health care or a criminal proceeding launched by a forced-birth state attorney general would be quickly dismissed. If Section 230 is weakened, online platforms like GoFundMe and Twitter, web hosting services, and payment processors like PayPal and Venmo will face a debilitating and expensive onslaught of state law enforcement actions and civil lawsuits claiming they are violating state laws. Even if these lawsuits ultimately fail, without Section 230 as a defense to get them dismissed quickly they will become enormously expensive, even for the largest platforms.

Forced-birth extremists are litigious, well resourced, and ideologically motivated. Tech companies care about making money. Rather than spending tens of millions fighting in court, many online platforms will instead “race to the bottom” and comply with the most restrictive state laws. They’ll change their own rules on what they allow, massively restricting access to information about abortion.

But, incredibly, the message doesn't seem to be getting through. Just this week, I received angry hate mail from someone insisting that my support for free speech was the real problem, and that it enabled disinformation online. But history has shown that government suppression of speech ends up silencing the marginalized and the powerless. And we see that with the NLRC model bill.

Free speech is essential at this time. Section 230 protects websites from the kinds of lawsuits that the NLRC bill would use to flood the system, which is why it's so crucial that it remain in place. Removing it, and allowing states to pass laws putting liability on websites for speech those states don't like, is bad no matter who is doing it. One hopes that at least someone within the Democratic party has enough sense to look at this model bill and realize that their own party's position is likely to make such laws more viable.

Filed Under: 1st amendment, abortion, aiding and abetting, dobbs, free speech, model bill, section 230, supreme court
Companies: nlrc

Encrypted Phone Seller Facing Criminal Charges Fights Back, Says Sky Global Isn't Complicit In Customers' Illegal Acts

from the a-possible-exercise-in-futility-but-perhaps-still-a-worthwhile-one? dept

Over the past couple of years, the US government — working with law enforcement agencies around the world — has managed to shut down cell phone services it alleges were sold to members of large criminal associations. These prosecutions have allowed the DOJ to push the narrative that encrypted communications are something only criminals want or need.

But in multiple cases, encryption was never a problem. Investigators were able to target criminal suspects with malware, allowing access to communications and data. The FBI, along with Australian law enforcement, actually ran a compromised encrypted chat service for three years, allowing the agencies to round up a long list of suspects from all over the world.

One of those targeted by these long-running investigations was Canada-based encrypted phone provider Sky Global. The DOJ claimed the company sold phones to criminals and even secured an indictment that charged Sky's CEO with assisting in the distribution of at least five kilos of cocaine (an amount that triggers a mandatory minimum 10-year sentence). This claim was based not on CEO Francois Eap's own actions, but on the alleged actions of his customers, some of whom engaged in drug dealing.

That was added to RICO allegations Eap is facing. But, so far, the US government has made no attempt to extradite Francois Eap and put him on trial. A new report by Joseph Cox for Motherboard possibly shows why the government is hesitant to make this any more of a federal case than it already is. Sky Global is fighting back in court, submitting internal communications that show it made efforts to assist in investigations and refused to remotely wipe phones it could tie to criminal activity.

Sky has filed a motion seeking the return of seized domains [PDF]. In it, the company details how it cooperated with law enforcement and attempted to deter criminals from buying its phones. At this point, Sky Global is effectively dead, even though prosecutors have yet to secure a conviction of Eap or anyone else employed by the company. The DOJ has seized all of the company’s websites, making it nearly impossible to continue to sell phones or provide services to existing customers.

The exhibits included with Sky’s filing show in new detail the effort Sky took to remove criminals from its platform.

“You may not knowingly sell or otherwise provide the Products and Services to any Customer for illicit, illegal or criminal use,” a Sky terms of use document reads.

“This ECC ID has been flagged for breaching our terms of service. It will be deactivated immediately,” one email sent by a Sky support worker to a user reads. The message from Sky points to a section of the company’s terms of use that does mention non-permitted uses such as promoting criminal activity.

Another email sent by Sky’s chief operating officer to what appears to be a Sky distributor says that one of the distributor’s agents has violated the company’s terms of service because of their willingness to sell the Sky ECC product to someone wanting to use it for illicit activity, as well as other violations.

The filing also includes emails sent to customers, letting them know Sky would not assist in remote wipes of devices seized by law enforcement. Other emails show customer service reps rejecting purchasers who made statements suggesting they were going to use the phones to engage in criminal activity.

The DOJ says this means nothing. It claims the company’s statements in emails and attempts to remain unaware of customers and their intentions give it nothing more than “plausible deniability.” But, of course, that’s how plausible deniability works. And the government should know this better than the criminals it’s going after. If deniability is plausible, it means a jury may not return a guilty verdict. And government agencies have long engaged in practices that allow them to plausibly deny they have violated rights or engaged in retaliation against whistleblowers, FOIA requesters, etc. Either seize the cake or eat it, DOJ. Don’t try to do both.

There appears to be some actual deniability contained in all the plausibility.

Other documents show Sky employees declining to wipe devices when asked to do so by resellers.

“Hello. Please delete this ECC ID, the police have it,” one message sent to Sky reads. “PLEASE HELP!!! Two customers have problems with the police. Their devices were confiscated,” another adds. In both cases, Sky declined to wipe the phones, according to the emails.

But there’s still a lot of grey area. Former employees said it was impossible to not know the phones were being purchased and used by criminals. But steps were still taken to prevent employees from wiping phones seized from alleged criminals and to deter third-party retailers from using language that suggested the company would be complicit in the destruction of criminal evidence.

And it went further than that. According to Sky’s lawyer, the company also terminated accounts associated with criminal activity, which goes above and beyond the “plausible deniability” accusations made by the DOJ.

There’s a major caveat to all of this, though.

But all of the documents which Sky provided and Motherboard reviewed which show enforcement of Sky’s policies were created in 2019 or later, after the U.S. prosecution of Phantom Secure’s Ramos.

This was probably a wise reaction. But it was a reaction, nonetheless. If the government can show the company was far more complicit in criminal activity in the past, it will still be able to prosecute. But given its lack of interest in pursuing Francois Eap's extradition from Canada, it appears the DOJ might be satisfied with prosecuting cases involving Sky phone users, rather than charges against the company itself. Unfortunately, the DOJ has the power to drag this process out forever, effectively denying Sky the opportunity to remain solvent even though its CEO (and others under indictment) are still presumed innocent. Then it just becomes a war of attrition, and the DOJ has infinite time and resources.

Filed Under: aiding and abetting, crime, doj, encrypted phones, encryption, francois eap, rico
Companies: sky global

New Texas Abortion Law Likely To Unleash A Torrent Of Lawsuits Against Online Education, Advocacy And Other Speech

from the though-230-will-help dept

In addition to the drastic restrictions it places on a woman’s reproductive and medical care rights, the new Texas abortion law, SB8, will have devastating effects on online speech.

The law creates a cadre of bounty hunters who can use the courts to punish and silence anyone whose online advocacy, education, and other speech about abortion draws their ire. It will undoubtedly lead to a torrent of private lawsuits against online speakers who publish information about abortion rights and access in Texas, with little regard for the merits of those lawsuits or the First Amendment protections accorded to the speech. Individuals and organizations providing basic educational resources, sharing information, identifying locations of clinics, arranging rides and escorts, fundraising to support reproductive rights, or simply encouraging women to consider all their options now have to consider the risk that they might be sued for merely speaking. The result will be a chilling effect on speech and a litigation cudgel that will be used to silence those who seek to give women truthful information about their reproductive options.

SB8, also known as the Texas Heartbeat Act, encourages private persons to file lawsuits against anyone who “knowingly engages in conduct that aids or abets the performance or inducement of an abortion.” It doesn’t matter whether that person “knew or should have known that the abortion would be performed or induced in violation of the law,” that is, the law’s new and broadly expansive definition of illegal abortion. And you can be liable even if you simply intend to help, regardless, apparently, of whether an illegal abortion actually resulted from your assistance.

And although you may defend a lawsuit if you believed the doctor performing the abortion complied with the law, it is really hard to do so. You must prove that you conducted a “reasonable investigation,” and as a result “reasonably believed” that the doctor was following the law. That’s a lot to do before you simply post something to the internet, and of course you will probably have to hire a lawyer to help you do it.

SB8 is a “bounty law”: it doesn’t just allow these lawsuits, it provides a significant financial incentive to file them. It guarantees that a person who files and wins such a lawsuit will receive at least $10,000 for each abortion that the speech “aided or abetted,” plus their costs and attorney’s fees. At the same time, SB8 may often shield these bounty hunters from having to pay the defendant’s legal costs should they lose. This removes a key financial disincentive they might have had against bringing meritless lawsuits.

Moreover, lawsuits may be filed up to six years after the purported “aiding and abetting” occurred. And the law allows for retroactive liability: you can be liable even if your “aiding and abetting” conduct was legal when you did it, if a later court decision changes the rules. Together this creates a ticking time bomb for anyone who dares to say anything that educates the public about, or even discusses, abortion online.

Given this legal structure, and the law’s vast application, there is no doubt that we will quickly see the emergence of anti-choice trolls: lawyers and plaintiffs dedicated to using the courts to extort money from a wide variety of speakers supporting reproductive rights.

And unfortunately, it's not clear when speech encouraging someone to commit a crime, or instructing them how to, rises to the level of "aiding and abetting" unprotected by the First Amendment. Under the leading case on the issue, it is a fact-intensive analysis, which means that defending the case on First Amendment grounds may be arduous and expensive.

The result of all of this is the classic chilling effect: many would-be speakers will choose not to speak at all for fear of having to defend even the meritless lawsuits that SB8 encourages. And many speakers will choose to take down their speech if merely threatened with a lawsuit, rather than risk the law’s penalties if they lose or take on the burdens of a fact-intensive case even if they were likely to win it.

The law does include an empty clause providing that it may not be “construed to impose liability on any speech or conduct protected by the First Amendment of the United States Constitution, as made applicable to the states through the United States Supreme Court’s interpretation of the Fourteenth Amendment of the United States Constitution.” While that sounds nice, it offers no real protection—you can already raise the First Amendment in any case, and you don’t need the Texas legislature to give you permission. Rather, that clause is included to try to insulate the law from a facial First Amendment challenge—a challenge to the mere existence of the law rather than its use against a specific person. In other words, the drafters are hoping to ensure that, even if the law is unconstitutional—which it is—each individual plaintiff will have to raise the First Amendment issues on their own, and bear the exorbitant costs—both financial and otherwise—of having to defend the lawsuit in the first place.

One existing free speech bulwark—47 U.S.C. § 230 (“Section 230”)—will provide some protection here, at least for the online intermediaries upon which many speakers depend. Section 230 immunizes online intermediaries from state law liability arising from the speech of their users, so it provides a way for online platforms and other services to get early dismissals of lawsuits against them based on their hosting of user speech. So although a user will still have to fully defend a lawsuit arising, for example, from posting clinic hours online, the platform they used to share that information will not. That is important, because without that protection, many platforms would preemptively take down abortion-related speech for fear of having to defend these lawsuits themselves. As a result, even a strong-willed abortion advocate willing to risk the burdens of litigation in order to defend their right to speak will find their speech limited if weak-kneed platforms refuse to publish it. This is exactly the way Section 230 is designed to work: to reduce the likelihood that platforms will censor in order to protect themselves from legal liability, and to enable speakers to make their own decisions about what to say and what risks to bear with their speech.

But a powerful and dangerous chilling effect remains for users. Texas’s anti-abortion law is an attack on many fundamental rights, including the First Amendment rights to advocate for abortion rights, to provide basic educational information, and to counsel those considering reproductive decisions. We will keep a close eye on the lawsuits the law spurs and the chilling effects that accompany them. If you experience such censorship, please contact info@eff.org.

Originally published to the EFF Deeplinks blog.

Filed Under: abortion, advocacy, aiding and abetting, free speech, sb8, snitch, texas

Where Texas' Social Media Law & Abortion Law Collide: Facebook Must Keep Up AND Take Down Info On Abortion

from the fucking-idiots dept

It’s always astounding to me how little most policymakers consider how many of the policies they push for contradict one another. On Wednesday, the Texas Senate easily approved its version of HB20, the blatantly unconstitutional bill that tries to prevent social media websites from moderating content that Texas Republicans want kept up — explicitly saying that Facebook must leave up vaccine misinformation, terrorist content, and Holocaust denialism. While the bill does include some language to suggest that some content can be moderated, it puts a ton of hurdles up to block that process. Indeed, as the bill makes clear, it does not want Facebook to moderate anything.

The legislature finds that:

> (1) each person in this state has a fundamental interest in the free exchange of ideas and information, including the freedom of others to share and receive ideas and information;
>
> (2) this state has a fundamental interest in protecting the free exchange of ideas and information in this state;

Of course, you may have also heard the other big piece of news out of Texas this week, which is that after the Supreme Court refused to block it, Texas' extreme anti-choice law has gone into effect, more or less banning all abortions. But the law goes even further than that. It also says you cannot "aid and abet" someone getting an abortion, and "aiding and abetting" is defined quite broadly under the law:

Any person, other than an officer or employee of a state or local governmental entity in this state, may bring a civil action against any person who:

> knowingly engages in conduct that aids or abets the performance or inducement of an abortion, including paying for or reimbursing the costs of an abortion through insurance or otherwise, if the abortion is performed or induced in violation of this subchapter, regardless of whether the person knew or should have known that the abortion would be performed or induced in violation of this subchapter

This is bizarre on multiple levels. First, it’s allowing anyone to sue anyone else, claiming that they “aided and abetted” an illegal abortion if they merely “induced” someone to get an abortion.

So… let’s say that someone posted to a Facebook group, telling people how to get an abortion. Under Texas’s social media law — remember “each person in this state has a fundamental interest in the free exchange of ideas and information” — Facebook is expected to keep that information up. However, under Texas’ anti-choice law — remember, anyone can sue anyone for “inducing” an abortion — Facebook theoretically faces liability for leaving that information up.

So who wins out? Well, it should be that both bills are found to be unconstitutional, so it doesn’t matter. But we’ll see whether or not the courts recognize that. Section 230 should also protect Facebook here, since it pre-empts any state law that tries to make the company liable for user posts, which in theory the abortion law does. The 1st Amendment should also backstop both of these, noting that (1) Texas’ social media law clearly violates Facebook’s 1st Amendment rights, and (2) the broad language saying anyone can file civil suit against anyone for somehow convincing someone to get an abortion also pretty clearly violates the 1st Amendment. Update: As has been pointed out, the abortion law does say, explicitly, that the aiding and abetting rule should not apply to 1st Amendment protected speech, so there is something of an escape hatch here, and the state can say that it never intended the law to target speech as “aiding and abetting.” I don’t see that as making much of a difference in the long run because (1) the 1st Amendment already protects such speech so you don’t need a law to say that and (2) it’s unlikely to stop people from suing over speech that they claim is aiding and abetting…

But, until the courts actually rule on this, we don’t just have a mess, we have a contradictory mess thanks to a Texas legislature (and governor) that is so focused on waging a pointless culture war against “the libs” that they don’t even realize how their own bills conflict with one another.

Filed Under: 1st amendment, abortion, aiding and abetting, content moderation, free speech, hb20, section 230, texas
Companies: facebook

North Dakota's New Anti-230 Bill Would Let Nazis Sue You For Reporting Their Content To Twitter

from the i-just-can't-even dept

Earlier this month, we wrote about how various Republicans in state legislatures were introducing blatantly unconstitutional bills that tried to do away with Section 230 and which all attempted to block the ability of websites to do any content moderation. Many of the bills were nearly identical (and may have come from Chris Sevier, a profoundly troubled individual who somehow keeps convincing state legislators to introduce blatantly unconstitutional bills that attack speech online). One of the bills we mentioned was from North Dakota. Lawyer Akiva Cohen points out that the North Dakota bill has been updated… and (incredibly) made even more blatantly unconstitutional.

Most notably, the new amendment from Rep. Tom Kading would not only gut Section 230, but would stop any website from doing any moderation of any user for their viewpoints. Any viewpoints. Anywhere (even off platform). And then… it adds a private cause of action, allowing a user to sue any website over its moderation:

That says:

A user residing in, doing business in, sharing expression in, or receiving expression in this state may bring a civil action in any court of this state against a social media platform or interactive computer service for violation of this chapter against the user, and upon finding the defendant has violated or is violating the user’s rights under this chapter, the court shall award:

1. Declaratory relief;
2. Injunctive relief;
3. Treble damages or, at the plaintiff's option, statutory damages of up to fifty thousand dollars; and
4. Costs and reasonable attorney's fees.

That’s already bad, but it gets worse, because it also creates a private cause of action against anyone “aiding and abetting” the moderation:

That one says:

A user residing in, doing business in, sharing expression in, or receiving expression in this state may bring a civil action in any court of this state against any person who aids or abets a violation of this chapter against the user, and upon finding the defendant has violated or is violating the user’s rights under this chapter, the court shall award:

1. Declaratory relief;
2. Injunctive relief;
3. Treble damages or, at the plaintiff's option, statutory damages of up to fifty thousand dollars; and
4. Costs and reasonable attorney's fees.

In other words, if you report a Nazi to Twitter, the Nazi can sue you for $50,000. Plus attorney’s fees. What the actual fuck are they doing up there in North Dakota? And has it eaten their brains?

The only saving grace of this disastrously unconstitutional bill is that it moots itself. That’s because it also has a clause that says that it “does not subject a social media platform or interactive computer service to any remedy or cause of action from which the social media platform or interactive computer service is protected by federal law.”

So, um… Section 230 is federal law, and it protects against literally everything in this bill. In other words, this bill serves only as a weird poison pill: if Section 230 is ever repealed or otherwise modified, it might allow anyone in North Dakota to sue users for reporting their content to a social media platform.

Jerry Lambe, over at Law & Crime, reached out to Rep. Kading to ask about this bill and Kading’s response is so ridiculous that it calls into question how this guy got elected.

“Social media may still censor within the constraints of Section 230. For example censorship of obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable content is completely appropriate under the bill,” he said in an email to Law&Crime. “If the neo-Nazi was censored for such, then the bill would not apply. Though section 230 gives broad protection, it does not protect against censorship outside the scope noted or prohibit regulation if consistent with the section. The bill does not affect any reporting actions.”

As Ari Cohn points out, this is both incoherent and suggests that Kading has no clue about how Section 230 or the 1st Amendment actually work. The 1st Amendment is what gives websites the right to remove whatever content they want. Section 230 just helps them get out of lawsuits over those removals faster. On top of that, the list that Kading mentions from “obscene” to “otherwise objectionable” is only in Section (c)(2) of the law, which almost never shows up in court cases. Courts have made it clear that Section (c)(1), which has no such limitations, is what enables cases to be dismissed regarding moderation choices.

You’d think that maybe someone like Kading would have bothered to learn some of this before (1) introducing a bill or (2) responding to a reporter’s question about the bill. But apparently, that’s not the kind of state elected official Tom Kading is.

North Dakota citizens: stop electing censorial, ignorant legislators who want to attack the 1st Amendment.

Filed Under: 1st amendment, aiding and abetting, content moderation, free speech, nazis, north dakota, reporting, section 230, tom kading

Kickass Torrents Creator Can't Get Criminal Case Tossed Out

from the moving-onward dept

A year ago, we noted that Kickass Torrents had received the Megaupload treatment, getting hit with criminal charges and having its owner, Artem Vaulin, arrested in a foreign country (in this case, Poland). As we noted in looking over the original complaint, there were some significant concerns (similar to the ones we had with the Megaupload case) concerning whether or not running a service that other people used to infringe could possibly make you guilty of criminal copyright infringement.

The key issue: there is no "secondary liability" concept in criminal copyright law. There is such a thing in civil copyright law, whereby if you're found to be "inducing" copyright infringement (via clear and deliberate statements and steps) you can be found to have infringed — but that's not the case for criminal law. The case against Vaulin (as with Kim Dotcom) tried to get around this by arguing a few different things, which we'll discuss below. Vaulin is fighting extradition in Poland, but in the meantime, had asked the federal court in Illinois to drop the case already, due to the failure to show actual criminal infringement by Vaulin.

Such an effort was always going to be at least something of a long shot, as courts will tend to give a lot of deference to the DOJ, and now Judge John Lee has rejected the request (as was first reported by the Hollywood Reporter).

The first issue the judge had to review was whether Vaulin could even bring his motion in the first place. The DOJ, playing hardball, argued that because Vaulin was fighting extradition to a country that he was not from and had no connection to, he shouldn’t be allowed to make any motions in court under the “fugitive disentitlement doctrine” (again, there are similarities here to the DOJ declaring Kim Dotcom a fugitive for fighting extradition — but that was in a separate effort to get to keep all of Dotcom’s assets). As the name suggests, “fugitive disentitlement doctrine” says that those who are running from the law can’t show up in court to make their arguments. And… that makes some amount of sense for actual “fugitives” who are hiding and no one knows how to find them. But that’s an entirely different situation when you’re fighting against extradition to a country you have no connection to.

Unfortunately, the court agrees with the DOJ that the fugitive disentitlement doctrine can apply here, and thus Vaulin’s motion is rejected on that basis alone. It cites a few cases on this — though most do appear to involve people who are more like actual fugitives in that they left the US to escape US law enforcement. However, the court does find one case, In re: Kashamu that involved a Nigerian national who was resisting extradition in a drug smuggling case. That case is precedent, and while I think it’s decided incorrectly, the district court can’t just ignore that precedent. And thus:

Based on these authorities, and in light of the principles undergirding the doctrine, the Court is persuaded that the elements of the fugitive disentitlement doctrine are met in this case. All three principles that the Supreme Court discussed in Degen — enforcement and mutuality, redressing the indignity of absence, and encouraging voluntary surrender — are implicated here. As long as Vaulin is in Poland, he is not within the Court’s reach. And, as far as the Court is aware, he is actively resisting extradition efforts. His attorneys represented at the most recent status hearing that there is a “real possibility” that he will agree to appear here, but also indicated that he is actively appealing the Polish courts’ decision to extradite him, a process which could take years. Thus, insofar as Vaulin is interested in participating here, he appears willing to do so only from a safe distance.

That said, the court recognizes that the Supreme Court has urged courts to apply fugitive disentitlement “with caution” so it actually digs into the merits of the motion to dismiss. And that’s where things get troublesome. There are a few different arguments the court has to respond to, so we’ll take them in turn. The first is whether or not criminal conduct occurred in the United States at all. Vaulin and Kickass Torrents are not in the US and thus they argue that if there was any criminal activity it can’t be tied to the US. The DOJ argues in response that Vaulin is still guilty of “aiding and abetting” criminal copyright infringement in the US. And the court agrees:

But the core theory underlying the indictment is that Vaulin aided, abetted, and conspired with users of his network to commit criminal copyright infringement in the United States. The first paragraph of the indictment, which is incorporated throughout, alleges that "[m]illions" of Vaulin's users resided in the United States…. The indictment goes on to allege that these users "uploaded" and "download[ed]" content,… and "obtain[ed] [ ] desired infringed copyrighted content,"…. When viewed in a light most favorable to the Government, as the Court must do at this preliminary stage, the indictment alleges acts of domestic infringement.

But… that totally misses the argument that Vaulin is actually making. No one denies that there were people in the US who used the platform for infringement. But just because people are using the platform for infringement doesn't make it criminal infringement. For something to qualify as criminal copyright infringement, it has to clear a much higher bar than just "people downloaded stuff." That may suffice (with certain caveats) for civil infringement, but criminal infringement requires a lot more, which the court ignores in that paragraph. The court also says that because Kickass Torrents had some servers in Chicago, "overt acts in furtherance of the conspiracy occurred in the United States." I'm troubled that the court completely brushes past the differences between civil and criminal copyright infringement here, because it's a big difference.

The second argument proffered by Vaulin is that torrent tracker files are not the copyright-covered work, and thus downloading or distributing torrent files cannot lead to criminal liability. The court claims that this argument "misunderstands the indictment." But… that's wrong. I'd argue the judge's reply "misunderstands the argument." Here's what the judge says:

The indictment is not concerned with the mere downloading or distribution of torrent files. Granted, the indictment describes these files and charges Vaulin with operating a website dedicated to hosting and distributing them… But the protected content alleged to have been infringed in the indictment is a number of movies and other copyright-protected media that users of Vaulin's network purportedly downloaded and distributed without authorization from the copyright holders…. The indictment describes the torrent files merely as a means of obtaining the copyrighted movies and other media.

But… uh… that totally misses the point of the argument. The torrent files are not copyright-covered content. Nowhere does Vaulin or his site distribute or reproduce copyright-covered content. Many, I’m sure, will argue that this is semantics and nitpicking, but it’s actually quite important. Yes, a torrent file can then be used to infringe, but it’s the end user potentially doing the infringement — in the same way that a VCR can be used to infringe, but it’s not Sony who is held liable for that infringement (even in the civil sense, let alone the criminal). This is the very basis of intermediary liability. But the court skips over all that and says “yeah, but there’s infringement.” Well, no shit there’s infringement. But the question is who’s actually doing the infringement. Because if it’s not Vaulin, then, this case has a problem. And the court misses that entire argument and just says “there’s lots of infringement.” That’s… bad.
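To make that distinction concrete, here's a toy sketch (a hand-rolled bencode decoder written for illustration, not a real BitTorrent library; the blob, filename, and tracker URL are all invented) showing what a .torrent file actually contains: tracker URLs, a filename, a length, and piece hashes. The media bytes themselves are never in it.

```python
# A .torrent file is just bencoded metadata: tracker URLs, filenames,
# lengths, and piece hashes. It never contains the media bytes
# themselves. This toy decoder (hand-rolled for illustration, not a
# real BitTorrent library) unpacks a minimal, made-up torrent-style
# blob to show what such a file actually holds.

def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at index i; return (value, next_i)."""
    c = data[i:i+1]
    if c == b"i":                                  # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i+1:end]), end + 1
    if c == b"l":                                  # list: l<items>e
        i, out = i + 1, []
        while data[i:i+1] != b"e":
            v, i = bdecode(data, i)
            out.append(v)
        return out, i + 1
    if c == b"d":                                  # dict: d<key><value>...e
        i, out = i + 1, {}
        while data[i:i+1] != b"e":
            k, i = bdecode(data, i)
            v, i = bdecode(data, i)
            out[k] = v
        return out, i + 1
    colon = data.index(b":", i)                    # string: <length>:<bytes>
    n = int(data[i:colon])
    return data[colon+1:colon+1+n], colon + 1 + n

# A minimal torrent-style blob (all values invented for this example).
blob = (b"d8:announce22:http://tracker.example"
        b"4:infod4:name8:film.mkv6:lengthi734003200e"
        b"6:pieces20:aaaaaaaaaaaaaaaaaaaaee")

meta, _ = bdecode(blob)
print(sorted(meta.keys()))            # → [b'announce', b'info']
print(sorted(meta[b"info"].keys()))   # → [b'length', b'name', b'pieces']
# Nothing here is the movie: just names, sizes, hashes, and a tracker URL.
```

That's the point the court skips past: hosting and distributing this metadata is not reproducing or distributing the movie itself; any reproduction of the copyrighted work happens when end users fetch the actual content from each other.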

This part, though, does make one claim that, if true, would be a lot of trouble for Vaulin: that there's evidence that Vaulin may have also run some direct download websites. If the evidence shows that Vaulin/Kickass Torrents itself was hosting infringing content, then that could be a much bigger deal and make the case a lot more legit. But even here, the court kind of breezily brushes by this fairly important point, and lumps Vaulin in with "his co-defendants' distribution of copyrighted content through direct download websites." So it's at least unclear to me whether the claim is that Vaulin himself ran direct download websites (which would be very bad for Vaulin) or that some unnamed "others" did so, in which case the specifics matter a great deal.

Next up is the key argument, which we noted up top, about there being no "secondary liability" in criminal copyright infringement. The DOJ responds — and the court accepts — that they're not actually charging him with secondary liability for criminal copyright infringement, but rather with "conspiracy" and "aiding and abetting" criminal copyright infringement. Here, unfortunately, I disagree with the argument Vaulin's lawyers made that the statute on "aiding and abetting" simply doesn't apply at all to copyright law (even though there is some debate among scholars on this). As I've noted in the past, I do think it applies, but the standard for aiding and abetting is much different from the standard for basic secondary liability. Aiding and abetting requires "a tight nexus between the mental state of the defendant and the ultimate criminal act committed by another." Merely providing a platform that many people use — and that is very rarely used for criminal copyright infringement (again, a much higher standard) — makes it difficult to see how Vaulin could qualify as aiding and abetting.

But the court ignores all of that, focusing on the question of whether or not aiding and abetting even exists for criminal copyright law — and saying that it does. And, because of that, there's no discussion of whether or not what Vaulin did qualifies as actual aiding and abetting. It's just… assumed. Now, it's possible this is (1) because the argument wasn't raised by Vaulin or (2) because this is not the stage to raise that issue, and perhaps that's true. But it does feel like the court wandered down a side path here and ignored the larger question.

Vaulin makes one final argument on this issue — that if conspiracy and aiding and abetting do apply to criminal copyright law, they should be "void for vagueness." This is kind of a "shoot the moon" argument, saying that the law is so vague that people couldn't understand if they were "aiding and abetting" criminal copyright infringement. Here, the court mainly relies on the fact that criminal copyright law is clear. But… again, that ignores whether or not it's clear what "aiding and abetting" criminal copyright infringement is. Instead, the court pivots and says that Vaulin's "behavior" — such as moving Kickass Torrents to new domains — showed that he knew what he was doing was illegal. There's a bit of a logical leap there — one could just as easily say that he moved domains and kept the site running because he believed it was perfectly legal, and he felt the foreign court orders were wrong… but… that's probably a weak overall argument.

There's one last argument, which is that the government never identifies anyone who actually criminally infringed on copyright. It claims that Vaulin aided and abetted such infringement… but never establishes the underlying infringement itself. The court basically says "this doesn't matter" or, at the very least, that it doesn't matter at this stage of the process. At trial, it notes, the DOJ will have to show infringement.

All in all, this isn't a hugely surprising ruling — but it is disappointing. The court appears to get distracted, sidetracked, and confused on a few different issues, without clearly addressing the actual underlying arguments. As happens all too often in copyright cases, the issues get blurry when judges start focusing on "but… all this infringement is happening." That may be true, but the question is who is actually liable for it, and whether or not it's actually criminal. That requires a much higher bar, and the court fails to actually show that the bar is cleared. That doesn't bode well for Vaulin.

Filed Under: aiding and abetting, artem vaulin, copyright, criminal copyright, doj, fugitive disentitlement, secondary liability
Companies: kickass torrents

Kickass Torrents Gets The Megaupload Treatment: Site Seized, Owner Arrested And Charged With Criminal Infringement

from the because-of-course dept

So just as the US government is itself accused of engaging in massive copyright infringement, the Justice Department proudly announces that it has charged the owner of Kickass Torrents with criminal copyright infringement. The site has also been seized and the owner, Artem Vaulin, has been arrested in Poland. As with the original Kim Dotcom/Megaupload indictment, the full criminal complaint against Vaulin is worth reading.

As with the case against Dotcom/Megaupload, the DOJ seems to ignore the fact that there is no such thing as secondary liability in criminal infringement. That’s a big concern. Even though Kickass Torrents does not host the actual infringing files at all, the complaint argues that Vaulin is still legally responsible for others doing so. But that’s not actually how criminal copyright infringement works. The complaint barely even shows how Vaulin could be liable for the infringement conducted via Kickass Torrents.

But, of course, that doesn't matter because the guy at Homeland Security Investigations (part of ICE, Immigration and Customs Enforcement) just spoke to the MPAA, and the MPAA said that Kickass Torrents had no permission to link to their content. Yes, link.

As part of the investigation, I have communicated with representatives of the Motion Picture Association of America (MPAA) regarding this investigation. The representatives provided me with information the MPAA had developed about KAT, among other websites. The representatives stated that the MPAA closely monitors KAT and that a significant portion of the movies available on KAT are protected by copyright. The representatives also specified that the MPAA has not granted permission to KAT to index, link, frame, transmit, retransmit, provide access to, or otherwise aid or assist those who distribute and reproduce infringing copies of copyrighted motion picture or television content of MPAA members.

Here's the thing: most of the things listed above are not rights granted by the Copyright Act. The Copyright Act is pretty specifically limited to a few rights, including reproduction and distribution. But, again, note the games played in the complaint: "index, link, frame, transmit, retransmit, provide access to" don't directly infringe on the stated exclusive rights of copyright (yes, there are some cases where some of the above may infringe on some of the exclusive rights, but it's not particularly cut and dried). So instead, the government tosses in this "otherwise aid or assist those who distribute and reproduce infringing copies of copyrighted motion picture or television content."

So, you see, once again, the government is creating a form of secondary liability for copyright infringement that does not exist in the law. That’s a problem. Because that’s not how criminal copyright law works. At all.

Furthermore, the complaint goes on about how KAT, as it calls Kickass Torrents, rejected DMCA takedown notices for a variety of reasons, but leaves out the fact that KAT is not an American company and is not under the jurisdiction of US laws. So I’m not entirely clear why US copyright laws apply here. The best they can do is note that they found a few servers that were apparently in Chicago.

The complaint spends lots of time on the fact that KAT makes a fair bit of money from advertising revenue. But, again, I'm not entirely clear how that's relevant to the claim of criminal copyright infringement. The implicit argument is clearly "people go to KAT to get infringing content, the site makes advertising money from all that traffic, thus the revenue is ill-gotten gains." But… again, that relies on the idea that KAT itself is engaged in criminal behavior. Creating a popular tool for finding content — some of which may be infringing — and then making money from advertising are separate things. It seems wrong to make the weird if-then assumption that just because the site made lots of money, it was infringing.

No one is suggesting that Kickass Torrents was not regularly used by individuals to infringe on copyrights. It was. A lot. And you can argue how horrible that is and how it was killing Hollywood and all that — but the specifics here do matter. The same arguments were made about the VCR. After all, the MPAA insisted for years that it was used exclusively to infringe on content, until the studios finally realized that it was a good idea to release content for the home video market. And, again, the US government isn't allowed to make up criminal liability concepts that aren't actually in the law. They, and their supporters, will of course now argue that it's not about secondary liability, but about "aiding and abetting." But that argument doesn't fly either. The standards for aiding and abetting are much more involved — and would require that the actual infringement be criminal. But that won't fly, because the individuals downloading via Kickass Torrents weren't violating criminal copyright law themselves.

In other words, the DOJ is trying to argue that helping a bunch of people engaged in civil copyright infringement magically turns into criminal aiding and abetting. But that’s not how the law works.

Meanwhile, the DOJ’s press release on this is filled with all the usual insane bluster:

“Copyright infringement exacts a large toll, a very human one, on the artists and businesses whose livelihood hinges on their creative inventions,” said U.S. Attorney Fardon. “Vaulin allegedly used the Internet to cause enormous harm to those artists. Our Cybercrimes unit at the U.S. Attorney's Office in Chicago will continue to work with our law enforcement partners around the globe to identify, investigate and prosecute those who attempt to illegally profit from the innovation of others.”

Funny. Is he also going to charge the US Navy for its massive copyright infringement? Or is that not the kind of copyright infringement harm Fardon goes after?

“Vaulin is charged with running today's most visited illegal file-sharing website, responsible for unlawfully distributing well over $1 billion of copyrighted materials,” said Assistant Attorney General Caldwell. “In an effort to evade law enforcement, Vaulin allegedly relied on servers located in countries around the world and moved his domains due to repeated seizures and civil lawsuits. His arrest in Poland, however, demonstrates again that cybercriminals can run, but they cannot hide from justice.”

The $1 billion of copyrighted materials is a nice touch, but again represents merely the estimated cover price, not any actual losses to the industry. Not that the DOJ wants to admit that. But the next guy is even worse, no longer just claiming that over $1 billion was distributed, but directly stating that Vaulin stole $1 billion.

“Artem Vaulin was allegedly running a worldwide digital piracy website that stole more than $1 billion in profits from the U.S. entertainment industry,” said Executive Associate Director Edge. “Protecting legitimate commerce is one of HSI's highest priorities. With the cooperation of our law enforcement partners, we will continue to aggressively bring to justice those who enrich themselves by stealing the creative work of U.S. artists.”

Aren’t law enforcement people supposed to actually know the law? There was no stealing. There may have been copyright infringement using the tool that Vaulin built, but that’s not stealing.

“Investigating cyber-enabled schemes is a top priority for CI,” said Chief Weber. “Websites such as the one seized today brazenly facilitate all kinds of illegal commerce. Criminal Investigation is committed to thoroughly investigating financial crimes, regardless of the medium. We will continue to work with our law enforcement partners to unravel this and other complex financial transactions and money laundering schemes where individuals attempt to conceal the true source of their income and use the Internet to mask their true identity.”

Illegal commerce? It was basically a search engine for free content. What illegal commerce happened there?

Yes, yes, lots of infringement happened via the site. No one denies that. But having law enforcement folks stand up and make clueless statements like these suggests they don't even understand what Kickass Torrents did, and that they just want to puff themselves up and look good for Hollywood.

Meanwhile: does anyone really believe that this move will cause anyone who used KAT to suddenly go back to purchasing movies?

Filed Under: aiding and abetting, artem vaulin, copyright, criminal copyright infringement, dhs, doj, hsi, ice, secondary liability
Companies: kickass torrents, mpaa

Whether Or Not Mississippi Attorney General Jim Hood Is In Hollywood's Pocket, He Sure Doesn't Understand Free Speech Or The Internet

from the let's-take-this-slowly dept

We already discussed the rather unbelievable (in that they are, literally, unbelievable) claims from Mississippi Attorney General Jim Hood that he didn’t know he was working with the MPAA’s top outside lawyer when he had that same lawyer, Tom Perrelli of Jenner & Block, spend time prepping him for a meeting with Google in which he attacked Google’s practices, and further when he signed his name to a ~4,000 word letter to Google that Perrelli wrote, attacking Google’s practices. He just assumed that Perrelli — who probably charges more per hour than you can possibly imagine — was doing it to help Hood out, rather than for a client. And he expects everyone to believe that, even though at the same time Hood himself had called one of the MPAA’s top lobbyists to discuss Google. And, further, he doesn’t appear to think there’s anything wrong that his political mentor, Mike Moore, who helped get him his job as Attorney General (Moore was in that job before Hood), just happened to take a cushy lobbying job paid for by Hollywood companies right around the same time. It’s all a giant coincidence.

But if Hood is going to take a step back and reflect on just how bad this looks, he sure isn't showing it. Instead, he's coming out swinging, holding a press conference in which he appears focused on revealing his own ignorance of the law and technology (with a special focus on his vast desire to censor the internet — and anyone who criticizes him). This was held yesterday, prior to Google's filing this morning challenging Hood's subpoena (the one the MPAA knew was coming). While the Google filing discussed in that previous post detailed many of the problems with Hood's legal theories, the press conference displayed an astounding lack of understanding of the law, of search engines and of basic technology.

Take a look:

He kicks it off by blaming Google for the story, arguing that it was Google who sifted through the hacked Sony emails and found these rather damning results. That’s rather insulting to the first reporters who found those emails — mainly the folks at TorrentFreak and The Verge who really kicked the whole thing off, revealing the MPAA’s plans to fund Hood’s investigation into Google. It’s interesting that he flat out claims that Google did this, considering that later (we’ll get to this), he argues that it’s defamatory for anyone to suggest that he’s in the pocket of the MPAA. Apparently, blind speculation is fine, but informed explanation based on the leaked documents is defamation.

He then suggests not only that it was Google sorting through the emails, but also that this is illegal. He claims that the law says it’s okay for reporters — but perhaps not companies. Huh?

I want to talk about a story that’s been pushed out by a large corporation called Google. I mean, they pushed this story out. They rifled through the emails that were stolen from Sony. And, you know, I equate it to rifling through someone’s stolen property. You know, someone goes in your house and steals your filing cabinet and your clothes and the drawer and sets it out on the road. Do they have the right to go through it? Certainly there’s case law saying the media has the right to publish this type of thing. But, you know, companies like Google that have pushed this story, putting it out to blogs, and it winds up, you know, they’re trying to do a story about Sony working with other industries, not just within their industry, to try to do something about intellectual property theft, and Attorneys General is unfair, to say the least, that they try to spin it, by feeding the NY Times documents and emails and things like that, to indicate somehow that Attorneys General just came to this because the motion picture industry just got involved in it.

So, again, he implies that Google was the one who rifled through the documents, and that it’s possibly illegal — and further suggests that giving actual evidence to reporters at the NY Times is, at the very least “unfair.” Hood has a very interesting understanding of journalism and fairness. But, note, of course, that he doesn’t deny what the NY Times actually reported — that he, as Attorney General — took a ~4,000 word letter from the MPAA’s top attorney and sent it to Google almost entirely unchanged.

Instead, this press conference is about attacking Google and free speech at every opportunity. Every single opportunity. He kicks it off by talking about (what else?) child pornography (because that's the copyright industry's go-to moral panic button — even though all it's done is make the child porn situation even worse by making it more difficult to find and deal with abusers). And then he excitedly talks about getting banks to stop doing business with sites, and then easily slides back and forth between child porn and copyright infringement, as if being able to block one means you can easily block the other. This is an old argument of the copyright industries, and it's simply wrong. Child porn can be identified, because the illegality is inherent in the image itself. Copyright, on the other hand, isn't about the material, but the use. You can't tell, just by looking at a picture, a song or a movie, whether it's infringing. You can't tell if it's licensed. Or if it's fair use. Or if it's public domain. But Hood doesn't seem to care. Despite no legal basis, he declares certain sites to be flat out illegal, and says that Google needs to block them completely. He even seems to recognize the lack of a legal basis, because he says that Google should just "team up with a non-profit" who will "make a list" and Google should censor that list.

Let’s just be clear about what Attorney General Jim Hood is asking for here: he’s asking for a censorship list of sites that need to be blocked without a legal review or court order.

And, then, from child porn to copyright… he moves on to drugs.

Here’s an example. Just today, my investigators typed in the words “buy drugs.” [points to screen] And if you look right here, on this list, this is Google, what do you find here: Silkroad.org! That is an illegal drug site that was taken down by the federal government. And here’s CanadianDrugs.com. That is the company, from which our investigators bought drugs online…. They know about that website, because I wrote them a letter and said here’s where we made these purchases online. But yet, even today, some kid in Mississippi, types in “buy drugs,” they’re going to find a way to buy some.

Huh. First off, Hood is flat out wrong about Silk Road. It was never at SilkRoad.org (that appears to be some author’s site). It was a Tor hidden site. It never had a URL like that. Second, the site that’s actually at the top of the list (SilkRoadDrugs.org) is a blog/news site about Silk Road and other types of hidden markets. This actually demonstrates the very serious problem with Hood’s argument. What he’s doing here is calling for the flat out censorship of a news publication for writing about hidden markets online (and the legal issues related to them). Hood and his staffers are so clueless that they can’t tell the difference between an actual online drug market and a news site. And yet, they think that Google should just automatically censor sites on his say so? That’s not how it works.

Furthermore, this highlights the absolute stupidity of trying to demand that Google ban “illegal” websites. In this case, you could never find Silk Road via Google (no matter what Hood wrongly claims) because it never had a Googleable URL. Instead, you could only find websites that then explain how to get to Silk Road. And that’s exactly what would happen if you banned other sites. In its place would pop up informational sites — protected by the First Amendment — that explain how to get to the sites that Google was told to block. Thus, blocking those sites doesn’t solve the problem in the slightest.

Second, the site that the search links to is not “CanadianDrugs.com,” but “CanadaDrugs.com,” (though CanadianDrugs leads to the same site) which is a site that is actually backed by PharmacyChecker, the Canadian International Pharmacy Association and the Manitoba International Pharmacists Association as being a legitimate online pharmacy. Now, yes, some can argue about the issue of importing drugs into the US from Canada, but importing legitimate drugs from Canada is very different from buying illegal drugs — and in fact, politicians from President Obama to Senator Patrick Leahy have pushed for allowing greater importation of drugs from Canada as a way to make drugs more affordable. Apparently, Jim Hood doesn’t want poor people to be able to buy cheaper drugs. Nice guy.

Third, and most importantly, Hood's populist crap about kids from Mississippi being able to buy drugs online — that's not because of Google. Hell, Hood himself can be blamed just as much as Google, because at this very press conference he publicly announced a website from which he claims he was able to buy illegal drugs. Think about that for a second. If you're a kid in Mississippi looking to buy illegal drugs online, which are you going to do: run a random search for "buy drugs" on Google and hope for the best, or go directly to the website that the Attorney General of your state just said you can buy illegal drugs from? If the "crime" that Google has committed here is pointing people to a site from which they can buy illegal drugs, hasn't Attorney General Jim Hood violated the very same law (in an even worse fashion, by flat out saying there are illegal drugs there)?

Either way, if the site is illegal, that’s not Google’s decision to make. Get a court order declaring the site illegal and present that to Google. And, of course, as mentioned above, that still won’t solve your problem, because it will just result in First Amendment protected websites telling users how to get to the sites that Google has banned.

Jim Hood doesn’t seem to understand any of this, and it shows a real lack of understanding of both the First Amendment and the internet.

That’s the kind of stuff we’re talking about. We’re talking about prescription drugs. Now, if they’re stealing music and movies and software, you know, the piracy issues, that’s bad! That’s a crime. And if Google is assisting them, they’re assisting in a crime.

Okay, you’d hope that an Attorney General would know the law, but it looks like he doesn’t know the first thing about the law here. First off, copyright infringement has both civil and criminal parts, but most copyright infringement is civil, not criminal (also, it’s infringement, not “stealing.”) Second, even if we were talking about criminal copyright infringement (which has a few requirements), to then argue that Google’s role of linking to websites when people ask for those sites equals “assisting” shows a complete lack of understanding of the concept of “aiding and abetting” in criminal law. Aren’t Attorneys General supposed to know this stuff? Just because a tool is used to commit a crime, it doesn’t make the provider of the tool guilty of assisting. Again, if that were the case, then Attorney General Jim Hood himself broke the law multiple times in this press conference when he named sites where you can buy drugs and download music (he talked a lot about MP3Skull, for example).

In fact, Google got caught. They paid a half a billion dollar fine. Do y’all remember that? They paid a five hundred million dollar fine to the federal government.

Except, no. Again, Hood is misrepresenting things. Google did pay a $500 million fine, but not for its search results. That was for advertisements in its AdWords program for a questionable pharmacy — and the details in that case involved an ad sales guy for Google who seemed to go out of his way to help an obviously up-to-no-good federal informant set up ads for a pharmacy to sell illegal drugs. In that case, there was fairly clear evidence that the employee in question was directly assisting questionable behavior. But that's very, very different from organic search results.

Once again, you’d hope that someone like an Attorney General bent on going after search engines would understand the difference between advertisements and organic search. But Hood doesn’t seem to know or care.

But you can still go on here and find all kinds of drugs. Heroin!

Is he really claiming that you can buy heroin on Google? Because that’s bullshit. I’m sure there are places that you can buy heroin online but the idea that Google is the reason people are buying heroin is insane. Not being particularly knowledgeable about the world of heroin, I just spent some time trying fairly hard to search Google for a way to buy heroin and came up empty, other than (again) some news websites that basically just tell you about hidden drug markets online (please point the DEA or DOJ to this paragraph should they show up at my door asking why I’ve been doing those searches).

From there, Hood launches into a full-on assault on the internet. Or rather, his ignorant view of the internet and the law.

The internet, as I’ve been talking about… is the future of crime.

What does that even mean?

Their motto is do no evil, but all I’m finding as I work with them is that they’re pushing evil. They’re pushing evil. They’re pushing other companies that deserve their respect. But more important to me, they’re creating a highway for my children and our children in Mississippi to buy drugs, to have human trafficking, to buy fake IDs. I mean we have examples of how you can go online right now and buy fake IDs.

Hey, Jim Hood, stop confusing “Google” with “the internet.” Google is a search engine. It finds what’s on the internet. It is not the internet. It is not responsible for what people find or what they’re looking for. This is a basic concept. It would help if you learned it.

We’ve got YouTube videos of how to buy a fake ID.

Yes, that’s also known as protected speech and it’s done by people who are not Google. I’m sure there are books you can buy about fake IDs as well, but you don’t blame the US Postal Service for delivering them when someone buys one.

Google says “well, our system, we can’t track that.” Well, we found that isn’t true. Because they take down this “prescription drug without a prescription” autocomplete.

Yes, here Hood seems to be conflating a variety of things. Yes, Google can and does edit autocomplete suggestions in response to complaints, but that's very different from saying "this entire site is bad." These are totally different things. Conflating the two suggests a level of technical ignorance that is kind of scary for someone so intent on censoring the internet. But, no worries, Hood's got a solution: he supports blatant, out-and-out censorship, like that found in Germany:

If you go to Germany and type in “Nazi” you can’t find anything.

First of all, that’s not true. Just try it. Second, even if it were, Germany’s restrictions on Nazism would be a violation of the First Amendment here in the US, the kind of thing that an Attorney General should know about.

He goes on to insist this has nothing to do with SOPA or even filtering the internet… but that he just wants Google to filter these bad sites out. This gets back to the fundamental problem with supporters of censorship like Jim Hood. They think that “bad” is an objective thing, and that if they think a site is bad, it’s “obviously” bad, and thus blocking it isn’t censorship or filtering. Furthermore, Hood doesn’t even seem to recognize that Google has been pretty active in pushing down a variety of sites associated with infringement, to the point that various torrent sites have almost entirely disappeared from its search results. That’s exactly what Hood was asking for, so why doesn’t he admit that Google has already done much of it?

From there, the press conference went to questions, and the first reporter points out the obvious: “I don’t know why you’re going after Google on this, rather than the actual websites.” To that, Hood said, again, that Google was “assisting” these websites, and claims that Google is making money from these sites advertising on Google. Yet, none of the examples he showed involved advertisements. They were all organic search. And, even if they were advertisements, Google is just a platform. Anyone can go in there and buy an ad saying just about anything. How is that Google’s fault?

The same reporter then pointed to the YouTube videos about buying fake IDs and said “would that not be a stifling of the First Amendment?” And Hood’s response:

No. Because Google’s not the government. They don’t owe a First Amendment… they should just say “we’re not going to do business with you website unless you clean up your act.”

He’s right that Google is not the government, and Google can choose to block anything or set up its search results however it likes. But the entire discussion here is about Hood, a government official, claiming that Google is breaking the law in not doing this. That is the First Amendment problem. And Hood doesn’t even seem to understand that. In fact, just minutes later, he insists “there’s going to be a court battle” to determine if Google has to block these sites. If that’s the case, then it very much involves the government stepping in and deciding a free speech issue.

The reporter — whoever he is, and he seems to fully understand all the things that Hood doesn’t — points out, again, that the real problem is with the actual websites, not Google. And then asks “what law is Google actually violating here.” You can almost see the panic on Hood’s face as he recognizes his unchecked anger wasn’t about anything actually illegal. But he tries, valiantly, to come up with something:

Well, if you’re made aware of it, then you’re an accessory to it.

Again, huh? Pointing people to information does not make you an accessory to a crime. No matter how many times Hood wants to claim it, he’s wrong. Or, if he’s right, HE VIOLATED THAT SAME LAW in this very press conference. Hood then claims that Google has to know entire sites are illegal based on DMCA notices concerning some content on those sites being infringing. Right. But under that theory, YouTube shouldn’t even exist, because Viacom insisted it was illegal based on DMCA notices and then failed in court.

This is the point. You don’t just get to point to a website and say “that’s illegal.” Go to court. Have an adversarial trial and then prove that a site is breaking the law. That’s fine. But that’s not what Hood is saying. He’s saying if he or the movie industry or the drug industry points to a website and says “this site is illegal,” then it should be blocked. But given how the entertainment industry has a long history of declaring new innovations illegal (the player piano, the radio, cable TV, the mp3 player, the VCR, YouTube, etc. etc. etc.), I’d much rather we don’t go down the road where a particular entrenched industry gets a veto on innovations.

From there, Hood goes on to attack state attorneys general’s favorite bugaboo: Section 230 of the CDA. That law makes it clear that platform providers aren’t responsible for the actions of their users. And this makes perfectly good common sense: you don’t blame AT&T when someone calls in a bomb threat, and you don’t blame Ford when its car is used as a getaway car. But Hood really wants to blame internet companies for the way people use them. Because companies are easier to go after than the people actually doing wrong — and it generates much bigger headlines.

That was created because we wanted the internet to flourish. Companies like Google are using that shield as a sword. They’re saying you can’t come after us because we have immunity, even though we’re changing our algorithm, and we’re doing autocomplete. They’re doing all that. They have that immunity unless they change that information. And when they use that autocomplete that’s when they step into no man’s land. They don’t have that Section 230 protection.

First of all, no, Section 230 wasn’t created just because we wanted the internet to flourish. That was a side effect. Section 230 was created to put liability on the actual parties responsible rather than allowing grandstanders and ignorant people to go after service providers for the way people use their services. And Google isn’t saying that it has immunity just because of the law, but because it’s not the one doing the actions in question.

As for the whole autocomplete bit, Hood is again totally misrepresenting things. Section 230 actually encourages companies to make edits like that, and specifically says that companies can’t be held liable for making such choices, precisely because it wants to encourage companies to do exactly what Hood is talking about. Arguing that Google is liable for changing autocomplete results or adjusting its algorithm would create an incentive for companies to do nothing at all, lest they be blamed for their own moderation choices. Hood’s own solution would make things much, much worse. Section 230(c)(2) makes this clear: “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

I know Hood doesn’t like the law, but the least he can do is understand it before misrepresenting it.

Then there’s this: Hood directly calls for an insider to blow the whistle on Google:

I want some insiders to come forward who have been done wrong by Google from the Valley. I’m putting out a call to anybody out there that has information on how Google has done illegal acts. I’m trying to reach out to those people in the Valley or wherever they are. Because it’s going to take an insider to bring this company to the point where it follows the law.

Right. So, throughout this talk, Hood slams anyone looking at Sony’s documents as “rifling through stolen information,” and then in the very same press conference asks people to leak him insider information from Google. Does the man have no self-awareness at all?

At the end, he then admits that his office doesn’t understand this stuff or have a budget — so he has to rely on the rather biased MPAA instead:

Reporter question: Is there a budget for legal support?

Hood: For our office? [shakes his head] You know, I’ve got investigators…. The answer to your question is No. We’re going to have to spend some money. We don’t have experts in the area of intellectual property theft. We’re going to have to rely on lawyers that have that type of expertise. So, the legislature doesn’t give me a budget of a million dollars just to do these kinds of major investigations, so we have to work with and rely on industries — and we’re going to work with them and their lawyers.

Uh, so there he is taking back basically everything he had previously said about the MPAA and admitting that he is relying on them and their lawyers? Going to industry for help is one thing. But having a very biased industry (with a known — if totally misguided — hatred for Google) not just driving the investigation but spending that million dollars on it and writing up the letters for Hood to send and prepping him for his meetings — that seems like a very different thing than just “talking to the victims” as Hood repeatedly claimed earlier in the interview.

Also, it appears that Hood knows fuck-all about defamation law:

Hood: Implications AGs have been paid off might even be actionable.

— Therese Apel (@TRex21) December 18, 2014

That came after the video above ended, so I’m not entirely sure of the exact context of the quote, but accurately reporting on leaked documents showing that the MPAA was funding a state Attorney General’s investigation and writing his key documents is not actionable in any way, shape or form. And it’s doubly ridiculous given that in this very same conference, Hood himself made a variety of speculative statements claiming that Google was the one who went through the emails and “spun” the story to the press.

But is it really any surprise that an Attorney General who is relying on a massively funded MPAA investigation to try to stifle free speech online is now implicitly threatening those who report on it with defamation lawsuits? So not only is he trying to censor the internet, he’s trying to intimidate reporters into shutting up as well. Free speech be damned.

But it’s okay. It’s all for the children of Mississippi. And the headlines.

Filed Under: aiding and abetting, cda 230, copyright, free speech, illegal drugs, internet, jim hood, mississippi, piracy, secondary liability
Companies: google, mpaa