material support – Techdirt

Stories filed under: "material support"

A Case Where The Courts Got Section 230 Right Because It Turns Out Section 230 Is Not Really All That Hard

from the helpful-precedent dept

Having just criticized the Second Circuit for getting Section 230 (among other things) very wrong, it’s worth pointing out an occasion where it got it very right. The decision in Force v. Facebook came out last year, but the Supreme Court recently denied any further review, so it’s still ripe to talk about how this case could, and should, bear on future Section 230 litigation.

It is a notable decision, not just for its result upholding Section 230 but for how it cut through much of the confusion that tends to plague discussion of Section 230. It brought the focus back to the essential question at the heart of the statute: who imbued the content at issue with its allegedly wrongful quality? That question really is the only thing that matters when it comes to figuring out whether Section 230 applies.

This case was one of the many seeking to hold social media platforms liable for terrorists using them. None of them have succeeded, although for varying reasons. For instance, in Fields v. Twitter, in which we wrote an amicus brief, the claims failed but not for Section 230 reasons. In this case, however, the dismissal of the complaint was upheld on Section 230 grounds.

The plaintiffs put forth several theories about why Facebook should not have been protected by Section 230. Most of them tried to construe Facebook as the information content provider of the terrorists’ content, and thus not entitled to the immunity. But the Second Circuit rejected them all.

Ultimately the statute is simple: whoever created the wrongful content is responsible for it, not the party who simply enabled its expression. The only question is who created the wrongful content, and per the court, “[A] defendant will not be considered to have developed third-party content unless the defendant directly and ‘materially’ contributed to what made the content itself ‘unlawful.'” [p. 68].

Section 230 really isn’t any more complicated than that. And the Second Circuit clearly rejected some of the ways people often try to make it more complicated.

For one thing, it does not matter that the platform exercised editorial judgment over which user content it displayed. After all, even the very decision to host third-party content at all is an editorial one, and Section 230 has obviously always applied in the shadow of that sort of decision.

The services have always decided, for example, where on their sites (or other digital property) particular third-party content should reside and to whom it should be shown. Placing certain third-party content on a homepage, for example, tends to recommend that content to users more than if it were located elsewhere on a website. Internet services have also long been able to target the third-party content displayed to users based on, among other things, users’ geolocation, language of choice, and registration information. And, of course, the services must also decide what type and format of third-party content they will display, whether that be a chat forum for classic car lovers, a platform for blogging, a feed of recent articles from news sources frequently visited by the user, a map or directory of local businesses, or a dating service to find romantic partners. All of these decisions, like the decision to host third-party content in the first place, result in “connections” or “matches” of information and individuals, which would have not occurred but for the internet services’ particular editorial choices regarding the display of third-party content. We, again, are unaware of case law denying Section 230(c)(1) immunity because of the “matchmaking” results of such editorial decisions. [p. 66-67]

Nor does it matter that the platforms use algorithms to help automate editorial decisions.

[P]laintiffs argue, in effect, that Facebook’s use of algorithms is outside the scope of publishing because the algorithms automate Facebook’s editorial decision-making. That argument, too, fails because “so long as a third party willingly provides the essential published content, the interactive service provider receives full immunity regardless of the specific edit[orial] or selection process.” [p. 67]

Even if the platform uses algorithms to decide whether to make certain content more “visible,” “available,” and “usable,” that does not count as developing the content. [p. 70]. Nor does simply letting terrorists use its platform to make it a partner in the creation of their content. [p. 65]. The court notes that in cases where courts have found platforms liable as co-creators of problematic content, they had played a much more active role in the development of specific instances of problematic expression than simply enabling it.

Employing this “material contribution” test, we held in FTC v. LeadClick that the defendant LeadClick had “developed” third parties’ content by giving specific instructions to those parties on how to edit “fake news” that they were using in their ads to encourage consumers to purchase their weight-loss products. LeadClick’s suggestions included adjusting weight-loss claims and providing legitimate-appearing news endorsements, thus “materially contributing to [the content’s] alleged unlawfulness.” [We] also concluded that a defendant may, in some circumstances, be a developer of its users’ content if it encourages or advises users to provide the specific actionable content that forms the basis for the claim. Similarly, in Fair Housing Council v. Roommates.Com, the Ninth Circuit determined that—in the context of the Fair Housing Act, which prohibits discrimination on the basis of sex, family status, sexual orientation, and other protected classes in activities related to housing—the defendant website’s practice of requiring users to use pre-populated responses to answer inherently discriminatory questions about membership in those protected classes amounted to developing the actionable information for purposes of the plaintiffs’ discrimination claim. [p. 69]

Of course, as the court noted, even in Roommates.com, the platform was not liable for any and all potentially discriminatory content supplied by its users.

[I]t concluded only that the site’s conduct in requiring users to select from “a limited set of pre-populated answers” to respond to particular “discriminatory questions” had a content-development effect that was actionable in the context of the Fair Housing Act. [p. 70]

The court also wove throughout the decision an extensive discussion [see, e.g., p. 65-68] of that perpetual red herring: the term “publisher,” which keeps creating confusion about the scope of the law. One of the most common misconceptions about Section 230 is that it hinges on some sort of “platform v. publisher” distinction, immunizing only “neutral platforms” and not anyone who would qualify as a “publisher.” People often mistakenly believe that a “publisher” is the developer of the content, and thus not protected by Section 230. In reality, for purposes of Section 230, platforms and publishers are one and the same, and therefore all protected by it. As the court explains, the term “publisher” just stems from the understanding of the word as “one that makes public,” [p. 65], which is essentially what a platform does when it distributes others’ speech. And that distribution is not the same thing as creating the offending content, even if the platform has made editorial decisions with respect to that distribution. Being a publisher has always entailed exercising editorial judgment over what content to distribute and how, and, as the court makes clear, exercising that judgment is not suddenly a basis for denying platforms Section 230 protection.

Filed Under: 2nd circuit, carl force, material support, section 230, terrorism
Companies: facebook

Losing Streak Continues For Litigants Suing Social Media Companies Over Violence Committed By Terrorists

from the twelve-straight-losses-to-open-the-season dept

According to Eric Goldman’s count (and he would know), this is the 12th ridiculous “blame Twitter for terrorism” lawsuit to be tossed by a federal court. The dubious legal theory — one so dubious it has yet to find any judicial takers — is that Twitter and other social media platforms “allow” terrorists to converse and radicalize and do other terrorist things. What no one has successfully alleged is that Twitter, Facebook, etc. are directly or indirectly responsible for terrorist attacks.

This lawsuit was one of the dumbest. The brain geniuses at Excolo Law convinced a client this would be a winning strategy: claim the shooting of several Dallas police officers was Twitter’s fault because the shooter possibly thought terrorist group Hamas was pretty cool. 96 pages of lawsuit, and this was the tenuous allegation plaintiffs Jesus Retana and Andrew Moss thought might finally prove social media companies are providing material support to terrorists.

Micah Johnson was radicalized by HAMAS’ use of social media. This was the stated goal of HAMAS. Johnson then carried out the deadly attacks in Dallas. Conducting terrorist acts in the United States via radicalized individuals is a stated goal of HAMAS.

Not only did the lawsuit fail to include anything linking Twitter to the killing of Dallas police officers, it failed to include anything linking the shooter to Hamas. That didn’t stop Excolo Law from claiming that the only thing propelling the shooter to start killing Dallas police officers was Hamas’ social media presence, aided and abetted by Twitter.

As Goldman points out, the court “expressly does not reach the Section 230 defense.” That’s not because it’s not a good defense. It’s because the lawsuit — and the law firm shoveling as many of these into federal courts as possible — is so awful.

The court opens its dismissal [PDF] by noting the string of courtroom failures Excolo Law (and 1-800-LAW-FIRM) doesn’t seem to be interested in discussing when pursuing another lost cause in a federal court.

This case is the latest in a string of lawsuits that Plaintiffs’ lawyers have brought in an attempt to hold social media platforms responsible for tragic shootings and attacks across this country—by alleging that the platforms enabled international terrorist organizations to radicalize the attacks’ perpetrators. In fact, Plaintiffs’ lawyers brought a suit in the Northern District of California, Pennie v. Twitter, Inc., 281 F. Supp. 3d 874 (N.D. Cal. 2017), concerning the same Dallas shooting this Court is confronted with here, albeit with different plaintiffs. The court in that case dismissed the claims with prejudice, finding that there was no connection between the shooting and Hamas, the terrorist organization at issue. Id. at 892. Yet, Plaintiffs’ counsel made no mention of that case in their briefing; counsel discussed the case only after the Court questioned [them] about it at oral argument.

The court then notes it can do its own research if the law firm isn’t willing to discuss past work that hews super-closely to the case at hand. GTFO, says the Texas federal court.

The Court dismisses this lawsuit with prejudice. Although the complaint here alleges additional facts not found in Pennie, the complaint nonetheless suffers from many of the same deficiencies discussed in Pennie. Plaintiffs here have not and, after multiple attempts, clearly cannot connect Hamas to the Dallas shooting.

Need more? No link between the cop killer and Hamas:

Simply put, the SAC does not allege any facts that show that Hamas radicalized Johnson to commit the Dallas attack, not to mention by using Defendants’ websites. Plaintiffs’ injuries, therefore, were not “by reason of” Hamas, or Defendants’ alleged support of Hamas.

No link between the claimed violation of the ATA (Anti-Terrorism Act) and the Dallas shooting, either:

Plaintiffs’ secondary liability claims fail for an additional, yet similar, reason: Plaintiffs do not allege that the Dallas shooting was an act of international terrorism.

[…]

[T]he SAC is devoid of allegations connecting Hamas to the shooting, even after it occurred. There is no transnational component to Johnson’s planning and execution of the shooting. Instead, this tragic shooting appears to be an act of domestic terrorism.

The case is dismissed with prejudice, continuing Excolo Law’s losing streak. This obviously won’t keep the firm from trying again, not as long as it can convince victims of violence they have a shot at extracting a large settlement from social media companies. Sure, it hasn’t worked yet. But that can only mean Excolo, et al are due for a win! Right?

Filed Under: andrew moss, intermediary liability, jesus retana, material support, section 230, social media, terrorists
Companies: 1-800-lawfirm, excolo law, twitter

While Trump Complains About Facebook Takedowns, Facebook Is Helping Trump Take Down Content He Doesn't Like

from the oh,-look-at-that dept

You might have noticed in the last week or two that President Trump has suddenly jumped on the silly bandwagon suggesting that internet platforms like Facebook and Twitter don’t have a right to kick people off of their platforms. He’s made a bunch of misleading tweets, but we’ll just post the one that kicked it all off:

In it, Trump says:

I am continuing to monitor the censorship of AMERICAN CITIZENS on social media platforms. This is the United States of America — and we have what’s known as FREEDOM OF SPEECH! We are monitoring and watching, closely!!

Of course, “FREEDOM OF SPEECH” in the American context only applies to the government interfering with the rights of people to express themselves, and has no bearing on companies choosing to kick off people they find problematic. Indeed, part of the 1st Amendment is that it provides the platforms — as private entities — the right to determine who they will and won’t associate with.

But a new Wired article suggests a striking contrast: Facebook is quick to respond and shut down the accounts of those designated as undesirable by Trump’s own government. It’s difficult not to read this as somewhat hypocritical. The issue relates to another story we discussed last month, in which the Trump White House declared Iran’s IRGC a “foreign terrorist organization.” The Islamic Revolutionary Guard Corps is basically Iran’s military/security/law enforcement wing — and this is the first time the US has designated part of a foreign government as a foreign terrorist organization. And Facebook immediately accepted this designation from the White House and banned any related accounts:

The day after Trump’s move, Instagram, a Facebook property, blocked the accounts of high-ranking Revolutionary Guard officers. And the next week The New York Times reported that Fishman had said Facebook would have zero tolerance for any group the US deems a terrorist organization.

In short, at the same time as Trump is incorrectly referencing the 1st Amendment with regards to Facebook’s private moderation decisions, his own White House is effectively able to dictate to Facebook what accounts need to be taken down:

So basically Trump can tell Facebook to de-platform any part of any foreign government—including, presumably, an entire foreign government—and Fishman, along with Facebook CEO Mark Zuckerberg, will reply with a crisp salute? Under Facebook’s current policy, that would seem to be the case.

Wired’s Robert Wright asks Facebook’s “global head of counterterrorism policy,” Brian Fishman, to defend this, and Fishman basically says “I’m just following orders”:

When I asked Fishman to justify this policy, he said it’s designed to keep Facebook on the right side of the law, which prohibits Americans from providing “material support” to any group deemed a “Foreign Terrorist Organization.”

But, I replied, the law goes on to spell out the things that would constitute “material support,” and none of them sound much like “letting these groups post on your social media platform.” Fishman said, “I’m not a lawyer. I’m a policy guy.”

You can understand why Facebook might wish to avoid running afoul of material support for terrorism laws (though the few attempts to hit social media companies with this law have all failed miserably), but the end result is this bizarre situation in which the President is whining about Facebook blocking accounts (actions by a private company in which the government has no say), while his own government is using its (questionable) powers to have accounts banned on Facebook (which potentially does have more actual 1st Amendment implications).

Filed Under: brian fishman, content moderation, donald trump, foreign terrorist organization, iran, irgc, material support, social media

Another Attempt To Tie Twitter To Terrorist Acts And Another Dismissal With Prejudice

from the Definition-Of-Insanity,-PLLC dept

“A series of lawsuits,” the court calls it. This is the ongoing work of 1-800-LAW-FIRM and Excolo Law — two firms that specialize in bringing losing lawsuits to federal courts. It’s a series of lawsuits and a series of losses. An unbroken string of dismissals at both the district and appellate levels — all in response to the firms’ attempts to hold social media companies responsible for the acts of terrorists.

Mandy Palmucci — a victim of the terrorist attacks in Paris, France — filed an incredibly long lawsuit (121 pages!) last year with the assistance of these two law firms. She needn’t have bothered. This one joins the pile of rejected complaints passing through the federal court system. (h/t John Roddy)

The only thing notable about this latest loss is how irritated Judge William H. Orrick seems to be with these lawsuits that keep landing in his court. Handling one of these lawsuits twice appears to have dug deep into Judge Orrick’s reserves of patience. From the decision [PDF]:

In two decisions – Fields v. Twitter, Inc., 217 F. Supp. 3d 1116 (N.D. Cal. 2016) and Fields v. Twitter, Inc., 200 F. Supp. 3d 964 (N.D. Cal. 2016) – I concluded that surviving family members of government contractors killed by an ISIS-identified terrorist could not pursue claims for direct liability under the ATA (or related state law claims) because there was no proximate cause “between Twitter’s provision of accounts to ISIS and the deaths of” plaintiffs’ family members. Id. at 1127. I also held that Twitter was immune from liability for its provision of services to users (even terrorist users) under Section 230 of the Communications Decency Act (CDA), 47 U.S.C. § 230(c).

The judge points out the Appeals Court reached the same conclusions, but more expeditiously. It decided the plaintiffs’ “proximate cause” claims were so weak it didn’t even need to discuss Section 230 immunity.

Then the judge sends a not-too-subtle message to the law firms pushing these baseless lawsuits:

Following the Fields decisions, materially similar direct liability claims have been rejected by numerous judges in this District and elsewhere. See Clayborn v. Twitter, Inc., 17-CV-06894- LB, 2018 WL 6839754 (N.D. Cal. Dec. 31, 2018); Copeland v. Twitter, Inc., 352 F. Supp. 3d 965, 17-CV-5851-WHO (N.D. Cal. 2018); Taamneh v. Twitter, Inc., 343 F. Supp. 3d 904, 17-CV04107-EMC (N.D. Cal. 2018); Cain v. Twitter Inc., 17-CV-02506-JD, 2018 WL 4657275 (N.D. Cal. Sept. 24, 2018); Gonzalez v. Google, Inc., 335 F. Supp. 3d 1156, 16-CV-03282-DMR (N.D. Cal. 2018) (Gonzalez II); Gonzalez v. Google, Inc., 282 F. Supp. 3d 1150 (N.D. Cal. Oct. 23, 2017) (Gonzalez I); Pennie v. Twitter, Inc., 281 F. Supp. 3d 874, 17-CV-00230-JCS (N.D. Cal. Dec. 4, 2017); see also Crosby v. Twitter, Inc., 303 F. Supp. 3d 564 (E.D. Mich. March 30, 2018).

Given the short and whatever’s the opposite of “illustrious” history of these lawsuits, the judge asked the plaintiff why he should bother allowing the case to proceed.

In light of the similarities between Palmucci’s theories of liability and factual allegations here and those in Copeland et al v. Twitter, Inc. et al., No. 17-CV-05851-WHO and Fields v. Twitter, No. 16-CV-0213-WHO, I issued an Order on November 30, 2018, requiring plaintiff to “file a supplemental brief not exceeding five pages identifying what material facts differentiate this case from the facts pleaded in Copeland, Fields” and two other decisions from this District, Cain v. Twitter Inc., No. 17-CV-02506-JD and Gonzalez v. Google, Inc., 16-CV-03282-DMR.

And received, “Ummmmmm… because?” for a reply:

Palmucci was given an opportunity to explain why – in light of the caselaw identified above – her case should continue. She declined, essentially admitting that no additional facts could be alleged that might state her claims under the ATA or state law.

Dismissed with prejudice. That means there will be no re-filing of this lawsuit. Just the inevitable appeal — one that will be headed to an appeals court that’s already found these lawsuits baseless. Another rejection awaits, and a bit more of the courts’ time will be wasted by a couple of opportunistic law firms that have discovered a way to make money without actually being of any use to their clients.

Filed Under: mandy palmucci, material support, section 230, william orrick
Companies: 1-800-law-firm, excolo law, twitter

Another Lawsuit And Another Loss For Plaintiffs Trying To Make Twitter Pay For Terrorism

from the redefining-'social-media-strategy' dept

This flow of especially pointless lawsuits doesn’t appear to be drying up — fed mainly from the (revenue) streams maintained by 1-800-LAW-FIRM and Excolo Law. Neither does the flow of courtroom losses. These two firms are responsible for most of the lawsuits we’ve covered that attempt to hold social media companies responsible for international acts of terrorism.

The legal theory behind the suits is weak. Attempting to avoid Section 230 immunity, the suits posit that the presence of terrorists on social media platforms is a violation of various federal laws targeting terrorist organizations. Section 230 defenses have been raised by Twitter, Facebook, et al, but these usually aren’t addressed by the courts because there’s not enough in the terrorism law-related arguments to keep the suits alive.

According to Eric Goldman — who has snagged the latest dismissal [PDF] — this is the seventh time a federal court has tossed one of these suits. If you’re familiar with the other cases we’ve covered, you know what’s coming. The California federal court’s decision quotes Ninth Circuit precedent from a similar lawsuit that said plaintiffs have to show a direct relationship between social media services’ action and the act of terrorism prompting the lawsuit. In this case, the complaint fails to do so.

In Fields, the Ninth Circuit addressed what is meant by the phrase “by reason of an act of international terrorism.” It began by noting that the “‘by reason of’ language requires a showing of proximate causation.” Fields, 881 F.3d at 744. It rejected the plaintiffs’ contention that “proximate causation is established under the ATA when a defendant’s ‘acts were a substantial factor in the sequence of responsible causation,’ and the injury at issue ‘was reasonably foreseeable or anticipated as a natural consequence.’” Id. Instead, it held that, “to satisfy the ATA’s ‘by reason of’ requirement, a plaintiff must show at least some direct relationship between the injuries that he or she suffered and the defendant’s acts.” Id. (emphasis added).

And, although the facts of this case are a little different than those in the cited decision, the allegations in the plaintiff’s lawsuit undermine its arguments about direct or proximate responsibility.

The instant case is somewhat different from Fields in that, here, Plaintiffs have made one allegation suggesting that Mr. Masharipov’s attack was in one way causally affected by ISIS’s presence on the social platforms. Specifically, Plaintiffs allege that Mr. Masharipov was “radicalized by ISIS’s use of social media.” FAC ¶ 493. However, this conclusory allegation is insufficient to support a plausible claim of proximate causation.

Plaintiffs do not allege that Mr. Masharipov ever saw any specific content on social media related to ISIS. Nor are there even any factual allegations that Mr. Masharipov maintained a Facebook, YouTube, and/or Twitter account. Furthermore, there are allegations in the complaint suggesting that there were other sources of radicalization for Mr. Masharipov. See, e.g., FAC ¶ 337 (alleging that Mr. Masharipov “had previously received military training with al-Qaeda in Afghanistan in 2011”); see also Iqbal, 556 U.S. at 678 (stating that, “[w]here a complaint pleads facts that are ‘merely consistent with’ a defendant’s liability, it ‘stops short of the line between possibility and plausibility of “entitlement to relief”’”). Finally, a direct relationship is highly questionable in light of allegations suggestive of intervening or superseding causes – in particular, Plaintiffs have alleged that, after becoming radicalized, Mr. Masharipov would have a “year-long communication and coordination [with] Islamic State emir Abu Shuhada” to carry out the Reina attack. FAC ¶ 334. Moreover, Plaintiffs fail to allege any clear or direct linkage between Defendants’ platforms and the Reina attack.

The allegations under another anti-terrorism law are no better. This argument posits that the existence of terrorist-owned accounts is the same thing as providing support for terrorist acts or organizations. The court again finds the allegations don’t approach the legal requirements for liability.

Here, Plaintiffs have failed to allege that Defendants played a major or integral part in ISIS’s terrorist attacks; for example, there are no allegations that ISIS has regularly used Defendants’ platforms to communicate in support of terrorist attacks. Also, for factor (4), i.e., the defendant’s relation to the principal wrongdoer, the Halberstam court indicated that a close relationship or a relationship where the defendant had a position of authority could weigh in favor of substantial assistance. Here, there is no real dispute that the relationship between Defendants and ISIS is an arms’-length one – a market relationship at best. Rather than providing targeted financial support, […] Defendants provided routine services generally available to members of the public. As to factor (5), i.e., the defendant’s state of mind, the Halberstam court indicated that, where the defendant “showed he was one in spirit” with the principal wrongdoer, id., that could also weigh in favor of substantial assistance. Cf. NAACP v. Claiborne Hardware Co., 458 U.S. 886, 920 (1982) (noting that, “[f]or liability to be imposed by reason of association alone, it is necessary to establish that the group itself possessed unlawful goals and that the individual held a specific intent to further those illegal aims”). But here there is no allegation that Defendants have any intent to further ISIS’s terrorism.

The entire suit — including state claims for wrongful death and emotional distress — is dismissed with prejudice. The only thing left for the plaintiffs to do is appeal, and this decision quotes generously from this jurisdiction’s appellate decision in a similar case, which should hopefully deter them from wasting any more of the Ninth Circuit’s time.

Filed Under: cda 230, intermediary liability, material support, section 230, terrorism
Companies: twitter

Federal Court Dumps Another Lawsuit Against Twitter For Contributing To Worldwide Terrorism

The lawsuits against social media companies brought by victims of terrorist attacks continue to pile up. So far, though, no one has racked up a win. Certain law firms (1-800-LAW-FIRM and Excolo Law) appear to be making a decent living filing lawsuits they’ll never have a chance of winning, but it’s not doing much for victims and their families.

The lawsuits attempt to route around Section 230 immunity by positing that the presence of terrorists on social media platforms is exactly the same thing as providing material support for terrorism. But this argument doesn’t provide better legal footing. No matter what approach is taken, it’s still plaintiffs seeking to hold social media companies directly responsible for violent acts committed by others.

Eric Goldman has written about another losing effort involving one of the major players in the Twitter terrorism lawsuit field, Excolo Law. Once again, the plaintiffs don’t present any winning arguments. The California federal court doesn’t even have to address Section 230 immunity to toss the case. The Anti-Terrorism Act allegations are bad enough to warrant dismissal.

Here’s what the court has to say about the direct liability arguments:

The FAC [First Amended Complaint] plausibly alleges that ISIS used Twitter to reach new followers, raise funds, and incite violence. It also adequately alleges that Twitter knew ISIS-affiliated users were posting these communications, and that it made only minimal efforts to control them.

Nevertheless, the direct relationship link is missing. Most of the allegations are about ISIS’s use of Twitter in general. The relatively few allegations involving Twitter that are specific to the attacks that killed plaintiffs’ family members also provide little more than generic statements that some of the alleged perpetrators of the attacks were “active” Twitter users who used the platform to follow “ISIS-affiliated Twitter accounts” and otherwise “communicate with others.” Nothing in the FAC rises to the level of plausibly alleging that plaintiffs were injured “by reason of” Twitter’s conduct. Consequently, the direct ATA claims are dismissed.

The indirect liability route doesn’t fare any better:

The FAC does not allege that Twitter was “aware” that it was “assuming a role in” the terrorists’ attacks in Europe. See id. It does not allege that Twitter encouraged ISIS to carry out these attacks or even knew about them before they occurred. At most, the FAC alleges that Twitter should have known ISIS was planning an attack and that it ignored the possible consequences of letting terrorists operate on its platform. That is in effect an allegation of recklessness, but JASTA [Justice Against Sponsors of Terrorism Act] requires more. Although plaintiffs are correct that Congress referred in its statement of findings and purpose to those who “knowingly or recklessly contribute material support or resources” to terrorists, JASTA § 2(a)(6) (emphasis added), the plain language of Section 2333 reaches only those “knowingly providing substantial assistance.” 18 U.S.C. § 2333(d)(2). This clear statutory text controls.

There’s no plausible conspiracy claim either. If this argument were given credence by the court, Twitter would be a co-conspirator in any criminal activity carried out by its users.

Nothing in the FAC establishes an agreement, express or otherwise, between Twitter and ISIS to commit terrorist attacks. Plaintiffs point to Twitter’s terms of service that every user is subject to, but while that clearly is an “agreement,” it is hardly relevant to a terrorist conspiracy. No other plausible agreement is mentioned in the FAC.

The plaintiffs are given one more chance to amend their complaint, but these are allegations that can’t be massaged into victorious arguments. The problem that continues to be talked around in these lawsuits is that you cannot hold a social media platform responsible for the actions of its users. If the plaintiffs drop the ATA arguments, they’re just going to run into Section 230 immunity. While the acts of terrorism were horrific and drastically affected the lives of the families of those killed, suing Twitter, Facebook, et al over these acts doesn’t do anything for the plaintiffs but take time and money away from those who’ve already lost loved ones.

I’m not suggesting the law firms engaging in these lawsuits are lying to plaintiffs about their chances or encouraging futile litigation. I certainly would hope they aren’t engaged in any of the above because that would mean they’re preying on hurting people to earn income. But this steady stream of lawsuits — much of it emanating from two law firms — seems to suggest a level of intellectual dishonesty that’s especially unseemly given the underlying circumstances.

Filed Under: blame, cda 230, intermediary liability, lawsuits, material support, social media, terrorism
Companies: excolo law, twitter

Once Again, Court Rejects Silly Claims That YouTube Provided Material Support For Terrorists

from the that's-not-how-this-works dept

For the past few years we’ve been covering a whole series of cases, most of them filed by (I’m not making this up) a silly law firm by the name of 1-800-Law-Firm, trying to argue that various big internet companies provided material support to ISIS or other terrorists, and therefore owe tons of money to surviving relatives of people killed by ISIS or other terrorist organizations. There have been lawsuits against Twitter, Facebook and Google/YouTube. So far, all of these lawsuits have failed miserably — as they should.

Even if the plaintiffs could show that these platforms actively enabled terrorists to use their platforms (which they do not, as all of them proactively look to remove terrorist-related content), none of the cases makes even a half-hearted attempt to connect the (very unfortunate) deaths of their relatives to any actual content on these platforms. The lawsuits are basically “these bad people use Twitter/Facebook/YouTube, these people killed my relative, thus, those platforms owe me millions of dollars.” That, of course, is not how the law works.

The case against Google was filed by a relative of someone killed in the horrific Paris attacks a few years back. The court had already thrown out the case last year, but allowed a third amended complaint to be filed, which has now been rejected as well (hat tip to Eric Goldman for blogging about this one too).

As with every other such case, the court relied on CDA 230 in throwing it out last year, but the lawyers tried again with an amended complaint, and have failed again. The new complaint made the same four claims the earlier filing did, and added two more, insisting that CDA 230 does not apply to any of them. Once again, the court says the old claims are easily barred under CDA 230:

Claims one through four seek to impose liability on Google for knowingly permitting ISIS and its followers to post content on YouTube…. These claims still “inherently require[] the court to treat [Google] as the publisher or speaker of [third-party] content” because they “require recourse to that content to establish liability or implicate a defendant’s role . . . in publishing or excluding third party communications.” … The [Third Amended Complaint] TAC alleges that Google “knowingly provided” its YouTube platform and other services to ISIS, and that ISIS “embraced and used” YouTube “as a powerful tool for terrorism,” allowing it “to connect its members and to facilitate [its] ability to communicate, recruit members, plan and carry out attacks, and strike fear in its enemies.”… It further alleges that Google “refuse[d] to actively identify ISIS YouTube accounts” or to make “substantial or sustained efforts to ensure that ISIS would not re-establish the accounts using new identifiers.” … Claims one through four allege that Google violated the material support statutes by permitting ISIS and its supporters to publish harmful material on YouTube, and by failing to do enough to remove that content and the users responsible for posting the material. These claims target Google’s decisions whether to publish, withdraw, exclude, or alter content, which is “precisely the kind of activity for which section 230 was meant to provide immunity.” Roommates, 521 F.3d at 1170. “[A]ny activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230.” Id. at 1170-71. Claims one through four remain “inextricably bound up with the content of ISIS’s postings, since their allegations describe a theory of liability based on the ‘essential’ role that YouTube has played ‘in the rise of ISIS to become the most feared terrorist organization in the world.’” …

In a contorted argument, Plaintiffs assert that they may rely upon third party content to support their claims without “running afoul of Section 230.”… Although they do not tie this theory to any particular claim in the TAC, it appears to be their response to the court’s determination that the claims in the SAC were “inextricably bound up with the content of ISIS’s postings.” … Plaintiffs offer a lengthy discussion of In re Incretin-Based Therapies Products Liability Litigation, 721 Fed. Appx. 580, 583 (9th Cir. 2017). There, the Ninth Circuit reversed a trial court’s determination that certain discovery was irrelevant to whether federal law preempted the state law claims, where the discovery was relevant to the merits of the state law claims themselves. According to Plaintiffs, In re Incretin-Based Therapies supports their argument that “[e]vidence which would support” a finding that Google violated the ATA “is not the same as holding [Google] responsible for the content of third parties.” … Plaintiffs’ argument is hard to follow and not persuasive. In re Incretin-Based Therapies does not address section 230(c)(1) or the ATA. Plaintiffs’ argument appears to be a variation on their previous contention that their claims are based upon “Google’s provision of the means for ISIS followers to self-publish content, rather than challenging the actual content itself.” … The court rejected this argument, holding that Plaintiffs’ allegations in the SAC were inconsistent with their attempt to avoid section 230 immunity by “divorc[ing] ISIS’s offensive content from the ability to post such content.”… So too here. Plaintiffs do not allege that they have been harmed by the mere provision of the YouTube platform to ISIS and its followers. Instead, they allege that “ISIS uses YouTube as a tool and a weapon of terrorism,” and that ISIS recruits, plans, incites, instructs, threatens, and communicates its terror message on YouTube. … Plaintiffs’ claims “are not premised solely on the theory that Google provided a publishing or communication platform to ISIS; they are further grounded in the allegation that Google failed to prevent ISIS from using YouTube to transmit its hateful message, which resulted in great harm.”….

In sum, the court concludes that Plaintiffs’ claims one through four seek to treat Google as the publisher or speaker of ISIS’s YouTube videos.

Then, there’s an attempt to get around CDA 230 by saying that YouTube “recommends” ISIS videos (which it does not, but that’s not the issue at this stage of the game). Like many others in the past, here the lawyers tried to use the Roommates.com ruling to argue around CDA 230. If you don’t recall, in the Roommates case, the court ruled that while most of Roommates.com’s activities were protected by CDA 230, one part that was not protected was a pulldown menu allowing people to express a preference for certain races — violating the Fair Housing Act. Since Roommates.com itself created the content in that pulldown, which violated the law, it was held liable for that content alone. Here, the lawyers tried to claim that YouTube recommending videos qualified for that kind of Roommates treatment. Except this is self-evidently wrong, because Roommates only applied to content specifically created by the platform, while CDA 230 explicitly protects a platform’s editorial judgments. And recommending videos is clearly the latter, rather than the former:

Plaintiffs’ “new” theory fares no better this time around. The TAC does not contain any allegation that Google “materially contribut[ed]” in any way to the content of ISIS videos by promoting ISIS-related content. It does not allege that Google’s content recommendation features either created or developed ISIS content, or played any role at all in making ISIS’s terrorist videos objectionable or unlawful.

As Eric Goldman notes, the court does go on a really random tangent, talking about “neutral tools,” which is completely meaningless here.

As for the new claims — five and six — the court is still unimpressed, more or less noting that this is just the lawyers trying to rephrase earlier, rejected claims:

Plaintiffs’ “new” concealment claim does little more than restate the material support claims in a slightly different form. All of those claims are barred by section 230(c)(1) as discussed above. Based on the allegations in the TAC, at its core, Plaintiffs’ concealment claim ultimately seeks to hold Google liable for allegedly failing to prevent ISIS and its supporters from using YouTube, and by failing to remove ISIS videos from YouTube. As with claims one through four, the concealment claim “requires recourse to that content” to establish any causal connection between Google permitting ISIS to use YouTube and the Paris attack. The claim thus “inherently requires the court to treat [Google] as the ‘publisher or speaker’” of ISIS content.

Claim six has similar problems:

As with claim five, Plaintiffs’ IEEPA claim is a restated version of their material support claims. It is based on the allegation that Google provided services to ISIS by permitting ISIS supporters to use the YouTube platform, including allowing supporters to post videos (“received property or interest in property of ISIS”) and utilize YouTube’s functions (including “downloading or copying videos”). This claim ultimately seeks to hold Google liable for failing to prevent ISIS and its supporters from using YouTube and failing to remove ISIS-related content from YouTube. As with the prior claims, the IEEPA claim “requires recourse to that content” to establish any causal connection between YouTube and the Paris attack, and “inherently requires the court to treat [Google] as the ‘publisher or speaker’” of ISIS content.

The court also spends some time pointing out that there is no “proximate causation” between YouTube allowing ISIS to use the platform and the attacks in question (which is the key underlying point behind all of this outside of the CDA 230 issue). It points to the 9th Circuit ruling in another one of these cases, Fields v. Twitter, in which the court made it clear that just because some terrorists may have used a social network doesn’t mean anyone killed by terrorists gets to sue social networks.

The court does allow for the lawyers to try one more time on one narrow issue. They had claimed that Google shared revenue with ISIS via its YouTube advertising program, which the court finds entirely unconvincing and dismisses — but notes that, since it had given the lawyers a chance to amend the other claims but hadn’t yet done so on the revenue sharing question, they could try again on that one, though the court is clearly skeptical.

As Plaintiffs have already been given an opportunity to amend the complaint to avoid CDA immunity, all of their claims other than the revenue sharing claims are dismissed with prejudice. Since the court cannot conclude that further amendment of Plaintiffs’ revenue sharing claims would be futile, they are granted one final opportunity to amend those claims in a manner consistent with Rule 11.

Given that these lawyers have tried these arguments so many times, I’m guessing they won’t be giving up just yet. But it seems like this case is destined for the same end result as all the others.

Filed Under: cda 230, intermediary liability, material support, reynaldo gonzalez, section 230, terrorism
Companies: facebook, google, twitter, youtube

Another Ridiculous Lawsuit Hopes To Hold Social Media Companies Responsible For Terrorist Attacks

from the from-an-alternate-reality-where-Section-230-doesn't-exist dept

Yet another lawsuit has been filed against social media companies hoping to hold them responsible for terrorist acts. The family of an American victim of a terrorist attack in Europe is suing Twitter, Facebook, and Google for providing material support to terrorists. [h/t Eric Goldman]

The lawsuit [PDF] is long and detailed, describing the rise of ISIS and the terrorist group’s use of social media. It may be an interesting history lesson, but it’s all meant to steer judges toward finding violations of anti-terrorism laws rather than recognizing the obvious immunity given to third-party platforms by Section 230.

When it does finally get around to discussing the issue, the complaint from 1-800-LAW-FIRM (not its first Twitter terrorism rodeo…) attacks immunity from an unsurprising angle. The suit attempts to portray the placement of ads on alleged terrorist content as somehow being equivalent to Google, Twitter, et al creating the terrorist content themselves.

When individuals look at a page on one of Defendants’ sites that contains postings and advertisements, that configuration has been created by Defendants. In other words, a viewer does not simply see a posting; nor does the viewer see just an advertisement. Defendants create a composite page of content from multiple sources.

Defendants create this page by selecting which advertisement to match with the content on the page. This selection is done by Defendants’ proprietary algorithms that select the advertisement based on information about the viewer and the content being [viewed]. Thus there is a content triangle matching the postings, advertisements, and viewers.

Although Defendants have not created the posting, nor have they created the advertisement, Defendants have created new unique content by choosing which advertisement to combine with the posting with knowledge about the viewer.

Thus, Defendants’ active involvement in combining certain advertisements with certain postings for specific viewers means that Defendants are not simply passing along content created by third parties; rather, Defendants have incorporated ISIS postings along with advertisements matched to the viewer to create new content for which Defendants earn revenue, and thus providing material support to ISIS.

This argument isn’t going to be enough to bypass Section 230 immunity. According to the law, the only thing social media companies are responsible for is the content of the ads they place. That they’re placed next to alleged terrorist content may be unseemly, but it’s not enough to hurdle Section 230 protections. Whatever moderation these companies engage in does not undercut these protections, even when their moderation efforts fail to weed out all terrorist content.
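To see why, it helps to make the complaint’s “content triangle” concrete. Below is a minimal, purely hypothetical sketch (every name and data point is invented; nothing like this appears in the filing or in any defendant’s actual systems) of the kind of ad matching the complaint describes:

```python
from dataclasses import dataclass

# Hypothetical sketch of the ad matching the complaint describes: pick a
# third-party ad to display next to a third-party post, based on what is
# known about the viewer. All names here are invented for illustration.

@dataclass
class Ad:
    advertiser: str
    keywords: set

def select_ad(post_keywords, viewer_interests, ads):
    # Score each ad by keyword overlap with the post and the viewer profile.
    # The platform is choosing among existing items, not writing new content.
    def score(ad):
        return len(ad.keywords & post_keywords) + len(ad.keywords & viewer_interests)
    return max(ads, key=score)

ads = [
    Ad("acme-autos", {"cars", "racing"}),
    Ad("globex-news", {"news", "politics"}),
]
print(select_ad({"news", "world"}, {"politics"}, ads).advertiser)  # globex-news
```

However the scoring is tuned, the output is just a pairing of a third-party posting with a third-party ad. The algorithm authors nothing, which is exactly the sort of editorial selection that stays on the protected side of the Section 230 line.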

The lawsuit then moves on to making conclusory statements about these companies’ efforts to moderate content, starting with an assertion not backed by the text of the filing.

Most technology experts agree that Defendants could and should be doing more to stop ISIS from using its social network.

Following this sweeping assertion, two (2) tech experts are cited, both of whom appear to be only speaking for themselves. More assertions follow, with 1-800-LAW-FIRM drawing its own conclusions about how “easy” it would be for social media companies with millions of users to block the creation of terrorism-linked accounts [but how, if nothing is known of the content of posts until after the account is created?] and to eliminate terrorist content as soon as it goes live.

The complaint then provides an apparently infallible plan for preventing the creation of “terrorist” accounts. Noting the incremental numbering used by accounts repeatedly banned/deleted by Twitter, the complaint offers this “solution.”

What the above example clearly demonstrates is that there is a pattern that is easily detectable without reference to the content. As such, a content-neutral algorithm could be easily developed that would prohibit the above behavior. First, there is a text prefix to the username that contains a numerical suffix. When an account is taken down by a Defendant, assuredly all such names are tracked by Defendants. It would be trivial to detect names that appear to have the same name root with a numerical suffix which is incremented. By limiting the ability to simply create a new account by incrementing a numerical suffix to one which has been deleted, this will disrupt the ability of individuals and organizations from using Defendants networks as an instrument for conducting terrorist operations.

Prohibiting this conduct would be simple for Defendants to implement and not impinge upon the utility of Defendants sites. There is no legitimate purpose for allowing the use of fixed prefix/incremental numerical suffix name.

Take a long, hard look at that last sentence. This is the sort of assertion someone makes when they clearly don’t understand the subject matter. There are plenty of “legitimate purposes” for appending incremental numerical suffixes to social media handles. By doing this, multiple users can have the same preferred handle while allowing the system (and the users’ friends/followers) to differentiate between similarly-named accounts. Everyone who isn’t the first person to claim a certain handle knows the pain of being second… third… one-thousand-three-hundred-sixty-seventh in line. While this nomenclature process may allow terrorists to easily reclaim followers after account deletion, there are plenty of non-ominous reasons for allowing incremental suffixes.
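Purely for illustration, here is a minimal sketch of the “trivial” detection the complaint imagines, along with the false-positive problem just described. Everything below is hypothetical; no such code appears in the filing.

```python
import re

# Hypothetical sketch of the complaint's "content-neutral" filter: flag a
# new username that shares a banned username's root but carries a different
# numerical suffix. All usernames here are invented for illustration.
SUFFIXED = re.compile(r"^(?P<root>.*?)(?P<num>\d+)$")

def looks_like_reincarnation(new_name, banned_names):
    m = SUFFIXED.match(new_name)
    if not m:
        return False  # no numerical suffix, so the pattern doesn't apply
    for banned in banned_names:
        b = SUFFIXED.match(banned)
        if b and b.group("root") == m.group("root"):
            return True  # same root with a numerical suffix: flagged
    return False

print(looks_like_reincarnation("DawlaMedia104", {"DawlaMedia103"}))  # True, as intended
# Techdirt's objection: ordinary users produce the exact same pattern.
print(looks_like_reincarnation("JohnSmith1368", {"JohnSmith1367"}))  # True, a false positive
```

A filter like this would sweep up vast numbers of legitimate accounts right along with the bad ones, which is why the complaint’s confident assertion falls apart under even mild scrutiny.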

That’s indicative of the lawsuit’s mindset: terrorist attacks are the fault of social media platforms because they’ve “allowed” terrorists to communicate. But that’s completely the wrong party to hold responsible. Terrorist attacks are performed by terrorists, not social media companies, no matter how many ads have been placed around content litigants view as promoting terrorism.

Finally, the lawsuit sums it all up thusly: Monitoring content is easy — therefore, any perceived lack of moderation is tantamount to direct support of terrorist activity.

Because the suspicious activity used by ISIS and other nefarious organizations engaged in illegal activities is easily detectable and preventable and that Defendants are fully aware that these organizations are using their networks to engage in illegal activity demonstrates that Defendants are acting knowingly and recklessly allowing such illegal conduct.

Unbelievably, the lawsuit continues from there, going past its “material support” Section 230 dodge to add claims of wrongful death it tries to directly link to Twitter, et al’s allegedly inadequate content moderation.

The conduct of each Defendant was a direct, foreseeable and proximate cause of the wrongful deaths of Plaintiffs’ Decedent and therefore the Defendants’ are liable to Plaintiffs for their wrongful deaths.

This is probably the worst “Twitter terrorism” lawsuit filed yet, but quite possibly exactly what you would expect from a law firm with a history of stupid social media lawsuits and a phone number for a name.

Filed Under: cda 230, isis, material support, section 230, social media, terrorism
Companies: 1-800-law-firm, facebook, google, twitter

Yet Another Lawsuit Hopes A Court Will Hold Twitter Responsible For Terrorists' Actions

from the law-firms-basically-setting-up-franchises dept

So, this is how we’re handling the War on Terror here on the homefront: lawsuit after lawsuit after lawsuit against social media platforms because terrorists also like to tweet and post stuff on Facebook.

The same law firm (New York’s Berkman Law Office) that brought us last July’s lawsuit against Facebook (because terrorist organization Hamas also uses Facebook) is now bringing one against Twitter because ISIS uses Twitter. (h/t Lawfare’s Ben Wittes)

Behind the law firm are more families of victims of terrorist attacks — this time those in Brussels and Paris. Once again, any criticism of this lawsuit (and others of its type) is not an attack on those who have lost loved ones to horrific acts of violence perpetrated by terrorist organizations.

The criticisms here are the same as they have been in any previous case: the lawsuits are useless and potentially dangerous. They attempt to hold social media platforms accountable for the actions of terrorists. At the heart of every sued company’s defense is Section 230 of the CDA, which immunizes them against civil lawsuits predicated on the actions and words of the platform’s users.

The lawsuits should be doomed to fail, but there’s always a chance a judge will construe the plaintiffs’ arguments in a way that circumvents this built-in protection or, worse, issue a precedential ruling carving a hole in these protections.

The arguments here are identical to the other lawsuits: Twitter allegedly hasn’t done enough to prevent terrorists from using its platform. Therefore, Twitter (somehow) provides material support to terrorists by not shutting down (one of) their means of communication (fast enough).

The filing [PDF] is long, containing a rather detailed history of the rise of the Islamic State, a full rundown of the attacks in Brussels and Paris, and numerous examples of social media posts by terrorists. It’s rather light on legal arguments, but then it has to be, because the lawsuit works better when it tugs at the heartstrings, rather than addressing the legal issues head on.

The lawsuit even takes time to portray Twitter’s shutdown of Dataminr’s feed to US government surveillance agencies — as well as its policy of notifying users of government/law enforcement demands for personal information — as evidence of its negligence, if not outright support, of terrorist groups.

The problem with these lawsuits — even without the Section 230 hurdle — is that the only way for Twitter, Facebook, etc. to avoid being accused of “material support” for terrorism is to somehow predetermine what is or isn’t terrorist-related before it’s posted… or even before accounts are created. To do otherwise is to fail. Any content posted can immediately be reposted by supporters and detractors alike.

And that’s another issue that isn’t easily sorted out by platforms with hundreds of millions of users. Posts and tweets are just as often passed on by people who don’t agree with the content, but the arguments made in these lawsuits expect social media platforms to determine intent… and take action almost immediately. Any post or account that stays “live” for too long becomes a liability, should courts find in favor of these plaintiffs. It’s an impossible standard to meet.

These lawsuits ask courts to shoot the medium, rather than the messenger. They make about as much sense as suing cell phone manufacturers because they’re not doing enough to prevent terrorists from buying their phones and using them to communicate.

Filed Under: cda 230, isis, liability, material support, section 230, social media, terrorism
Companies: twitter

Court (Again) Tosses Lawsuit Seeking To Hold Twitter Accountable For ISIS Terrorism

from the that's-not-how-causation-works,-never-mind-Section-230... dept

At the beginning of this year, Tamara Fields — whose husband was killed by ISIS terrorists — sued Twitter for “providing material support” to the terrorist group. The actions underlying Fields’ lawsuit were undeniably horrific and tragic, but by no means provided any sort of legal basis for holding Twitter responsible for actions or speech undertaken by users of its service.

The lawsuit was dismissed in August, with the court pointing to Twitter’s Section 230 immunity and the lawsuit’s general lack of argumentative coherence. Perhaps recognizing that Section 230 (and common sense) would prevent Twitter from being held responsible for ISIS’s terrorist activities, Fields chose to approach the lawsuit from some novel angles. At some points, Twitter “provided material support” by allowing ISIS members to obtain accounts. At other points, it was Twitter’s inability to stop the spread of ISIS propaganda that was the issue.

The court invited Fields to file an amended complaint, hoping to obtain a coherent argument it could address with equal clarity. It didn’t get it. The amended complaint may be a bit more structured, but the court has again dismissed the lawsuit [PDF] on Section 230 grounds while also addressing the deficiencies of other arguments raised by Fields. (h/t Eric Goldman)

Fields tries to drill down on the “provision of accounts” theory: that the ability of ISIS terrorists to obtain accounts somehow amounts to “material support” — or, in any case, should result in the removal of Twitter’s Section 230 immunity. The court says this argument makes no sense and, in fact, invites the court to engage in restriction of First Amendment-protected activity.

Plaintiffs’ provision of accounts theory is slightly different, in that it is based on Twitter’s decisions about whether particular third parties may have Twitter accounts, as opposed to what particular third-party content may be posted. Plaintiffs urge that Twitter’s decision to provide ISIS with Twitter accounts is not barred by section 230(c)(1) because a “content-neutral decision about whether to provide someone with a tool is not publishing activity.”

The court disagrees. There’s no way Twitter can act in a “content-neutral” manner and still deny accounts to ISIS members like the plaintiff believes it should. The only way to discover whether an account holder might be a terrorist or terrorist sympathizer is by examining the content they post.

Although plaintiffs assert that the decision to provide an account to or withhold an account from ISIS is “content-neutral,” they offer no explanation for why this is so and I do not see how this is the case. A policy that selectively prohibits ISIS members from opening accounts would necessarily be content based as Twitter could not possibly identify ISIS members without analyzing some speech, idea or content expressed by the would-be account holder: i.e. “I am associated with ISIS.” The decision to furnish accounts would be content-neutral if Twitter made no attempt to distinguish between users based on content – for example if they prohibited everyone from obtaining an account, or they prohibited every fifth person from obtaining an account. But plaintiffs do not assert that Twitter should shut down its entire site or impose an arbitrary, content-neutral policy. Instead, they ask Twitter to specifically prohibit ISIS members and affiliates from acquiring accounts – a policy that necessarily targets the content, ideas, and affiliations of particular account holders. There is nothing content-neutral about such a policy.

The plaintiff, despite amending her complaint, still takes a cake-and-eat-it-too approach when trying to twist ISIS terrorism into a Twitter-enabled activity. The court notes that the new complaint tries to push Twitter’s provision of accounts to terrorists as the linchpin of her case, but still spends far more time complaining about Twitter’s alleged moderation failures.

As discussed above, the decision to furnish an account, or prohibit a particular user from obtaining an account, is itself publishing activity. Further, while plaintiffs urge me to focus exclusively on those five short paragraphs, I cannot ignore that the majority of the SAC still focuses on ISIS’s objectionable use of Twitter and Twitter’s failure to prevent ISIS from using the site, not its failure to prevent ISIS from obtaining accounts. For example, plaintiffs spend almost nine pages, more than half of the complaint, explaining that “Twitter Knew That ISIS Was Using Its Social Network But Did Nothing”; “ISIS Used Twitter to Recruit New Members”; “ISIS Used Twitter to Fundraise”; and “ISIS Used Twitter To Spread Propaganda.” These sections are riddled with detailed descriptions of ISIS-related messages, images, and videos disseminated through Twitter and the harms allegedly caused by the dissemination of that content.

[…]

It is no surprise that plaintiffs have struggled to excise their content-based allegations; their claims are inherently tied up with ISIS’s objectionable use of Twitter, not its mere acquisition of accounts. Though plaintiffs allege that Twitter should not have provided accounts to ISIS, the unspoken end to that allegation is the rationale behind it: namely, that Twitter should not have provided accounts to ISIS because ISIS would and has used those accounts to post objectionable content.

Because of Fields’ inability to raise one (possibly) Section 230-dodging argument (provision of accounts) without relying heavily on one that specifically invokes Twitter’s immunity, the lawsuit is doomed to fail no matter how many times the complaint is rewritten or how many levels up it’s appealed.

In short, the theory of liability alleged in the [complaint] is not that Twitter provides material support to ISIS by providing it with Twitter accounts, but that Twitter does so by allowing ISIS to use Twitter “to send its propaganda and messaging out to the world and to draw in people vulnerable to radicalization.” SAC ¶ 41. Plaintiffs do not dispute that this theory seeks to treat Twitter as a publisher and is barred by section 230(c)(1).

Furthermore, there is nothing at all connecting Twitter to the murders committed by terrorists.

Even under plaintiffs’ proposed “substantial factor” test, see Oppo. at 11, the allegations in the SAC do not support a plausible inference of proximate causation between Twitter’s provision of accounts to ISIS and the deaths of Fields and Creach. Plaintiffs allege no connection between the shooter, Abu Zaid, and Twitter. There are no facts indicating that Abu Zaid’s attack was in any way impacted, helped by, or the result of ISIS’s presence on the social network. Instead they insist they have adequately pleaded proximate causation because they have alleged “(1) that Twitter provided fungible material support to ISIS, and (2) that ISIS was responsible for the attack in which Lloyd Fields, Jr. and James Damon Creach were killed.” Id. at 13. Under such an expansive proximate cause theory, any plaintiff could hold Twitter liable for any ISIS-related injury without alleging any connection between a particular terrorist act and Twitter’s provision of accounts. And, since plaintiffs allege that Twitter has already provided ISIS with material support, Twitter’s liability would theoretically persist indefinitely and attach to any and all future ISIS attacks. Such a standard cannot be and is not the law.

No doubt this decision will be appealed but it’s unlikely to find a court willing to cede as much ground on Section 230 as Fields would like it to, even with the series of bad Section 230-related decisions that have recently plagued the California court system.

Filed Under: isis, material support, material support for terrorism, section 230, tamara fields, terrorism
Companies: twitter