tamara fields – Techdirt

Stories filed under: "tamara fields"

The Importance Of Defending Section 230 Even When It's Hard

from the preventing-tough-cases-from-making-bad-law dept

The Copia Institute filed another amicus brief this week, this time in Fields v. Twitter. Fields v. Twitter is one of a flurry of cases being brought against Internet platforms alleging that they are liable for the harms caused by the terrorists using their sites. The facts in these cases are invariably awful: often people have been brutally killed and their loved ones are seeking redress for their loss. There is a natural, and perfectly reasonable, temptation to give them some sort of remedy from someone, but as we argued in our brief, that someone cannot be an internet platform.

There are several reasons for this, including some that have nothing to do with Section 230. For instance, even if Section 230 did not exist and platforms could be liable for the harms resulting from their users’ use of their services, for them to be liable there would have to be a clear connection between the use of the platform and the harm. Otherwise, based on the general rules of tort law, there could be no liability. In this particular case, for instance, there is a fairly weak connection between ISIS members using Twitter and the specific terrorist act that killed the plaintiffs’ family members.

But we left that point to Twitter to ably argue. Our brief focused exclusively on the fact that Section 230 should prevent a court from ever even reaching the tort law analysis. With Section 230, a platform should never find itself having to defend against liability for harm that may have resulted from how people used it. Our concern is that in several recent cases with their own terrible facts, the Ninth Circuit in particular has found itself willing to make exceptions to that rule. As much as we were supporting Twitter in this case, trying to help ensure the Ninth Circuit does not overturn the very good District Court decision that had correctly applied Section 230 to dismiss the case, we also had an eye to the long view of reversing this trend.

The problem is that, like the First Amendment itself, speech protections only work as speech protections when they always work. Once exceptions can be found here and there, none of these protections are effective, and the speech of those counting on them is chilled, because no one can be sure whether the speech will ultimately be protected. In the case of Section 230, that chilling arises because if platforms cannot be sure whether they will be protected from liability for their users’ speech, then they will have to assume they are not. Suddenly they will have to make all the censoring choices with respect to their users’ content that Section 230 was designed to prevent, just to avoid the specter of potentially crippling liability.

One of the points we emphasized in our brief was how such an outcome flouts what Congress intended when it passed Section 230. As we said then, and will say again as many times as we need to, the point of Section 230 is to encourage the most beneficial online speech while also minimizing the worst speech. To see how this dual-purpose intent plays out we need to look at the statute as a whole, beyond the part of it that usually gets the most attention, Subsection (c)(1), which immunizes platforms from liability arising from their users’ speech. There is another, equally important part of the statute, Subsection (c)(2), that immunizes platforms from liability when they take steps to minimize harmful online content on their systems. This subsection rarely gets attention, but it’s important not to overlook, especially as people look at the effect of the first subsection and worry that it might encourage too much “bad” speech. Congress anticipated this problem and built in a remedy as part of a balanced approach to encourage the most good speech and the least bad speech. The problem with now holding online services liable for bad uses of their platforms is that it distorts this balance, and in distorting this balance undermines both of these goals.

We used the cases of Barnes v. Yahoo and Doe 14 v. Internet Brands to illustrate this point. Both are cases where the Ninth Circuit did make exemptions and found Section 230 not to apply to certain negative uses of Internet platforms. In Barnes, for instance, Section 230 was actually found to apply to the part of the claim directly relating to the speech in question, which was a good result, but the lawsuit also included a promissory estoppel claim, and the Court decided that because that claim was not directly related to liability arising from content it could go forward. The problem was that Yahoo had separately promised to take down certain content, and so the Court found it potentially liable for not having lived up to its promise. But as we pointed out, the effect of the Barnes case is that platforms now avoid promising to take content down. Even though Congress intended Section 230 to help Internet platforms perform a hygiene function, keeping the Internet free of the worst content, the decision has instead had the opposite effect by discouraging platforms from going the extra mile. That’s why courts should not continue to find reasons to limit Section 230’s applicability. Even if they think they have good reason to find one, that very justification will be better advanced when Section 230’s protection is at its most robust.

We also pointed out that in terms of the other policy goal behind Section 230, to encourage more online speech, divining exemptions from Section 230’s coverage would undermine that goal as well. In this case the plaintiffs want providers to have to deny terrorists the use of their platforms. As a separate amicus brief by the Internet Association explained, platforms actually want to keep terrorists off and go to great lengths to try to do so. But as the saying goes, “One man’s terrorist is another man’s freedom fighter.” In other words, deciding who to label a terrorist can often be a difficult thing to do, as well as an extremely political decision to make. It’s certainly beyond the ken of an “intermediary” to determine — especially a smaller, less capitalized, or potentially even individual one. (Have you ever had people comment on one of your Facebook posts? Congratulations! You are an intermediary, and Section 230 applies to you too.)

Even if the rule were that a platform had to check prospective users’ names against a government list, there are significant constitutional concerns, particularly regarding the right to speak anonymously and the prohibition against prior restraint, that arise from having to make these sorts of registration denial decisions this way. There are also often significant constitutional problems with how these lists are made at all. As the amicus brief by EFF and CDT also argued, we can’t create a system where the statutory protection platforms depend on to be able to foster online free speech is conditioned on coercing platforms to undermine it.

Filed Under: fields v. twitter, free speech, intermediary liability, material support for terrorism, platforms, section 230, tamara fields, terrorism
Companies: twitter

Court (Again) Tosses Lawsuit Seeking To Hold Twitter Accountable For ISIS Terrorism

from the that's-not-how-causation-works,-never-mind-Section-230... dept

At the beginning of this year, Tamara Fields — whose husband was killed by ISIS terrorists — sued Twitter for “providing material support” to the terrorist group. The actions underlying Fields’ lawsuit were undeniably horrific and tragic, but by no means provided any sort of legal basis for holding Twitter responsible for actions or speech undertaken by users of its service.

The lawsuit was dismissed in August, with the court pointing to Twitter’s Section 230 immunity and the lawsuit’s general lack of argumentative coherence. Perhaps recognizing that Section 230 (and common sense) would prevent Twitter from being held responsible for ISIS’s terrorist activities, Fields chose to approach the lawsuit from some novel angles. At some points, Twitter “provided material support” by allowing ISIS members to obtain accounts. At other points, it was Twitter’s inability to stop the spread of ISIS propaganda that was the issue.

The court invited Fields to file an amended complaint, hoping to obtain a coherent argument it could address with equal clarity. It didn’t get it. The amended complaint may be a bit more structured, but the court has again dismissed the lawsuit [PDF] on Section 230 grounds while also addressing the deficiencies of other arguments raised by Fields. (h/t Eric Goldman)

Fields tries to drill down on the “provision of accounts” theory: that the ability of ISIS terrorists to obtain accounts somehow amounts to “material support” — or, in any case, should result in the removal of Twitter’s Section 230 immunity. The court says this argument makes no sense and, in fact, invites the court to engage in restriction of First Amendment-protected activity.

Plaintiffs’ provision of accounts theory is slightly different, in that it is based on Twitter’s decisions about whether particular third parties may have Twitter accounts, as opposed to what particular third-party content may be posted. Plaintiffs urge that Twitter’s decision to provide ISIS with Twitter accounts is not barred by section 230(c)(1) because a “content-neutral decision about whether to provide someone with a tool is not publishing activity.”

The court disagrees. There’s no way Twitter can act in a “content-neutral” manner and still deny accounts to ISIS members like the plaintiff believes it should. The only way to discover whether an account holder might be a terrorist or terrorist sympathizer is by examining the content they post.

Although plaintiffs assert that the decision to provide an account to or withhold an account from ISIS is “content-neutral,” they offer no explanation for why this is so and I do not see how this is the case. A policy that selectively prohibits ISIS members from opening accounts would necessarily be content based as Twitter could not possibly identify ISIS members without analyzing some speech, idea or content expressed by the would-be account holder: i.e. “I am associated with ISIS.” The decision to furnish accounts would be content-neutral if Twitter made no attempt to distinguish between users based on content – for example if they prohibited everyone from obtaining an account, or they prohibited every fifth person from obtaining an account. But plaintiffs do not assert that Twitter should shut down its entire site or impose an arbitrary, content-neutral policy. Instead, they ask Twitter to specifically prohibit ISIS members and affiliates from acquiring accounts – a policy that necessarily targets the content, ideas, and affiliations of particular account holders. There is nothing content-neutral about such a policy.

The plaintiff, despite amending her complaint, still takes a cake-and-eat-it-too approach when trying to twist ISIS terrorism into a Twitter-enabled activity. The court notes that the new complaint tries to push Twitter’s provision of accounts to terrorists as the linchpin of her case, but still spends far more time complaining about Twitter’s alleged moderation failures.

As discussed above, the decision to furnish an account, or prohibit a particular user from obtaining an account, is itself publishing activity. Further, while plaintiffs urge me to focus exclusively on those five short paragraphs, I cannot ignore that the majority of the SAC still focuses on ISIS’s objectionable use of Twitter and Twitter’s failure to prevent ISIS from using the site, not its failure to prevent ISIS from obtaining accounts. For example, plaintiffs spend almost nine pages, more than half of the complaint, explaining that “Twitter Knew That ISIS Was Using Its Social Network But Did Nothing”; “ISIS Used Twitter to Recruit New Members”; “ISIS Used Twitter to Fundraise”; and “ISIS Used Twitter To Spread Propaganda.” These sections are riddled with detailed descriptions of ISIS-related messages, images, and videos disseminated through Twitter and the harms allegedly caused by the dissemination of that content.

[…]

It is no surprise that plaintiffs have struggled to excise their content-based allegations; their claims are inherently tied up with ISIS’s objectionable use of Twitter, not its mere acquisition of accounts. Though plaintiffs allege that Twitter should not have provided accounts to ISIS, the unspoken end to that allegation is the rationale behind it: namely, that Twitter should not have provided accounts to ISIS because ISIS would and has used those accounts to post objectionable content.

Because of Fields’ inability to raise one (possibly) Section 230-dodging argument (provision of accounts) without relying heavily on one that specifically invokes Twitter’s immunity, the lawsuit is doomed to fail no matter how many times the complaint is rewritten or how many levels up it’s appealed.

In short, the theory of liability alleged in the [complaint] is not that Twitter provides material support to ISIS by providing it with Twitter accounts, but that Twitter does so by allowing ISIS to use Twitter “to send its propaganda and messaging out to the world and to draw in people vulnerable to radicalization.” SAC ¶ 41. Plaintiffs do not dispute that this theory seeks to treat Twitter as a publisher and is barred by section 230(c)(1).

Furthermore, there is nothing at all connecting Twitter to the murders committed by terrorists.

Even under plaintiffs’ proposed “substantial factor” test, see Oppo. at 11, the allegations in the SAC do not support a plausible inference of proximate causation between Twitter’s provision of accounts to ISIS and the deaths of Fields and Creach. Plaintiffs allege no connection between the shooter, Abu Zaid, and Twitter. There are no facts indicating that Abu Zaid’s attack was in any way impacted, helped by, or the result of ISIS’s presence on the social network. Instead they insist they have adequately pleaded proximate causation because they have alleged “(1) that Twitter provided fungible material support to ISIS, and (2) that ISIS was responsible for the attack in which Lloyd Fields, Jr. and James Damon Creach were killed.” Id. at 13. Under such an expansive proximate cause theory, any plaintiff could hold Twitter liable for any ISIS-related injury without alleging any connection between a particular terrorist act and Twitter’s provision of accounts. And, since plaintiffs allege that Twitter has already provided ISIS with material support, Twitter’s liability would theoretically persist indefinitely and attach to any and all future ISIS attacks. Such a standard cannot be and is not the law.

No doubt this decision will be appealed, but it’s unlikely to find a court willing to cede as much ground on Section 230 as Fields would like, even with the series of bad Section 230-related decisions that have recently plagued the California court system.

Filed Under: isis, material support, material support for terrorism, section 230, tamara fields, terrorism
Companies: twitter

Judge On Whether Twitter Is Legally Liable For ISIS Attacks: Hahahahahaha, Nope.

from the decision-in-140-characters-or-less dept

This is not a surprise, but the judge overseeing the case where Twitter was sued by a woman because her husband was killed in an ISIS attack has tossed out the case. We fully expected this when the lawsuit was first filed, and the judge was clearly skeptical of the case during a hearing on it back in June. The order dismissing the case comes in at slightly longer than 140 characters, but you get the feeling that was really about all that was needed to point out how ridiculous this case was. As we expected, Twitter pointed to CDA Section 230 to say it’s simply immune from such a claim and the judge agrees:

As noted above, courts have repeatedly described publishing activity under section 230(c)(1) as including decisions about what third-party content may be posted online…. Plaintiffs’ provision of accounts theory is slightly different, in that it is based on Twitter’s decisions about whether particular third parties may have Twitter accounts, as opposed to what particular third-party content may be posted. But it is not clear to me why this difference matters for the purposes of section 230(c)(1). Under either theory, the alleged wrongdoing is the decision to permit third parties to post content; it is just that under plaintiffs’ provision of accounts theory, Twitter would be liable for granting permission to post (through the provision of Twitter accounts) instead of for allowing postings that have already occurred. Plaintiffs do not explain why this difference means that the provision of accounts theory seeks to treat Twitter as something other than a publisher of third-party content, and I am not convinced that it does. Despite being based on Twitter accounts instead of tweets, the theory is still based on Twitter’s alleged violation of a “duty . . . derive[d] from [its] status or conduct as a publisher.”

Even if Section 230 wouldn’t have resulted in the case being tossed, Judge William Orrick notes a number of other problems with the lawsuit, including that its claims don’t even make sense (that seems like a big problem). The judge first focuses on how the plaintiffs’ arguments shift back and forth between whether the problem is the mere provision of service to ISIS members or Twitter’s failure to prevent the spread of ISIS content. These two things are different, but the lawyers for the plaintiff don’t do much to distinguish them from one another.

Plaintiffs characterize these allegations as “focus[ed] on [Twitter’s] provision of . . . accounts to ISIS, not the content of the tweets.” … But with the exception of the statement that “ISIS accounts on Twitter have grown at an astonishing rate,” …, all of the allegations are accompanied by information regarding the ISIS-related content disseminated from the accounts. Plaintiffs allege not just that ISIS had approximately 70,000 Twitter accounts, but that ISIS used those accounts to post at least 90 tweets per minute, … not just that Al-Furqan maintained a Twitter page, but that it maintained one “where it posted messages from ISIS leadership as well as videos and images of beheadings and other brutal . . . executions to 19,000 followers,” … not just that Twitter failed to stop an ISIS-linked account from “springing right back up,” but that an inflammatory message was tweeted from this account following the shooting attack in San Bernardino, California in December 2015….

The rest of the FAC is likewise riddled with detailed descriptions of ISIS-related messages, images, and videos disseminated through Twitter and the harms allegedly caused by the dissemination of that content. The FAC also includes a number of allegations specifically faulting Twitter for failing to detect and prevent the dissemination of ISIS-related content through the Twitter platform.

That issue is a big part of the reason why Twitter’s Section 230 defense works. The lawyers for the plaintiff argued that it wasn’t a 230 issue because it was about the provision of services, not the content of the tweets, but their complaint focuses almost exclusively on the content, which is exactly the sort of claim Section 230 immunizes Twitter against.

And then there’s the other big, non-230, problem with the lawsuit: there’s nothing whatsoever in the lawsuit arguing that Twitter had anything directly to do with the ISIS attack that killed Lloyd Fields.

The third problem with the provision of accounts theory is that plaintiffs have not adequately alleged causation. Although the parties dispute the exact formulation of the appropriate causal test for civil liability under the ATA, they agree that the statute requires a showing of proximate causation….

Even under plaintiffs’ proposed “substantial factor” test, …, the allegations in the FAC do not support a plausible inference of proximate causation between Twitter’s provision of accounts to ISIS and the deaths of Fields and Creach. The only arguable connection between Abu Zaid and Twitter identified in the FAC is that Abu Zaid’s brother told reporters that Abu Zaid had been very moved by ISIS’s horrific execution of al-Kassasbeh, which ISIS publicized through Twitter…. That connection is tenuous at best regardless of the particular theory of liability plaintiffs decide to assert. But the connection is particularly weak under the provision of accounts theory because it is based on specific content disseminated through Twitter, not the mere provision of Twitter accounts.

The plaintiff, Tamara Fields, can still file an amended complaint that tries to fix these problems, but it’s not clear how she’ll get past them. I imagine that the various copycat lawsuits that have been filed against Twitter, Facebook and Google in the past few months will all face similar fates.

Filed Under: isis, lloyd fields, material support, section 230, tamara fields, terrorism, william orrick
Companies: twitter

Twitter Asks Court To Dump Ridiculous Lawsuit Claiming It Provides Material Support For Terrorists

from the should-be-an-easy-one dept

Back in January, we wrote about an absolutely ridiculous case, in which Tamara Fields sued Twitter after her husband was tragically killed in an ISIS raid last year. Why Twitter? She apparently blames Twitter for the rise of ISIS. She provides no evidence that the people who killed her husband (a government contractor for DynCorp International) used Twitter, or that anything about the attack was related to Twitter. It’s entirely just “ISIS uses Twitter. ISIS killed my husband. Let’s sue Twitter.” As we noted at the time, Section 230 should easily get this lawsuit tossed out quickly, and the company has now filed its Motion to Dismiss. The TL;DR: “Section 230, Section 230, what a stupid lawsuit this is.”

Plaintiff’s claims seek to hold Twitter liable for the content of messages posted to its platform by third parties and are thus barred by Section 230 of the Telecommunications Act of 1996, 47 U.S.C. § 230 (“Section 230”). In enacting Section 230, Congress unequivocally resolved the question whether computer service providers may be held liable for harms arising from content created by third parties. Announcing the policy of the United States to preserve the “free market that presently exists for the Internet . . . unfettered by Federal or State regulation,” 47 U.S.C. § 230(b)(2), Congress broadly immunized entities like Twitter against lawsuits that seek to hold them liable for harmful or unlawful third-party content, including suits alleging that such entities failed to block, remove, or alter such content.

Of course, even without Section 230, the lawsuit should be dumped, because Twitter had nothing to do with anything in this case.

It fails to state a claim for relief under the Terrorism Civil Remedy provision. That provision requires Plaintiff to allege and prove (1) that she was injured “by reason of” (i.e., that her injury was proximately caused by) (2) an “act of international terrorism” committed by Twitter. 18 U.S.C. § 2333(a). The Complaint’s allegations satisfy neither requirement. First, the link the Complaint attempts to draw between Twitter’s alleged conduct and the attack is, as a matter of law, far too tenuous to establish that Twitter proximately caused Mr. Fields’ death. Second, the Complaint’s allegations amount to nothing more than the claim that Twitter made its communications platform available to everyone in the world with an Internet connection. As a matter of law, that conduct cannot have constituted “an act of international terrorism” as defined by the statute because it plainly does not “appear to [have been] intended” “to intimidate or coerce a civilian population,” “to influence the policy of a government by intimidation or coercion,” or “to affect the conduct of a government by mass destruction, assassination, or kidnapping,”…

Later, the filing notes:

The Complaint makes no attempt to connect Twitter directly to Abu Zaid or his attack. It does not allege that ISIS recruited Abu Zaid over the Twitter platform. Nor does it allege that Abu Zaid or ISIS used the Twitter platform to plan, carry out, or raise money for the attack. It does not even allege that Abu Zaid had a Twitter account or ever accessed the Twitter platform. And although the Complaint devotes considerable attention to how other terrorists allegedly used the Twitter platform, it never explains how that alleged use had even the remotest connection to Abu Zaid’s “lone wolf” attack. The Complaint does not, for example, allege that ISIS helped Abu Zaid plan the attack or that ISIS provided Abu Zaid with weapons or funds. Beyond the speculation that Abu Zaid and ISIS may have shared the common objectives of inflicting harm on Americans and establishing a transnational Islamic caliphate, the closest the Complaint comes to even hinting at a connection between the two is the allegation that ISIS’s “brutal execution of Jordanian pilot Maaz al-Kassasbeh in February 2015” may have inspired Abu Zaid to become a “lone wolf” terrorist nine months later.

Obviously, having your husband killed in a “lone wolf” attack is a horrible and horrifying situation for anyone to go through. But suing Twitter seems like a particularly misguided response. It would be positively shocking if the judge actually lets this case go any further.

Filed Under: isis, lloyd fields, material support, material support for terrorism, section 230, tamara fields, terrorism
Companies: twitter

Woman Files Ridiculous Lawsuit Against Twitter For 'Providing Material Support' To ISIS

from the not-how-it-works dept

Over the past year or so, some people have questioned whether merely tweeting could be considered “material support for terrorism.” Taking things to another level altogether, Tamara Fields, whose husband (a government contractor for DynCorp International) was tragically killed in an ISIS strike late last year, has now sued Twitter for providing “material support” to ISIS.

Let’s be clear on a few things: I can’t even imagine the horrors of having your loved ones killed that way. It is horrible and tragic, and the pain must be unfathomable to those who have not gone through it. But, at the same time, that’s not Twitter’s fault no matter how you look at it. The full lawsuit, filed in California by lawyers who should know better, makes a number of ridiculous assertions, including the idea that the rise of ISIS would have never happened without Twitter.

Without Twitter, the explosive growth of ISIS over the last few years into the most feared terrorist group in the world would not have been possible. According to the Brookings Institution, ISIS “has exploited social media, most notoriously Twitter, to send its propaganda and messaging out to the world and to draw in people vulnerable to radicalization.” Using Twitter, “ISIS has been able to exert an outsized impact on how the world perceives it, by disseminating images of graphic violence (including the beheading of Western journalists and aid workers) . . . while using social media to attract new recruits and inspire lone actor attacks.” According to FBI Director James Comey, ISIS has perfected its use of Twitter to inspire small-scale individual attacks, “to crowdsource terrorism” and “to sell murder.”

Is ISIS fairly adept at using Twitter? Sure. Does that mean that, without Twitter, it wouldn’t have become the group it is today? That’s ridiculous. The rest of the complaint takes a number of statements concerning Twitter’s support for free speech rights totally out of context, including repeatedly relying on quotes from individuals who haven’t worked for Twitter in years. It also quotes people whining that Twitter should do more as evidence that the company has a legal obligation to do more.

The lawsuit is going nowhere. First of all, since it’s a civil lawsuit, Twitter is totally and completely protected by Section 230 of the CDA, which says the company is not liable for how people use its platform. That’s enough to end the case right there. The case will almost certainly be tossed pretty quickly based on 230. Even if that weren’t the case, the claims in the lawsuit that Twitter does basically nothing to stop terrorists are laughably untrue. In fact, ISIS has been issuing death threats against the company and its execs because they’ve been removing accounts.

On top of that, many have actually been complaining that Twitter goes too far in these efforts. Hell, just a couple weeks ago, the company accidentally shut down the account of a guy people mistakenly thought was ISIS’s leader, despite actually being a strong supporter of democracy and freedom, who just happened to have the same last name.

Too many people seem to think that there’s some magic wand that Twitter can wave that’ll make ISIS “disappear” from the service. It doesn’t work that way. The law certainly doesn’t require that. And while Twitter does proactively look to take down accounts that are advocating for terrorism, that doesn’t mean it’s even possible, or reasonable, that it can find every one. Targeting Twitter for a lawsuit just smacks of a Steve Dallas lawsuit, where upset people sue a large company barely involved in things, because that’s where the money is.

Finally, over and over again, intelligence officials keep claiming that the fact that ISIS folks are tweeting and Facebooking is actually one of the best ways to keep track of what they’re doing and saying. Shutting them down may seem appealing, but actually could decrease the ability to track them and their activities.

Either way, this lawsuit is dead on arrival. It will get tossed out thanks to Section 230. The lawyers who filed it should have known better. Yes, the situation is tragic and horrible and unfortunate. But it’s not Twitter’s fault — and suing the company over it just looks ridiculous.

Filed Under: carl fields, cda, cda 230, free speech, isis, lloyd fields, material support, section 230, tamara fields, terrorism
Companies: dyncorp, twitter