sheldon whitehouse – Techdirt

Senator Sheldon Whitehouse Has Really Bad Ideas About Section 230

from the that's-not-how-any-of-this-works dept

Over the summer I got a copy of the new book from Lee Bollinger and Geoffrey Stone, two formerly staunch 1st Amendment supporters who have apparently decided to go back on their earlier views, with a collection of essays by a variety of authors about “social media, freedom of speech, and the future of our democracy.” Much of the book is maddening, because there are many essays from very famous people who should know better, but seem more than willing to reject the 1st Amendment because people said bad stuff online.

There are some pieces that are interesting and thoughtful, but the vast majority of them are incredibly frustrating. I had kind of blocked the book out of my mind, not wanting to revisit the frustration, but last week, Senator Sheldon Whitehouse decided to post a thread on Twitter promoting his chapter in the book, and presenting some of his ideas from his chapter — which was one of the most frustrating, by far, in the book. Bollinger and Stone obviously had an agenda in what they chose to put in the book, but Whitehouse’s chapter, in particular, would have benefited from having literally anyone who understands how Section 230 and the 1st Amendment intersect do an editing pass to call out some of the bullshit claims in it.

Unfortunately, after Whitehouse posted his thread, a lot of people cheered on his tweet saying “we should repeal Section 230, as it does more harm than good.” I criticized that tweet, and some people argued that it was unfair, because I didn’t deal with the entire contents of the thread. So, now I’ll go one step further and break down Whitehouse’s chapter in the book, and explain just how confused and wrong it is.

We should also note that Whitehouse has a bit of a history of being massively, dangerously wrong about the internet. He’s long been an extreme copyright maximalist, and was one of the original co-sponsors of the bill that eventually became the SOPA/PIPA package. He’s been cozy with the top copyright industry lobbyists. A few years later, during a Senate hearing, he started making up nonsense about how a Google search he did led him to The Pirate Bay and how this “criminal activity” (searches that lead to websites) had to be stopped. He’s also been terrible on encryption, making up a story about a kidnapped girl who could only possibly be found by breaking into her phone. He also once pushed a bill to make the terrible CFAA even worse, and when called out on how much damage it would do, he blamed the “pro-botnet, pro-foreign cyber criminal caucus.” And, more recently, he pushed an obviously unconstitutional bill that would have required search engines to block searches that lead to information about illegal drugs.

Are you sensing a pattern yet?

Anyway, let’s get to Whitehouse’s book chapter. The premise is that (1) social media is bad, and (2) it’s because of Section 230, so (3) we should get rid of Section 230, and (4) while it’s possible that there would be some negative impacts of that, he has a “narrower” version of Section 230 that can replace it. Basically all four prongs of the plan are bad, and confused about reality.

The article starts out with the expected “social media is bad because the people on it say bad things” line of reasoning, cherry-picking from the usual pile of “social media bad” tropes:

Social media platforms—companies that facilitate information sharing through virtual networks—have shielded themselves more than any other media from responsibility for destructive content they house and propagate.

First off… what? Again, multiple studies have shown that cable news spreading bogus reporting has been way more responsible for misinforming the public. Indeed, there’s a whole book detailing, with very thorough data, how the viral stories go viral because of cable news, and not the internet. But, Whitehouse isn’t concerned with being factual. He’s here to burn down social media with disinformation.

Second, his “evidence” to support this claim is a footnote… linking to a Newsweek article that doesn’t even make the claim Whitehouse pretends it does. I’m guessing he didn’t think anyone would bother to check his footnotes, but the article is just about an advertising boss, who is not exactly an unbiased player, saying some nonsense about whether tech companies are “platforms or publishers,” which is a nonsense distinction.

They claim that their algorithms simply promote whatever is selected by the collective wisdom of the public, and that they lack the resources or expertise to identify and remove unlawful or untruthful content. But the truth is they are not neutral or incapable observers. Social media companies spread disinformation, exacerbate preexisting biases, and disseminate unlawful content because of deliberate, profit-seeking choices. These platforms choose how to structure their services; what content to allow or disallow; what content to promote; what ads to sell, and to whom; and how they connect advertising to the content users consume or create.

Then what’s your excuse for pushing this disinformation, Senator? This whole paragraph is misleading disinfo. No one claims that the platforms are “neutral,” but it is accurate that when you have an open platform for billions of people to communicate, some of those users are going to push garbage. No one claims that they can’t deal with any of it. Every platform puts in place rules and enforcement to try to deal with as much of it as they can, but the impossible part is actually being able to deal with all of it. And it’s not just because of “profit-seeking choices.” But if it were, then rather than go after Section 230, why not go after Wall Street and its demands for short-term profit maximization from public companies?

And, again, it’s people who spread disinformation. As noted above, cable news does it much more impactfully than social media does. But Whitehouse isn’t looking to regulate cable news, because he recognizes the 1st Amendment problems with that. Unfortunately, he ignores them when it comes to social media.

Also, many of the other complaints are exaggerated or misleading in their own way. Take the claim about “exacerbating preexisting biases”: the evidence there… suggests otherwise. As we wrote about last year, the evidence suggests that the internet actually doesn’t push people into echo chambers, but rather the reverse. The only “truth” to the idea that it exacerbates preexisting biases is that introducing people to other views leads some of them to react negatively and dig deeper into those biases. That’s not the fault of social media. That’s the fault of bad education and people who are scared of new and different ideas. Kinda like Senator Whitehouse, who seems to keep exacerbating his preexisting biases against the usefulness of the internet.

So, we’re already off to a not great start, but it quickly gets much, much worse. Because, while lots of people insisted that Whitehouse had a serious plan to “repeal and replace” Section 230 with “something better,” in the chapter, he suggests a total repeal would be the best overall solution:

I should say at the beginning that I would support simply repealing Section 230 and letting courts sort it out. This has the advantage of legislative simplicity and speed. It also minimizes the hand of Congress in an area that relates to speech, where our own political motives—whether of incumbency or party—create their own hazards. Better to minimize Congress’s hand in this.

So, yeah, I agree with minimizing Congress’s hand in this, and he’s correct about the problems of Congress meddling in areas relating to speech, where political motives may take control, but it’s utter madness to suggest that repealing it and “letting courts sort it out” is any kind of reasonable solution.

First, the argument that it’s faster is similar madness. If we simply removed Section 230 today, tomorrow the courts would be flooded with tons of ridiculous and frivolous cases blaming social media for basically everything; it would take years to sort out the mess, and it would be incredibly costly for everyone sued. And while totally out-of-touch Senators like Senator Whitehouse can dismiss this concern by saying that the big tech companies have plenty of money and lawyers to deal with it, that ignores the fact that Section 230 protects tons of smaller sites that would also get caught up in this, and many would go out of business trying to defend against these frivolous lawsuits.

It is the height of elitist nonsense to say “let’s just create massive turmoil” for every website on the internet, because I don’t like the internet. But that’s what Senator Whitehouse does here. It’s obnoxious. It’s based on fallacious assumptions, and it’s dangerous.

Senator Whitehouse suggests that this mess would be relatively short-lived:

Most of the questions that would come up in court post-repeal would find ready answers in existing legal doctrines, with familiar structures and duties. Repeal is not a ticket to an alien legal environment; it’s actually a return to established legal norms.

Tell me you’ve never been a small business sued by frivolous grifters without telling me you’ve never been a small business sued by frivolous grifters. As far as I can tell, Whitehouse, whose great-great-grandfather was a railroad magnate, and whose parents and grandparents were career diplomats, has never worked in the private sector at all. He got a law degree, became a law clerk, and then went straight to work in the government. It is the height of hubris to suggest that a ton of small, struggling businesses should have to go through years of expensive litigation because you’re mad that some people on the internet aren’t nice.

From there, Whitehouse insists that without Section 230, “disinformation” directed at individuals would be solved by lawsuits:

Where disinformation targets an injured individual, liability law will usually clean up the mess.

And here we begin to realize the root of the problem: Senator Sheldon Whitehouse does not understand the 1st Amendment. Most disinformation is protected speech under the 1st Amendment, Senator. There are some exceptions, but they are pretty limited, because we protect freedom of speech. What’s missing from Whitehouse’s analysis (and this becomes a bigger deal later in the piece) is any recognition that there is no underlying cause of action for most disinformation, because it’s protected under the 1st Amendment.

He then offers a weird, and somewhat misleading, history of Section 230, followed by a very, very, very misleading explanation claiming that social media sites have no interest in stopping the spread of disinformation on their platforms. This is simply untrue, and fails to reckon with the very real challenges and tradeoffs in trust and safety, and how these companies have tried to balance them.

Like most interlopers with zero experience in the field, Senator Whitehouse writes as if there’s some easy solution. Much of this part of his chapter reads as a Senator with no experience in healthcare saying “the solution is easy, we just need to ban health insurance, and everything will sort itself out.” That’s not how anything works, and your “solution” completely ignores all the work of tons of experts who have spent decades actually making the internet better.

And, again and again, Whitehouse seems to struggle with understanding what is protected speech under the 1st Amendment. In a section of his chapter about “illegal content,” he repeatedly presents examples of content that, while bad, is not illegal. He talks about disinformation and misinformation. He talks about anti-vax content. He talks about false COVID info. He talks about “climate denial.” We can all agree that this information is problematic, but it’s all very much protected.

And… removing Section 230 would actually make it that much more difficult to deal with. Section 230 is what allows different companies to experiment with different approaches and figure out what works. It’s why all of the major social media companies have implemented increasingly beneficial policies for handling this stuff, and why they can adapt rapidly. Without it, every single change would require a review by a risk-averse legal team, which would greatly limit companies’ ability to keep ahead of malicious actors on their sites.

This is obvious to anyone who has worked on these issues, which does not include Senator Whitehouse.

Whitehouse continues to insist that companies aren’t even trying to do anything about these issues, which is an insult to their trust and safety teams who actually do work incredibly hard to put in place and effectively implement policies and enforcement to improve the various websites. Whitehouse’s entire chapter is an insult to all the work those teams put in.

Finally, towards the end of the chapter, after he suggests a full-on repeal, and after making a ton of false claims (disinformation?), Whitehouse does admit that just getting rid of Section 230 might have some negatives. It’s almost like halfway through the chapter, someone else took over writing it. He notes that repealing 230 would leave a lot of uncertainty. He notes that misinformation often isn’t legally actionable (something he then forgets later in the chapter). He even admits that Congress can’t just outlaw misinformation because of the 1st Amendment (though he sort of hides the ball by not directly mentioning the 1st Amendment, and instead just talking about “strict scrutiny”):

Congress can’t readily solve these problems by creating new causes of action. Causes of action based on the content of speech — for example, a new cause of action for knowingly publishing misinformation online — will be subject to strict scrutiny in court. Many statutes seeking to criminalize cyberbullying or other online speech have been struck down on vagueness grounds.

Vagueness isn’t the problem here. The 1st Amendment is.

He also points out, correctly, that “unlimited liability could privilege wealthy special interests.” That, of course, is the point that some of us keep trying to highlight. If you want to lock in Facebook and Google, get rid of Section 230. They have the lawyers and the bank accounts to deal with the fallout. No one else does. Whitehouse actually is correct in saying this… but then seems to immediately forget about it and not care about it:

If Section 230 is repealed without additional guidelines, an already unbalanced information ecosystem could be unbalanced further as platforms yield to legal pressure from big, deliberate manipulators of information. Powerful special interests can bring lawsuits they are unlikely to win in order to scare off social media companies in terms of how they police certain content.

[…]

The ultimate success of a lawsuit, however, may not matter to well-funded interests with the means to threaten nuisance suits, and there are other doctrines of tort law that could be used to frame a dispute or a threatened dispute.

And, of course, the benefit to the biggest companies is mentioned as well:

Trillion-dollar social media companies could be beneficiaries as well as victims of nuisance litigation. With an abundance of resources at their disposal, Google and Facebook can easily afford to litigate. This gives them an incumbency advantage: New social media startups cannot afford to spend millions of dollars on litigation. Startups also can’t afford to spend millions of dollars developing automoderation mechanisms.

This is… all accurate. Yet most of the chapter, both before and after this part, completely ignores this. It’s almost as if he handed off his pen to someone more well informed, and they added this section which is then otherwise ignored.

You’d think that whoever wrote the parts above would then recognize why Section 230 is actually useful. But instead, it leads Whitehouse to finally lay out his “proposal.” Which is basically… a DMCA for misinformation, plus some additional transparency mandates.

The best solution would be for Congress to require a “notice-and-takedown” system removing Section 230 protections when a company willfully refuses to remove unlawful content. As part of this system, major social media platforms should maintain an “acceptable use” policy, explain how the platform enforces its content moderation policies, and describe the methods of reporting content or speech that violates policies or other laws. They should notify users when their content is taken down, and give users a forum for appeal if they’ve been wrongly removed or if the company has failed to act.

All of these ideas have been suggested elsewhere, and all of them have very, very serious challenges and tradeoffs Whitehouse does not grapple with. First, we already have a DMCA notice-and-takedown provision in copyright that is massively abused to try to silence speech. A big study of how well notice-and-takedown works in everyday practice… shows that it doesn’t. As that paper notes, they found “surprisingly high percentages of notices of questionable validity.” Expanding the notice-and-takedown system much more broadly without recognizing how the existing one already puts a burden on protected speech should be a non-starter.

Also, note the sleight-of-word trickery in this paragraph. He notes that the notice-and-takedown provision should cover both “speech that violates policies or other laws.” But… policies are not laws. There’s an important difference there. Speech that violates laws is an issue, and almost every website will actually remove it once it’s proven that the speech is violative. The problem is that people like Whitehouse assume that there’s some obvious marker of “illegal speech” as compared to speech that is legal, but there is not. There’s a reason why we have a court system that takes a lot of time and back and forth with hearings and trials and evidence and juries to determine if something breaks the law. This proposal ignores all that, and assumes that a website can just tell, based on a report, if some speech breaks a law. That’s not how any of this actually works.

And if we’re just talking about internal policies, well, companies already try to remove content that violates those policies once they’re informed of it, but the problem is that people with zero experience in this field (like Senator Whitehouse) don’t understand that it’s rarely as obvious as they think whether or not content actually violates policies. Sometimes it is, but the biggest challenge is that so much content sits in a gray area where it’s not really clear whether it breaks the rules or not, and many judgment calls are made, often by understaffed, overworked teams with little time to judge the context or nuance of the issues at play.

Whitehouse doesn’t care. He assumes, incorrectly, that companies are deliberately ignoring this content, when the reality is that they’re trying to figure out ways to enforce policy across billions of people, in which every scenario is impossible to fully understand, and millions of judgment calls are made every day.

As for the demands for an appeals process and clear explanations, again, that is massively burdensome for smaller players. Techdirt removes between 500 and 1,000 spam comments per day. Am I supposed to contact each spammer to let them know, and share with them our “appeals” process? Fuck that.

He has a few more suggestions that are just as confused, and just as untethered from reality. For example, he insists that Section 230 should not apply to algorithmic recommendations (which the Supreme Court might solve for him this term). But that would basically destroy search on any website, not to mention many other important and useful algorithms.

He keeps forgetting that most misinformation is constitutionally protected. For example, he says:

The threat of legal liability, for example, could make Facebook and other companies more likely to adopt measures that stop the spread of misinformation even if they also reduce user engagement, as failing to act would carry its own financial risks.

But, as even he admitted earlier in the chapter, misinformation is mostly protected under the 1st Amendment. So what possible cause of action could there be for failing to remove misinformation? Also, this sentence completely ignores the other statements in this very chapter highlighting how powerful interests will use the threat of litigation to pressure companies into hiding legitimate information they want hidden, as well as the vastly different scenarios for smaller competitors that don’t have Facebook’s legal team and bank account.

The whole chapter is weirdly disconnected from reality. There is that weird bit in the middle that accurately highlights many (but not all) of the problems with his own proposal, which he then completely ignores.

Senator Whitehouse, who has never worked for a private company and has no experience with the internet other than proposing and supporting ridiculously bad internet regulations, does not understand the problem at all (he misrepresents why content moderation is such a challenge, blaming it on profit-seeking rather than the complexities of human beings and society). And his proposed solutions come with many negatives that he fails to grapple with. And, even when he does grapple with some of the downsides to his proposals… he then just ignores them, as if he hadn’t even mentioned them.

It’s the worst of political nonsense. It’s grandstanding on an issue he doesn’t understand, with a solution that will not work and will make the actual issues that much worse. This is not good policymaking at all.

Filed Under: 1st amendment, content moderation, disinformation, free speech, misinformation, section 230, sheldon whitehouse

Five Senators Agree: Search Engines Should Censor Drug Information

from the foot-in-the-door-for-greater-government-control-of-web-content dept

The US government would like to be involved in the web censorship business. The anti-sex trafficking bill recently passed by the House would do just that, forcing service providers to pre-censor possibly harmless content out of fear of being sued for the criminal acts of private citizens. Much has been made recently of “fake news” and its distribution via Russian bots, with some suggesting legislation is the answer to a problem no one seems to be able to define. This too would be a form of censorship, forcing social media platforms to make snap decisions about new users and terminate accounts that seem too automated or too willing to distribute content Congressional reps feel is “fake.”

For the most part, legislation isn’t in the making. Instead, reps are hoping to shame, nudge, and coerce tech companies into self-censorship. This keeps the government’s hands clean, but there’s always the threat of a legal mandate backing legislators’ suggestions.

Key critic of Russian bots and social media companies in general — Senator Dianne Feinstein — has signed a handful of letters asking four major tech companies to start censoring drug-related material. Her co-signers on these ridiculous letters are Chuck Grassley, Amy Klobuchar, John Kennedy, and Sheldon Whitehouse. As members of the Senate Caucus on International Narcotics Control, they apparently believe Microsoft, Yahoo (lol), Pinterest, and Google should start preventing users from searching for drug information. (h/t Tom Angell)

The letters [PDFs here: Google, Yahoo, Microsoft, Pinterest] all discuss the search results returned when people search for information on buying drugs. (For instance, “buy percocet online.”) But the letter doesn’t limit itself to asking these companies to ensure only legitimate sites show up in the search results. It actually asks the companies to censor all results for drug information.

The senators specifically urge Google, Microsoft, Yahoo and Pinterest to take the following steps in helping us fight the opioid crisis:

It’s the second bullet point that’s key. It simply says “disable the ability to search for illicit drugs.” There’s no way to comply with that directive that won’t result in the disappearance of useful information needed by thousands of search engine users. As Angell points out in this tweet, this would possibly cause information about drug interactions to be delisted. On top of that, students often need to research illegal drugs for class assignments and term papers. Authors and journalists also need access to a variety of drug info, including various ways they can be purchased online. Law enforcement Googles stuff just like the rest of us and its ability to track down purveyors of illegal drugs would be harmed if it was all pushed off the open web.

Those seeking to buy illegal drugs would find other ways of accomplishing this even if the info disappears. The so-called dark web is an off-the-radar option that many are using already. A whole host of useful info is in danger of being removed simply because questionable purveyors of prescription drugs have found a way to game search engine algorithms.

All of the companies receiving letters already have policies in place to restrict the illicit sale of drugs. They also have policies in place to forward pertinent info to law enforcement agencies. So, companies are already doing much of what is asked, but these senators feel the mere existence of questionable sites in search results makes these companies “facilitators” of illegal drug sales.

If SESTA is signed into law, it will make it that much easier for the government to demand similar legislation targeting opioid distribution. It will allow the government to claw back more of the immunity granted to service providers with the passage of the Communications Decency Act. The more holes drilled into Section 230 by legislation, the easier it is to remove it entirely, and paint targets on the back of search engines and social media platforms.

It’s also dangerous to suggest companies need to set up dedicated 24/7 service for law enforcement agencies. This will only encourage law enforcement to bypass legal protections set up by previous legislation and lean on companies already feeling the heat from the government’s increasingly insane reaction to opioid overdoses. Warrants will seem unnecessary when legislators in DC are saying tech companies must be more responsive to law enforcement than they already are.

A suggestion from the government to start censoring search results is exactly that: censorship. The government may not be mandating it, but this is nothing like a concerned citizens group asking for more policing of search results. There’s the threat of legislation and other government action propelling it. Even if these senators aren’t mandating policy changes, they’re still using the weight of their position to compel alteration of search results.

Filed Under: amy klobuchar, censorship, chuck grassley, dianne feinstein, drugs, first amendment, free speech, john kennedy, search, search engines, sheldon whitehouse
Companies: google, microsoft, pinterest, yahoo

Botnet Bill Could Give FBI Permission To Take Warrantless Peeks At The Contents Of People's Computers

from the mind-if-we-take-a-look-around,-they-asked-never dept

In a recent ruling in a child porn investigation case, a judge declared that the FBI’s Network Investigative Technique (NIT) — which sent identifying user info from the suspect’s computer to the FBI — was the equivalent of a passing cop peering through broken blinds into a house.

[I]n Minnesota v. Carter, the Supreme Court considered whether a police officer who peered through a gap in a home’s closed blinds conducted a search in violation of the Fourth Amendment. 525 U.S. 83, 85 (1998). Although the Court did not reach this question, id at 91, Justice Breyer in concurrence determined that the officer’s observation did not violate the respondents’ Fourth Amendment rights. Id at 103 (Breyer, J., concurring). Justice Breyer noted that the “precautions that the apartment’s dwellers took to maintain their privacy would have failed in respect to an ordinary passerby standing” where the police officer stood.

What would normally be awarded an expectation of privacy under the Fourth Amendment becomes subject to the “plain view” warrant exception. If a passerby could see into the house via the broken blinds, there’s nothing to prevent law enforcement from enjoying the same view — and acting on it with a warrantless search.

Of course, in this analogy, the NIT — sent from an FBI-controlled server to unsuspecting users’ computers — is the equivalent of a law enforcement officer first entering the house to break the blinds and then claiming he saw something through the busted slats.

The DOJ may be headed into the business of breaking blinds in bulk. Innocuous-sounding legislation that would allow the FBI to shut down botnets contains some serious privacy implications.

Senators Whitehouse (D-RI), Graham (R-SC), and Blumenthal (D-CT) introduced the Botnet Prevention Act in May, which (among other things) amends the portion of federal law (18 U.S.C. § 1345) that authorizes these injunctions. The bill would expand § 1345 by adding violations of a section of the Computer Fraud and Abuse Act (“CFAA”) that covers botnets (and more) to the list of offenses that trigger the DOJ’s ability to get an injunction.

More specifically, it would allow injunctions in all violations or attempted violations of subsection (a)(5) of the CFAA that result or could result in damage to 100 or more computers in a year, including any case involving the “impair[ment of] the availability or integrity of the protected computers without authorization,” or the “install[ation] or maintain[nance of] control over malicious software on the protected computers” that “caused or would cause damage” to the protected computers.

It only sounds like a good idea: the government riding to the rescue of unaware computer users whose devices have been pressed into service by malware purveyors and criminals. But, as Gabe Rottman of CDT points out, there’s some vague wording in the existing law that would undercut important Fourth Amendment protections when used in conjunction with the DOJ’s botnet-fighting powers.

Buried deep within § 1345(b) is a single phrase that could open up a number of thorny issues when this injunctive authority is applied to botnets. The section not only allows the government to obtain a restraining order that stops someone from doing something nefarious, but also an order that directs someone to “take such other action, as is warranted to prevent a continuing and substantial injury . . . .”

Rottman points to the FBI’s 2011 shutdown of the Coreflood botnet. After obtaining a restraining order under the federal rule, the FBI used its own server to issue commands to infected computers, halting further spread of the malware and shutting down the software on infected host devices. Again, this seems like a good use of the government’s resources until you take a closer look at what’s actually happening when the FBI does this sort of thing.

The court hearing the Coreflood case accepted the government’s argument that the “community caretaker” doctrine allowed the transmission of the shutdown order, as the action was “totally divorced from the detection, investigation, or acquisition of evidence relating to the violation of a criminal statute.” At the time, the government likened its actions to a police officer who, while responding to a break-in, finds the door to a house open or ajar and then closes it to secure the premises.

The “community caretaker” function is one exception to warrant requirements. Accessing people’s computers without their permission under these auspices allows the FBI to avail itself of a second warrant exception.

In order to scrub private computers for malware, the government would, by necessity, have to search the computer and its contents for the malware. Once the door is ajar, rather than closing it, the police would actually “walk in” to the computer. And anything they find in “plain view” can be used as evidence of a crime. Nothing in the current version of the bill would prevent such a search or collection, giving the government the potential means to search countless computers of victims of the botnet (not the perpetrators) without a warrant.

While these are both valid exceptions to warrant requirements, they’ve never been deployed on this sort of scale. Officers can perform community caretaker functions that may result in contraband being discovered in plain view. When the FBI takes on a botnet, however, it will have access to potentially thousands of computers at a time and the legislated permission to not only “enter” these computers, but to take a look around at the contents.

The Fourth Amendment was put into place to end the practice of general warrants. The FBI’s botnet-fighting efforts turn court-ordered injunctions into digital general warrants, only without the pesky “warrant” part of the phrase. And, unlike other warrants, the proposed legislation would do away with another Fourth Amendment nicety: notification.

As CDT noted in its comments on the Rule 41 change mentioned above, potentially as many as a third of computers in the United States are infected with some form of malware. And, botnets are extremely hard to clean up, especially when you depend on victims to voluntarily submit their computers for cleaning. Given this reality, unless notice is required by statute, law enforcement would have an incentive to dispense with notice in the much wider array of shutdowns permitted under the Graham-Whitehouse bill.

The bill has only been introduced and there’s no forward motion as of yet. It’s in need of serious repair before it heads further up the legislative chain. As it’s written, there’s nothing standing between people’s personal files and a host of digital officers wandering through virtual houses in search of malware and searching/seizing anything else that catches their eye.

Filed Under: botnet, botnet prevention act, congress, fbi, hacking, lindsey graham, richard blumenthal, sheldon whitehouse, warrants

Dear Sheldon Whitehouse: Do You Really Mean To Put Activists In Jail?

from the questions-to-ask dept

Last week, we noted that Senator Sheldon Whitehouse (who has a bit of a history of really flubbing key tech issues), was downright angry that people were pushing back on his plans to expand the possible punishment under the widely abused Computer Fraud and Abuse Act (CFAA). Whitehouse pulled out the usual tropes, saying that the DOJ supports his amendment, and he couldn’t understand why anyone could possibly be against ratcheting up punishment for “cyber criminals.” Except, that shows an astounding level of ignorance on the part of Senator Whitehouse, because the CFAA is regularly used against people who most would not consider to be traditional “cyber criminals.”

A group of activist organizations recently sent Senator Whitehouse a letter protesting his CFAA Amendment, and noting that while he thinks it’s just being used against nefarious computer hackers, that’s not the case. The CFAA is regularly used against plenty of others as well, including some people that Senator Whitehouse might even support. For example, the letter points to a case from a few years ago, Pulte Homes v. Laborers’ International Union, in which the 6th Circuit decided that a union telling its members to call and email a company they were protesting violated the civil portion of the CFAA. The case involved the union telling its members to call and email Pulte Homes, which apparently slowed down their computer systems, and the court ruled that was enough “intentional damage” to qualify as a CFAA violation:

The following allegations illustrate LIUNA’s objective to cause damage: (1) LIUNA instructed its members to send thousands of e-mails to three specific Pulte executives; (2) many of these e-mails came from LIUNA’s server; (3) LIUNA encouraged its members to “fight back” after Pulte terminated several employees; (4) LIUNA used an auto-dialing service to generate a high volume of calls; and (5) some of the messages included threats and obscenity. And although Pulte appears to use an idiosyncratic e-mail system, it is plausible LIUNA understood the likely effects of its actions — that sending transmissions at such an incredible volume would slow down Pulte’s computer operations. LIUNA’s rhetoric of “fighting back,” in particular, suggests that such a slow-down was at least one of its objectives. The complaint thus sufficiently alleges that LIUNA — motivated by its anger about Pulte’s labor practices — intended to hurt Pulte’s business by damaging its computer systems.

This seems notable, because sending lots of calls and emails is a pretty standard activist/protest mechanism, used all the time. The idea that it might be considered a CFAA violation is immensely troubling. And it should be doubly so for Senator Whitehouse, given that he’s received tremendous support from unions in the past. According to Open Secrets, two of his largest funders are the Service Employees International Union and the Teamsters. He’s also received lots of money from Ocean Champions, an activist group focused on protecting our oceans. And, yet, at the same time, he seems to be ratcheting up a law that might be used against some of their activism. Meanwhile, his PAC has received lots of support from unions as well, including the Sheet Metal Workers Union and the International Brotherhood of Electrical Workers.

As the letter from the activist groups notes:

No government that respects the public’s right to speak and organize should ban such behavior, yet that is precisely what the CFAA has been interpreted to do. That activists must work under fear of such exposure redounds to the benefit of corporations that are exploitative of their workers, polluters, and others who stand opposed to our shared progressive values.

The letter further notes that one of the key parts of Senator Whitehouse’s amendment, about “botnets,” will almost certainly be used against online activists, who seek to bring a bunch of internet users together to speak out. Given the way Senator Whitehouse reacted last week, it seems unfortunately likely that this letter will fall on deaf ears. But it’s important to recognize that just because you claim a law will be used against “cyber criminals” doesn’t mean that it won’t be used to stifle things like activism that you might actually support.

Filed Under: activists, cfaa, hacking, sheldon whitehouse, unions

Sheldon Whitehouse Freaks Out, Blames 'Pro-Botnet Lobby' For Rejecting His Terrible CFAA Amendment

from the the-pro-botnet-lobby-is-here dept

As we mentioned yesterday, one of the (many) bad things involved in the new Senate attempt to push the CISA “cybersecurity” bill forward was that they were including a bad amendment added by Senator Sheldon Whitehouse that would expand the terrible Computer Fraud and Abuse Act, a law that should actually be significantly cut back. Senator Ron Wyden protested this amendment specifically in his speech against CISA. And, for whatever reason, Whitehouse’s amendment has been pulled from consideration and Whitehouse is seriously pissed off about it.

He went on the Senate floor to directly whine about it, even sarcastically calling out the “hidden pro-botnet, pro-foreign cyber criminal caucus” that somehow fought against the bill. Except it wasn’t any “pro-botnet” caucus that killed the amendment. It was a lot of people who were quite reasonably concerned about what the amendment would do to the CFAA. And while it’s true that Whitehouse improved the amendment from its originally really terrible state, it still was a bad amendment. Whitehouse goes on and on in his rant about who could possibly be “against” shutting down botnets or raising penalties for hacking into critical infrastructure, noting that “law enforcement” supports the bill. But, of course, that leaves out the other side entirely. And that’s not the “pro-botnet, pro-foreign cyber criminal” caucus, but rather people who are well aware of how the CFAA has regularly been abused by law enforcement to bring charges against non-criminals, or to pile charges on those committing minor offenses. Expanding all of that without stopping the potential for abuse only means the bill will be abused further.

Whitehouse continues to make a name for himself as one of the most technologically illiterate members of the Senate. Late last year he went on a rant about a totally made up Google search (the results did not show what he claimed they showed) and an equally made up Pirate Bay whose actual site did not show what Whitehouse pretended it showed. He also was strongly in favor of backdooring encryption, arguing that if Apple doesn’t backdoor encryption, perhaps it will be opening itself up to a lawsuit when the FBI can’t track down a kidnapper (ignoring all the times that such encryption would actually protect people). This push to expand the CFAA and then whining about pushback on the Senate floor is only adding to his reputation as one of the most anti-tech industry Senators out there.

And, of course, for all the show on the floor, it’s not like the Amendment is dead anyway. As Marcy Wheeler notes in her post (linked above), there’s still a good chance that his CFAA amendment will be brought back into the bill when the House and Senate conference to resolve differences in the bills across houses.

Filed Under: botnet caucus, botnets, cfaa, cisa, cybersecurity, hacking, sheldon whitehouse

Senate Pushes Forward With CISA As Internet Industry Pulls Its Support

from the what-are-they-thinking? dept

Despite the fact that most of the internet industry has recently come out against the ridiculous faux-cybersecurity bill CISA, the Senate today began the process of moving the bill forward with a debate. The arguments were pretty much what you’d expect. The supporters of the bill, such as Senators Dianne Feinstein and Richard Burr, went on and on about how the bill is “voluntary” and about various online hacks (none of which would have been stopped by CISA — but apparently those details don’t matter). Senator Ron Wyden responded by pointing to all the internet companies coming out against the bill, and saying (accurately) that they’re doing so because they know the public no longer trusts many of those companies, and they don’t want a bill that will almost certainly be used for further surveillance efforts.

Amazingly, Burr shot back with a really dishonest and misleading claim that companies that don’t agree to “share” information with the government are the ones harming their users by somehow not protecting their info. That’s fairly incredible. The reason that companies don’t want to share info is because no one — the companies or the public — trust the government to not abuse the information. To turn that around and pretend that sharing the info with the government is likely to better protect user information is laughable.

The fact that the internet companies have finally come out against CISA is a really big deal. For the past few years, they’ve remained pretty quiet on it and related bills, because it would have granted them immunity from liability for participating in the program. So, for the tech companies, it was tough to argue against the bill, since it just protected them from legal liability. Yet, in the last few weeks, many internet companies and industry associations have (finally) spoken out against the bill, noting that it actually puts their users’ privacy at risk. This also helps highlight how the claim that this is all “voluntary” is a myth, and the companies recognize that they will likely be pressured into sharing information.

Meanwhile, a bunch of amendments have been introduced along with CISA… including an absolutely terrible amendment introduced by Senator Sheldon Whitehouse that would revamp an unrelated law, the infamous CFAA, which needs to be reformed. Except that the Whitehouse amendment makes the CFAA worse, not better.

There’s still plenty of process to occur, but the ball is now rolling. There will likely be some fights and votes in the next few days, but if you don’t think CISA (or this horrible CFAA amendment) should pass, now would be a good time to call your two Senators and let them know to oppose this.

Filed Under: cfaa, cisa, cybersecurity, dianne feinstein, information sharing, richard burr, ron wyden, sheldon whitehouse

Insanity Rules: NSA Apologists Actually Think Apple Protecting You & Your Data Could Be 'Material Support' For ISIS

from the this-is-wrong dept

A few weeks ago, we pointed out that Senator Sheldon Whitehouse led the way with perhaps the most ridiculous statement of any Senator (and there were a lot of crazy statements) in the debate over encryption and the FBI’s exaggerated fear of “going dark.” He argued that if the police couldn’t find a missing girl (using a hypothetical that not only didn’t make any sense, but which also was entirely unlikely to ever happen), then perhaps Apple could face some civil liability for not allowing the government to spy on your data. Here’s what he said:

It strikes me that one of the balances that we have in these circumstances, where a company may wish to privatize value — by saying “gosh, we’re secure now, we got a really good product, you’re gonna love it” — that’s to their benefit. But for the family of the girl that disappeared in the van, that’s a pretty big cost. And, when we see corporations privatizing value and socializing costs, so that other people have to bear the cost, one of the ways that we get back to that and try to put some balance into it, is through the civil courts. Through the liability system. If you’re a polluter and you’re dumping poisonous waste into the water rather than treating it properly somebody downstream can bring an action and can get damages for the harm they sustained, can get an order telling you to knock it off.

You can read our longer analysis of how wrong this is, but in short: encryption is not pollution. Pollution is a negative externality. Encryption is the opposite of that. It’s a tool that better protects the public in the vast majority of cases. That’s why Apple is making it so standard.

The suggestion was so ridiculous and so wrong that we were surprised that famed NSA apologist Ben Wittes of the Brookings Institution found Whitehouse’s nonsensical rant “interesting” and worthy of consideration. While we disagree with Wittes on nearly everything, we thought at the very least common sense would have to eventually reach him, leading him to recognize that absolutely nothing Whitehouse said made any sense (then again, this is the same Wittes who seems to have joined the magic unicorn/golden key brigade — so I’m beginning to doubt my initial assessment that Wittes is well-informed but just comes to bad conclusions).

However, even with Wittes finding Whitehouse’s insane suggestion “interesting,” it’s still rather surprising to see him find it worthy of a multi-part detailed legal analysis for which he brought in a Harvard Law student, Zoe Bedell, to help. In the first analysis, they take a modified form of Whitehouse’s hypothetical (after even they admit that his version doesn’t actually make any sense), but still come to the conclusion that the company “could” face civil liability. Though, at least they admit plaintiffs would “not have an easy case.”

The first challenge for plaintiffs will be to establish that Apple even had a duty, or an obligation, to take steps to prevent their products from being used in an attack in the first place. Plaintiffs might first argue that Apple actually already has a statutory duty to provide communications to government under a variety of laws. While Apple has no express statutory obligation to maintain the ability to provide decrypted information to the FBI, plaintiffs could argue that legal obligations it clearly does have would be meaningless if the communications remained encrypted.

To make this possible, Bedell and Wittes try to read into various wiretapping and surveillance laws a non-existent duty to decrypt information from your mobile phone. But that’s clearly not true. If that actually existed, then we wouldn’t be having this debate right now in the first place, and FBI Director James Comey wouldn’t be talking to Congress about changing the law to require such things. But, still, they hope that maybe, just maybe, a court would create such a duty out of thin air based on things like “the foreseeability of the harm.” Except, that’s going to fall flat on its face, because the likelihood of harm here goes the other way. Not encrypting your information leads to a much, much, much greater probability of harm than encrypting your data and not allowing law enforcement to see it.

Going to even more ridiculous levels than the “pollution” argument, this article compares Apple encrypting your data to the potential liability of the guy who taught the Columbine shooters how to use their guns:

For example, after the Columbine shooting, the parents of a victim sued the retailer who sold the shooters one of their shotguns and even taught the shooters how to saw down the gun’s barrel. In refusing to dismiss the case, the court stated that “[t]he intervening or superseding act of a third party, . . . including a third-party’s intentionally tortious or criminal conduct[,] does not absolve a defendant from responsibility if the third-party’s conduct is reasonably and generally foreseeable.” The facts were different here in some respects — the Columbine shooters were under-age, and notably, they bought their supplies in person, rather than online. But that does not explain how two federal district courts in Colorado ended up selecting and applying two different standards for evaluating the defendant’s duty.

But it’s even more different than that. Even with this standard — which many disagree with — there still needs to be “conduct” that is “reasonably and generally foreseeable.” And it’s not the case here that it is “reasonably and generally foreseeable” that, because data is encrypted, people will be at more risk. In all these years, the FBI still can’t come up with a single example where such encryption was a real problem. It would be basically impossible to argue that this is a foreseeable “problem,” especially when weighed against the very real and very present problem of people trying to hack into your device and get your data.

In the second in the series, Bedell and Wittes go even further in looking at whether or not Apple could be found to have provided material support to terrorists thanks to encryption. If this sounds vaguely familiar, remember a similarly ridiculous claim not too long ago from a music industry lawyer and a DOJ official that YouTube and Twitter could be charged with material support for terrorism because ISIS used both platforms.

Bedell and Wittes concoct a scenario in which a court might argue that providing a phone that can encrypt a terrorist’s data opens the company up to liability:

In our scenario, a plaintiff might argue that the material support was either the provision of the cell phone itself, or the provision of the encrypted messaging services that are native on it. Thus, if a jury could find that providing terrorists with encrypted communications services is just asking for trouble, then plaintiffs would have satisfied the first element of the definition of international terrorism in § 2331, a necessary step for making a case for liability under § 2333.

Of course, this is wiped out pretty quickly because that law requires intent. The authors note that this would “pose a challenge” to any plaintiff “as it would appear to be difficult, if not impossible, to prove that Apple intended to intimidate civilians or threaten governments by selling someone an iPhone…”

You think?

But, our intrepid NSA apologists still dig deeper to see if they can come up with a legal theory that will actually work:

But again, courts have handled this question in ways that make it feasible for a plaintiff to succeed on this point against Apple. For example, when the judge presiding over the Arab Bank case considered and denied the bank’s motion to dismiss, he shifted the analysis of intimidation and coercion (as well as the question of the violent act and the broken criminal law) from the defendant in the case to the group receiving the assistance. The question for the jury was thus whether the bank was secondarily, rather than primarily, liable for the injuries. The issue was not whether Arab Bank was trying to intimidate civilians or threaten governments. It was whether Hamas was trying to do this, and whether Arab Bank was knowingly helping Hamas.

Judge Posner’s opinion in Boim takes a different route to the same result. Instead of requiring a demonstration of actual intent to coerce or intimidate civilians or a government, Judge Posner essentially permits the inference that when terrorist attacks are a “foreseeable consequence” of providing support, an organization or individual knowingly providing that support can be understood to have intended those consequences. Because Judge Posner concludes that Congress created an intentional tort, § 2333 in his reading requires the plaintiff to prove that the defendant knew it was supporting a terrorist or terrorist organization, or at least that it was deliberately indifferent to that fact. In other words, the terrorist attack must be a foreseeable consequence of the specific act of support, rather than just a general risk of providing a good or service.

But even under those standards, it’s hard to see how Apple could possibly be liable for material support. It’s just selling an iPhone and doing so in a way that — for the vast majority of its customers — is better protecting their privacy and data. It would take an extremely twisted mind and argument to turn that into somehow “knowingly” helping terrorists or creating a “foreseeable consequence.” At least the authors admit that much.

But why stop there? They then say that Apple could still be liable after the government asks it to decrypt messages. If Apple doesn’t magically stop that particular user from encrypting messages, then, they claim, Apple could be shown to be “knowingly” supporting terrorism.

The trouble for Apple is that our story does not end with the sale of the phone to the person who turns out later to be an ISIS recruit. There is an intermediate step in the story, a step at which Apple’s knowledge dramatically increases, and its conduct arguably comes to look much more like that of someone who — as Posner explains — is recklessly indifferent to the consequences of his actions and thus carries liability for the foreseeable consequences of the aid he gives a bad guy.

That is the point at which the government serves Apple with a warrant — either a Title III warrant or a FISA warrant. In either case, the warrant is issued by a judge and puts Apple on notice that there is probable cause to believe the individual under investigation is engaged in criminal activity or activity of interest for national security reasons and is using Apple’s services and products to help further his aims. Apple, quite reasonably given its technical architecture, informs the FBI at this point that it cannot comply in any useful way with the warrant as to communications content. It can only provide the metadata associated with the communications. But it continues to provide service to the individual in question.

But all of this, once again, assumes an impossibility: that once out of its hands, Apple can somehow stop the end user from using the encryption on their phone.

This is the mother of all stretches in terms of legal theories. And, throughout it all, neither Bedell nor Wittes even seems to recognize that stronger encryption protects the end user. It’s like it doesn’t even enter their minds that there’s a reason why Apple is providing encryption that isn’t “to help people hide from the government.” It’s not about government snooping. It’s about anyone snooping. The other cases they cite are not like that at all. These arguments, even as thin as they are, only make sense if Apple’s move to encryption doesn’t really have widespread value for basically the entire population. You don’t sue Toyota for “material support for terrorism” just because a terrorist uses a Toyota to make a car bomb. Yet, Wittes and Bedell are somehow trying to make the argument that Apple is liable for better protecting you, just because in some instances it might also help “bad” people. That’s a ridiculous legal theory that barely deserves to be laughed at, let alone a multi-part analysis of how it “might work.”

Filed Under: ben wittes, encryption, isis, liability, material support, mobile encryption, pollution, sheldon whitehouse, terrorism, zoe bedell
Companies: apple

Two Of The Most Ridiculous Statements From Senators At Yesterday's Encryption Hearings

from the these-people-are-in-charge? dept

We already wrote a bit about the two Senate hearings that FBI Director James Comey participated in yesterday, concerning his alleged desire to have a “discussion” about the appropriateness of backdooring encryption. The phrase tossed around at the hearings was the FBI’s fear of “going dark” in trying to track down all sorts of hypothetical bad guys (and it always was hypothetical, since no actual examples were given). However, not all of the crazy statements came from Comey. There was plenty of nuttiness from Senators as well. It is, of course, difficult to pick out the most ridiculous, so here are two that stood out to me personally. And, to avoid any charges of bias, I’ll include one from each hearing: one from a Democrat and one from a Republican.

Let’s start with the first hearing, the one before the Senate Judiciary Committee, where Senator Sheldon Whitehouse decides to add his bizarrely ignorant statements (starting around 1 hour, 18 minutes into the recording). Whitehouse starts out with a hypothetical (again!) story of a girl being kidnapped outside of her home (“taken into a van”), but having her phone left inside. He claims that in the past, law enforcement could get a warrant for the phone “to help locate the girl,” and that now “they cannot do that.” This hypothetical makes no sense for a variety of reasons. First, abductions like that are actually quite rare. But, more importantly, if the phone is at home, it’s not going to help law enforcement locate her anyway. He’s mixing up a variety of different things involving location data versus stored data encryption. It’s just a scare story that has little to do with the issue of stored data encryption, which is what the hearing was supposed to be about.

But, from there, he goes on to make an even more bizarre statement, claiming that companies pushing encryption are doing so solely for their own corporate benefit, creating harm for the public. In fact, he compares encryption to pollution, and then argues that there could be civil liability because encrypted phones make it difficult to find hypothetical kidnapped girls:

It strikes me that one of the balances that we have in these circumstances, where a company may wish to privatize value — by saying “gosh, we’re secure now, we got a really good product, you’re gonna love it” — that’s to their benefit. But for the family of the girl that disappeared in the van, that’s a pretty big cost. And, when we see corporations privatizing value and socializing costs, so that other people have to bear the cost, one of the ways that we get back to that and try to put some balance into it, is through the civil courts. Through the liability system. If you’re a polluter and you’re dumping poisonous waste into the water rather than treating it properly somebody downstream can bring an action and can get damages for the harm they sustained, can get an order telling you to knock it off.

This appears to be a thing that Senator Sheldon Whitehouse does. He makes up ridiculous hypotheticals of situations that aren’t happening and then jumps to flat out wrong arguments based on those hypotheticals.

Here, he’s just wrong that companies employing encryption are “privatizing value and socializing costs.” In fact, as many, many, many people will argue, companies that put end-to-end encryption in place can actually make it harder for themselves to make money, since they lose access to the information being transmitted and close off avenues such as targeted advertising. But, even more to the point, this entire argument rests on the simply wrong (and completely ignorant) claim that there’s a “cost” to the public of greater encryption. That’s not just wrong, it’s so wrong that it should call into question the career choices of whatever clueless staffer fed that line to Senator Whitehouse. The whole crux of the argument, as has been explained over and over again, is that greater encryption better protects the public from cyberattacks, from those seeking to violate their privacy, and from other potential malicious actors.

In other words, the actual scenario that Whitehouse should be concerned about is not the mythical girl being abducted into a van (again, a scenario that rarely happens), but the malicious online actors who are seeking to break into the girl’s bank account or other online accounts in order to cause all sorts of actual problems for her in real life. That’s the much more likely threat, and it’s the one that strong encryption helps protect against. The whole idea that strong encryption is the equivalent of pollution is hilariously wrong. Pollution is a negative externality. Strong encryption is not: it better protects the public. It’s a public benefit.
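To make that concrete, here’s a minimal sketch, in Python, of what encryption actually buys that hypothetical girl. It’s my own illustration using the open-source “cryptography” package, not anything Apple or any phone maker actually ships, but the principle is the same: without the key, whatever a thief or snoop grabs is just noise.

    # Illustration only: uses the open-source "cryptography" package
    # (pip install cryptography), not any real phone's implementation.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()            # stays with the phone's owner
    locker = Fernet(key)

    secret = b"bank password, photos, messages"
    stolen_blob = locker.encrypt(secret)   # all an attacker ever sees

    print(locker.decrypt(stolen_blob))     # the owner, holding the key, gets it back
    # Without the key, decrypt() raises InvalidToken. That protection is the
    # opposite of a "cost" being socialized onto the public.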

Senator Whitehouse’s argument is based on a near total misunderstanding of what encryption does and how it protects people, and is devoid of any understanding of actual threats that people face in the world — both the low likelihood of random abduction and the high likelihood of having your online accounts under attack. It’s so far from reality that it feels like Senator Whitehouse ought to issue an apology.

On to the second hearing, before the Intelligence Committee. In this case, the Senator we’ll pick on is Senator John McCain. His part starts a little after the 1 hour and 15 minute mark in that video. And he’s focused on the worst kind of political grandstanding, hyping up FUD around ISIS, followed by a “but we must do something!” argument that ignores the simple fact that the plan he supports actually makes the problem worse, not better. As you’ll see, Senator McCain doesn’t care about that. He just wants something done. This one involves some back and forth with Comey, beginning with the scare stories:

McCain: Is it true that, you have stated on several occasions, that ISIS poses over time a direct threat to the United States of America?

Comey: Yes.

McCain: And that is the case today?

Comey: Yes. Every day they’re trying to motivate people here to kill people on their behalf.

McCain: And every day that they take advantage of this use of the internet, which you have described by going to unbreakable methods of communicating, the more people are recruited and motivated to, here in the United States and other countries to attack the United States of America. Is that true.

Comey: Yes sir.

Okay, let’s just cut in here first of all to note that it’s not actually true. I mean, it’s possible that this is happening, but there still has yet to be a single credible story about ISIS successfully “recruiting” people in the US to perform an attack in the US. All of the ISIS “arrests” so far have been part of the FBI’s own plots, where it’s an FBI informant doing the “recruiting and motivating.”

McCain: So this is not a static situation. This is a growing problem, as ISIS makes very effective use of the internet. Is that correct?

Comey: That’s correct sir.

McCain: So with all due respect to your opening comments, this is more than a conversation that’s needed. It’s action that’s needed. And, isn’t it true that, over time, the ability of us to respond is diminished as the threat grows and we maintain the status quo?

Comey: I think that’s fair.

Actually, it’s not fair. It’s wrong. I mean, it depends on what kind of “action” we’re talking about — but since the entire hearing focused on backdooring encryption, it’s difficult to argue that the “ability to respond diminishes” over time because any plan to backdoor encryption wouldn’t be an actual response that matters. ISIS would quickly just switch to encrypted systems that aren’t backdoored by the US government, and there are plenty to choose from.
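And that’s not hypothetical: strong, non-backdoored encryption is already public, open-source code that anyone can bolt onto any channel in a handful of lines. Here’s a rough sketch of just how low the bar is (my own example, in Python, using the PyNaCl wrapper around libsodium; it’s not how any particular app is built):

    # Illustration only: PyNaCl (pip install pynacl), an open-source wrapper
    # around the libsodium crypto library.
    from nacl.public import PrivateKey, Box

    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts to Bob with her private key and his public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

    # Only Bob's private key opens it. No vendor, carrier, or government key
    # appears anywhere, and no mandate on Apple or Google changes that.
    print(Box(bob_key, alice_key.public_key).decrypt(ciphertext))

A backdoor mandate on American companies doesn’t make that code disappear; it just pushes the people the FBI is most worried about onto tools the FBI can’t see into at all.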

McCain: So, we’re now — and I’ve heard my colleagues, with all due respect talking about attacks on privacy and our Constitutional rights etcetera — but it seems to me that our first obligation is the protection of our citizenry against attack, which you agree is growing. Is that a fact?

Comey: I agree that is our first responsibility. But I also…

McCain: So the status quo is not acceptable if we support the assertion that our duty is to protect the lives and property of our fellow citizenry. That is our first priority. You agree with that?

Okay, first off, you should really watch this point to see the dismissive way he shrugs off the part about “privacy” and “our Constitutional rights etcetera.” It’s really quite disturbing, frankly. And that’s because the next line is just wrong. The Oath of Office given to Senators is that they will “support and defend the Constitution of the United States against all enemies, foreign and domestic.” It does not say anywhere that they are to “protect our citizenry against attack.” And it especially does not say that the role of a Senator is to protect the citizenry from attack over protecting the Constitution. It says the exact opposite. It says that his sole job is to protect the Constitution.

It is somewhat stunning that a Senator who has been in office as long as McCain is flat out ignoring the oath he’s taken many times, and is actually arguing for a policy that he admits violates that oath. He is flat out saying, in violation of his oath, that his job is to undermine the Constitution if he believes doing so will protect the American people from attack.

And, just to highlight how incredibly stupid this statement is, pushing for backdoors on encryption doesn’t even do what he thinks it does. It actually makes Americans more open to attack by making their digital information less safe and secure. So even if we took McCain’s argument at face value and ignored that it’s directly in contrast to his oath of office, he’s still wrong, because he’s putting more Americans at risk, rather than “protecting” them.

As for Comey agreeing that this is a first priority, he’s wrong about that too. Some might think that is the first priority for the FBI, even if it isn’t for Congress, but it’s not. The FBI’s oath is also to “support and defend the Constitution against all enemies, foreign and domestic.”

McCain then drags out a bunch of leading questions in which he continues to try to make it out like “something must be done to stop this nasty encryption” stuff, getting Comey to (mostly) agree, even to completely bogus statements.

Comey: I agree that this is something that we have to figure out what to do about.

McCain: So now we have a situation where the major corporations are not cooperating and saying that if we give the government access to their internet, that somehow, it will compromise their ability to do business. Is that correct also?

Comey: (Shakes his head back and forth in a way suggesting he disagrees, but then says): That’s a fair summary of what some have said.

McCain: So we’re discussing a situation in which the US government — i.e., law enforcement and the intelligence community — lack the capability to do that which they have the authority to do. Is that correct?

Comey: Certainly with respect to the interception of encrypted communications and accessing locked devices, yes.

McCain: So we’re now in an interesting situation where your obligation is to defend the country, and at the same time, you’re unable to do so, because these telecommunications… these organizations are saying that you can’t, and are devising methodology that prevents you from doing so, if it’s the single key, only used by the user. Is that correct?

Comey: I wouldn’t agree, Senator, that I’m unable to discharge my duty to protect the country. We’re doing it every single day using all kinds of tools…

McCain: Are you able to have access to those systems that only have one key?

Comey: No, we can’t break strong encryption.

McCain: So, you can’t break it. And that is a mechanism which is installed by the manufacturer prevent you [sic] from using… that there’s only one key that is available to them… to you.

Comey: That’s correct.

Now, to his very slight credit, after this misleading back and forth, Comey eventually plays a slight devil’s advocate here, and at least attempts to channel the views of all of those computer security experts who have pointed out that backdooring encryption makes people less safe.

McCain: So suppose that we had legislation which required two keys. One for the user and one that, given a court order, requiring a court order, that you would be able to — with substantial reason and motivation for doing so — would want to go into that particular site. What’s the problem with that?

Comey: Well, a lot of smart people, smarter than I, certainly, say that would have a disastrous impact on broader security across the internet, which is also part of my responsibility.

McCain: Do you believe that?

Comey: I’m skeptical that we can’t find a solution that overcomes that harm. But a lot of serious people say “ah, you don’t realize, you’ll rush into something and it’ll be a disaster for your country. Because it’ll kill your innovation, it’ll kill the internet.” That causes me to at least pause and say “well, okay, let’s talk about it.”

At which point McCain completely ignores that answer and goes back to his “but we need to do something!” mantra.

McCain: But, we’ve just established the fact that ISIS is rushing in to trying… attempting… to harm America and kill Americans. Aren’t we?

Comey: They are.

McCain: So I say with respect to my colleagues, and their advocacy for our constitutional obligations and rights, that we’re facing a determined enemy who is, as we speak — according to you and the director of Homeland Security — seeking to attack America, destroy America and kill Americans. So it seems to me that the object should be here, is to find a way not only to protect Americans’ rights, but to protect American lives. And I hope that you will devote some of your efforts — and I hope that this Committee… and I hope the Congress will — understand the nature of this threat. And to say that we can’t protect Americans’ Constitutional rights in the same time protect America, is something that I, simply, won’t accept.

Except, we can protect Americans’ Constitutional rights and, at the same time, protect America: by enabling strong encryption that better protects the security and privacy of everyone, without adding unnecessary vulnerabilities in the form of government backdoors. McCain completely ignored the rebuttal point that his position actually makes America less safe by opening things up to those who wish to attack us.
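It’s worth spelling out what McCain’s “two keys” idea actually means in practice. Here’s a rough, hypothetical sketch (again my own, in Python with PyNaCl; it doesn’t describe any real proposal): every message key gets wrapped once for the user and once for an escrowed key, and that second key immediately becomes the thing every attacker on earth wants to steal or compel.

    # Hypothetical key-escrow sketch, illustration only (PyNaCl / libsodium).
    from nacl.public import PrivateKey, SealedBox
    from nacl.secret import SecretBox
    from nacl.utils import random

    user_key = PrivateKey.generate()
    escrow_key = PrivateKey.generate()    # the "second key" a court order would unlock

    message_key = random(SecretBox.KEY_SIZE)
    ciphertext = SecretBox(message_key).encrypt(b"private message")

    wrapped_for_user = SealedBox(user_key.public_key).encrypt(message_key)
    wrapped_for_escrow = SealedBox(escrow_key.public_key).encrypt(message_key)

    # Whoever steals, leaks, or compels escrow_key can now read everything:
    recovered_key = SealedBox(escrow_key).decrypt(wrapped_for_escrow)
    print(SecretBox(recovered_key).decrypt(ciphertext))

That single extra key, replicated across every device and service, is exactly the “disastrous impact on broader security” that the experts Comey gestures at keep warning about.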

Don’t we deserve Senators who don’t spout pure ignorance, focused on scaring the American public in ways that both make us less safe and take away the Constitutional rights they’ve sworn to defend?

There were plenty of other ridiculous claims made by Senators in both hearings, but these were the two nutty ones that stuck out for me. We deserve better elected officials.

Filed Under: congress, constitution, encryption, going dark, james comey, john mccain, senate, sheldon whitehouse

Senator Whitehouse Is Very Angry About A Made Up Google Search And A Made Up Pirate Bay

from the something-must-be-done dept

Senator Sheldon Whitehouse was a strong supporter of the SOPA/PIPA approach to breaking the internet to appease Hollywood. Even as lots of others bailed out on their support of the bills, Whitehouse refused to change his position. It appears he’d like to push such a solution again. On Wednesday, the Senate held hearings for the nominees for both the head of the US Patent and Trademark Office, Michelle Lee, and the new “Intellectual Property Enforcement Coordinator” (IPEC), Dan Marti. Marti is a bit of a wildcard, with most of his intellectual property practice focused on trademark rather than copyright, so it was worth paying attention to what he had to say in response to the questions. However, the most bizarre and ridiculous questioning came from Senator Whitehouse, who proved to be rather confused about how both the internet and copyright law work. You can see the full video here. Whitehouse begins talking around the one hour, 35 minute mark. He kicks it off by displaying his ignorance. First, he refers to Marti’s predecessor, Victoria Espinel, and how he had asked her to do more to stomp out piracy, and then launches into a statement almost entirely devoid of facts:

I can remember Ms. Espinel coming here, some time ago to talk about the progress she intended to make in dealing with the criminal activity that steals American intellectual property, particularly entertainment content, and provides it to viewers, and that they were going to work really hard, with other American corporations that were supporting that activity to try to knock it down.

So while we were having this hearing, I picked up my iPad, and I went to Google, and I Googled “pirate movie.” And Google gave me “The Pirate Bay” [holds up his iPad] which is an illegal enterprise, operating out of Sweden. And if you go to the page where you would get access to the pirate content, it says “get access now” and underneath it you have the flags of Visa, of Mastercard, of American Express, of Cirrus and of Paypal. And below that it tells you all the devices it works on and shows you the logos of Apple, Android, and so forth.

It looks to me like this criminal activity is still being wrapped around with the apparent support of a wide variety of American corporations. [Incredulous expression]. Explain to me how there’s been progress made.

Almost everything Senator Whitehouse said in this statement is either wrong or totally clueless. It does not speak well of him as a Senator to be so misinformed about some rather basic things. First, the basics: a search engine is not and should not be illegal. Yet Senator Whitehouse doesn’t seem to understand the difference between a system like The Pirate Bay and a site that actually hosts or uploads infringing content. Second, at the time of the hearing The Pirate Bay was down, so his claim of showing the site on his iPad was clearly false. It’s been in the news a lot that the site is gone. You’d think some staffer would have told Senator Whitehouse not to use that example.

Third, a Google search on “pirate movie” does not link to The Pirate Bay at all. Here’s the search done on Google:

Note that it actually highlights a 1982 movie, and even points people to Amazon where they can purchase it. Nowhere on the page does it point anyone to The Pirate Bay or any other site from which you can download infringing content. Not even close.

Senator Whitehouse appears to be flat out lying about what happens when you do such a search on Google, and then compounding it by lying about going to The Pirate Bay. On top of that, his description of what he claims he saw on The Pirate Bay appears to be totally false as well. And while some of my critics may find this difficult to believe, I’ve never used the site (other than occasionally reading some of its blog posts), so I reached out, via Twitter, to multiple people who had used the site regularly to see if his description was accurate. None could remember ever seeing credit card logos or Apple/Android logos. And why would they, really? The content found on The Pirate Bay was usually just plain files, available for free, so there would be no need to post credit card logos or even device compatibility information, since compatibility would depend more on the kinds of apps you used to view, listen to, or read the files obtained. Yes, people point out, there were tons of ads on the site, but they tended to be for crappy porn sites and the like.

In other words, almost every detail of what Senator Whitehouse describes is a lie. He may be describing some other site, but he didn’t find it with the search he described, and it wasn’t The Pirate Bay. And even if there were logos from American companies, anyone can set up a website with such logos, and that says nothing about whether those companies are complicit.

And then he demands that something must be done?

Marti barely gets half a robotic sentence out in response, saying that “criminal actors, criminal enterprises have no limits” when Whitehouse cuts him off with some more nonsense:

They actually do! [Holds up iPad again] There are ways in which these companies could go to court and try to knock this stuff down. There are ways in which prosecutors can have discussions with companies about aiding and abetting offenses and about being accessories to offenses. There’s a lot that can be done in this area, it seems to me!

Marti points out that he was talking about something entirely different — that sites will, of course, put up logos to try to make themselves look legit (though he doesn’t go so far as to point out that just because a site puts up a company’s logo, that doesn’t mean the company is “aiding and abetting” a damn thing, as Senator Whitehouse suggested).

Even more to the point, Whitehouse’s claim that companies can “go to court and try to knock this stuff down” also makes no sense. Under what law? What legal issue is there in the (fake) circumstances that Whitehouse describes? At most, there might be a trademark violation, and does he really think it’s worth a company’s time to go after such fly-by-night sites for trademark violations? And the whole “aiding and abetting” claim is ridiculous. Is Senator Whitehouse honestly claiming that if a site offering up infringing works notes that those works play on Apple or Android devices, then Apple or Google are “accessories” to a crime? Isn’t a Senator supposed to understand the law?

Whitehouse then turns to Michelle Lee, who used to work at Google, but on patent policy, not copyright, and asks her if Google could stop this. Though, again, he’s flat out lying about what Google is supposedly doing, so it’s a bizarre question. And Lee just says she doesn’t know the answer (how could she, when the question makes no sense?). Whitehouse gives a sarcastic “Hmm!” in response, as if he’s discovered something — other than his own astounding ignorance. He further claims that because Lee was a deputy general counsel at Google (again, on patents, not copyright issues), Google must not really care about this issue. Really?

Finally, he appears to attack Marti for not having done anything, despite the fact that Marti isn’t even in the job yet, and then claims that all of this proves that the “voluntary” process that Espinel championed (like the ridiculous “six strikes” agreements between some ISPs and the legacy entertainment companies) is not enough. He seems to be clearly hinting that we need more government action, or more SOPA-type laws, based on an entirely false scenario that either he or his staffers (or some… lobbyists) made up and handed to him. Instead, all it shows is him getting angry in a manner that displays his near total ignorance of the topic at hand.

Is it really too much to ask that the people who make the laws impacting technology not be totally ignorant about both the laws and the technology? Frankly, Senator Whitehouse owes Marti, Lee and basically all internet users an apology.

Filed Under: copyright, danny marti, ip czar, ipec, michelle lee, search, senate, sheldon whitehouse, sopa, voluntary agreements
Companies: google, the pirate bay

Congress Continues To Pretend That SOPA Actually Is The Law

from the shameful dept

One of the more troubling aspects we’ve seen in the past few years is that, despite SOPA failing to pass in Congress thanks to widespread public outcry, various copyright interests have continued to look for ways to implement SOPA in practice, even if not in law. For example, we recently pointed to how the USTR praised Italy for implementing a plan even more draconian than SOPA, likely setting up a later attempt by the USTR to “harmonize” international laws by requiring the US to do the same in a future trade agreement or treaty. Similarly, the US government still continues to conduct questionable domain seizures that appear to be a clear First Amendment violation. Even more nefarious, however, may be the various attempts by politicians to push for questionable “voluntary agreements” that effectively implement SOPA anyway.

Recently, four members of Congress — Reps. Bob Goodlatte and Adam Schiff, and Senators Sheldon Whitehouse and Orrin Hatch — sent an exceptionally questionable letter to various internet ad networks, asking them to start blacklisting “piracy sites.” This was one of the requirements in SOPA. And, as we discussed years ago, there are serious problems with such plans. Back in 2011, ad giant GroupM tried to do the same sort of thing, asking Universal Music to provide it with a list of piracy sites, and that list included tons of legitimate sites — including SoundCloud, Vimeo, the Internet Archive, BitTorrent’s corporate page… and a bunch of hip hop blogs. It also included (Universal music artist) 50 Cent’s personal website as a piracy site.

And these four members of Congress seem to have no problem with such censorship.

But this letter is even worse than that. Various ad networks have already set up “best practices” for not putting ads on “bad” sites — but this letter says that’s not enough:

We support these steps, but note that much remains to be done to operationalize the commitments made and to make them effective in preventing the appearance of legitimate ads on pirate sites, rather than simply responding once they are placed. Best practices are useful, but greater specificity is needed around preventative measures that participants in the digital advertising ecosystem can and should take to avoid the placement of ads on piracy sites, as well as the development of metrics to measure the effectiveness of these steps. Only through proactive efforts will the harms associated with ad-supported piracy be mitigated.

As the EFF notes, such intimidation by members of Congress raises a whole host of legal problems:

Letting commercial companies with their own competitive motivations decide which sites are “rogue” or “pirate” sites is a recipe for abuse. It means that site owners who comply with copyright law could still have their sources of revenue cut off when a company who might be a competitor asks for it. The legislators’ letter doesn’t define “online piracy sites,” but most of the definitions we’ve seen lately focus on the number of takedown requests a site has received from copyright holders, or the number of requests sent to search engines about the site. Since just a few companies send out a large portion of the takedown requests, those companies would effectively have the power to control who gets deemed a “piracy site.”

As a federal law, this scheme would have created serious First Amendment and due process problems. As a private agreement among competing ad networks, it could raise other legal problems. Under the Sherman Antitrust Act, companies that compete with each other aren’t allowed to make a pact amongst themselves about who they will refuse to do business with, especially if the purpose of the pact is to squelch competition or punish a rival. It’s called a “group boycott” or “concerted refusal to deal,” and it can lead to big-money lawsuits and years of trouble. In some cases, groups of competitors sharing a list of companies that they deem to be bad actors, with a wink-wink understanding that no one in the group should do business with those companies, was deemed a violation of the Sherman Act.

Claiming that an industry-wide refusal to deal is justified by “fighting piracy” doesn’t necessarily avoid an antitrust jam. In 2003, the Motion Picture Association of America decided that its members, major movie studios who compete with one another, would no longer send pre-release “screener” copies of films to members of awards committees like the Motion Picture Academy. According to the MPAA, the group boycott of awards committees was needed to stop infringement of pre-release movies. But the group ban put smaller studios at a huge disadvantage in getting award nominations and votes. In just two months, a court decided that the MPAA’s screener ban was likely illegal, and that loss may have precipitated MPAA head Jack Valenti’s retirement a few months later.

Once again, we have lawmakers — with an unfortunately long history of being the movie and recording industry’s lapdogs in Congress — making suggestions that would make those industries happy, but which almost certainly violate the law. And, even worse, they clearly go against the will of the American public, who vocally rejected such measures when they were put into SOPA and PIPA.

Could it be that Reps. Goodlatte and Schiff, and Senators Whitehouse and Hatch, have already forgotten what happened when they pushed for such a law? I can assure them that the American public hasn’t forgotten.

Filed Under: ad networks, ad providers, adam schiff, bob goodlatte, copyright, orrin hatch, sheldon whitehouse, sopa