house energy & commerce committee – Techdirt

House Looks To Make KOSA And COPPA Worse

from the not-this-again dept

If you had “Congress takes bad ‘protect the children’ bills and makes them worse” on your bingo card, congratulations, you’ve won this week’s easiest prediction.

This seems like a not great situation. As mentioned, the House Energy & Commerce Committee is holding a markup today for a bunch of bills. Yesterday, they revealed the various amendments being proposed, including three replacement bills (called Amendments in the Nature of a Substitute or “AINS”) for KOSA, COPPA 2.0, and APRA.

COPPA 2.0 is another problematic “protect the children” bill that takes the already problematic COPPA law and makes it way worse. In the Senate, the vote on KOSA (which was approved 91 to 3) was actually on a merged version of KOSA and COPPA 2.0, referred to as KOSPA. So, what happens to both bills is important. The inclusion of APRA is a bit strange, as it was an attempt at a comprehensive federal privacy bill that I thought was probably about as good as we were likely to get, though it involved compromises. But it died a quick death as folks on both sides of the issue hated the compromises. I have no idea why anyone would bring it back now, as it just didn’t have the votes.

The new versions of KOSA and COPPA 2.0, however, still do not fix the underlying problems of these bills, and in some ways appear to make them worse. The new version of KOSA adds new language that Republicans will absolutely abuse to force companies to remove LGBTQ+ content. At least the version that passed the Senate made some weak attempts to insulate the bill against claims that this was possible. Not so much here.

There’s a new part of the “duty of care” section (always the most problematic part of the bill) that says that “high impact online companies” need to “prevent and mitigate” the following:

Promotion of inherently dangerous acts that are likely to cause serious bodily harm, serious emotional disturbance, or death.

On its face, this might not seem all that troubling, until you actually think about it. First off, it seems like an attempt to hold websites liable for various challenges that the media loves to report on, though there is scant evidence that they lead to widespread copycat behavior. As we’ve discussed, these kinds of stupid “challenges” predate even the internet. As the CDC has noted, there are reports of such challenges appearing in newspapers dating back to the 1970s.

Did we threaten to put liability on newspapers for that? Of course not. Yes, it’s bad if kids are putting themselves in danger, but when these challenges made the rounds in the 70s and 80s we taught kids not to do stupid stuff. There’s no reason we can’t continue to educate kids, rather than hand off an impossible task to internet companies to magically find and stop all such content in the first place.

As much as people may not like it, most of those challenges are protected First Amendment speech. Holding companies liable for not magically making them disappear is going to be a problem.

Perhaps a bigger deal, though, is the inclusion of “serious emotional disturbance” in that description. The bill includes a definition for this, which might seem better than the vague “mental health disorder” that was in the original KOSA, but here it creates even more problems:

SERIOUS EMOTIONAL DISTURBANCE.—The term ‘‘serious emotional disturbance’’ means, with respect to a minor, the presence of a diagnosable mental, behavioral, or emotional disorder in the past year, which resulted in functional impairment that substantially interferes with or limits the minor’s role or functioning in family, school, or community activities.

So, um, can literally anyone in Congress explain how a social media platform will know whether or not (1) someone using their website has been diagnosed with a “serious emotional disturbance,” or (2) that content on their website will lead to a diagnosis of a “serious emotional disturbance”?

A pretty straight reading of this suggests that social media companies need to get direct access to everyone’s medical records to determine if they’ve been diagnosed with an “emotional disturbance.” And, well, that doesn’t seem good.

Also, given that much of the GOP falsely believes that a child coming out as transgender means that they have a “serious emotional disturbance,” this part of the bill will almost certainly be used (as we warned) to try to force social media companies to remove LGBTQ+ content to avoid being accused of failing to “prevent and mitigate” such a “serious emotional disturbance.”

Right after that section, the new version says that the “duty of care” means that a “high impact online company” must then somehow prevent or mitigate “compulsive usage” when they know a user is a “minor.” The definition for compulsive usage is… well… not great.

COMPULSIVE USAGE.—The term ‘‘compulsive usage’’ means a persistent and repetitive use of a covered platform that substantially limits 1 or more major life activities (as described in section 3(2) of the Americans with Disabilities Act of 1990 (42 U.S.C. 12102(2))) of an individual, including eating, sleeping, learning, reading, concentrating, thinking, communicating, and working.

So, over the past few weeks, I’ve been reading a fascinating book. I ended up staying up noticeably later than I planned to on multiple evenings as I was glued to the book. It also definitely limited my ability to concentrate and think about other stuff as it was so gripping. Is that compulsive usage? It might not be an online service, but if I’d been reading it on the Kindle, would it be?

How does one distinguish compulsive usage that one must “prevent” under this law from… some content people really like a lot?

The problem, of course, is that the bill doesn’t say. This means that companies will take the lazy way out and just block all sorts of content to be safe.

There are more problems beyond these, but suffice it to say that this “amended” version is deeply, deeply problematic.

The same is true of the updated COPPA 2.0, especially for children who are estranged from their parents, or who may live in a household where a parent does not accept them for who they are. Specifically, the bill gives parents of children and teenagers the ability to obtain basically any information a website has on their kids, and to correct or delete that information.

It’s not hard to see how this could go very, very badly. Imagine an LGBTQ+ child who has not come out to their intolerant parents, but who has found communities online that are helpful to them. Parents can demand that internet platforms hand over all the info they provided to the platform:

require the operator to provide, upon the request of a parent of a teen or a teen under this subparagraph who has provided personal information to the operator, upon proper identification of that parent or that teen—

(i) a description of the specific types of personal information collected from the teen by the operator, the method by which the operator obtained the personal information, and the purposes for which the operator collects, uses, discloses, and retains the personal information;

(ii) the opportunity at any time to delete personal information collected from the teen or content or information submitted by the teen to a website, online service, online application, or mobile application and to refuse to permit the operator’s further use or maintenance in retrievable form, or online collection, of personal information from the teen;

(iii) the opportunity to challenge the accuracy of the personal information and, if the parent or the teen establishes the inaccuracy of the personal information, to have the inaccurate personal information corrected; and

(iv) a means that is reasonable under the circumstances for the parent or the teen to obtain any personal information collected from the teen, if such information is available to the operator at the time the parent or the teen makes the request;

Kids have privacy rights too, but not under this bill. The assumption here is that kids have no rights, that kids are the property of their parents, and that parents can get access to basically any content their kids access online, and even change the data on their kids. While there may be cases where that would be appropriate, there are so many where it is not, and the bill makes no effort to distinguish between them.

All in all, these new versions of the bill don’t fix the problems of the old ones, and in many ways make them worse.

Filed Under: apra, congress, coppa 2.0, house, house energy & commerce committee, kosa, kospa, protect the kids

KOSA Rises From The Ashes: House Committee Announces Markup

from the it-never-ends dept

Just when you thought it was safe to go back on the internet, KOSA rears its ugly head once again.

The rumors of KOSA’s demise in Congress may have been overstated. Following a big push by supporters of the bill, including Senator Marsha “we need this to protect kids from the transgender in our culture” Blackburn, House Energy and Commerce Committee Chair, Rep. Cathy McMorris Rodgers, has announced that she’ll hold a markup of it and 15 other bills on Wednesday of this week.

This does not mean that KOSA is really going to get a vote. Lots of things could happen. But it does mean that KOSA (and COPPA 2.0, which the Senate combined into KOSPA — the Kids Online Safety & Privacy Act) are getting a bit of new life.

It’s possible the markup will be delayed or won’t actually happen. Markups get announced and delayed and sometimes shelved entirely. And what happens at the markup may matter. Markups are when other committee members can offer up amendments, and it gives everyone a sense of what people feel about a bill. It’s possible that amendments could change KOSA quite a bit, though the fundamental problems of the bill are unfixable.

I’ve also heard that House GOP leadership is still not a fan of the bill. So, even if it goes through a markup and passes out of committee, that doesn’t mean that House Speaker Mike Johnson would agree to bring it to the floor.

Since the House bill is still significantly different from the Senate version that passed, even if the bill went to the floor and passed, there would still need to be a reconciliation between both versions and another vote.

In short, there are still plenty of reasons why KOSA might not become law. But, the fact that the markup has been announced suggests that it could move forward and is not totally dead.

If you have a Representative who is on the Energy & Commerce Committee, you might want to call your Representative and point out the many, many problems with the bill. If your Rep is a Republican, I’d recommend Rand Paul’s thoughtful exploration of the problems with the bill. If your Rep is a Democrat, then just highlight how hard the Heritage Foundation is pushing for the bill, and how it sees it as part of its Project 2025 goals to have more power to stop speech it dislikes on the internet.

Filed Under: cathy mcmorris rodgers, coppa 2.0, house energy & commerce committee, kosa, kospa

GOP ‘Investigates’ NTIA For Wanting To Make Broadband Affordable To Poor People

from the this-is-why-we-can't-have-nice-things dept

Fri, Jul 19th 2024 05:31am - Karl Bode

We’ve noted a few times that states will soon receive more than $42.5 billion in taxpayer funded broadband subsidies courtesy of the 2021 infrastructure bill. A lot of that money will go to big giant monopolies with terrible track records on subsidy fraud. But a lot of it is also poised to go toward super popular community-owned broadband networks or cooperatives that might not exist without that funding. It should be a mixed bag, but largely beneficial for driving competition to market.

The BEAD (Broadband Equity, Access, and Deployment) grant program is overseen by the NTIA, and was made possible by the Infrastructure Investment and Jobs Act (IIJA). The law has a small caveat: ISPs that take taxpayer money have to make at least a passing effort to ensure there’s a base-level introductory tier that’s semi-affordable for poor people. That’s it.

This was of course enough to drive telecom giants like AT&T and Comcast into a major hissy fit back in April. Most major telecoms have a long, elaborate history of taking taxpayer money then not delivering on the actual fiber networks, and they don’t want anything interfering with that proud tradition.

So telecoms whined incessantly that these minor affordability requirements (which I doubt will even be enforced by an increasingly feckless U.S. telecom regulatory apparatus) were illegal efforts at “rate regulation.” AT&T, in particular, has been warning states that if they are required to provide cheaper broadband to poor people, they’ll take their ball and go home.

Given that the GOP works in perfect policy symmetry with telecom monopolies, they’ve now joined the outrage parade. The GOP House Energy and Commerce Committee has announced an “investigation” into the NTIA for “trying to impose illegal rate regulation”:

“The letter comes amid concerns that NTIA is unlawfully pressuring states to rate regulate low-cost broadband plans required by the BEAD Program.”

This is, to be clear, corruption-fueled telecom industry puppetry from a party that opposed the infrastructure bill (yet routinely tries to take credit for its benefits among constituents). A party that has worked tirelessly (with some help from Democrats) to ensure telecom giants see neither meaningful competition nor oversight, resulting in expensive, spotty, and slow broadband.

The law in question, IIJA, delegates fund management to the states, which should start receiving money this fall. It also requires that providers that take taxpayer money provide at least one “low-cost broadband service option for eligible subscribers.” But the law also says the NTIA may not “regulate the rates charged for broadband service.” At a hearing last May, NTIA Administrator Alan Davidson put it this way:

“The statute requires that there be a low-cost service option. We do not believe the states are regulating rates here. We believe that this is a condition to get a federal grant. Nobody’s requiring a service provider to follow these rates, people do not have to participate in the program.”

Again, I suspect this was never going to be enforced with any zeal; telecom giants already broadly enjoy corruption-fueled regulatory capture, all but own countless state legislatures, and, as extremely patriotic participants in our vast domestic surveillance initiatives, are broadly considered beyond the reach of under-funded regulators or a corrupt Congress.

Big ISPs have always been terrified of even the faintest idea that anybody in government would so much as think about “rate regulation” (or any attempt to stop them from exploiting their regional monopolies to rip off captive customers). Even if, thanks to regulatory capture and widespread U.S. corruption, that hasn’t been a serious threat to their regional fiefdoms any time in the last quarter century.

U.S. regulators can barely even acknowledge that monopolies harm competition and consumers, much less pursue a solution with any zeal. The best we tend to get are these sorts of late-in-the-game attempts to ask nicely if big ISPs will try to be ethical. And it’s all poised to get even more feckless now that the Supreme Court has dismantled independent regulatory authority.

Large, politically influential ISPs like AT&T want to take taxpayer money with absolutely no strings attached. They also have tried very hard to make sure most of the taxpayer money goes to them, and not actual communities or smaller competitors. And the GOP vision on telecom is, if it’s not yet clear, to let these widely disliked monopolies do pretty much whatever they want.

That gets dressed up as serious adult policymaking in press and policy circles, but it’s really just corruption and regulatory capture wearing an ugly hat.

Filed Under: affordable broadband, alan davidson, BEAD, broadband, congress, gop, house energy & commerce committee, ntia, rate regulation

Congress Wants A Magic Pony: Get Rid Of Section 230, Perfect Moderation, And Only Nice People Allowed Online

from the the-land-of-magical-thinking dept

The internet is the wild west! Kids are dying! AI is scary and bad! Algorithms! Addiction! If only there was more liability and we could sue more often, internet companies would easily fix everything. Once, an AI read my mind, and it’s scary. No one would ever bring a vexatious lawsuit ever. Wild west! The “like” button is addictive and we should be able to sue over it.

Okay, you’re basically now caught up with the key points raised in yesterday’s House Energy & Commerce hearing on sunsetting Section 230. If you want to watch the nearly three hours of testimony, you can do so here, though I wouldn’t recommend it.

It went like most hearings about the internet, where members of Congress spend all their time publicly displaying their ignorance and confusion about how basically everything works.

But the basic summary is that people are mad about “bad stuff” on the internet, and lots of people seem to falsely think that if there were more lawsuits, internet companies would magically make bad stuff disappear. That, of course, elides all sorts of important details, nuances, tradeoffs, and more.

First of all, bad stuff did not begin with the internet. Blaming internet companies for not magically making bad stuff disappear is an easy out for moralizing politicians.

The two witnesses pushing for sunsetting Section 230 talked about how some people were ending up in harmful scenarios over and over again. They talked about the fact that this meant that companies were negligent and clearly “not doing enough.” They falsely insisted that there were no other incentives for companies to invest in tools and people to improve safety on platforms, ignoring the simple reality that if your platform is synonymous with bad stuff happening, it’s bad for business.

User growth slows, advertisers go away. If you’re an app, Apple or Google may ban you. The media trashes you. There are tons of incentives out there for companies to deal with dangerous things on their platforms, which neither the “pro-sunset” witnesses nor the congressional reps seemed willing to acknowledge.

But the simple reality is that no matter how many resources and tools are put towards protecting people, some people are going to do bad things or be put in unsafe positions. That’s humanity. That’s society. Thinking that if we magically threaten to sue companies that it will fix things is not just silly, it’s wrong.

The witnesses in favor of sunsetting 230 also tried to play this game. They insisted that frivolous lawsuits would never be filed because that would be against legal ethics rules (Ha!), while also insisting that they need to get discovery from companies to be able to prove that their cases aren’t frivolous. This, of course, ignores the fact that merely the threat of litigation can lead companies to fold. If the threat includes the extraordinarily expensive and time consuming (and soul-destroying) process of discovery, it can be absolutely ruinous for companies.

Thankfully, this time, there was one witness who was there who could speak up about that: Kate Tummarello from Engine (disclosure: we’ve worked with Kate and Engine in the past to create our Startup Trail startup policy simulation game and Moderator Mayhem, detailing the challenges of content moderation, both of which demonstrate why the arguments from those pushing for sunsetting 230 are disconnected from reality).

Kate’s written testimony is incredibly thorough. Her spoken testimony (not found in her written testimony, but can be seen in the video at around 34:45) was incredibly moving. She spoke from the heart about a very personal situation she faced in losing a pregnancy at 22 weeks and relying on online forums and groups to survive the “emotional trauma” of such a situation. And, especially at a time when there is a very strong effort to criminalize aspects of women’s health care, the very existence of such communities online can be a real risk and liability.

The other witnesses and the reps asking questions just kept prattling on about “harms” that had to be stopped online, without really acknowledging that about half of the panel would consider the very groups Kate relied on through one of the most difficult moments of her life to be a “harm” deserving of liability, allowing people to sue whoever hosts or runs such groups.

It’s clear that the general narrative of the “techlash” has taken all of the oxygen out of the room, disallowing thoughtful or nuanced conversations on the matter.

But what became clear at this hearing, yet again, is that Democrats think (falsely) that removing Section 230 will lead to some magic wonderland where internet companies remove “bad” information, like election denials, disinformation, and eating disorder content, but leave up “good” information, like information about abortions, voting info, and news. While Republicans think (falsely) that removing Section 230 will let their supporters post racial slurs without consequence, but encourage social media companies to remove “pro-terrorist” content and sex trafficking.

Oh, and also, AI is bad and scary and will kill us all. Also, big tech is evil.

The reality is a lot more complicated. AI tools are actually incredibly important in enabling good trust & safety practices that help limit access to truly damaging content and raise up more useful and important content. Removing Section 230 won’t make companies any better at stopping bad people from being bad or things like “cyberbullying”. This came up a lot in the discussion, even as at least one rep got the kid safety witness on the panel to finally admit that most cyberbullying doesn’t violate any law and is protected under the First Amendment.

Removing Section 230 would give people a kind of litigator’s veto. If you threaten a lawsuit over a feature, some content, or an algorithm recommendation you don’t like, smaller companies will feel pressured to remove it to avoid the risk of costly endless litigation.

It wouldn’t do much to harm “big tech,” though, since they have buildings full of lawyers and large trust & safety teams empowered by tools they spend hundreds of millions of dollars developing. They can handle the litigation. It’s everyone else who suffers. The smaller sites. The decentralized social media sites. The small forums. The communities that are so necessary to folks like Kate when she faced her own tragic situation.

But none of that seemed to matter much to Congress, who just want to enable ambulance-chasing lawyers to sue Google and Meta. They heard a story about a kid who had an eating disorder, and they’re sure Instagram caused it. It’s not realistic.

The real victims of this rush to sunset Section 230 will be all the people, like Kate, and also like tons of kids looking for their community, or using the internet to deal with various challenges online.

Congress wants a magic pony. And, in the process, they’re going to do a ton of harm. Magic ponies don’t exist. Congress should deal in the land of reality.

Filed Under: algorithms, congress, content moderation, house energy & commerce committee, Kate Tummarello, liability, section 230

When Viral Advocacy Fails: TikTok’s Call Flood To Congress Backfires

from the swipe-here-to-call-congress dept

Flooding Congress with phone calls can work wonders to stop bad bills at times. The SOPA blackout 12 years ago was one of the most effective advocacy campaigns in history. Coincidentally, I was at the Capitol that day, and wandering the halls between meetings, hearing phones ringing non-stop was amazing.

However, that process was carefully planned out over weeks, with sites pushing a very clear message of why internet users should call Congress and complain about the terrible copyright laws that were being pushed.

It appears that TikTok may have taken the wrong lesson from all that and assumed that simply flooding Congress with calls is an effective strategy. It can be, but you have to equip callers with a basic understanding of what it is that they’re calling for and why. And maybe it doesn’t make sense to do it on a bill built off the (mostly false) belief that your app is controlling the minds of gullible American voters.

On Thursday, TikTok put up a pop-up on all US users’ screens when they went to get their daily fill of random videos:


“Stop a TikTok shutdown!” it yells, claiming that “Congress is planning a total ban of TikTok. Speak up now — before your government strips 170 million Americans of their Constitutional right to free expression.”

The bill in question is stupid. It’s a fear-mongering (bipartisan) bunch of grandstanding nonsense. It doesn’t technically “ban” TikTok, but would directly require ByteDance to divest its ownership in the company. If ByteDance does not do so, then it is a ban (despite the bill’s sponsors insisting it’s not). It does seem like a pretty clear bill of attainder, targeting a single company, TikTok, out of yet another fear-mongering moral panic that a successful internet company coming out of China must be evil.

As we’ve been saying for years now, if the fear is about the privacy of American users of the platform, Congress could pass a comprehensive privacy bill. They just choose not to do so. Instead, they play up a silly culture war, which will only lead to even more retribution for American apps outside the US. Indeed, expect to see other countries passing similar bills demanding that US companies divest from successful apps in their countries, as a result of this stupid bill.

And, on top of that, the bill is almost certainly a First Amendment violation, as has been found during previous attempts to effectively ban TikTok, none of which have gone well in court.

TikTok’s gambit apparently worked in terms of getting people to call. But it didn’t always effectively get the message out:

TikTok users flooded some congressional offices with dozens of calls. Results were mixed: Some staffers dismissed the callers as uninformed, or as pranksters, or as “teenagers and old people saying they spend their whole day on the app.”

And, look, when you have a bunch of overly anxious politicians who think that TikTok is like Chinese mind control over American brains (it’s not, but that’s what they seem to think), it’s not difficult to see how telling TikTok users to call Congress could drive those politicians to think this is even more evidence of why the bill is needed, especially when there is a flood of calls from unsophisticated constituents talking about how they “spend their whole day on the app.”

And that seems to have been the case.

House Energy and Commerce Chair Cathy McMorris Rodgers (R-Wash.) said if anything, TikTok’s orchestrated calling campaign “only exposed the degree in which TikTok can manipulate and target a message.”

And thus it’s no surprise that the committee voted 50 to 0 to advance the bill:

Lawmakers on the Energy and Commerce Committee, which greenlit the bill Thursday afternoon after months of negotiations, said the intent was not to get rid of TikTok, but to prevent a Chinese company from having access to large troves of American data. The committee voted 50-0 to advance the bill to the full House of Representatives.

Again, it’s a painfully stupid and reactionary bill, but this campaign seemed pretty mistargeted. There was a way in which TikTok could have more effectively leveraged its large user base to talk about the problems and risks of such a bill. But just sending them in to scream at Congress was perhaps not the best approach given the specific animus behind this bill.

Filed Under: advocacy, congress, fear mongering, house energy & commerce committee, privacy, tiktok ban
Companies: tiktok

Ajit Pai Refuses To Brief Congress On What He Plans To Do About Wireless Location Data Scandals

from the thanks-but-no-thanks dept

Wed, Jan 16th 2019 06:22am - Karl Bode

So last week yet another location data scandal emerged for the wireless industry, highlighting once again how carriers are collecting your location data, then selling it to a universe of sometimes shady partners with little to no oversight or accountability. Like the Securus and LocationSmart scandals before it, last week’s Motherboard report highlighted how all manner of dubious dudebros (and law enforcement officers) have been abusing this data for years, and the Ajit Pai FCC has yet to so much as mention the problem, much less spend a single calorie addressing it in any meaningful way.

Shortly after the scandal broke last week, Frank Pallone, the Chair of the House Committee on Energy and Commerce, asked Pai (pdf) to brief Congress on the steps the agency was taking to address the wireless sector’s long-standing failure to adequately address location data abuse. Pai’s response? Yeah, no thanks.

In a statement issued by Pallone, he says Pai’s office claimed that since the location data scandal wasn’t putting lives at risk, Pai could not attend such a briefing during the government shutdown:

“Today, FCC Chairman Ajit Pai refused to brief Energy and Commerce Committee staff on the real-time tracking of cell phone location, as reported by Motherboard last week. In a phone conversation today, his staff asserted that these egregious actions are not a threat to the safety of human life or property that the FCC will address during the Trump shutdown.

While the FCC is working with a skeleton crew right now due to the shutdown, there’s nothing actually stopping Pai from wandering down the road to answer a few questions, something Pallone was quick to highlight in his statement:

“There’s nothing in the law that should stop the Chairman personally from meeting about this serious threat that could allow criminals to track the location of police officers on patrol, victims of domestic abuse, or foreign adversaries to track military personnel on American soil. The Committee will continue to press the FCC to prioritize public safety, national security, and protecting consumers.”

Granted Pai wasn’t doing much about this problem when the government was open, either.

Academics and other privacy experts have told me this could easily be addressed using the FCC and FTC authority we already have (read: we don’t even need a new privacy law); we’ve just chosen to kowtow to telecom lobbyists instead. In fact, the FCC’s privacy rules would have addressed the issue by giving consumers more control of how their location data is shared and sold, but sector lobbyists made quick work of those rules back in 2017. Even having Pai publicly state that this behavior is unacceptable might go a long way toward addressing the issue, though he’s yet to do even that.

Pai has made it fairly clear by now that he sees government consumer protection oversight as largely unnecessary, and all criticism of his unpopular policies as entirely political in nature, therefore making it OK to ignore (the myopia of that belief system most obviously exemplified by his attacks on net neutrality). As a result, you should expect the FCC to continue to do little to nothing about location data scandals. At least until there are enough scandals of this type to push public outrage past the breaking point, finally making it clear that doing absolutely nothing is no longer an option. So, 2025 or so?

Filed Under: ajit pai, congress, e&c, fcc, frank pallone, house energy & commerce committee, location, location data, privacy, shutdown

Congressional Committees Say Backdooring Encryption Is A Bad Idea

from the sorry,-Jim,-but-thanks-for-asking! dept

Two bipartisan Congressional committees are the latest to express their opposition to government-mandated encryption backdoors. The House Judiciary Committee and the House Energy and Commerce Committee have arrived at the same conclusion as the experts FBI director James Comey insists on ignoring: encryption backdoors are a net loss for everyone, no matter what gains might be experienced by law enforcement and intelligence agencies.

This is stated plainly in the first bullet point of their encryption report [PDF]:

Any measure that weakens encryption works against the national interest

While the committees acknowledge encryption can impede investigative efforts, the downsides of backdoors cannot be offset by making things easier for certain government agencies.

[S]takeholders from all perspectives acknowledged the importance of encryption to our personal, economic, and national security. Representatives of the national security community told the EWG that strong encryption is vital to the national defense and to securing vital assets, such as critical infrastructure. Civil society organizations highlighted the importance of encryption for individual privacy, freedom of speech, human rights, and protection against government intrusion at home and abroad. Private sector stakeholders—in particular, their information security officers—and members of the academic community approached the question from an engineering perspective—against a wide array of threats, foreign and domestic, encryption is one of the strongest cybersecurity tools available.

However, the committees still believe there might be a way to reconcile competing interests, even though it has more questions than answers at this point. The report suggests more “collaboration” between tech companies and law enforcement agencies — a term that generally means most of the compromises will be made by the private sector. Whether this means companies collecting more data and communications and storing them where law enforcement can access them or creating “one time” backdoors in response to court orders remains to be seen.

More encouragingly, the report suggests the “smart guys” in law enforcement haven’t fully taken advantage of the tools and data available to them.

It also remains unclear whether the law enforcement community is positioned to fully leverage the unencrypted information still held by many companies. A number of stakeholders acknowledged the potential benefit of improving law enforcement’s understanding of what data or information is available, who controls it, and how it could be useful to investigators. In particular, companies are often able to provide volumes of unencrypted metadata associated with their products or services. In some cases, this source of information could be useful to investigators. In others, one representative of a law enforcement agency told the EWG, access to a stream of metadata might be more like “looking for a particular grain of sand on the beach.”

This is probably the result of the law enforcement mindset. It often seems agencies are more interested in what is quickest and easiest, rather than what might be more productive, if just a bit more difficult. (The number of cases where warrants were never obtained, despite officers having both the time and probable cause to do so, is evidence of this mindset.) The report suggests this is one area where things could be improved by collaboration with private companies. It’s not a terrible suggestion, but it’s one that requires agencies to move on from their defeatist attitudes and to stop pretending advances in technology are always far more beneficial to criminals than to law enforcement.

The report also inadvertently points out just how disingenuous it is to shrug off mass surveillance concerns by saying, “It’s just metadata.”

Metadata may not completely replace the loss of encrypted content, but metadata analysis could play a role in filling in the gap. The technology community leverages this information every day to improve services and target advertisements. There appears to be an opportunity for law enforcement to better leverage this information in criminal investigations.

The report also touches on “legal hacking” as a potential solution — albeit one with very limited practical application. If this is the route the government chooses to go more frequently in response to encrypted devices, it will signal the end of the already mostly-worthless Vulnerabilities Equities Process. It would also — as the report acknowledges — only further the “us vs. them” conflict between tech companies and law enforcement, as the government’s interest in keeping vulnerabilities secret would tend to outweigh its obligation to divulge security holes to affected companies.

While the report breaks very little new ground in terms of issues raised, it does at least signal that legislative efforts to undermine encryption aren’t likely to find much bipartisan support. So, for the time being, device encryption is still safe. It’s the other issues raised — legal hacking, compelled disclosure, etc. — that will need to be watched closely in the future.

Filed Under: backdoors, congress, crypto wars, encryption, going dark, house energy & commerce committee, house judiciary committee, james comey