federal government – Techdirt
For Whatever Reason, NASA’s Inspector General Has Decided To Buy Itself Some Clearview Access
from the wait-why? dept
Get ready for some more unexpected uses of the world’s most controversial facial recognition tech. Clearview has amassed a 10-billion-image database — not through painstaking assembly but by sending its bots out into the open web to download images (and any other personal info it can find). It then sells access to this database to whoever wants it, even if Clearview or the end users are breaking local laws by using it.
Not for nothing do other facial recognition tech firms continue to distance themselves from Clearview. But none of this matters to Clearview — not its pariah status, not the lawsuits brought against it, not the millions of dollars in fines and fees it has racked up around the world.
Here’s why none of this matters to Clearview: government entities still want its product, even if that means being tainted by association. While we know spies and cops similarly don’t care what “civilians” think about them or their private contractors, we kind of expect some government agencies to show some restraint. But as we’ve seen in the past, “have you no shame?” tends to earn a shrug at best and a “no” at worst.
Clearview is relatively cheap. And no other tech firm can compete with the size of its web-scraped database. So we get really weird stuff, like the IRS, US Postal Service, FDA, and NASA buying access to Clearview’s tech.
The IRS has always been an early adopter of surveillance tech. The steady drip of Stingray info began with an IRS investigation in the early 2000s. The Postal Service claims its Clearview use pertains to investigations of burgled mail or property damage to postal buildings/equipment. NASA hasn’t bothered to explain why it needs Clearview. But it bought a short-term license in 2021. And, as Joseph Cox reports for 404 Media, it purchased another Clearview license two months ago, after a two-year break.
NASA bought access to Clearview AI, a powerful and controversial surveillance tool that uses billions of images scraped from social media to perform facial recognition, according to U.S. government procurement data reviewed by 404 Media.
[…]
“Clearview AI license,” the procurement record reads. The contract was for $16,000 and was signed in August, it adds.
While it would make sense for NASA to employ some sort of facial recognition tech (or other biometric scanner) to ensure secure areas remain secure, what’s needed there is one-to-one matching. Clearview offers one-to-many matching against 10 billion scraped images, which makes zero sense if NASA just needs to ensure solid matches to keep unauthorized personnel out of certain areas.
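For readers unfamiliar with the distinction, here’s a minimal Python sketch of the two search modes, assuming face images have already been reduced to embedding vectors. The function names, threshold, and data are illustrative assumptions, not anything NASA or Clearview actually runs. One-to-one verification answers “is this the badge holder?”; one-to-many identification ranks a probe against an entire gallery, where near-threshold scores pile up and false matches become far more likely as the gallery grows.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """One-to-one: compare a live capture against a single enrolled template."""
    return cosine_similarity(live, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.8, top_k: int = 5) -> list[tuple[str, float]]:
    """One-to-many: rank a probe against an entire gallery and return candidates.

    The larger the gallery, the more near-threshold scores it produces,
    which is where false matches come from.
    """
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    hits = [(name, score) for name, score in scores if score >= threshold]
    return sorted(hits, key=lambda hit: hit[1], reverse=True)[:top_k]
```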
The twist to this purchase is that it doesn’t belong directly to NASA, so to speak. It belongs to its oversight.
The part of NASA that will use the Clearview AI license is its oversight body, the Office of the Inspector General (OIG), which has special agents who sometimes carry concealed firearms, perform undercover operations, and develop cases for criminal or civil prosecution.
Now, that makes a little more sense. But if the investigation involves unauthorized access to facilities or equipment, it still seems like a one-to-one solution would do better at generating positives and negatives without increasing the chance of a false match.
If there’s something else going on at NASA that involves non-NASA personnel doing stuff at NASA (or committing crimes on NASA property), then Clearview would make more sense, but only in the sense that it isn’t limited to one-to-one searches. Any other product would do the same job without NASA having to put money in Clearview’s pockets. But at $16,000, it’s safe to assume the NASA OIG couldn’t find a cheaper option.
Even so, it’s still weird. While the OIG does engage in criminal investigations, those target government employees, not members of the general public. If there’s criminal activity involving outsiders, it’s handled by federal law enforcement agencies, not NASA’s oversight body.
Maybe the questions this purchase raises will be answered in a future OIG report. Or maybe that report will only raise more questions. But it seems pretty clear from even the limited information in this report that Clearview licenses are probably far less expensive than anything offered by its competitors. And, for that reason alone, we’re going to see an uptick in inexplicable purchases by governments all over the nation for as long as Clearview can manage to remain solvent.
Filed Under: facial recognition, facial recognition tech, federal government, nasa
Companies: clearview, clearview ai
Federal Watchdog Finds Lots Of Facial Recognition Use By Gov't Agencies, Very Little Internal Oversight
from the getting-a-real-'Wild-West'-vibe-from-this dept
Facial recognition tech remains controversial, given its tendency to produce false positives and send cops after the wrong people. Private companies offering even sketchier tech than what’s already in use (looking at you, Clearview) have made everything worse.
The upside is this state of affairs has prompted at least one federal government oversight entity to do some actual oversight. The Government Accountability Office (GAO) has released its report [PDF] on federal agencies’ use of facial recognition tech and it contains a couple of surprises and, unfortunately, several of the expected disappointments. (via the Washington Post)
For instance, while we expect law enforcement agencies like the FBI, DEA, ATF, and TSA to use facial recognition tech, the report notes that a total of 20 agencies own or use the tech. That list also includes some unexpected agencies, like the IRS, US Postal Service, the FDA, and NASA.
There’s also a surprising number of Clearview users among federal agencies, which seems unwise given the company’s history of being sued, investigated, exposed as dishonest, and just kind of terrible in every way. Of the 20 agencies that admitted using this tech, ten have used or have contracts with Clearview, outpacing other third-party offerings by a 2-to-1 margin.
What are these agencies using this tech for? Mainly criminal investigations.
According to the FBI, the system has been used for investigations of violent crimes, credit card and identity fraud, missing persons, and bank robberies, among others. The Department of Homeland Security’s Office of Biometric Identity Management offers a similar service to its partners (e.g., U.S. Immigration and Customs Enforcement). Specifically, the agency’s Automated Biometric Identification System can be used to search a photo of an unknown individual and provide potential matches (i.e., generate leads) to support criminal investigations. Federal agencies also reported using state, local, and non-government systems to support criminal investigations.
This includes people who may have committed criminal acts during last summer’s nationwide anti-police violence protests. One of the agencies on this list is the US Postal Inspection Service, which used Clearview to identify suspects who damaged USPS property or stole mail. The US Capitol Police also used Clearview to “generate leads” following the January 6th attack on the US Capitol.
That’s what’s known. There’s a lot that’s unknown, thanks to federal agencies apparently not caring who’s doing what with whatever facial recognition tech they have access to.
Thirteen federal agencies do not have awareness of what non-federal systems with facial recognition technology are used by employees. These agencies have therefore not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy. Most federal agencies that reported using non-federal systems did not own systems. Thus, employees were relying on systems owned by other entities, including non-federal entities, to support their operations.
Yay! Your federal tax dollars at work putting citizens at risk of being misidentified right into holding cells or deportation or whatever. The less you know, I guess. Some agencies had to “poll” employees to figure out how often this tech had been used, something that relies on honest self-reporting for accuracy. Literally any other system would provide better data, including the old standby “making some shit up.”
Then there’s mind-boggling stuff like this:
Officials from another agency initially told us that its employees did not use non-federal systems; however, after conducting a poll, the agency learned that its employees had used a non-federal system to conduct more than 1,000 facial recognition searches.
The line between “we don’t do this” and “we do this pretty much nonstop” is finer than I thought.
The CBP, which has used this tech for years, says it’s still “in the process of implementing a mechanism to track” use of non-federal facial recognition systems for employees. So far, the CBP has come up with nothing better than hanging up a couple of clipboards.
According to U.S. Immigration and Customs Enforcement officials, in November 2020 they were in the process of developing a list of approved facial recognition technologies that employees can use. In addition, log-in sheets will be made available to employees, allowing supervisors to monitor employee use of the technologies.
Behold the awesome power of the CBP, utilizing its billions in budget to send someone to Office Depot with a $20 bill and tell them to bring back change and a receipt.
In addition to being careless and cavalier about the use and deployment of unproven tech, the sullen shrugs of these thirteen government agencies are also possibly admissions of criminal activity.
When agencies use facial recognition technology without first assessing the privacy implications and applicability of privacy requirements, there is a risk that they will not adhere to privacy-related laws, regulations, and policies. There is also a risk that non-federal system owners will share sensitive information (e.g. photo of a suspect) about an ongoing investigation with the public or others.
The GAO closes its depressing report with 26 recommendations — thirteen of them being “start tracking this stuff, you dolts.” The other thirteen — bringing the total to two recommendations per failing federal agency — are to assess the risks of the tech, including possible violations of privacy laws and the negative side effects of these systems misidentifying people.
There’s no good news in this report. Agencies are using unproven, sometimes completely unvetted tech without internal or external oversight. They’ve rolled out these programs well ahead of required Privacy Impact Assessments and without internal tracking/reporting measures in place. The only pleasant surprise is that this hasn’t resulted in more false arrests and detainments. But that definitely can’t be attributed to the care and diligence of agencies using this tech, because the GAO really wasn’t able to find much evidence of that. It does, however, put the issue on the radar of Congress members who haven’t been paying much attention to this tech’s drift towards ubiquity.
Filed Under: 4th amendment, accountability, facial recognition, federal government, gao, oversight, surveillance
Court: It's Cool If The (Federal) Government Searches A Phone The (Local) Government Seized Illegally
from the Fifth-Circuit's-bizarre-vendetta-against-the-Constitution-continues dept
The Fifth Circuit Court of Appeals has decided it’s OK if a government agency searches a phone that should never have been seized in the first place… so long as it’s not the same government agency that illegally seized it. The illegality of the original seizure — which should have provoked some discussions of poisonous trees and their harmful fruit — is pretty much discarded in favor of the good faith exception.
The backstory is this: Charles Fulton Jr. was targeted by the Galveston (TX) Police Department — working in tandem with the FBI — for sex trafficking and prostitution of teens. He was ultimately found guilty on four sex trafficking charges, prompting this appeal of the district court’s refusal to toss out the evidence pulled from his seized phone.
Here’s how the seizure and very eventual search went down, taken from the court’s decision [PDF]. (Some emphasis added for reasons that will become apparent momentarily.)
In February 2015, Galveston police obtained a search warrant on the Avenue L house where the prostitution was based. The warrant, though, was due to a separate investigation into Fulton’s narcotics activities. Fulton’s cellphone was seized. Nine days later, police obtained a second warrant to examine its contents but were unable to bypass the phone’s security features. Around this same time, the FBI agent assisting with the Fulton sex-trafficking investigation learned that the Galveston police had the phone. The agent acquired it to determine if the FBI could access the phone’s data. Three weeks later, that agent obtained a federal warrant to search the phone. Still, it took a year before the data on the phone was accessed. The FBI discovered evidence that helped piece together Fulton’s involvement with the minor victims.
Recovered from the phone were text messages and photographs linking Fulton to the five minor victims he was trafficking. Fulton challenged the original seizure of the phone by the Galveston PD, hoping that a finding in his favor would eliminate the evidence pulled from the phone by the FBI.
The appeals court agrees with Fulton that the phone’s seizure was illegal. The warrant makes no mention of seizing phones or other electronics. And yet, that’s exactly what was seized. The government tried to claim Fulton’s phone was pretty much the equivalent of something else actually mentioned in the warrant.
This narcotics warrant did not mention cellphones. The alleged equivalent was a reference to “ledgers,” which is a “book . . . ordinarily employed for recording . . . transactions.” Ledger, OXFORD ENGLISH DICTIONARY (2d ed. 1989). The government argues that is enough, because this court has held that a cellphone that is “used as a mode of both spoken and written communication and containing text messages and call logs, served as the equivalent of records and documentation of sales or other drug activity.”
The government quoted precedent allowing the word “ledger” to stand in for “computers, disks, monitors” and other hardware that might contain the equivalent of a ledger. The court says all that would be fine if the government made any mention of ledger equivalents in its warrant. But it didn’t.
We do not see the same factors involved in the present case. There was nothing in the Galveston warrant suggesting that anything similar to computers or even electronics was to be seized. Moreover, the officer in this case was a veteran of the Galveston Police Department’s narcotics unit, and he indicated at the suppression hearing that he knew cellphones are used in the drug trade. Though a ledger can serve one of the myriad purposes of a cellphone, we do not extend the concept of “functional equivalency” to items so different, particularly one as specific, distinguishable, and anticipatable as a cellphone.
The government says the good faith exception should apply to the FBI’s search of the illegally-seized phone. This argument wasn’t even addressed by the lower court, which found other grounds to grant the government’s use of this evidence.
The appeals court does take a swing at this argument, but not to the benefit of Fulton and others similarly situated in the circuit. Good faith it is.
We conclude that viewed objectively, an FBI agent who obtained a search warrant in these circumstances would not have had reason to believe the seizure and continued possession of the cellphone by the Galveston police were unlawful.
The (federal) government gets this win even though the (local) government has just been handed a loss. Despite the fact the two agencies worked “in tandem” on this investigation, the court still decides the reasonably ignorant FBI agent had no way of knowing the phone handed to them by their investigation partner had been seized illegally.
And I suppose that’s possibly true. The FBI assists in many investigations instigated by local agencies once there’s a possibility that federal charges may be the end result. But a decision like this just encourages everyone in a joint investigation to be as blissfully ignorant as possible to obtain the best possible chance at securing a good faith ruling. In this case, an agent was working directly with the Texas agency and found out the Galveston PD had seized a phone, but didn’t take a look at the PD’s paper trail before crafting an affidavit of their own. That’s the exact opposite of “good faith.” That’s bad faith — the least amount of knowledge and effort combining to allow for law enforcement rule-bending and access to pre-made judicial excuses molded from years of slack-cutting precedent.
Filed Under: 4th amendment, 5th circuit, charles fulton jr., fbi, federal government, illegal searches, local government
Fake Comments Are Plaguing Government Agencies And Nobody Much Seems To Care
from the disinformation-nation dept
Fri, Oct 5th 2018 06:03am - Karl Bode
You might recall that when the FCC repealed net neutrality, the agency’s open comment period (the only opportunity the public had to voice their concerns) was plagued with all manner of bogus comments and identity fraud, from bots that lifted the identities of dead people to create fake enthusiasm to the hijacking of legitimate identities (like those of Senators Jeff Merkley and Pat Toomey, or my own) to forge bogus support. The FCC not only refused to do anything about it, it actively blocked law enforcement efforts to do so. The agency told me there was nothing it could do when my own identity was lifted in this fashion.
A year later and a few brave journalists are still trying to find the culprit. Who benefited should be obvious. Who they paid to do the dirty work, less so.
And while the fake net neutrality comments got the lion’s share of public and media attention, the reality is that this problem has been plaguing government proceedings for years. For example, new information obtained via FOIA request highlights how the NFL was involved in sending fake fan comments to the FCC as early as 2014, as the league tried to fight FCC efforts to eliminate the so-called “blackout rule,” which required broadcasters to black out certain game broadcasts if real-world attendance didn’t meet the league’s liking. The fight didn’t work because the rule was so monumentally stupid, but nobody really seemed to much care about tracking down those responsible:
“The letters began ‘I write as a football fan’ and requested that the rule remain because, without it, premium television channels could start charging higher fees to broadcast games. The WSJ identified and interviewed fans whose names were used in the letters and were angry to be used as spokespeople for a cause they didn’t believe in.”
Sounds familiar. The same problem was recently found to have plagued a proceeding at the Labor Department, where numerous people who either don’t exist or don’t recall ever sending messages breathlessly opposed agency efforts to prevent conflicts of interest in retirement advice. It also plagued the Consumer Financial Protection Bureau when it proposed a rule trying to rein in some of the nastier habits of the payday lending industry. Nobody appears to have shown much interest in getting to the bottom of the gamesmanship in those instances, either.
And last week, information obtained via FOIA request found that the Office of the Comptroller of the Currency, the primary bank regulator for nationally chartered banks, was inundated with bogus support for a 2015 merger between OneWest Bank and CIT Bank. It was a smattering of identity theft and fraud the regulatory agency was aware of, one that likely involved one of the companies in the merger, but it resulted in no meaningful inquiries or punishment whatsoever:
“The documents reviewed by The Intercept show that the Office of the Comptroller of the Currency, the main bank regulator for nationally chartered banks, knew about the fake comments at the time, before it approved the merger. But the OCC appears to have done no meaningful investigation of the matter, and even cited public support for the merger when approving it.”
The problem’s become a bit of an epidemic, but despite the fact that this kind of behavior pollutes the public discourse and undermines the democratic process, not much (read: mostly nothing) is being done about it. Given our (perhaps justified) obsession with Russian disinformation efforts, you’d think there’d be a little more concern that the only opportunity the public is often given to provide feedback on major policy decisions or mergers is so frequently corrupted by widespread efforts to generate industrialized, artificial enthusiasm.
While things like astroturf and bogus support for bad policy have been a mainstay for years, these fake comment campaigns keep growing in scale, as offenders now utilize hackers who lean heavily on compromised databases, as we saw in the net neutrality repeal. But much like we saw with the FCC, there’s little to no willpower at most government agencies to actually track down the culprits and hold those who obviously benefit from the fraudulent behavior accountable. As a result, the already marginalized will of the public has been further reduced to a faint echo, drowned out by a chorus of farmed artificiality.
Filed Under: comments, comptroller, department of labor, fake comments, fcc, federal government, occ, public input
Ron Wyden Wants Federal Government To Do More To Protect Personal Devices/Accounts Used By Senators And Staffers
from the small-fix-with-bigger-potential-repercussions dept
Ron Wyden is writing letters again. This time he wants to know why the federal government isn’t protecting the personal devices and email accounts used by federal officials. Attacks by state-sponsored hackers are never going to go away, and Wyden feels this lack of protection will make personal devices easy targets. From Wyden’s letter [PDF] to Senate majority leaders:
Press reports from January of this year indicate that Fancy Bear–the notorious Russian hacking group–targeted senior congressional staff in 2015 and 2016. My office has since discovered that Fancy Bear targeted personal email accounts, not official government accounts. And the Fancy Bear attacks may be the tip of a much larger iceberg. My office has also discovered that at least one major technology company has informed a number of Senators and Senate staff members that their personal email accounts were targeted by foreign government hackers.
Given the significance of this threat, I was alarmed to learn that SAA cybersecurity personnel apparently refused to help Senators and Senate staff after these attacks. The SAA informed each Senator and staff member who asked for help that it may not offer cybersecurity assistance for personal accounts. The SAA confirmed to my office that it believes it may only use appropriated funds to protect official government devices and accounts.
This seems a little odd, but there’s a good reason the SAA doesn’t extend coverage to personal devices. As Pwn All The Things pointed out on Twitter, personal devices can be used for personal things, and we don’t want our elected officials using tax dollars for personal reasons.
This is a good example of a rule constructed for laudable reasons — the strong firewall to stop legislators using govt money for campaigning and personal things is there for a reason — ending up with bad consequences on edge-cases like defending high-value accounts from hackers
To protect against hacking attempts, Wyden is introducing legislation that would eliminate the SAA silos. The bill would allow the SAA to “provide cybersecurity assistance” for personal devices on an opt-in basis. We’ll have to see how this plays out when implemented. It may make it more difficult to discern if any federal funds were misused by Senators or their staff.
On the other hand, it will help secure devices some government employees mistakenly believe aren’t prime targets for state-sponsored hacking. It takes a certain amount of obtuseness to reach this conclusion, considering how heavily some government officials rely on their personal devices for communications with other government officials. The old FOIA dodge is still a popular one, and the difficulty of separating official work from personal work — especially during election years — likely means personal devices are used far more frequently than government-issued ones.
While it’s good the government as a whole is continually working towards more robust security, the fact is the private sector offers plenty of options for government officials to better secure their personal devices. Personal responsibility is still underutilized at the federal level, which makes them no better (or worse) than much of the general public.
Filed Under: congress, cybersecurity, federal government, nation state attackers, ron wyden, senate
US Government Now Has An Official Open Source Software Policy
from the about-time dept
Earlier this year, we noted that the federal government was looking to further embrace open source software in its process of contracting out for (or creating in house) code. It released a draft policy, which was good, though we hoped the final product would be much stronger (for example, it pushed for a portion of any code to be released under an open source license, but didn’t consider that to be the default). I was also concerned about it allowing software developed by federal government employees to be locked up by a license — something that I’m pretty sure is not allowed, since works created by federal government employees are automatically in the public domain.
On Monday, the White House’s Chief Information Officer, Tony Scott, revealed the finalized official “Federal Source Code” policy, and you can read the whole thing. Because the original was posted to GitHub, you can also easily see what’s changed. On top of that, as part of this, the government also launched a new site at code.gov, which will act as a repository for open source code from the government.
Much of the focus of the policy, understandably, is on enabling reuse of code within the government, so that different agencies and departments aren’t reinventing the wheel (and paying hundreds of millions of dollars) for projects that others are already working on. Lots of people and agencies weighed in on the draft proposal, including some interesting/surprising ones. Homeland Security, of all organizations, worried that simply pushing government agencies to release 20% of their software as open source, without understanding how that might be most useful to the wider community, would be a waste. It preferred pushing government agencies to refactor code into reusable modules, with a focus on what would be the most reusable. Others, like the Consumer Financial Protection Bureau, favored (as I suggested) a default open source policy, rather than the 20% solution.
Unfortunately, the plan sticks with this “pilot program” of only having to open source 20% of code, and how well that works will be evaluated over time. It appears to have “fixed” the problem of lumping in-house developed code into the policy (since that code is public domain) by now focusing the policy solely on custom code developed by third parties (at least that’s my read on the new policy). While it’s still disappointing that the policy didn’t move to a “default to open source absent a compelling interest” standard, at least it didn’t go in the other direction either. And that’s in the face of complaints from the likes of the Software Alliance (a major Microsoft lobbying group), which whined about the need for such a policy in the first place.
In the end, this looks like a good step forward. It could have gone much farther, but it’s still a step in the right direction. Hopefully the pilot program will lead to even bigger steps towards embracing more open source (and public domain!) software.
Filed Under: federal government, open source policy, software, software procurement, white house
White House Further Embraces Open Source For Government… But Tell It To Do Even More
from the good-to-see dept
With so much annoying stuff coming out of the White House lately, it’s good to see the tech folks there continue to do some good work, including pushing for a policy that should lead to further embracing open source technologies inside the federal government — in part by pushing the government itself to open source the code it writes for its own work (and even when not releasing the code to the public, at least sharing it inside the government for other agencies to use).
This policy requires that, among other things: (1) new custom code whose development is paid for by the Federal Government be made available for reuse across Federal agencies; and (2) a portion of that new custom code be released to the public as Open Source Software (OSS).
This new policy has been put out for comment, and there’s a chance for you to tell the White House to go even farther with this policy. The current request asks if the policy could be improved in the following manner:
Would an “open source by default” approach that required all new Federal custom code to be released as OSS, subject to exceptions for things like national security, be more or less effective in achieving the goals above?
I think the answer to this question needs to be that, yes, such a policy would be greatly improved by pushing for open source by default. With the current policy stating that just “a portion” of the code is released that way, it almost guarantees that very little will be. Moving to a policy where it’s open source by default would lead to a design mentality that keeps that in mind.
Of course, some may (quite reasonably!) argue that copyright does not apply to any works created by the federal government itself (though it does apply to anything written by contractors, who can then assign that copyright to the government). Thus, if the software in question was created by federal employees, then it should automatically be in the public domain, in which case the government has no legal right to place any restrictions on its use, not even open source license terms. Though, of course, in that case, it still has to make the decision over whether or not to release the code publicly. Unfortunately, the current policy says that it would apply to software written by federal employees as well — and that might actually not be allowed under the law. That software is in the public domain.
Of course, it’s likely that plenty of code used in government is actually written by contractors, and for that code, the default should absolutely be that it be open sourced whenever possible.
For what it’s worth, rather than the annoying standard commenting process for most government comments, this one is being done on Github, so join in.
Filed Under: code sharing, copyright, federal government, open source, public domain
Hack Of Federal Gov't Employee Info Is Much, Much Worse Than Originally Stated: Unencrypted Social Security Numbers Leaked
from the because-that's-how-this-works dept
Over a decade ago, I pointed out that every single time there were reports of big “data leaks” via hacking, a few weeks after the initial report, we would find out that the leak was even worse than originally reported. That maxim has held true over and over again. And, here we go again. Last week, we noted that the US government’s Office of Personnel Management had been hacked, likely by Chinese hackers. And, now, it has come out that the hack was (you guessed it) much worse than originally reported.
The president of the union that represents federal government workers, the American Federation of Government Employees (AFGE), sent a letter to the director of the OPM, claiming that the hackers got away with the Central Personnel Data File, which includes full information on just about everything about every federal employee — including (get this) unencrypted Social Security numbers.
Based on the sketchy information OPM has provided, we believe that the Central Personnel Data File was the targeted database, and that the hackers are now in possession of all personnel data for every federal employee, every federal retiree, and up to one million former federal employees. We believe that hackers have every affected person’s Social Security number(s), military records and veterans’ status information, address, birth date, job and pay history, health insurance, life insurance, and pension information; age, gender, race, union status, and more.
Oh, and then there’s this:
Worst, we believe that Social Security numbers were not encrypted, a cybersecurity failure that is absolutely indefensible and outrageous.
The letter further points out — as we did last week — that the 18 months of credit monitoring the government has offered everyone is a complete joke. It’s unlikely that the hackers are looking to do identity fraud for financial gain — and quite likely this is for espionage purposes.
But, let’s go back to the Social Security numbers being unencrypted for a second. Remember, this hack is already being used by intelligence system defenders to argue for why we need stronger “cybersecurity” laws that will give the NSA and FBI much greater access to Americans’ data.
And, yes, this would be the very same FBI that has actively argued against encryption. And the NSA has always hated encryption and insists it needs backdoors into any encryption.
Both of these organizations strongly support “cybersecurity” legislation, claiming that it’s necessary so that the US government can “help” companies dealing with “critical infrastructure.” And yet, here we are, with the government’s own personnel files being held in an unencrypted system that was hacked and copied by (likely) foreign hackers. And we’re supposed to trust two government agencies that have been going around cursing encryption, and give them more access to “protect us,” when the attack on another government agency likely could have been prevented if it had just used encryption?
As plenty of cybersecurity experts will tell you, the problem in the security realm is not “information sharing.” It’s people doing stupid things in how they set up their systems. Not encrypting the employee files for every government employee seems to fit into that category. Perhaps, rather than focusing on bogus “cybersecurity” legislation to give more power to the idiots shouting against encryption, we should have the government focus on getting its own house in order, including encrypting employee data.
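For the curious, field-level encryption isn’t exotic or expensive. Here’s a minimal sketch using the Python cryptography library’s Fernet recipe; the record layout and key handling are illustrative assumptions (in any real deployment the key would live in an HSM or managed key service, not next to the data), and this is obviously not a description of OPM’s actual systems.

```python
from cryptography.fernet import Fernet

# Illustrative only: in production the key comes from a key management
# service or HSM, never generated and stored alongside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

def encrypt_ssn(ssn: str) -> bytes:
    """Encrypt a Social Security number before it is written to storage."""
    return fernet.encrypt(ssn.encode("utf-8"))

def decrypt_ssn(token: bytes) -> str:
    """Decrypt an SSN only when an authorized process actually needs it."""
    return fernet.decrypt(token).decode("utf-8")

# What lands on disk is ciphertext; a copied database dump is useless
# to an attacker who doesn't also have the key.
record = {
    "name": "Jane Doe",                 # made-up example data
    "ssn": encrypt_ssn("123-45-6789"),  # stored encrypted at rest
}
```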
Filed Under: cybersecurity, federal government, leaks, opm, social security numbers, unencrypted
US CIO Orders All .Gov Websites To Require Encrypted Connections, Amazon Enters The Secure Cert Space
from the moving-forward dept
As top FBI officials are arguing that the tech industry needs to “prevent encryption,” the federal government’s CIO, Tony Scott, has officially announced that all federal government websites will only be available via encrypted HTTPS connections by the end of next year. As we noted, this was proposed back in March, but after an open comment period (via Github!), the policy is now official. The official memo talks about the importance of encryption:
The unencrypted HTTP protocol does not protect data from interception or alteration, which can subject users to eavesdropping, tracking, and the modification of received data. The majority of Federal websites use HTTP as the primary protocol to communicate over the public internet. Unencrypted HTTP connections create a privacy vulnerability and expose potentially sensitive information about users of unencrypted Federal websites and services. Data sent over HTTP is susceptible to interception, manipulation, and impersonation. This data can include browser identity, website content, search terms, and other user-submitted information.
To address these concerns, many commercial organizations have adopted HTTPS or implemented HTTPS-only policies to protect visitors to their websites and services. Users of Federal websites and services deserve the same protection. Private and secure connections are becoming the Internet’s baseline, as expressed by the policies of the Internet’s standards bodies, popular web browsers, and the Internet community of practice. The Federal government must adapt to this changing landscape, and benefits by beginning the conversion now. Proactive investment at the Federal level will support faster internet-wide adoption and promote better privacy standards for the entire browsing public.
And the memo doesn’t mince words about websites that choose not to go to HTTPS-only:
Federal websites that do not convert to HTTPS will not keep pace with privacy and security practices used by commercial organizations, and with current and upcoming Internet standards. This leaves Americans vulnerable to known threats, and may reduce their confidence in their government. Although some Federal websites currently use HTTPS, there has not been a consistent policy in this area. An HTTPS-only mandate will provide the public with a consistent, private browsing experience and position the Federal Government as a leader in Internet security.
It’s good to see the federal government embracing this. The plan is to have all federal government websites fully HTTPS by the end of 2016.
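In practice, an HTTPS-only site comes down to two behaviors: redirect any plain-HTTP request to its HTTPS equivalent, and send an HTTP Strict Transport Security (HSTS) header so browsers refuse to downgrade afterward. The memo doesn’t prescribe an implementation, but here’s a minimal Flask sketch of the pattern (assuming TLS termination is handled by the web server or load balancer in front of it):

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Redirect any plain-HTTP request to the HTTPS version of the same URL.
    # (Behind a TLS-terminating proxy, check X-Forwarded-Proto instead.)
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    # Tell browsers to insist on HTTPS for the next year, subdomains included.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "Hello, securely."
```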
Separately, another big step in the world of HTTPS happened quietly on Monday: Amazon started offering secure certificates as well, and it appears the company is looking to make the process much easier and more convenient. Oh, and it’s not just for customers registering their domains through Amazon, either.
It’s good to see the internet world moving more and more to a place where all connections will be encrypted.
Filed Under: certificates, cio, encryption, federal government, https, websites
Companies: amazon
Chief Information Officers Council Proposes HTTPS By Default For All Federal Government Websites
from the being-the-change-people-have-been-waiting-for dept
In a long-overdue nod to both privacy and security, the administration finally moved Whitehouse.gov to HTTPS on March 9th. This followed the FTC’s March 6th move to do the same. And yet, far too many government websites operate without the additional security this provides. But that’s about to change. According to a recent post by the US government’s Chief Information Officers Council, HTTPS will (hopefully) be the new default for federal websites.
The American people expect government websites to be secure and their interactions with those websites to be private. Hypertext Transfer Protocol Secure (HTTPS) offers the strongest privacy protection available for public web connections with today’s internet technology. The use of HTTPS reduces the risk of interception or modification of user interactions with government online services.
This proposed initiative, “The HTTPS-Only Standard,” would require the use of HTTPS on all publicly accessible Federal websites and web services.
In a statement that clashes with the NSA’s activities and the FBI’s push for pre-compromised encryption, the CIO asserts that when people engage with government websites, these interactions should be no one’s business but their own.
All browsing activity should be considered private and sensitive.
The proposed standard would eliminate agencies’ options, forcing them to move to HTTPS, both for their safety and the safety of their sites’ visitors. To be sure, many cats will still need to be herded if this goes into effect, but hopefully there won’t be too many details to trifle over. “HTTPS or else” is the CIO Council’s goal — something that shouldn’t be open to too much interpretation.
As the Council points out, failing to do so places both ends of the interaction at risk. If government sites are thought to be unsafe, it has the potential to harm citizens along with the government’s reputation.
Federal websites that do not use HTTPS will not keep pace with privacy and security practices used by commercial organizations, or with current and upcoming Internet standards. This leaves Americans vulnerable to known threats, and reduces their confidence in their government. Although some Federal websites currently use HTTPS, there has not been a consistent policy in this area. The proposed HTTPS-only standard will provide the public with a consistent, private browsing experience and position the Federal government as a leader in Internet security.
The CIO’s short but informative explanatory page lists the pros of this proposed move and spells out what HTTPS doesn’t protect against. It also notes that while most sites should actually see a performance boost from switching to HTTPS, sites that pull in elements from other parties will be the most difficult to migrate. And, it notes, the move won’t necessarily be inexpensive.
The administrative and financial burden of universal HTTPS adoption on all Federal websites includes development time, the financial cost of procuring a certificate and the administrative burden of maintenance over time. The development burden will vary substantially based on the size and technical infrastructure of a site. The proposed compliance timeline provides sufficient flexibility for project planning and resource alignment.
But, it assures us (at least as much as any government entity can…), the money will be well-spent.
The tangible benefits to the American public outweigh the cost to the taxpayer. Even a small number of unofficial or malicious websites claiming to be Federal services, or a small amount of eavesdropping on communication with official US government sites could result in substantial losses to citizens.
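Checking whether a given site already meets the standard takes only a few lines: hit the plain-HTTP address, see whether it redirects to HTTPS, and look for an HSTS header on the secure response. A rough sketch using the Python requests library (the two hosts are just the examples mentioned above, and a real-world check would want retries and error handling):

```python
import requests

def check_https_policy(hostname: str) -> dict:
    """Report whether a site redirects HTTP to HTTPS and sends an HSTS header."""
    http_resp = requests.get(f"http://{hostname}/", allow_redirects=True, timeout=10)
    https_resp = requests.get(f"https://{hostname}/", timeout=10)
    return {
        "host": hostname,
        "http_redirects_to_https": http_resp.url.startswith("https://"),
        "sends_hsts": "Strict-Transport-Security" in https_resp.headers,
    }

for host in ("whitehouse.gov", "ftc.gov"):
    print(check_https_policy(host))
```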
The CIO is also taking input from the public, at Github no less.
A very encouraging — if rather belated — sign that the government is still making an effort to take privacy and security seriously, rather than placing those two things on the scales for intelligence and law enforcement agencies to shift around as they see fit when weighing their desires against Americans’ rights and privileges.
Filed Under: encryption, federal government, ftc, https, ssl, tls, websites, white house