Algorithm Might Protect Non-Targets Caught In Surveillance, But Only If The Government Cares What Happens To Non-Targets
from the something-it-has-yet-to-show dept
Akshat Rathi at Quartz points to an interesting algorithm developed by Michael Kearns of the University of Pennsylvania — one that might give the government something to consider when conducting surveillance. It gauges the likelihood of non-targets being inadvertently exposed during investigations, warning intelligence and investigative agencies when other tactics should perhaps be deployed.
Rathi provides a hypothetical situation in which this algorithm might prove useful. A person with a rare medical condition they’d like to keep private visits a clinic that happens to be under investigation for fraud. This person often calls a family member for medical advice: an aunt who works at another clinic. The aunt’s clinic is also under investigation.
When the investigation culminates in a criminal case, there’s a good chance the patient — a “non-target” — will have their sensitive medical information exposed.
If the government ends up busting both clinics, there’s a risk that people could find out about your disease. Some friends may know about your aunt and that you visit some sort of clinic in New York; government records related to the investigation, or comments by officials describing how they built their case, may be enough for some people to draw connections between you, the specialized clinic, and the state of your health.
Even though this person isn’t targeted by investigators, the unfortunate byproduct is diminished privacy. This algorithm, detailed in a paper published in the Proceedings of the National Academy of Sciences, aims to add a layer of filtering to investigative efforts. As Kearns describes it, the implementation would both warn of potential collateral damage and inject “noise” to minimize the accidental exposure of non-targets.
In cases where there are only a few connections between the people or organizations under suspicion, Kearns’s algorithm would warn investigators that taking action could result in a privacy breach for specific individuals. If a law were to require a greater algorithmic burden of proof for medical-fraud cases, investigators would need to find alternative routes to justify going after the New York clinic.
But if there were lots of people who could serve as links between the two frauds, Kearns’s algorithm would let the government proceed with targeting and exposing both clinics. In this situation, the odds of compromising any one individual’s privacy are lower.
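Neither the Quartz piece nor the excerpts above get into the math, but the gist — count the non-targets who bridge two suspect entities, add calibrated noise to that count, and only proceed if the noisy count clears a threshold — fits in a short sketch. Everything below (the contact-graph shape, the threshold, the epsilon noise parameter, the function names) is an illustrative assumption, not the actual mechanism from Kearns’s paper:

```python
import math
import random

def laplace_noise(scale):
    # Standard inverse-CDF sampling from Laplace(0, scale), the usual
    # noise source in differential privacy.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def bridging_nonmembers(contacts, target_a, target_b):
    # Non-targets connected to both suspect entities -- the people whose
    # privacy an investigation of both could compromise.
    linked_a = {p for p, links in contacts.items() if target_a in links}
    linked_b = {p for p, links in contacts.items() if target_b in links}
    return (linked_a & linked_b) - {target_a, target_b}

def safe_to_pursue_both(contacts, target_a, target_b,
                        threshold=20, epsilon=0.5):
    # Few bridges means any one of them is individually identifiable,
    # so warn investigators off. Adding Laplace noise keeps the check
    # itself from revealing much about any single person in the graph.
    count = len(bridging_nonmembers(contacts, target_a, target_b))
    return count + laplace_noise(1.0 / epsilon) >= threshold

# Rathi's hypothetical, simplified: only the aunt directly links the two
# clinics, so the bridge count is tiny and (with overwhelming probability)
# the check flags a privacy risk.
contacts = {
    "patient": {"clinic_ny", "aunt"},
    "aunt":    {"clinic_ny", "clinic_other"},
}
if not safe_to_pursue_both(contacts, "clinic_ny", "clinic_other"):
    print("Warning: acting on both targets risks exposing a non-target")
```

Comparing a noisy count rather than the raw one is the differential-privacy move here: the go/no-go decision shouldn’t hinge detectably on whether any one person appears in the graph. The threshold stands in for whatever “algorithmic burden of proof” a law might set, per the excerpt above.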
Potentially useful, but it suffers from a major flaw: the government.
Of course, if an investigation focused on suspected terrorism instead of fraud, the law may allow the government to risk compromising privacy in the interest of public safety.
Terrorism investigations will trump almost everything else, including privacy protections supposedly guaranteed by our Constitution. Courts have routinely sided with the government when it sacrifices its citizens’ privacy in the name of security.
It’s highly unlikely investigative or intelligence agencies have much interest in protecting the privacy of non-targeted citizens, even in surveillance unrelated to terrorism — not if it means using alternative (read: “less effective”) investigative methods or techniques. It has been demonstrated time and time again that law enforcement prefers the most direct route to what it seeks, no matter how much collateral damage is generated.
The system has no meaningful deterrents built into it. Violations are addressed after the fact, through a remedial process that can be prohibitively expensive for those whose rights have been violated. On top of that, multiple layers of immunity shield government employees from the consequences of their actions and, in some cases, completely thwart those seeking redress for their grievances.
The algorithm may prove useful in other areas — perhaps in internal investigations performed by private, non-state parties — but our government is generally uninterested in protecting the rights guaranteed to Americans. Too many law enforcement pursuits (fraud, drugs, terrorism, etc.) are considered more important than the rights (and lives) of those mistakenly caught in the machinery. If the government can’t be talked out of firing flashbangs through windows or predicating drug raids on random plant matter found in someone’s trash can, it’s not going to reroute investigations just because a piece of software says a few people’s most private information might be exposed.
Filed Under: algorithm, michael kearns, non-targets, privacy, surveillance, targets, warrants