driving – Techdirt

The Surveillance And Privacy Concerns Of The Infrastructure Bill's Impaired Driving Sensors

from the good-intentions... dept

There is no doubt that many folks trying to come up with ways to reduce impaired driving and make the roads safer have the best of intentions. And yet, hidden within those intentions can linger some pretty dangerous consequences. For reasons that are not entirely clear to me, the giant infrastructure bill (that will apparently be negotiated forever) includes a mandate that automakers would eventually need to build in technology that monitors whether or not drivers are impaired. It’s buried deep in the bill (see page 1066), but the key bit is:

to ensure the prevention of alcohol-impaired driving fatalities, advanced drunk and impaired driving prevention technology must be standard equipment in all new passenger motor vehicles

The details note that the new technology should “passively monitor the performance of a driver of a motor vehicle to accurately identify whether that driver may be impaired.” This isn’t the kind of ignition interlock breathalyzer that some people have been required to install in order to operate a car. This is “passive” monitoring. That means sensors and cameras. And, as Julian Sanchez highlights, that raises a ton of surveillance and privacy concerns:

This amounts to mandating a sophisticated set of sensors be installed in a space where many Americans spend huge amounts of time. (And not just commuting: many people live in vehicles, whether out of choice or necessity, at least part of the time.) A narrowly-tailored sensor that only detects blood alcohol content, if designed to immediately discard any readings below the legal threshold, might not sound worryingly invasive. But the mandate extends to monitoring for other forms of “impairment,” which can require more intrusive types of sensors. One such system being developed by Nissan includes a “camera atop the instrument cluster” which “looks for facial cues signaling the driver is inebriated” while “the vehicle itself looks for driving patterns suggesting an impaired driver.” In other words, one form this mandatory technology is likely to take involves pre-installed video surveillance with facial recognition capabilities. (Law enforcement, no doubt, will eagerly think of many other applications for a ubiquitous system of cameras installed in private spaces: cameras which, by design, the vehicle owner will necessarily be unable to deactivate.)

It’s possible that there are versions of anti-impaired driving technology that could address these practical and privacy concerns. But that only underscores how little sense it makes for Congress to delegate authority for a regulatory mandate at a point when the technology being mandated remains largely hypothetical. Even in the absence of a mandate, there’s likely to be some market for these technologies: Some drivers would embrace as a safety feature a system that warns them if they’ve imbibed more than they realize, or are starting to swerve on the road. They’d doubtless also be popular in contexts where the owner of a vehicle is entrusting it to another driver: Rental cars, corporate cars, commercial cab or trucking fleets. This is not, in other words, technology that will go undeveloped and untested unless it is made universally mandatory.

That makes it seem wildly premature to empower an executive branch official to mandate what is, essentially, surveillance technology in all automobiles when the precise form of the technology remains uncertain, and it’s impossible to concretely debate the merits of specific systems. This sort of delegation lets legislators take credit for Doing Something to promote automotive safety and reduce the unacceptable annual death toll that results from drunk and impaired driving, without having to defend or be held accountable for any of the details of what Something entails. The bill doesn’t, after all, say “we’re requiring a camera that you can’t shut off be installed in everyone’s automobile”; that’s just one possible way to “passively monitor the performance of a driver.” They can reap the accolades now, and insist “that’s not what we intended” if the result is a mandatory surveillance network, or cars that stop working because you’ve used too much hand sanitizer or a pinhole aperture in the dash got blocked.

But, let’s take this even further. Sanchez notes in a parenthetical aside that law enforcement will come up with “many other applications” of this technology, but that deserves to be called out, because knowing what we know about how law enforcement embraces and “extends” every bit of surveillance technology they can get their hands on, this isn’t some pie in the sky hypothetical. We all know exactly how this will play out.

Police are going to demand access to the facial recognition and other data collected by these systems, and they’ll claim they need to have access to it for “public safety” reasons. Others are likely to seek access as well. It’s going to be an insurance goldmine.

And that’s leaving aside the question of whether or not the technology will even work. The “passive” nature of it raises questions about how the system will know whether it’s the driver or passengers who may be impaired. The potential to use facial recognition may raise questions about people who have facial tics or other features that the system might deem to be a sign of impairment. And, as Sanchez also notes, there’s simply no clear reason why this technology needs to be government mandated and standard in every new vehicle.

Yes, preventing impaired driving is a worthy goal. But it shouldn’t come at the cost of installing massive new surveillance infrastructure in every new car.

Filed Under: driving, drunk driving, facial recognition, impaired driving, infrastructure, law enforcement, privacy, sensors, surveillance

CBP Updates Privacy Impact Assessment On License Plate Readers; Says Opting Out Involves Not Driving

from the just-five-years-of-surveillance-at-CBP-fingertips dept

The last time the CBP delivered a Privacy Impact Assessment of its automated license plate readers, it informed Americans as far as 100 miles inland that there’s really no privacy being impacted by the deployment of tech capable of capturing millions of plate images every year. If you don’t want to be on the CBP ALPR radar (which is shared with the DEA and other law enforcement agencies), don’t drive around in a properly licensed vehicle.

This impact assessment was not updated when the CBP’s ALPR vendor was hacked and thousands of plate photos — some of which showed drivers and passengers — were taken from the vendor’s servers. The vendor was never supposed to be storing these locally, but it decided to do so anyway, and the end result was a leak the CBP assured everyone contained “no personal information” about the thousands of people and vehicles in the photos.

The CBP’s latest Privacy Impact Assessment [PDF] has been turned in and it’s more of the same. Want to dodge the feds’ plate readers? Stay off the road. (via Zack Whittaker/TechCrunch)

Privacy Risk: There is a risk that individuals who are not under suspicion or subjects of investigation may be unaware of or unable to consent to CBP access to their license plate information through a commercial database.

Mitigation: This risk cannot be fully mitigated. CBP cannot provide timely notice of license plate reads obtained from various sources outside of its control. Many areas of both public and private property have signage that alerts individuals that the area is under surveillance; however, this signage does not consistently include a description of how and with whom such data may be shared. Moreover, the only way to opt out of such surveillance is to avoid the impacted area, which may pose significant hardships and be generally unrealistic.

Keep in mind that “impacted areas” aren’t just the places you expect Customs and Border Protection to be. You know… like at the border. It’s also up to 100 miles inland from every border. And “border” is also defined as any entry point, which includes international airports. So, that’s a lot of “impacted area.” There’s really no realistic way to dodge everywhere the CBP operates. And one would think actively dodging CBP-patrolled areas would be treated as suspicious behavior by CBP officers, which could result in far more than license plate records being abused.

The CBP says it will keep privacy violations to a minimum, though. It will only access its database if it has “circumstantial evidence.” So… feel good about that, I guess.

The CBP also says that it probably isn’t actually allowed to perform this collection but it will try its very best not to abuse its ALPR privileges.

There is a risk that CBP does not have the appropriate authority to collect commercially available LPR information from vehicles operating away from the border and outside of CBP’s area of responsibility.

No big deal, says the CBP. It will only retain information about vehicles crossing the border. Or connected to a “person of law enforcement interest.” Or connected to potentially illicit activity. Or for “identifying individuals of concern.” Just those things. And the data not connected to anything in particular will be held onto for a limited time.

Here’s the definition of “limited:”

CBP may access LPR data over an extended period of time in order to establish patterns related to criminal activity; however, CBP has limited its access to LPR data to a five-year period in an effort to minimize this risk.

Really the only thing limited about this is that it isn’t forever. The CBP’s vendor can hold onto this data forever, but CBP agents will only be able to search the last five years of records. Cached searches will be retained for up to 30 days if they’re of interest to the CBP or other law enforcement agencies with access to the database. Uninteresting searches will be dumped within 24 hours.

Five years is a lot of data. That’s not really a mitigation of privacy concerns. The CBP’s Impact Assessment pretty much says the agency plans to use this to reconstruct people’s lives. Its definition of “limited” — the one that means five years of searchable records — is its response to the privacy risk posed by the aggregate collection of travel records over a long period of time. Apparently, the CBP feels five years is long enough for it to do its job. But not long enough that the general public should be worried about it.

Filed Under: alpr, cbp, driving, license plate reader, lpr, privacy impact assessment

from the this-spud's-for-you dept

Readers here will know that we rather enjoy when an ordinary person takes extraordinary steps to clap back against government intrusions over speech and technology. A recent example of this was a Canadian man routing around a years-long battle with his government over a vanity license plate for his last name, which happens to be Assman. One thing to note on the technology side of the equation: as legislation seeks more and more to demonize anything to do with technology, sometimes even rightly, it causes those enforcing the laws to engage in ridiculous behavior.

For example, one man in Connecticut has only just won a legal battle that lasted over a year, and cost him far more than the $300 traffic ticket he’d been given, by convincing a court that a McDonald’s hash brown is not in fact a smart phone. This, I acknowledge, may require some explanation.

On April 11th, 2018, Stiber was pulled over by Westport Police Cpl. Shawn Wong Won, who testified that he saw Stiber moving his lips as he held an object resembling a cellphone to his face while driving. Stiber’s lawyer, John Thygerson, countered by saying those lip movements were “consistent with chewing” the hash brown his client purchased at a McDonald’s immediately before he was pulled over.

Stiber also made a Freedom of Information Act (FOIA) request to acquire records showing that Wong was on the 15th hour of a 16-hour double shift and may have had less-than-ideal judgment when he pulled Stiber over. The judge concluded that the state didn’t bring forth enough evidence to show that Stiber was, indeed, on his phone while driving.

The fact that Stiber stared down this $300 traffic ticket through two separate trials, plus whatever the cost of his legal representation, might strike some as absurdly stupid. On the other hand, Stiber was apparently wrongly accused. What matters the cost of getting proper justice served? Especially from a hash-brown-chewing man with such high-minded morals as the following:

In the end, this outcome took two trials and more than a year to come by, and it cost Stiber legal fees exceeding the $300 ticket and four days of missed work. But he has no regrets: “That’s why I did it, because I wouldn’t want anyone else to go through this. Other people don’t have the means to defend themselves in the same way.”

Now, this might only bring up additional questions, such as why talking on a phone and eating a hash brown are treated so differently by law, despite requiring similar bodily motions. Eating can certainly be distracting to driving, after all. Have you ever lost that last fry down by your lap or feet while on the road? I certainly have, and there is no army in the world that could keep me from finding that delicious morsel under the right conditions.

But those questions aside, it’s a win for Stiber, who spent a year in court to prove that a hash brown is not a phone.

Filed Under: driving, driving while distracted, eating, hash browns, jason stiber, mobile phones
Companies: mcdonald's

Oklahoma Looks To Clamp Down On Uninsured Driving With Traffic Cams And Perverse Incentives

from the like-an-ATM,-but-with-zero-end-user-interaction dept

Oklahoma is home to a large percentage of uninsured drivers. Nearly a quarter of the state’s drivers get behind the wheel as latent threats to insured drivers’ insurance rates. The state thinks it’s found a solution to this problem — one that will net a private company and the state’s district attorney offices lots of money.

Oklahoma has finalized a deal with a Massachusetts company to use license-plate scanners to catch uninsured drivers, and the firm expects to issue 20,000 citations a month starting as early as next year.

The program, believed to be the first of its kind in the nation, involves setting up automated high-speed cameras on highways around the state to detect uninsured vehicles and mailing their owners a citation with a fine of $184, according to the District Attorneys Council.

The problem isn’t so much the solution — although the solution has its own issues, like the mass collection of plate/location data. The problem is the incentives. First off, there’s the company involved: Gatso USA will receive more than 40% of the revenue ($84 for each paid citation) for the first couple of years. Its percentage of the take will decline over the next several years, but will never drop below $68/ticket. The company hopes to make more than $1.6 million a month through its work with the state of Oklahoma.
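
For a rough sense of scale, here's a minimal back-of-the-envelope sketch using the figures above (20,000 citations a month and an $84 vendor cut per paid citation, optimistically assuming every citation gets paid); it lands right around that $1.6 million number:

```python
# Back-of-the-envelope check of the revenue figures cited above.
# Assumes every mailed citation is actually paid, which is optimistic.
citations_per_month = 20_000     # projected citations issued monthly
vendor_cut_per_citation = 84     # Gatso's share of each $184 fine, in dollars

projected_vendor_take = citations_per_month * vendor_cut_per_citation
print(f"Projected vendor take: ${projected_vendor_take:,} per month")
# Prints: Projected vendor take: $1,680,000 per month
```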

The more problematic incentive is this:

It will be overseen by the District Attorneys Council rather than law enforcement, and the state’s 27 district attorneys’ offices are expected to receive millions of dollars in citation revenue a year, although no estimates were provided.

Why would this go to DAs? Maybe the state is throwing the DAs Council a bone to shut it up.

District attorneys have complained that their revenue sources are diminishing because of state budget cuts and the drop in bounced-check fines.

I guess the DAs Council is already counting on this system to make up for lost income. There’s not much worse than a tool like this in the hands of a government entity that firmly believes it will return it to its former, cash-heavy glory. The state’s DAs appear to be ready to rely heavily on a revenue stream/camera system sold as a foolproof, cost-effective remedy. But the history of automatic plate readers and traffic cams is littered with tech failures. As Scott Greenfield points out, there’s a good chance the DA (and Gatso) will still get paid, even if the tech is error-prone.

Tech fails all the time because we have unwarranted faith in it even though it lets us down constantly. Dirt on a plate, a cover, a bent plate, or just random errors, could turn that miraculous scanner into a weapon for the unwary. And Gatso, not to mention the cops, has a huge incentive to collect as much money as possible, because money is good.

What to do? Hire a lawyer to fight a $184 ticket? Lose a day of work, maybe lose a job because you lost a day of work, fighting city hall? The innocent will be swept into the mix along with the guilty, and it will be your problem to fix their problem at your expense.

And if a driver doesn’t pay Gatso fast enough (the company issues the tickets and collects the fines), the ticket — right or wrong — heads to the DA’s office for prosecution. Given the statements made by the DAs Council, offices will have every incentive to pursue non-payers vigorously and tack on as many additional fees and fines as possible.

A better solution would be to pay for the system upfront, freeing the state from worrying about contract breaches or mission-creep pressure should the cameras fail to deliver millions of dollars to Gatso USA. And the fines should go into a general fund, rather than directly to an office with the power to prosecute. Once you strip out the perverse incentives, it’s a cost-effective deterrent for uninsured drivers, give or take the system’s margin of error.

Filed Under: driving, insurance, oklahoma, traffic cameras

DailyDirt: Making A Road Trip Across The US…

from the urls-we-dig-up dept

The Cannonball Run plot of racing across the US has inspired some drivers to set illegal records — though the concept was started in 1933 by Edwin “Cannonball” Baker, who drove from NYC to LA in 53 hours (and popularized in the 70s as a protest against highway speed limits). We’ve previously mentioned Alex Roy making the trip in about 32 hours, but more recently, Ed Bolian and a couple of other drivers/passengers did it in just 28 hours and 50 minutes. If you’ve always wanted to drive across the country in some insane way, check out some of the records that other people have set.

After you’ve finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.

Filed Under: alex roy, autonomous cars, cannonball run, carl reese, driving, ed bolian, edwin cannonball baker, electric vehicles, speeding
Companies: delphi, tesla

Self-Driving Cars Have Twice The Accidents, But Only Because Humans Aren't Used To Vehicles Following The Rules

from the I'm-sorry-you-hit-me,-Dave dept

Tue, Dec 22nd 2015 03:59pm - Karl Bode

When Google discusses its latest self-driving car statistics (provided monthly at the company’s website), the company is quick to highlight that with two million miles of autonomous and manual driving combined, the company’s self-driving cars have only been involved in 17 minor accidents, none of them technically the fault of Google. Or, more specifically, these accidents almost always involve Google’s cars being rear ended by human drivers. But what Google’s updates usually don’t discuss is the fact that quite often, self-driving cars are being rear ended because they’re being too cautious and not human enough.

And that’s proven to be one of the key obstacles in programming self-driving cars: getting them to drive more like flawed humans. That is, occasionally aggressive when necessary, and sometimes flexible when it comes to the rules. That’s at least been the finding of the General Motors-Carnegie Mellon Autonomous Driving Collaborative Research Lab, which says getting self-driving cars onto the highway can still be a challenge:

“Last year, Rajkumar offered test drives to members of Congress in his lab’s self-driving Cadillac SRX sport utility vehicle. The Caddy performed perfectly, except when it had to merge onto I-395 South and swing across three lanes of traffic in 150 yards (137 meters) to head toward the Pentagon. The car’s cameras and laser sensors detected traffic in a 360-degree view but didn’t know how to trust that drivers would make room in the ceaseless flow, so the human minder had to take control to complete the maneuver.”

And while Google may crow that none of the accidents their cars get into are technically Google’s fault, accident rates for self-driving cars are still twice that of traditional vehicles, thanks in part to humans not being used to a vehicle that fully adheres to the rules:

“Turns out, though, their accident rates are twice as high as for regular cars, according to a study by the University of Michigan’s Transportation Research Institute in Ann Arbor, Michigan. Driverless vehicles have never been at fault, the study found: They’re usually hit from behind in slow-speed crashes by inattentive or aggressive humans unaccustomed to machine motorists that always follow the rules and proceed with caution.”
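
For what it's worth, here's a quick, purely illustrative normalization of Google's own figures from above (17 minor accidents over roughly two million miles of combined autonomous and manual driving) into a per-million-mile rate, since that's the kind of per-mile comparison the "twice as high" claim rests on; the study's baseline figure for conventional cars isn't quoted here, so this only shows Google's side of it:

```python
# Normalizing the figures quoted earlier in this post: 17 minor accidents
# over roughly 2 million miles of combined autonomous and manual driving.
# (The study's per-mile baseline for conventional cars isn't quoted here.)
google_miles = 2_000_000
google_minor_accidents = 17

rate_per_million_miles = google_minor_accidents / (google_miles / 1_000_000)
print(f"{rate_per_million_miles:.1f} minor accidents per million miles")
# Prints: 8.5 minor accidents per million miles
```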

But with a sometimes-technophobic public quick to cry foul over the slightest self-driving car mishap, car programmers are proceeding cautiously when it comes to programming in an extra dose of rush-hour aggression. And regulators are being even more cautious still. California last week proposed new regulations that would require that all self-driving cars have fully working human controls and a driver in the driver’s seat at all times, ready to take control (which should ultimately do a wonderful job of pushing the self-driving car industry to other states like Texas).

The self-driving car future is coming up quickly whether car AI or self-driving auto philosophical dilemmas (should a car be programmed to kill the driver if it will save a dozen school children?) are settled or not. Google and Ford will announce a new joint venture at CES that may accelerate self-driving vehicle construction. And with 33,000 fatalities caused by highway-bound humans each year, it seems likely that, overly-cautious rear-enders aside, an automated auto industry will save a significant number of lives over the long haul.

Filed Under: accidents, autonomous vehicles, cars, driving, rules, self-driving cars
Companies: google

Malaysia To Introduce RFID Tracking For Every Vehicle

from the what-could-possibly-go-wrong? dept

Here on Techdirt, nationwide tracking schemes tend to raise a red flag. In Malaysia, by contrast, there seem to be no such worries, as ambitious plans to introduce RFID tagging for all vehicles, reported by The Sun Daily, indicate:

> A new vehicle security tracking system suitable for all types of vehicles — the Radio Frequency Identification (RFID) — will be implemented nationwide by the Road Transport Department (JPJ) by 2018.

According to the article, there are plenty of advantages of doing so:

> This new system will enable the police and other authorities to effectively track down criminals

And:

> the RFID technology will herald a new era for vehicle security in Malaysia and it could be the answer to combat vehicle theft and cloned vehicle syndicates.

Moreover:

> the RFID can also be used to provide real-time monitoring on road traffic situation.

And if you’re worried that ne’er-do-wells might seek to avoid being tracked simply by ripping off said RFID tags, fear not, Malaysia has that covered:

> theSun understands that the RFID tag is designed to shatter should any one attempt to tamper with it and can transmit a warning to the JPJ and police, should any one try to remove the sticker.

Sounds pretty foolproof. So why aren’t other countries rushing to adopt this approach?

> Interestingly, RFID technology has been criticised in many countries for its effectiveness to track vehicles movement and citizens. It has been widely accused for invasion of privacy in Belgium, Italy, UK and US.

I just can’t imagine why.

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: cars, driving, malaysia, privacy, rfid, surveillance, tracking

Despite Recent Court Rulings, Getting Behind The Wheel Is Pretty Much Kissing Your 4th Amendment Protections Goodbye

from the cars-are-for-criminals,-apparently dept

There’s been more good news than bad concerning the Fourth Amendment recently. In addition to the Supreme Court’s ruling that searches of cellphones incident to arrest now require a warrant, various circuit court decisions on cell site location info and the surreptitious use of GPS tracking devices may see the nation’s top court addressing these contentious issues in the near future. (The latter still needs to be addressed more fully than the Supreme Court’s 2012 punt on the issue.)

Generally speaking, the Fourth Amendment has been at least partially restored, much of it due to the courts being more willing to address the implications of technological advances and the bearing these have on the expectation of privacy.

But one area — pertaining to technology dating back over 100 years and used by millions on a daily basis — remains under-served: vehicles. As we’ve discussed before, the “motor vehicle exception” allows law enforcement to search an entire vehicle, along with its contents, without a warrant, provided they have probable cause to suspect contraband is hidden in it. An outright refusal to allow a search may result in the securing of a warrant, but the law is riddled with so many law enforcement-friendly exceptions that the use of a warrant is the exception, rather than the rule.

This is a small part of a larger problem. While the Supreme Court did declare that law enforcement officers weren’t allowed to artificially prolong routine traffic stops in order to perform further queries/searches, it also wrote them a blank check for abuse with the Heien decision.

Drivers need to be on top of traffic laws, but cops don’t.

As the text indicates and we have repeatedly affirmed, “the ultimate touchstone of the Fourth Amendment is ‘reasonableness.’” Riley v. California, 573 U. S. ___, ___ (2014)…To be reasonable is not to be perfect, and so the Fourth Amendment allows for some mistakes on the part of government officials, giving them “fair leeway for enforcing the law in the community’s protection.”

This meshes with the “good faith exception,” another out for police officers who ignore the Fourth Amendment. Cops can basically stop you for any reason and use that stop to fish for additional criminal charges. They have to be a bit quicker about it, thanks to the Rodriguez decision. But they won’t have to be any better at their jobs, and they are not expected to know the laws they’re enforcing.

Ken Armstrong at Vice has a long rundown of cases where results of searches related to traffic stops were suppressed due to officers’ ignorance of the law but later reinstated to legitimacy by the Heien decision. Here’s one of them:

When a police officer in the Village of East Troy (4,281 residents, 18 miles of roadways, 500 manholes, according to its quarterly newsletter) pulled over Richard Houghton’s blue Ford Taurus, the officer was ignorant of the law in at least two ways. He thought the car needed a front license plate (in this case, it didn’t), and he thought the car’s air freshener was illegal, believing any object dangling from a rear-view mirror automatically violated the state’s law on obstructing a driver’s view (not so). Nonetheless, the Wisconsin Supreme Court decided that the marijuana and drug paraphernalia found in the officer’s subsequent search of the car would not be thrown out. Just one year after it had ruled the opposite in another case, the court decided that in light of Heien, mistakes of law by police could now be forgiven, if reasonable.

Notably, automobile air fresheners are one of law enforcement’s favorite “reasonable suspicion” indicators. The only reason for anyone to have a prominently-displayed air freshener (or multiple fresheners), so the theory goes, is to cover up the smell of illicit drugs. Not every court has bought this theory, but enough have been willing to consider this — along with other questionable “suspicious” actions like being nervous, talking too fast, talking too slow, making eye contact, not making eye contact, etc. — as part of the constructed totality of reasonable suspicion.

Once an officer has this, he can quickly convert it to probable cause. With reasonable suspicion, an officer can often bring in a drug-sniffing dog — the search that isn’t a search — to obtain the probable cause for a complete roadside search.

But drug dogs are no more reliable than the average police officer’s command of traffic laws. A recent Seventh Circuit Appeals Court decision affirming the conviction of a man found with 15 kilos of cocaine in his vehicle took a bit of time to question the reliability of Lex, the drug dog that alerted prior to the search of the suspect’s vehicle.

In Larry Bentley’s case, a police officer initiated a traffic stop after observing Bentley’s vehicle cross into another lane on an Illinois highway without signaling. After stopping Bentley, the officer decided to call for a drug-detection dog named Lex. Once on the scene, Lex alerted, and the officers found close to 15 kilograms of cocaine in the vehicle.

But what if Lex alerts every time he is called upon? The fact that drugs are (or are not) found would have nothing to do with his behavior. That, in essence, is what Bentley is arguing here. The evidence Bentley was able to gather suggests that Lex is lucky the Canine Training Institute doesn’t calculate class rank. If it did, Lex would have been at the bottom of his class.

The ruling contains some very damning details about Lex, who alerts so often he should just be renamed “Probable Cause.”

In pressing his challenge to the dog’s alert, Bentley makes two principal points. First, he contends that Lex’s past performance in the field suggests he is particularly prone to false positives (i.e., signaling to his handler that there are drugs in a vehicle when there are not). He has a point. Lex alerts 93% of the time he is called to do an open-air sniff of a vehicle, and Lex’s overall accuracy rate in the field (i.e., the number of times he alerts and his human handler finds drugs) is not much better than a coin flip (59.5%).

This dog is a coin flip for contraband. But he’s great at “authorizing” a warrantless search. He’s a police officer’s best friend — especially those that hope to turn minor traffic violations into something worth the paperwork. If an officer requested Lex, more than 9 times out of 10, he got to search the vehicle.
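
Putting the court's numbers together makes the point concrete. Here's a minimal sketch using only the alert rate and accuracy figures quoted above, applied to a hypothetical 100 roadside sniffs:

```python
# Illustrative arithmetic from the figures in the ruling: a 93% alert rate,
# with drugs actually found after 59.5% of those alerts.
sniffs = 100                     # hypothetical number of open-air sniffs
alert_rate = 0.93                # share of sniffs where Lex alerts
hit_rate_given_alert = 0.595     # share of alerts where drugs are found

searches_triggered = sniffs * alert_rate
searches_finding_drugs = searches_triggered * hit_rate_given_alert
searches_finding_nothing = searches_triggered - searches_finding_drugs

print(f"Searches triggered: {searches_triggered:.0f}")                 # 93
print(f"Searches that find drugs: {searches_finding_drugs:.1f}")       # ~55.3
print(f"Searches that find nothing: {searches_finding_nothing:.1f}")   # ~37.7
```

In other words, on Lex's own track record, roughly four out of every ten vehicles he "authorizes" a warrantless search of turn up nothing at all.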

Unfortunately for Bentley and countless other citizens, the fallibility of drug-sniffing dogs isn’t anything they can use in their defense. From the Supreme Court on down through the various circuits, judges are waking up to the fact that drug dogs are more interested in pleasing their handlers than being an objective investigatory method (as is to be expected from nearly any domesticated animal), but, more often than not, they will side with law enforcement on drug dog “alerts.”

Bentley rightly points out that Lex is smart. Shively testified that he rewards Lex every time the dog alerts in the field. Presumably the dog knows he will get a “giftee” (a rubber hose stuffed with a sock) every time he alerts. If Lex is motivated by the reward (behavior one would expect from any dog), he should alert every time. This giftee policy seems like a terrible way to promote accurate detection on the part of a service animal, lending credence to Bentley’s argument that Lex’s alert is more of a pretext for a search than an objective basis for probable cause.

But despite seeing the conflict here, the court finds that Lex is still a good dog, yes he is, and the permission he granted the officers to perform a more intrusive search is probably probable cause — or close enough to it that the other exceptions (good faith, etc.) swallow up Bentley’s protests to the contrary.

Nevertheless, in light of the Supreme Court’s decision in Florida v. Harris, 133 S. Ct. 1050 (2013), which addressed the use of drug-detection dogs, we conclude that the district judge did not err when he decided that Lex’s alert, along with the other evidence relating to the stop, was sufficient to support probable cause. Bentley’s other two challenges based on the traffic stop and his alleged lack of knowledge of the cocaine in the vehicle also fail. We thus affirm his conviction.

Combine drug dogs, Heien, Rodriguez, and the “motor vehicle” and “good faith” exceptions, and you have a significant gap in Fourth Amendment coverage. Get in a car and kiss most of it goodbye. A cop can pull you over for nearly any reason and use this pretense to perform a dog-and-officer act that almost guarantees the generation of “probable cause.” Once this is achieved, everything in the car is subject to the search. If it isn’t (like a cellphone or a GPS system), this won’t be worked out until the arrestee is granted the chance to move to suppress evidence. There’s no stopping the search. There’s only the much smaller chance (at least compared to Lex’s magic nose) that the evidence will be tossed, along with the charges.

The only thing standing in the way of this abuse is the vague stopwatch of Rodriguez. Officers can’t artificially extend stops past the point that the objective has been achieved (ticket/warning issued). But there’s no specific time limit for officers to reach this concluding point, which means this will be adjudicated on a case-by-case basis.

If this is the only limitation, the Fourth Amendment means next to nothing if a citizen is behind the wheel. Not only is the Fourth Amendment supposed to protect against illegal searches, it’s also supposed to prevent illegal seizures. And under this amendment’s definition, a seizure includes the sort of detainment a traffic stop entails.

A seizure of a person, within the meaning of the Fourth Amendment, occurs when the police’s conduct would communicate to a reasonable person, taking into account the circumstances surrounding the encounter, that the person is not free to ignore the police presence and leave at his will.

Heien allows the seizure to take place, completely without justification. Everything else allows this seizure to be refashioned into something far beyond a ticket for a broken taillight or “crossing a lane divider without signalling.” Technically, a citizen should be free to leave once a ticket/warning is in their hands. But how many will when the officer is still leaning in the window, asking questions unrelated to the traffic stop? Roughly zero. Any person who starts moving a vehicle before an officer returns to their own vehicle risks being nailed with additional charges/bullets/etc.

The legal interpretation is the ideal. The real world interpretation is nothing like it. A court may feel a traffic stop was over at point X, but the person whose vehicle was tossed after a drug dog appeared and “alerted” has, in effect, signaled his or her “consent” to the search by not stomping on the gas pedal the moment the officer handed over the citation.

Because of everything tied into the vehicle nexus, people are being subjected to Fourth Amendment-violating searches and seizures every day. Legal precedent puts the odds in law enforcement’s favor. And what’s the worst that happens to cops who violate the Fourth Amendment? They lose a bust or two. But there are millions of drivers on the road. They can always find more people to arrest. And they can start this chain of events by citing laws that don’t exist and further the intrusion by bringing in a four-legged cop to give them the permission to override a citizen’s refusal to allow a search.

So, while it is heartening to see more court decisions tackling technology in a more logical fashion, something that’s been with us for more than 100 years remains a legal blind spot.

Filed Under: 4th amendment, cars, driving

Breaking: Self-Driving Cars Avoid Accident, Do Exactly What They Were Programmed To Do

from the I-can-and-will-do-that,-Dave dept

Fri, Jun 26th 2015 11:34am - Karl Bode

We just got done talking about how, after logging 1,011,338 autonomous miles since 2009, Google’s automated cars have had just thirteen accidents — none of which were the fault of the Google vehicles. By and large the technology appears to be working incredibly well, with most of the accidents the fault of inattentive human drivers rear-ending Google’s specially-equipped Lexus SUVs at stop lights. But apparently, the fact that this technology is working well isn’t quite interesting enough for the nation’s technology press.

A Reuters report making the rounds earlier today proclaimed that two self-driving cars from Google and Delphi Automotive almost got into an accident this week in California. According to the Reuters report, Google’s self-driving Lexus “cut off” Delphi’s self-driving Audi, forcing the Audi to take “appropriate action.” This apparently got the nation’s technology media in a bit of a heated lather, with countless headlines detailing the “almost crash.” The Washington Post was even quick to inform readers that the almost-crash “is now raising concerns over the technology.”

Except it’s not. Because not only did the cars not crash, it apparently wasn’t even a close call. Both Delphi and Google spokespeople told Ars Technica that both cars did exactly what they were programmed to do and Reuters apparently made an automated mountain out of a molehill:

“I was there for the discussion with Reuters about automated vehicles,” she told Ars by e-mail. “The story was taken completely out of context when describing a type of complex driving scenario that can occur in the real world. Our expert provided an example of a lane change scenario that our car recently experienced which, coincidentally, was with one of the Google cars also on the road at that time. It wasn’t a ‘near miss’ as described in the Reuters story.”

Instead, she explained how this was a normal scenario, and the Delphi car performed admirably.

“Our car did exactly what it was supposed to,” she wrote. “Our car saw the Google car move into the same lane as our car was planning to move into, but upon detecting that the lane was no longer open it decided to terminate the move and wait until it was clear again.”

In other words, as Twitter’s Nu Wexler observed, the two cars did exactly what they were programmed to do, though that’s obviously a notably less sexy story than Reuters’ apparently hallucinated tale of automated automotive incompetence.

Breaking: Self-driving cars avoid accident, doing exactly what they are programmed to do

— Nu Wexler (@wexler) June 26, 2015

Filed Under: accidents, autonomous vehicles, cars, driving, near miss, self-driving
Companies: delphi, google

Dumb Criminal Posts Video Of Dumb Crime After Leaving Hospital Injured From Dumbness

from the book-'em dept

It’s been a couple of months, so maybe you thought that there were no more dumb criminals doing dumb things with technology. Well, that was a very silly thought, silly-thought-thinker. You should know by now that nothing will stop the deluge of dumb. This latest is special, however, due to the impressive dedication to stupid by our criminal mastermind. This case is one in which an 18-year-old man videotaped himself driving like an idiot on purpose, injured himself to the point of needing an airlift to a hospital, and then uploaded the video to YouTube, accurately titling it “Me Driving Like an Idiot.”

Robert Charles Kelley IV, 18, driving west, first struck a Toyota sedan with his 1994 Honda on State Road 44 near Jungle Road, around 3:36 p.m. Monday afternoon, police said. He fled the scene of that crash and later would strike three more vehicles that were stopped at a red light at S.R. 44 and Colony Park Road, police said. Two patients from the first crash and one from the second were taken to Bert Fish Medical Center with injuries not considered life-threatening, police said.

Police also mentioned that they were planning on arresting Kelley, because of course they are. At the conclusion of his vehicular rampage, Kelley needed help getting himself removed from his now-destroyed Honda and was taken by helicopter to a hospital. Police had thought his injuries were serious, but he was released the next day. That’s apparently when the mood struck Kelley to finally upload the video to YouTube, further implicating himself. It features, you guessed it, him driving like an idiot with a soundtrack of, you probably also guessed it, irritating techno music.

Markert said Kelley’s filming and uploading of the video falls under the “What were you thinking?” category, but because of the evidence it provided police, he added, “We certainly appreciate it.”

While the original video has since been taken down, you can see clips of it in news reports like the following:

Enjoy those multiple counts of leaving the scene of an accident with injuries, reckless driving, driving without a license, and possibly even intentional battery with a vehicle, son. Here’s hoping video of your perp walk ends up on YouTube.

Filed Under: accident, driving, evidence, robert kelley, video