accidents – Techdirt
Stories filed under: "accidents"
Pedestrian Deaths By Car In Phoenix Area Last Week: 11. But One Was By A Self-Driving Uber
from the I-can't-do-that,-Dave dept
Despite worries about the reliability and safety of self-driving vehicles, the millions of test miles driven so far have repeatedly shown self-driving cars to be significantly safer than their human-piloted counterparts. Yet whenever accidents (or near accidents) occur, they tend to be blown completely out of proportion by those terrified of (or financially disrupted by) an automated future.
So it will be interesting to watch the reaction to news that a self-driving Uber vehicle was, unfortunately, the first to be involved in a fatality over the weekend in Tempe, Arizona:
“A self-driving Uber SUV struck and killed a pedestrian in Tempe, Arizona, Sunday night, according to the Tempe police. The department is investigating the crash. A driver was behind the wheel at the time, the police said.
“The vehicle involved is one of Uber’s self-driving vehicles,” the Tempe police said in a statement. “It was in autonomous mode at the time of the collision, with a vehicle operator behind the wheel.”
Uber, for its part, says it’s working with Tempe law enforcement to understand what went wrong in this instance:
Our hearts go out to the victim's family. We're fully cooperating with @TempePolice and local authorities as they investigate this incident.
— Uber Comms (@Uber_Comms) March 19, 2018
Bloomberg also notes that Uber has suspended its self-driving car program nationwide until it can identify what exactly went wrong. The National Transportation Safety Board is also opening an investigation into the death and is sending a small team of investigators to Tempe.
We've noted for years now how, despite a lot of breathless hand-wringing, self-driving car technology (even in its beta form) has proven to be remarkably safe. Millions of AI driver miles have been logged already by Google, Volvo, Uber and others with only a few major accidents. When accidents do occur, they most frequently involve human beings getting confused when a robot-driven vehicle actually follows the law. Google has noted repeatedly that the most common accidents it sees are drivers rear-ending its AI vehicles because the cars actually stop before turning right on red.
And while there are some caveats to this data (such as the fact that many of these miles are logged with drivers grabbing the wheel when needed), self-driving cars have so far proven to be far safer than even many advocates projected. We've not even gotten close to the well-hyped "trolley problem," and engineers have argued that if we do, somebody has already screwed up in the design and development process.
It's also worth reiterating that early data continues to strongly indicate that self-driving cars will be notably safer than their human-piloted counterparts, who cause 33,000 fatalities annually (usually because they were drunk or distracted by their phones). Meanwhile, 10 pedestrians have been killed by human drivers in the Phoenix area (including Tempe) in the last week alone, and Arizona had the highest rate of pedestrian fatalities in the country last year. And it's getting worse, with Arizona pedestrian deaths rising from 197 in 2016 to 224 in 2017.
We’ll have to see what the investigation reveals, but hopefully the tech press will view Arizona’s problem in context before writing up their inevitable hyperventilating hot takes. Ditto for lawmakers eager to justify over-regulating the emerging self-driving car industry at the behest of taxi unions or other disrupted legacy sectors. If we are going to worry about something, those calories might be better spent on shoring up the abysmal security and privacy standards in the auto industry before automating everything under the sun.
Filed Under: accidents, autonomous vehicles, pedestrian fatalities, self-driving cars
Companies: uber
Google's Self-Driving Car Causes First Accident, As Programmers Try To Balance Human Simulacrum And Perfection
from the get-out-of-my-lane,-Dave dept
Tue, Mar 1st 2016 11:40am - Karl Bode
Google’s self-driving cars have driven millions of miles with only a dozen or so accidents, all of them being the fault of human drivers rear-ending Google vehicles. In most of these cases, the drivers either weren’t paying attention, or weren’t prepared for a vehicle that was actually following traffic rules. But this week, an incident report by the California Department of Motor Vehicles (pdf) highlighted that a Google automated vehicle was at fault in an accident for what’s believed to be the first time.
According to the report, Google's vehicle was in the right-hand turn lane on a busy thoroughfare in Google's hometown of Mountain View, California, last month, when it was blocked by some sand bags. It attempted to move left to get around the sand bags and, moving slowly, struck a city bus that the car's human observer had assumed would slow down, but didn't. All in all, it's the kind of accident that any human being might be involved in on any day of the week. But given the press and the public's tendency toward occasional hysteria when self-driving technology proves fallible, Google's busy trying to get out ahead of the report.
Google historically compiles monthly reports for its self-driving car project, and while its February report addressing this specific accident hasn't been made public yet, Google has been releasing an early look to the media. In the report, Google notes that, just as for human drivers, trying to predict another driver's behavior on the road isn't always successful:
“Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day.
This is a classic example of the negotiation that's a normal part of driving – we're all trying to predict each other's movements. In this case, we clearly bear some responsibility, because if our car hadn't moved there wouldn't have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.
We've now reviewed this incident (and thousands of variations on it) in our simulator in detail and made refinements to our software. From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future.”
Live and learn. Or compute and learn. Whatever. If automated vehicles were going to cause an accident, it’s at least good that this appears to be an experience (don’t give city bus drivers the benefit of the doubt) programmers will learn from. The problem historically is that like so many technologies, people are afraid of self-driving cars. As such, automated vehicles can’t just be as good as human beings, they’ll have to be better than human beings for people to become comfortable with the technology seeing widespread adoption.
By any measure self-driving cars have been notably safer than most people imagined, but obviously there's still work to do. Recent data from the General Motors-Carnegie Mellon Autonomous Driving Collaborative Research Lab suggests that self-driving cars have twice the accident rate of human-driven vehicles — but again largely because people aren't used to drivers that aren't willing to bend the rules a little bit. Striking an acceptable balance between having an automated driver be perfect and having it be more human-like is going to be a work in progress for some time.
Filed Under: accidents, autonomous vehicles, blame, fault, self-driving cars
Companies: google
Self-Driving Cars Have Twice The Accidents, But Only Because Humans Aren't Used To Vehicles Following The Rules
from the I'm-sorry-you-hit-me,-Dave dept
Tue, Dec 22nd 2015 03:59pm - Karl Bode
When Google discusses its latest self-driving car statistics (provided monthly at the company's website), it is quick to highlight that across two million miles of autonomous and manual driving combined, its self-driving cars have been involved in only 17 minor accidents, none of them technically Google's fault. Or, more specifically, these accidents almost always involve Google's cars being rear-ended by human drivers. But what Google's updates usually don't discuss is the fact that quite often, self-driving cars are being rear-ended because they're being too cautious and not human enough.
And that’s proven to be one of the key obstacles in programming self-driving cars: getting them to drive more like flawed humans. That is, occasionally aggressive when necessary, and sometimes flexible when it comes to the rules. That’s at least been the finding of the General Motors-Carnegie Mellon Autonomous Driving Collaborative Research Lab, which says getting self-driving cars onto the highway can still be a challenge:
“Last year, Rajkumar offered test drives to members of Congress in his lab's self-driving Cadillac SRX sport utility vehicle. The Caddy performed perfectly, except when it had to merge onto I-395 South and swing across three lanes of traffic in 150 yards (137 meters) to head toward the Pentagon. The car's cameras and laser sensors detected traffic in a 360-degree view but didn't know how to trust that drivers would make room in the ceaseless flow, so the human minder had to take control to complete the maneuver.”
And while Google may crow that none of the accidents their cars get into are technically Google’s fault, accident rates for self-driving cars are still twice that of traditional vehicles, thanks in part to humans not being used to a vehicle that fully adheres to the rules:
“Turns out, though, their accident rates are twice as high as for regular cars, according to a study by the University of Michigan's Transportation Research Institute in Ann Arbor, Michigan. Driverless vehicles have never been at fault, the study found: They're usually hit from behind in slow-speed crashes by inattentive or aggressive humans unaccustomed to machine motorists that always follow the rules and proceed with caution.”
But with a sometimes-technophobic public quick to cry foul over the slightest self-driving car mishap, car programmers are proceeding cautiously when it comes to programming in an extra dose of rush-hour aggression. And regulators are being more cautious still. California last week proposed new regulations that would require all self-driving cars to have fully working human controls and a driver in the driver's seat at all times, ready to take control (which should do a wonderful job of pushing the self-driving car industry to other states like Texas).
The self-driving car future is coming up quickly, whether or not the philosophical dilemmas of car AI (should a car be programmed to kill the driver if it will save a dozen school children?) are settled. Google and Ford will announce a new joint venture at CES that may accelerate self-driving vehicle construction. And with 33,000 annual fatalities caused by highway-bound humans, it still seems likely that, overly-cautious rear-enders aside, an automated auto industry will save significant lives over the long haul.
Filed Under: accidents, autonomous vehicles, cars, driving, rules, self-driving cars
Companies: google
Driver Leaves Scene Of Accident, Gets Turned In By Her Car
from the prosecution-would-like-to-submit-this-jumble-of-circuits-and-wires-as-Exhibit-A dept
It’s no secret today’s vehicles collect tons of data. Or, at least, it shouldn’t be a secret. It certainly isn’t well-known, despite even some of the latest comers to the tech scene — legislators — having questioned automakers about their handling of driver data.
More than one insurance company will offer you a discount if you allow them to track your driving habits. Employers have been known to utilize “black boxes” in company vehicles. These days, the tech is rarely even optional, although these “event data recorders” generally only report back to the manufacturers themselves. Consumer-oriented products like OnStar combine vehicle data with GPS location to contact law enforcement/medical personnel if something unexpected happens. Drivers can trigger this voluntarily to seek assistance when stranded on the road because of engine trouble, flat tires, etc.
They can also trigger this involuntarily, as one Florida woman found out.
Police responded to a hit-and-run in the 500 block of Northwest Prima Vista Boulevard on Monday afternoon. The victim, Anna Preston, said she was struck from behind by a black vehicle that took off. Preston was taken to the hospital with back injuries.
Around the same time, police dispatch got an automated call from a vehicle emergency system stating the owner of a Ford vehicle was involved in a crash and to press zero to speak with the occupants of the vehicle.
The owner of the vehicle seemed surprised to be receiving a call from a 911 dispatcher. The driver, Cathy Bernstein, first claimed she hadn't been in an accident. Unfortunately for her, the call had been triggered by her airbag deploying, something that rarely happens without a corresponding impact, so the dispatcher sent police officers to the driver's home following the phone call.
At that point, her story changed.
Police went to Bernstein's home on Northwest Foxworth Avenue and saw that her vehicle had extensive front-end damage and silver paint from Preston's vehicle on it. Bernstein's airbag had also been deployed.
Police said Bernstein again denied hitting another vehicle, saying she had struck a tree.
From that point, the story gets even better.
It was later discovered that Bernstein had been involved in another accident prior to the one with Preston and was fleeing from that incident.
The whole recording is worth a listen, especially as Bernstein buys time after being blindsided by the unexpected incoming call.
Dispatcher: Are you broke down?
Bernstein: No. Unfortunately [looooooong pause] I'm fine.
[…]
Bernstein: The guy who hit me […] I could not control that.
Dispatcher: So, you HAVE been in an accident.
Bernstein: [pause, then very slowly] No.
In this case, the system worked, although not in the way anyone really expected. Someone who thought they had gotten away with two consecutive hit-and-runs found herself talking to police officers after her car tried to help her out by dialing 911. The onboard system is meant to ensure the safety of the driver. In this case, it was apparently everyone else that needed the protection, but the circuitous route still reached the most desirable conclusion.
Filed Under: accidents, cars, hit and run, internet of things
Breaking: Self-Driving Cars Avoid Accident, Do Exactly What They Were Programmed To Do
from the I-can-and-will-do-that,-Dave dept
Fri, Jun 26th 2015 11:34am - Karl Bode
We just got done talking about how, after logging 1,011,338 autonomous miles since 2009, Google’s automated cars have had just thirteen accidents — none of which were the fault of the Google vehicles. By and large the technology appears to be working incredibly well, with most of the accidents the fault of inattentive human drivers rear-ending Google’s specially-equipped Lexus SUVs at stop lights. But apparently, the fact that this technology is working well isn’t quite interesting enough for the nation’s technology press.
A Reuters report making the rounds earlier today proclaimed that two self-driving cars from Google and Delphi Automotive almost got into an accident this week in California. According to the Reuters report, Google’s self-driving Lexus “cut off” Delphi’s self-driving Audi, forcing the Audi to take “appropriate action.” This apparently got the nation’s technology media in a bit of a heated lather, with countless headlines detailing the “almost crash.” The Washington Post was even quick to inform readers that the almost-crash “is now raising concerns over the technology.”
Except it’s not. Because not only did the cars not crash, it apparently wasn’t even a close call. Both Delphi and Google spokespeople told Ars Technica that both cars did exactly what they were programmed to do and Reuters apparently made an automated mountain out of a molehill:
“I was there for the discussion with Reuters about automated vehicles,” she told Ars by e-mail. “The story was taken completely out of context when describing a type of complex driving scenario that can occur in the real world. Our expert provided an example of a lane change scenario that our car recently experienced which, coincidentally, was with one of the Google cars also on the road at that time. It wasn't a 'near miss' as described in the Reuters story.”
Instead, she explained how this was a normal scenario, and the Delphi car performed admirably.
“Our car did exactly what it was supposed to,” she wrote. “Our car saw the Google car move into the same lane as our car was planning to move into, but upon detecting that the lane was no longer open it decided to terminate the move and wait until it was clear again.”
In other words, as Twitter's Nu Wexler observed, the two cars did exactly what they were programmed to do, though that's obviously a notably less sexy story than Reuters' apparently hallucinated tale of automated automotive incompetence.
Breaking: Self-driving cars avoid accident, doing exactly what they are programmed to do
— Nu Wexler (@wexler) June 26, 2015
Filed Under: accidents, autonomous vehicles, cars, driving, near miss, self-driving
Companies: delphi, google
DailyDirt: In The Long Run, We're All Dead
from the urls-we-dig-up dept
If you’re looking for some good data to put into an infographic, it’s not too hard to find statistics on death. Reliable stats of how people died go back quite a ways, too. Sure, it’s a bit morbid, but most people don’t think about dying until they’re close to doing it. So if you’re curious, check out a few of these visualizations on how we die.
- The leading causes of death have changed significantly since 1900, so the flu (or pneumonia) isn’t killing off as many Americans as it used to. Instead, heart disease and cancer have replaced the flu/pneumonia and tuberculosis. [url]
- What are the odds? Dying of heart disease carries relatively common odds of 467:1 — compared to dying from cycling (340,845:1) or an asteroid impact (74,817,414:1). [url]
- Another infographic on how the world died (in the 20th century) shows non-communicable diseases and infectious diseases are obviously really deadly, but so are wars and drugs. It could be difficult to change these stats. Medical technology could wipe out some diseases, but we haven’t cured old age…. [url]
- Is it worth it to try to minimize your risks of dying? If you want to try, remember to focus on the activities that are actually high risk, not the spectacular deaths that don't kill that many people (e.g. stepladders vs. terrorism). [url]
Filed Under: accidents, cancer, data, death, flu, health, heart disease, infographics, non-communicable diseases, pneumonia, risk, statistics, tuberculosis
DailyDirt: Dangerous Playgrounds Are Fun!
from the urls-we-dig-up dept
If you have young kids, you might have noticed that public playgrounds are a bit different than the ones you played on as a kid. Rubberized surfaces have replaced gravel or asphalt, and simple teeter-totters (or see-saws) have been re-designed using viscoelastic materials to prevent dangerous accelerations. You might have noticed it’s hard to find monkey bars on playgrounds. The reasons for these changes are obvious: safety and liability. However, are kids still having as much fun outdoors? Here are just a few links on playground equipment.
- Can a playground be too safe? Maybe some playgrounds are too boring for kids. A new kind of playground lets kids do a few more dangerous activities, but will parents have to sign a consent form for it? [url]
- In 2001, a report on playground safety estimated that in 1999 there were 7.5 playground-related injuries treated by hospital emergency rooms per 10,000 US population. This report may have spurred a generation of playground equipment that is safer for kids, but arguably not as fun or enjoyable as homemade rope swings. [url]
- The next time you see a kid sitting on a parent’s lap going down a playground slide, you might want to stop them and point out that it’s actually safer for the kid to slide down alone. Too often, well-meaning parents slide down with their toddlers and accidentally fracture their child’s leg if a shoe gets stuck and the weight of the parent continues to push the kid down. [url]
Filed Under: accidents, fun, injuries, kids, parenting, playgrounds, safety
DailyDirt: The Risks Of Fossil Fuels
from the urls-we-dig-up dept
Relatively cheap fossil fuels allow everyone to enjoy comfortable lifestyles. But every so often, there seem to be horrible stories of environmental damage caused by our continuing addiction to underground hydrocarbons. Pulling oil, coal and gas out of the ground is probably going to be the way we get most of our energy for the foreseeable future, so it’s just a bit worrying that we haven’t quite figured out how to really mitigate oil spills and other accidents. Fortunately, Mother Nature hasn’t taken full revenge (yet?) on us.
- Recently, Chevron apologized for one of its natural gas wells exploding (and killing one person) by giving away coupons for free pizza to local residents. Some residents accepted the pizza apology as a thoughtful gift, but there are plenty of other people who voiced their opinion that the free pizza vouchers are an insult. [url]
- About 25 years ago, the Exxon Valdez spilled about 11 million gallons of oil into Alaska’s Prince William Sound. Oil from that spill still lingers on the shore, and wildlife hasn’t fully recovered yet. [url]
- The Exxon Valdez’s 11 million gallons of spilled oil sounds like a lot, but during the Gulf War in 1991, hundreds of millions of gallons were spilled — with estimates ranging from 300 to 500 million gallons. The spill was the deliberate action of Saddam Hussein, presumably meant to deter American troops from landing on the shore. [url]
- While BP is still dealing with the aftermath of the Deepwater Horizon accident, it just spilled a bit of crude oil into Lake Michigan, a few miles away from Chicago. Luckily, the cold weather makes it a bit easier to clean up as the oil solidifies into a wax-like substance. [url]
Filed Under: accidents, deepwater horizon, energy, environmental damage, fossil fuels, oil spills, prince william sound, valdez
Companies: bp, chevron, exxon
DailyDirt: Bank Error In Your Favor….
from the urls-we-dig-up dept
You might think that with all the supercomputer capabilities available to them, banks wouldn't make fairly simple errors with huge dollar amounts. You would be wrong. Human errors can be greatly magnified by automated systems, and these errors seem to happen with some regularity. What would you do if you found more than a few extra bucks in your bank account? Here are just a few recent examples of some pretty big monetary mistakes.
- Paypal created a temporary quadrillionaire by accidentally crediting $92,233,720,368,547,800 to a very surprised Chris Reynolds. The error was corrected quickly, but Reynolds said that if he had been able to keep the money, he would have paid down the national debt and maybe bought the Phillies. [url]
- Bank of America mistakenly gave Ronald Page unlimited cash withdrawals from its ATMs — which he used to extract over $1.5 million ($1,543,104!) over a few weeks. This 55yo Detroit resident lost it all gambling, and he's been court-ordered to repay this sizable sum. (Good luck collecting that, BoA…) [url]
- Another accidental millionaire tried to run off with £3.4 million after his bank erroneously transferred that amount to his account. Hui "Leo" Gao wired the money to some foreign banks and took his girlfriend on a two-year adventure. Eventually, he and his girlfriend were caught, and the bank recovered about £1.5 million. [url]
Filed Under: accidents, atm, banks, error, mistakes, money, quadrillionaire
Companies: bank of america, paypal
Danish Police Accidentally Censor Over 8,000 Sites As Child Porn… Including Facebook & Google
from the censorship-is-bad,-mmmkay? dept
Reminiscent of the mooo.com screwup in the US, where Homeland Security's ICE division "accidentally" seized 84,000 sites and plastered them over with a warning graphic about how they'd been seized by the US government for child porn, the Danish police similarly "accidentally" had 8,000 legitimate sites declared as child porn sites that needed to be blocked. Among the sites listed? Google and Facebook. Visitors to those sites, from ISP Siminn, were greeted with the following message (translated, of course):
The National High Tech Crime Center of the Danish National Police [NITEC], who assist in investigations into crime on the internet, has informed Siminn Denmark A/S, that the internet page which your browser has tried to get in contact with may contain material which could be regarded as child pornography…
Upon the request of The National High Tech Crime Center of the Danish National Police, Siminn Denmark A/S has blocked the access to the internet page.
And people wonder why so many people around the world were so concerned about the threat of something like SOPA — which would make DNS blocking at the ISP level a lot more common.
So how did this “accident” happen?
According to NITEC chief Johnny Lundberg, it began when an employee at the police center decided to move from his own computer to that of a colleague.
“He sat down and was about to make an investigation, and in doing so he placed a list of legitimate sites in the wrong folder,” Lundberg explained. “Before becoming aware of the error, two ISPs retrieved the list of sites.”
It would seem that there’s a problem in this process. The fact that just one employee can change the list seems wide open to abuse. And the fact that the list seems somewhat automated beyond that is even more problematic. You know what would solve this problem? A little thing called due process. What a concept.