
Former Uber Security Officer Won’t Go To Prison For Covering Up A 2016 Data Breach

from the not-sure-what-this-is-meant-to-deter dept

A rather strange prosecution of a former Uber executive finally comes to an end. And the first tech company executive to be convicted of criminal acts related to a data breach won’t be going to prison, as Joseph Menn reports for the Washington Post.

Former Uber chief security officer Joe Sullivan avoided prison Thursday as he was sentenced for covering up the 2016 theft of company data on 50 million Uber customers while the company was being investigated by the Federal Trade Commission over a previous breach.

Sullivan had been convicted in October of obstruction of justice and hiding a felony, making him the first corporate executive to be found guilty of crimes related to a data breach by outsiders.

To be sure, some poor decisions were made by Sullivan. But this wasn’t a case where a company carelessly exposed user data and then made moves to ensure its users never found out about it. This was extortion by cyber-criminals, an act aided by the accidental exposure of a digital key, which the extortionists used to obtain data on 600,000 drivers and 50 million passengers.

Sullivan’s team tried to satisfy the extortionists with a $10,000 payment under the company’s bug bounty program, but the hackers insisted on a six-figure payout. Sullivan agreed to pay the amount, provided the hackers destroyed the data and never disclosed the breach. These were the acts federal prosecutors claimed amounted to obstruction of justice and hiding a felony.

According to Sullivan, this was done to ensure the data never leaked, while also using the back-and-forth with the extortionists to seek clues to their identity. The pair of extortionists was eventually arrested, with one of the two testifying on behalf of the prosecution(!).

With more and more companies paying ransoms to recover data/prevent data distribution, it seems extremely odd the government would go after someone who appeared to be doing what he could to protect drivers and passengers from having their personal data exposed or sold to other criminals.

And it’s not as though Sullivan had a track record of being careless with sensitive data collected by the companies he worked for. That’s the message that came through in the letters of support delivered to the court by more than 180 colleagues and security professionals.

The conviction shocked many security professionals, many of whom saw Sullivan, a onetime federal cybercrime prosecutor, as an industry leader who continued to work in the public interest as the top security executive at Facebook, Uber and Cloudflare.

They also criticized the government for criminalizing questionable judgment in paying off extortionists when the practice has become a regular occurrence at U.S. companies hit by ransomware.

What has now become an acceptable, if a bit unsavory, “solution” to ransom demands was treated as a criminal act in this case. This successful prosecution suggests the feds might go after more big tech targets if they find out those companies have been secretly negotiating with criminals.

The only assurance we have from the government that it won’t start prosecuting security professionals for paying off crooks isn’t all that reassuring:

The FBI has said it will not pursue charges against those who approve payouts that do not go to gangs sanctioned for working in concert with Russian authorities or targeting critical infrastructure.

All well and good, but it’s not like malicious hackers provide targets with business cards and employment history (such as it were…) when trying to extort cash from their victims. Attribution is difficult. With the proper operational security in place, it can be almost impossible. Unless hackers affirmatively declare their affiliation with the Russian government, victims of ransomware attacks won’t actually know where the money is going. And with time being of the essence, sometimes the payment has to be made far ahead of the due diligence.

And it’s not as though the federal government is willing to prosecute its own for careless handling of breaches and lax security practices that invite hackers to partake of massive, government-mandated data collections. This seems like a very selective prosecution meant to show the government won’t let the private sector get away with mishandling their users’ data.

It’s unclear what deterrent effect this is supposed to create. If anything, it encourages companies to take a hands-off approach when dealing with extortionists, increasing the risk exfiltrated data will be publicized or sold to other criminals. That can’t be what the federal government actually wants. But it seems like that’s what it’s going to get.

Filed Under: computer security, doj, extortion, joe sullivan, obstruction of justice, ransomware
Companies: uber

ByteDance Spying Scandal Isn’t So Much About TikTok, But About The US’s Failure To Pass A Comprehensive Privacy Law

from the privacy-is-the-victim dept

Emily Baker-White has quite the story over at Forbes, revealing how ByteDance, the Chinese company that owns TikTok, apparently planned to have its “Internal Audit and Risk Control” department spy on the location of some American citizens:

The team primarily conducts investigations into potential misconduct by current and former ByteDance employees. But in at least two cases, the Internal Audit team also planned to collect TikTok data about the location of a U.S. citizen who had never had an employment relationship with the company, the materials show. It is unclear from the materials whether data about these Americans was actually collected; however, the plan was for a Beijing-based ByteDance team to obtain location data from U.S. users’ devices.

[….]

But the material reviewed by Forbes indicates that ByteDance’s Internal Audit team was planning to use this location information to surveil individual American citizens, not to target ads or any of these other purposes. Forbes is not disclosing the nature and purpose of the planned surveillance referenced in the materials in order to protect sources. TikTok and ByteDance did not answer questions about whether Internal Audit has specifically targeted any members of the U.S. government, activists, public figures or journalists.

Given the near non-stop moral panics about TikTok over the past few years, I’m absolutely sure that this will be used (yet again) to argue that TikTok is somehow uniquely problematic, when the reality (yet again) is that what it’s doing is really no different than what a ton of American internet companies already do and have done in the past. Baker-White, who is one of the best reporters on this beat, makes that clear in her reporting:

ByteDance is not the first tech giant to have considered using an app to monitor specific U.S. users. In 2017, the New York Times reported that Uber had identified various local politicians and regulators and served them a separate, misleading version of the Uber app to avoid regulatory penalties. At the time, Uber acknowledged that it had run the program, called “greyball,” but said it was used to deny ride requests to “opponents who collude with officials on secret ‘stings’ meant to entrap drivers,” among other groups.

[….]

Both Uber and Facebook also reportedly tracked the location of journalists reporting on their apps. A 2015 investigation by the Electronic Privacy Information Center found that Uber had monitored the location of journalists covering the company. Uber did not specifically respond to this claim. The 2021 book An Ugly Truth alleges that Facebook did the same thing, in an effort to identify the journalists’ sources. Facebook did not respond directly to the assertions in the book, but a spokesperson told the San Jose Mercury News in 2018 that, like other companies, Facebook “routinely use[s] business records in workplace investigations.”

So, rather than making this a big thing about “oh no TikTok/China bad,” this should be a recognition that Congress needs to stop bickering about stupid stuff (including pushing silly performative legislation) and come up with an actual federal privacy law that gives the public a greater ability to protect their own privacy from all sorts of companies.

But, of course, that would take competence, and probably wouldn’t be useful for grandstanding or headlines… so it’ll never happen.

Of course, there are questions about what this means for TikTok’s widely discussed plans to wall off US user data from ByteDance’s prying eyes. I thought Oracle was supposed to protect us from all this, right? Right?

Filed Under: location data, privacy, surveillance
Companies: bytedance, facebook, tiktok, uber

Uber Wins Dubious Honor Of Being First Big Tech Company To Bully A Small Nation Using Corporate Sovereignty

from the welcome-to-the-ISDS-club dept

Six years ago, when Techdirt first started writing about investor-state dispute settlement (ISDS) — or corporate sovereignty, as we prefer to call it — it was largely unknown outside specialist circles. Since then, more people have woken up to the power of this apparently obscure element of international trade and investment deals. It essentially gives a foreign company the ability to threaten to sue a nation for millions — even billions — of dollars if the latter brings in new laws or regulations that might adversely affect an investment. The majority of corporate sovereignty cases have been brought by the extractive industries — mining and oil. That’s not least because many of the laws and regulations they object to concern environmental and health issues, which have come to the fore in recent years. New legislation designed to protect local communities might mean lower profits for investors, who then often threaten to use ISDS if they are not offered compensation for this “loss”.

Big tech companies, for all their real or supposed faults, have not turned to corporate sovereignty as a way of bullying small countries — until now. En24 News reports that Uber is threatening to invoke corporate sovereignty in a dispute with Colombia. According to Uber:

a series of recent measures by the Republic have had a serious adverse impact on Uber’s investments in Colombia and the viability of its operations in the country. On December 20, 2019, for example, through the Superintendence of Industry and Commerce (“SIC”), the Republic ordered Uber, Uber Colombia, and another Uber subsidiary to virtually cease making the Uber Platform available to Associated Drivers and passengers in Colombia.

Uber points out:

other companies in Colombia and third countries that offer similar services in Colombia have not undergone the same treatment and continue to operate in Colombia without similar interference from the Republic.

The company claims a wide range of harms:

The illegal order of the Republic to block the Uber Platform in Colombia also constitutes an act of censorship in contravention of international human rights instruments that protect net neutrality, freedom of expression on the internet and freedom of use of the internet.

At the moment, this is all just saber-rattling, designed to encourage the Colombian government to unblock Uber in the country. If it doesn’t, the company says, it will invoke the ISDS Articles (pdf) of the 2012 United States-Colombia Trade Promotion Agreement, and ask a tribunal to award compensation. Even if the current threat to use corporate sovereignty is not followed through, it is surely only a matter of time before another big tech company joins the ISDS club.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

Filed Under: colombia, corporate sovereignty, free trade agreement, isds, trade agreements, trade promotion agreement
Companies: uber

Law Enforcement Agencies Bumping Up Demands For Uber Customers' Data

from the GPS-is-made-of-people dept

If it generates records — especially third-party records — the government is going to come asking for them.

Not only is Uber’s ride-hailing service subject to a bizarre and inconsistent set of state-level regulations, it’s also a storage facility containing plenty of data about people’s travels. Taking an Uber may keep a rider’s license plate off the ALPR radar, but the government can still track people’s movements by asking Uber for customer data, which presumably includes where they traveled and when.

Zack Whittaker of TechCrunch says government agencies are taking more of an interest in Uber’s data collection, according to the company’s latest transparency report:

The ride-hailing company said the number of law enforcement demands for user data during 2018 are up 27% on the year earlier, according to its annual transparency report published Wednesday. Uber said the rise in demands was partly due to its business growing in size, but also a “rising interest” from governments to access data on its customers.

Uber said it received 3,825 demands for 21,913 user accounts from the U.S. government, with the company turning over some data in 72% of cases, during 2018.

This is the Golden Age of Surveillance, whether certain law enforcement figures want to admit it or not. More services require users to create accounts linked to real names and other verifiable information, like credit cards or bank accounts. Everything feeding into Uber’s data pile is available without a warrant. Bank records are still obtained with subpoenas, having been given no additional Fourth Amendment protection even by recent Supreme Court decisions hinting that, when the Third (Party) meets the Fourth (Amendment), things aren’t as simple as they used to be.

Still, warrants are being used. The transparency report shows warrants are used about a fifth of the time. Without more granular detail, it’s tough to say what law enforcement agencies feel is warrant-worthy. Subpoenas are the most popular way to obtain info, with exigent circumstance (“emergency”) requests following close behind.

There will soon be even more the government can collect from Uber. The company plans to start recording (audio only at this point) rides for driver and passenger safety. These recordings will belong to Uber, which means the government only needs to approach the company to perform post facto eavesdropping. Conversations in an Uber vehicle will become third-party records.

Maybe courts will view these as the modern equivalent of a phone booth conversation. Maybe they’ll view them as non-private conversations — the equivalent to jailhouse calls as long as riders and drivers are informed ALL CONVERSATIONS ARE RECORDED. Until then, it’s a grey area law enforcement is free to explore.

Filed Under: data, government requests, ride share, surveillance, tracking
Companies: uber

Report Suggests Rampant Negligence In Uber Self-Driving Car Fatality

from the I'm-sorry-I-can't-do-that,-Dave dept

Wed, Nov 6th 2019 01:35pm - Karl Bode

Last year, you might recall, a self-driving Uber in Tempe, Arizona killed a woman who was trying to cross the street with her bike outside of a crosswalk. The driver wasn’t paying attention, and the car itself failed to stop for the jaywalking pedestrian. Initial reporting on the subject, most of it based on anonymous Uber sources who spoke to the paywalled news outlet The Information, strongly pushed the idea that the car’s sensors worked as intended and detected the woman, but that bugs in the system software failed to properly identify her as something to avoid:

“The car’s sensors detected the pedestrian, who was crossing the street with a bicycle, but Uber’s software decided it didn’t need to react right away. That’s a result of how the software was tuned. Like other autonomous vehicle systems, Uber’s software has the ability to ignore ‘false positives,’ or objects in its path that wouldn’t actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company’s system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn’t react fast enough, one of these people said.”

Thanks to that report, a narrative emerged that the vehicle largely worked as designed, and the only real problem was a modest quirk in undercooked programming.
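To make the quoted explanation a little more concrete: the “tuning” being described is essentially a threshold that decides when a detection can be written off as harmless clutter. Below is a minimal, purely hypothetical Python sketch (invented names and numbers, not based on Uber’s actual software) of how pushing that threshold too far can flip a misclassified pedestrian from “react” to “ignore”:

```python
# Purely illustrative sketch (invented names; not Uber's actual code) of the
# "false positive" tuning idea described above: a perception system that is
# allowed to dismiss low-risk-looking detections, and what happens when that
# dismissal threshold is tuned too aggressively.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # classifier's best guess, e.g. "pedestrian" or "plastic_bag"
    confidence: float   # how sure the classifier is of that label (0.0 to 1.0)


# Object classes the planner may treat as harmless road clutter.
IGNORABLE = {"plastic_bag", "paper", "foliage"}


def should_react(detection: Detection, ignore_threshold: float) -> bool:
    """Decide whether the vehicle should brake or swerve for this detection.

    The higher ignore_threshold is, the more confident the classifier must be
    before a detection is written off as clutter. Lowering it makes the ride
    smoother (fewer phantom stops) but raises the odds of ignoring something real.
    """
    if detection.label in IGNORABLE and detection.confidence >= ignore_threshold:
        return False  # dismissed as a false positive; no reaction
    return True


# A pedestrian misclassified as a bag, with only middling confidence:
obj = Detection(label="plastic_bag", confidence=0.55)

print(should_react(obj, ignore_threshold=0.90))  # True: conservative tuning still reacts
print(should_react(obj, ignore_threshold=0.50))  # False: aggressive tuning ignores a real person
```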

But a new report by Bloomberg this week shatters that understanding. According to NTSB findings seen by Bloomberg, the vehicle in question wasn’t even programmed to detect jaywalkers. Like, at all:

“Uber Technologies Inc.’s self-driving test car that struck and killed a pedestrian last year wasn’t programmed to recognize and react to jaywalkers, according to documents released by U.S. safety investigators.”

Assuming Bloomberg’s read of the 400-page report (only a part of which has been made public) is accurate, that’s a far cry from a bug. The NTSB report found that Uber staff had also disabled Volvo’s auto-detection and braking software, which could have at least slowed the vehicle, if not avoided the pedestrian impact altogether. Investigators also noted that despite the fact that Uber was conducting risky trials on public streets, the company had little to no real system in place for dealing with safety issues. Again, not just underwhelming public safety protocols, but none whatsoever:

“The Uber Advanced Technologies Group unit that was testing self-driving cars on public streets in Tempe didn’t have a standalone safety division, a formal safety plan, standard operating procedures or a manager focused on preventing accidents, according to NTSB.”

Again, that’s not just buggy or “poorly tuned” software; it’s total negligence. Yet despite the fact that the car was never adequately programmed to detect jaywalkers, that some safety features were disabled, and that Uber had little to no safety protocols in place, prosecutors have already absolved Uber of criminal liability (though the distracted driver may still face a lawsuit). The NTSB also hasn’t formally affixed blame for the crash (yet):

“The documents painted a picture of safety and design lapses with tragic consequences but didn’t assign a cause for the crash. The safety board is scheduled to do that at a Nov. 19 meeting in Washington.”

Self-driving cars are remarkably safe, and most accidents involve humans getting confused when an autonomous vehicle actually follows the law (like rear-ending a self-driving car that stopped at a red light before turning right). But that’s only true when the people designing and conducting trials are competent. If the NTSB report is anything to go by, Uber fell well short, yet got to enjoy a lot of press suggesting the problem was random bad programming luck, not total negligence and incompetence. Later this month we’ll get to see if Uber faces anything resembling accountability for its failures.

Filed Under: arizona, autonomous vehicles, jaywalkers, self-driving cars, sensors, tempe
Companies: uber

Uber Takes On Beautician/Barber Over Her BeauBer Mobile App

from the portmandon't dept

There’s a perception among some that the forward-looking tech companies throughout the country are more permissive about intellectual property than other industries or marketplaces. And perhaps there is some truth to that. But it’s certainly not without exception. For instance, you can bear witness to Uber going after a beautician over her stylist-booking app, called BeauBer.

Carolina Vengoechea, 45, tells The Post that Uber has demanded she give up the name of her beauty salon app, called “BeauBer.” But she has refused, arguing that the name is the combination of her two job titles, beautician and barber — and has nothing to do with the San Francisco-based company.

Vengoechea says she has already turned down multiple settlement offers from the $60 billion Uber, which is hell-bent on destroying trademarks that include its name. Unless the company backs down, she said, she will be forced to face them in court next year.

The fact that the portmanteau contains the word “uber” only if you squint at it really hard is hardly any reason for Uber to have turned this into a trademark dispute. Let’s just go down the list of reasons why this is ridiculous. First, these two companies are in wildly different industries; the fact that both have an app doesn’t change that. Second, there is little potential for actual public confusion, given that the name as a whole and the branding for BeauBer are quite different. Finally, the idea of Uber going after a sole proprietor in this way is ironic in the worst of ways.

Fortunately, Vengoechea appears to be the rare small business owner with a backbone when it comes to trademark bullying.

“I’ve already spent money on BeauBer,” Vengoechea said of her app. “I feel like settling is just giving up. I know I’m not doing anything wrong. Why do I have to settle just because they have more money than me?”

Others seem to agree.

“Here, it appears that Uber has gone outside their normal zone of necessary protection and have opposed a mark which should not reasonably be opposed,” Steven Gursky, a partner at the law firm Olshan, said. “Perhaps being ‘uber’ wealthy allows them to be overly aggressive.”

Which is a shame, really. It would be rather nice if Silicon Valley companies could lead the way in having a different attitude when it comes to intellectual property issues. Apparently, though, money does in fact corrupt all things.

Filed Under: carolina vengoechea, trademark
Companies: beauber, uber

Arizona Bans Self-Driving Car Tests; Still Ignores How Many Pedestrians Get Killed

from the plenty-of-blame-to-go-around dept

Tue, Mar 27th 2018 03:33pm - Karl Bode

By now, most folks have read about the fact that Uber (surprise) was responsible for the first ever pedestrian fatality caused by a self-driving car in the United States. Investigators in the case have found plenty of blame to go around, including a pedestrian who didn’t cross at a crosswalk, an Uber driver who wasn’t paying attention to the road (and therefore didn’t take control in time), and Uber self-driving tech that pretty clearly wasn’t ready for prime time compared to its competitors:

“Uber’s robotic vehicle project was not living up to expectations months before a self-driving car operated by the company struck and killed a woman in Tempe, Ariz.

The cars were having trouble driving through construction zones and next to tall vehicles, like big rigs. And Uber’s human drivers had to intervene far more frequently than the drivers of competing autonomous car projects.”

All of the companies that contribute tech to Uber’s test vehicle have been rushing to distance themselves from Uber’s failures here. Many of them are laying the blame at the feet of Uber, including one company making it clear that Uber had disabled some standard safety features on the Volvo XC90 test car in question:

“Uber Technologies Inc. disabled the standard collision-avoidance technology in the Volvo SUV that struck and killed a woman in Arizona last week, according to the auto-parts maker that supplied the vehicle’s radar and camera.

‘We don’t want people to be confused or think it was a failure of the technology that we supply for Volvo, because that’s not the case,’ Zach Peterson, a spokesman for Aptiv Plc, said by phone. The Volvo XC90’s standard advanced driver-assistance system ‘has nothing to do’ with the Uber test vehicle’s autonomous driving system, he said.”

Mobileye, the company that makes the collision-avoidance technology behind Aptiv’s tech, was also quick to pile on, noting that if implemented correctly, their technology should have been able to detect the pedestrian in time:

“Intel Corp.’s Mobileye, which makes chips and sensors used in collision-avoidance systems and is a supplier to Aptiv, said Monday that it tested its own software after the crash by playing a video of the Uber incident on a television monitor. Mobileye said it was able to detect Herzberg one second before impact in its internal tests, despite the poor second-hand quality of the video relative to a direct connection to cameras equipped to the car.”

In response to Uber’s tragic self-driving face plant, Arizona this week announced that it is suspending Uber’s self-driving vehicle testing in the state indefinitely:

NEW: In light of the fatal Uber crash in Tempe, Governor Ducey sends this letter to Uber ordering the company to suspend its testing of autonomous vehicles in Arizona indefinitely #12News pic.twitter.com/gO5BZB9P2e

— Bianca Buono (@BiancaBuono) March 27, 2018

Plenty have justly pointed out that Arizona also has plenty of culpability here, given that regulatory oversight of Uber’s testing was arguably nonexistent. That said, Waymo (considered by most to be way ahead of the curve on self-driving tech) hasn’t had similar problems, and there’s every indication that a higher quality implementation of self-driving technology (as the various vendors above attest) may have avoided this unnecessary tragedy.

Still somehow lost in the finger pointing (including Governor Doug Ducey’s “unequivocal commitment to public safety”) is the fact that Arizona already had some of the highest pedestrian fatality rates in the nation (of the human-caused variety). There were ten other pedestrian fatalities in the Phoenix area alone during the same week as the Uber accident, and Arizona had the highest rate of pedestrian fatalities in the nation last year, clearly illustrating that Arizona has some major civic design and engineering questions of its own that need to be answered as the investigation continues.

Again, there’s plenty of blame to go around here, and hopefully everybody in the chain of dysfunction learns some hard lessons from the experience. But it’s still important to remember that human-piloted vehicles cause 33,000 fatalities annually, a number that should be dramatically lower when self-driving car technology is inevitably implemented (correctly).

Filed Under: arizona, autonomous vehicles, pedestrians, safety, self-driving cars
Companies: uber

Uber's Video Shows The Arizona Crash Victim Probably Didn't Cause Crash, Human Behind The Wheel Not Paying Attention

from the everyone-error dept

In the wake of a Tempe, Arizona woman being struck and killed by an Uber autonomous vehicle, there has been a flurry of information coming out about the incident. Despite that death being one of eleven pedestrian deaths in the Phoenix area alone that week, and the only one involving an AV, the headlines were far closer to the “Killer Car Kills Woman” sort than they should have been. Shortly after the crash, the Tempe Police Chief went on the record suggesting that the victim had at least some culpability in the incident, having walked outside of the designated crosswalk, and that the entire thing would have been difficult for either human or AI to avoid.

Strangely, now that the video from Uber’s onboard cameras has been released, the Tempe police are trying to walk that back and suggest that reports of the Police Chief’s comments were taken out of context. That is likely because the footage shows that claims the victim “darted out” in front of the car are completely incorrect.

Contrary to earlier reports from Tempe’s police chief that Herzberg “abruptly” darted out in front of the car, the video shows her positioned in the middle of the road lane before the crash.

Based on the exterior video clip, Herzberg comes into view—walking a bicycle across the two-lane road—at least two seconds before the collision.

Analysis from Bryan Walker Smith, a professor at the University of South Carolina who has studied autonomous vehicle technology, indicates that this likely represents a failure of the AV’s detection systems, and that there may indeed have been enough time for the collision to be avoided if everything had worked properly.

Walker Smith pointed out that Uber’s LIDAR and radar equipment “absolutely” should’ve detected Herzberg on the road “and classified her as something other than a stationary object.”

“If I pay close attention, I notice the victim about 2 seconds before the video stops,” he said. “This is similar to the average reaction time for a driver. That means an alert driver may have at least attempted to swerve or brake.”

The problem, of course, is that AVs are in part attractive because drivers far too often are not alert. They are texting, playing with their phones, fiddling with the radio, or looking around absently. We are human, after all, and we fail to remain attentive with stunning regularity.

So predictable is this failure, in fact, that it shouldn’t surprise you all that much that the video apparently shows the safety operator behind the wheel of this particular Uber vehicle being distracted by any number of things.

A safety operator was behind the wheel, something customary in most self-driving car tests conducted on public roads, in the event the autonomous tech fails. Prior to the crash, footage shows the driver—identified as 44-year-old Rafaela Vasquez—repeatedly glancing downward, and is seen looking away from the road right before the car strikes Herzberg.

So the machine might have failed. The human behind the wheel might have failed. The pedestrian may have been outside the crosswalk. These situations are as messy and complicated as we should all expect them to be. Even if the LIDAR system did not operate as expected, the human driver that critics of AVs want behind the wheel instead was there, and that didn’t prevent the unfortunate death of this woman.

So, do we have our first pedestrian death by AV? Kinda? Maybe?

Should this one incident turn us completely off to AVs in general? Hell no.

Filed Under: ai, arizona, autonomous vehicles, driverless cars, pedestrians
Companies: uber

Tempe Police Chief Indicates The Uber Self-Driving Car Probably Isn't At Fault In Pedestrian Death

from the human-error dept

The internet ink has barely dried on Karl’s post about an Uber self-driving vehicle striking and killing a pedestrian in Arizona, and we already have an indication from the authorities that the vehicle probably isn’t to blame for the fatality. Because public relations waits for nobody, Uber suspended its autonomous vehicles in the wake of the death of a woman in Tempe, but that didn’t keep fairly breathless headlines from being painted all across the mainstream media. The stories that accompanied those headlines were more careful to mention that an investigation is required before anyone knows what actually happened, but the buzz created by the headlines wasn’t so nuanced. I actually saw this in my own office, where several people could be heard mentioning that autonomous vehicles were now done.

But that was always silly. It’s an awkward thing to say, but the fact that it took this long for AVs to strike and kill a pedestrian is a triumph of technology, given just how many people we humans kill with our cars. Hell, the Phoenix area itself had 11 pedestrian deaths by car in the last week, with only one of them being this Uber car incident. And now all of that hand-wringing is set to really look silly, as the Tempe police chief is indicating that no driver, human or AI, would likely have been able to prevent this death.

The chief of the Tempe Police has told the San Francisco Chronicle that Uber is likely not responsible for the Sunday evening crash that killed 49-year-old pedestrian Elaine Herzberg.

“I suspect preliminarily it appears that the Uber would likely not be at fault in this accident,” said Chief Sylvia Moir.

Herzberg was “pushing a bicycle laden with plastic shopping bags,” according to the Chronicle’s Carolyn Said, when she “abruptly walked from a center median into a lane of traffic.”

After viewing video captured by the Uber vehicle, Moir concluded that “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway.”

So, once again, this tragedy has almost nothing to do with automobile AI and everything to do with human beings being faulty, complicated creatures that make mistakes. We don’t need to assign blame or fault to a woman who died to admit to ourselves that not only did the self-driving car do nothing wrong in this instance, but also that it might just be true to say that the car’s AI had a far better chance of avoiding a fatality than the average human driver. The car was not speeding. It did not swerve. It did not adjust its speed prior to the collision.

This obviously isn’t the conclusion of the police’s investigation, but when the police chief is already making these sorts of noises early on, it’s reasonable to conclude that the visual evidence of what happened is pretty clear. Sadly, all this likely means is that the major media websites of the world will have to bench their misleading headlines until the next death that may or may not be the fault of a self-driving vehicle.

Filed Under: arizona, autonomous vehicles, fatalities, pedestrian, self-driving cars, tempe
Companies: uber

Pedestrian Deaths By Car In Phoenix Area Last Week: 11. But One Was By A Self-Driving Uber

from the I-can't-do-that,-Dave dept

Mon, Mar 19th 2018 01:40pm - Karl Bode

Despite worries about the reliability and safety of self-driving vehicles, the millions of test miles driven so far have repeatedly shown self-driving cars to be significantly more safe than their human-piloted counterparts. Yet whenever accidents (or near accidents) occur, they tend to be blown completely out of proportion by those terrified of (or financially disrupted by) an automated future.

So it will be interesting to watch the reaction to news that a self-driving Uber vehicle was, unfortunately, the first to be involved in a fatality over the weekend in Tempe, Arizona:

“A self-driving Uber SUV struck and killed a pedestrian in Tempe, Arizona, Sunday night, according to the Tempe police. The department is investigating the crash. A driver was behind the wheel at the time, the police said.

“The vehicle involved is one of Uber’s self-driving vehicles,” the Tempe police said in a statement. “It was in autonomous mode at the time of the collision, with a vehicle operator behind the wheel.”

Uber, for its part, says it’s working with Tempe law enforcement to understand what went wrong in this instance:

Our hearts go out to the victim’s family. We’re fully cooperating with @TempePolice and local authorities as they investigate this incident.

— Uber Comms (@Uber_Comms) March 19, 2018

Bloomberg also notes that Uber has suspended its self-driving car program nationwide until it can identify what exactly went wrong. The National Transportation Safety Board is also opening an investigation into the death and is sending a small team of investigators to Tempe.

We’ve noted for years now how despite a lot of breathless hand-wringing, self-driving car technology (even in its beta form) has proven to be remarkably safe. Millions of AI driver miles have been logged already by Google, Volvo, Uber and others with only a few major accidents. When accidents do occur, they most frequently involve human beings getting confused when a robot-driven vehicle actually follows the law. Google has noted repeatedly that the most common accidents it sees are when drivers rear end its AI-vehicles because they actually stopped before turning right on red.

And while there are some caveats to this data (such as the fact that many of these miles are logged with drivers grabbing the wheel when needed), self-driving cars have so far proven to be far safer than even many advocates projected. We’ve not even gotten close to the well-hyped “trolley problem,” and engineers have argued that if we do, somebody has already screwed up in the design and development process.

It’s also worth reiterating that early data continues to strongly indicate that self-driving cars will be notably safer than their human-piloted counterparts, which cause 33,000 fatalities annually (usually because the driver was drunk or distracted by a phone). It’s also worth noting that 10 pedestrians have been killed by human drivers in the Phoenix area (including Tempe) in the last week alone, and Arizona had the highest rate of pedestrian fatalities in the country last year. And it’s getting worse, with Arizona pedestrian deaths rising from 197 in 2016 to 224 in 2017.

We’ll have to see what the investigation reveals, but hopefully the tech press will view Arizona’s problem in context before writing up their inevitable hyperventilating hot takes. Ditto for lawmakers eager to justify over-regulating the emerging self-driving car industry at the behest of taxi unions or other disrupted legacy sectors. If we are going to worry about something, those calories might be better spent on shoring up the abysmal security and privacy standards in the auto industry before automating everything under the sun.

Filed Under: accidents, autonomous vehicles, pedestrian fatalities, self-driving cars
Companies: uber