self-driving – Techdirt

Stories filed under: "self-driving"

Ford Submits A Patent That Would Allow Cars To Repossess Themselves

from the only-thing-with-autonomy-is-the-inanimate-object dept

The automotive industry is entering its own subprime crisis. Even before the COVID pandemic led to supply chain issues that vastly inflated car prices, lenders were extending loan periods to make things easier for underfunded purchasers, moving on from the industry-standard 3-5 year loan to 84-month baselines that ensure people can purchase cars… but at a price they’ll be paying for a long, long time.

The underlying mechanics that led to the 2008 financial crisis are now at play at automotive dealers. But dealers still have the upper hand, for the most part. You can’t disable a house, but inexpensive tech allows dealers to disable cars when payments are overdue.

Starter interrupt devices are only the beginning. As loan periods extend to create “affordable” payments (ones that will not touch the principal for more than three years) and payments continue to be missed, despite this predatory lending tactic, automotive manufacturers are moving forward to protect their bottom lines.

Here’s the latest in dealer-on-driver financial violence, sent to us by Techdirt reader BentFranklin via our Insider Chat. Taking advantage of built-in smart systems, including autonomous driving features, Ford will seek to reclaim its property by any (electronic) means necessary.

The patent application was submitted to the United States Patent and Trademark Office in August 2021 but was formally published Feb. 23. It’s titled “Systems and Methods to Repossess a Vehicle,” and it describes several ways to make life harder for somebody who has missed several car payments.

It explicitly says the system, which could be installed on any future vehicle in the automaker’s lineup with a data connection, would be capable of “[disabling] a functionality of one or more components of the vehicle.” Everything from the engine to the air conditioning. For vehicles with autonomous or semi-autonomous driving capability, the system could “move the vehicle from a first spot to a second spot that is more convenient for a tow truck to tow the vehicle… move the vehicle from the premises of the owner to a location such as, for example, the premises of the repossession agency,” or, if the lending institution considers the “financial viability of executing a repossession procedure” to be unjustifiable, the vehicle could drive itself to the junkyard.

Kudos to The Drive, which not only reported this news, but provided a link to the patent application [PDF], which includes helpful illustrations like this one:

Yikes. “Police authority.” That doesn’t bode well for purchasers who’ve fallen behind on their payments. They’re not actually thieves, not when the company has the option to repo the vehicle. But some irresponsible (or delayed) data reporting could lead to traffic stops predicated on the (incorrect) supposition the car is stolen, when it’s actually nothing more than delinquent.

The Drive notes no other car manufacturer has attempted to patent tech like this, putting Ford on the questionable leading edge of repo tech for the time being. Fortunately, prospective Ford purchasers won’t just find their vehicles autonomously commandeered should they fall behind on their payments. Advance notice will be given before vehicles wander off to return themselves to their maker.

There would be several warnings from the vehicle before the system initiated a formal repossession. If these warnings were ignored, the car could begin to lose functionality ahead of a repo. The first lost functions would be minor inconveniences like “cruise control, automated window controls, automated seat controls, and some components of the infotainment system (radio, global positioning system (GPS), MP3 player, etc.)” The next level is more serious, and includes the loss of things like “the air conditioning system, a remote key fob, and an automated door lock/unlock system.” Likewise, an “incessant and unpleasant sound” may be turned on “every time the owner is present in the vehicle.”

Should all of these inconveniences be ignored, the system would escalate to lock people out of their vehicles. It should be noted the patent exempts weekends from these escalating lockout tactics, perhaps recognizing it’s difficult to catch up on payments when you can’t contact the lien holder.

The semi-autonomous functions would be activated if none of the exceptions are met. At best, it would move the car out of someone’s driveway to a public street where it can be more easily towed. At worst, it would instruct the car to drive itself to the nearest authorized repo lot if possible.

But if the situation seems more dire than that, the onboard computer will opt for Mutually Assured Destruction. In certain cases, The Drive reports, the system will emulate The Bard, instructing the vehicle to “Get thee to a nunnery… er, scrapyard.”

If it will cost the bank more to repo the vehicle as compared to what it could sell it for, then “the repossession system computer may cooperate with the vehicle computer to autonomously move the vehicle from the premises of the owner to a junkyard.”
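Taken together, the patent’s escalation ladder amounts to a simple state machine: warnings, then minor disablements, then serious ones, then lockouts (weekends excepted), and finally the car drives itself away, to a repo lot or to the junkyard depending on the economics. A minimal sketch of that logic, in which every function name, stage number, and component list is a hypothetical illustration rather than anything from the actual filing:

```python
# Hypothetical sketch of the escalating repossession logic described in
# Ford's patent application. Names, stage thresholds, and component lists
# are illustrative assumptions, not taken from the filing itself.

MINOR = ["cruise_control", "window_controls", "seat_controls", "infotainment"]
SERIOUS = ["air_conditioning", "remote_key_fob", "auto_door_locks"]

def repossession_step(ignored_warnings: int, is_weekend: bool,
                      resale_value: float, repo_cost: float) -> str:
    """Return the action for the current escalation stage."""
    if ignored_warnings == 0:
        return "send_warning"
    if ignored_warnings == 1:
        return "disable_minor"        # cruise control, windows, infotainment
    if ignored_warnings == 2:
        return "disable_serious"      # A/C, key fob, door locks
    if is_weekend:
        return "no_action"            # patent exempts weekends from lockouts
    if ignored_warnings == 3:
        return "lock_out_owner"
    # Final stage: the car drives itself away. If repossession costs more
    # than the car would fetch, the patent contemplates the junkyard.
    if repo_cost > resale_value:
        return "drive_to_junkyard"
    return "drive_to_repo_lot"
```

The cost comparison in the last branch is the “financial viability” test the patent describes: the lender weighs resale value against the cost of executing the repossession before deciding where the car sends itself.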

That’s the future. Your car will run from you, if your dealer or manufacturer decides that’s the way things need to go. Never mind the fact that being without a car makes it that much more difficult to earn the wages needed to pay it off. Ford wants to do your driving for you if it feels it can no longer trust you. If that means you’re out of a car and still on the hook for thousands, so be it.

Filed Under: autonomous vehicles, patents, repossession, self-driving
Companies: ford

Report Showcases How Elon Musk Undermined His Own Engineers And Endangered Public Safety

from the first-do-no-harm dept

Wed, Dec 8th 2021 01:41pm - Karl Bode

For a long time now, it’s been fairly clear that consumer safety was an afterthought for some of the better-known companies developing self-driving technology. That was made particularly clear a few years back with Uber’s fatality in Tempe, Arizona, which revealed that the company really hadn’t thought much at all about public safety. The car involved in the now notorious fatality wasn’t even programmed to detect jaywalkers, and there was little or no structure at Uber to meaningfully deal with public safety issues. The race to the pot of innovation gold was all-consuming, and all other considerations (including human lives) were afterthoughts.

That same cavalier disregard for public safety has been repeatedly obvious over at Tesla, where the company’s undercooked “autopilot” technology has increasingly resulted in a series of ugly mishaps, and, despite years of empty promises, still doesn’t work as marketed or promised. That’s, of course, not great for the public, who didn’t opt in to having their lives put at risk by 2,500 pound death machines for innovation’s sake. Every week there’s new evidence and lawsuits showing this technology is undercooked and dangerous, and every week we seemingly find new ways to downplay it.

This week the scope of Elon Musk’s failures on this front became more clear thanks to a New York Times piece, which profiles how corner cutting on the autopilot project was an active choice by Musk at several points in the development cycle. The piece repeatedly and clearly shows that Musk overstated what the technology was capable of for the better part of the last decade:

“As the guiding force behind Autopilot, Mr. Musk pushed it in directions other automakers were unwilling to take this kind of technology, interviews with 19 people who worked on the project over the last decade show. Mr. Musk repeatedly misled buyers about the services’ abilities, many of those people say. All spoke on the condition of anonymity, fearing retaliation from Mr. Musk and Tesla.”

Musk’s bravado, and the exaggeration of the sophistication of Autopilot, helped encourage some customers to have too much trust in the product or actively misuse it. Constantly pushing undercooked software and firmware updates without proper review also created safety challenges. But the article tends to focus heavily on how Musk repeatedly undermined his own engineers through stubborn decisions that compromised both overall safety and engineer expertise, like Musk’s unyielding belief that fully automated driving could be accomplished with just cameras, and not cameras and radar (or other detection tech):

“Within Tesla, some argued for pairing cameras with radar and other sensors that worked better in heavy rain and snow, bright sunshine and other difficult conditions. For several years, Autopilot incorporated radar, and for a time Tesla worked on developing its own radar technology. But three people who worked on the project said Mr. Musk had repeatedly told members of the Autopilot team that humans could drive with only two eyes and that this meant cars should be able to drive with cameras alone.”

The article also makes it clear that employees that were overly happy to please Musk’s whims only tended to make the overall quality and safety issues worse. And when employees did challenge Musk in a bid to improve quality and safety, things very often didn’t go well:

“In mid-2015, Mr. Musk met with a group of Tesla engineering managers to discuss their plans for the second version of Autopilot. One manager, an auto industry veteran named Hal Ockerse, told Mr. Musk he wanted to include a computer chip and other hardware that could monitor the physical components of Autopilot and provide backup if parts of the system suddenly stopped working, according to two people with knowledge of the meeting.

But Mr. Musk slapped down the idea, they said, arguing it would slow the progress of the project as Tesla worked to build a system that could drive cars by themselves. Already angry after Autopilot malfunctioned on his morning drive that day, Mr. Musk berated Mr. Ockerse for even suggesting the idea. Mr. Ockerse soon left the company.”

None of this is particularly surprising for folks who have objectively watched Musk, but it does catalog his erratic bravado and risk taking in a comprehensive way that makes all of it seem notably more concrete. For a man whose reputation is one of engineering savvy, the report repeatedly showcases how Musk refused to actually listen to his own engineers. There’s little doubt Musk has been innovative, but the report does a fairly solid job showcasing how a not insubstantial portion of his near-deified reputation is more than a little hollow.

Filed Under: autonomous vehicles, elon musk, safety, self-driving
Companies: tesla

The Faintest Hint Of Regulatory Accountability Has Tesla Acting Like An Adult

from the funny-how-that-works dept

Fri, Oct 29th 2021 03:44pm - Karl Bode

Coming from telecom, I’m painfully aware of the perils of the “deregulation is a panacea” mindset. For literally thirty straight years, the idea that deregulation results in some kind of miraculous Utopia informed U.S. telecom policy, resulting in a sector that was increasingly consolidated and uncompetitive. In short, the entirety of U.S. telecom policy (with short-lived, sporadic exceptions) has been to kowtow to regional telecom monopolies. Efforts to do absolutely anything other than that (see: net neutrality, privacy, etc.) are met with immeasurable hyperventilation and predictions of imminent doom.

So I think the U.S. telecom sector holds some valuable lessons in terms of regulatory competency and accountability. No, you don’t want regulators that are heavy-handed incompetents. And yes, sometimes deregulation can help improve already competitive markets (which telecom most certainly isn’t). At the same time, you don’t want regulators who are mindless pushovers, where companies are keenly aware they face zero repercussions for actively harming consumers, public safety, or the health of a specific market.

Enter Tesla, which is finally facing something vaguely resembling regulatory scrutiny for its bungled and falsehood-filled deployment of “full self-driving” technology. As crashes and criticism pile up, Tesla is arguably facing its first ever instance of regulatory accountability in the face of more competent government hires and an ongoing investigation into the company’s claims by the NHTSA. This all might result in no meaningful or competent regulatory action, but the fact that nobody can be sure of that is still a notable sea change.

This, in turn, has automatically resulted in a new tone at Tesla that more reflects a company run by actual adults:

“Tesla held a regularly scheduled conference call to discuss its quarterly financial results, but, as he’d previously teased, Musk did not attend. His absence took what’s normally a venue for his rants and ramblings, dismissals of Wall Street, and attacks on the press and turned it into a coherent (if scripted) presentation of the company’s recent progress.

There were fewer sideshows and a more measured tone, though the executives who spoke in Musk’s place still made some contradictions. If Musk were to leave his post atop the company, it’s likely that Tesla would look and sound a lot like how the company was presented on Wednesday night’s call.”

While Musk’s bravado appeals to fans of bravado, it’s not hard to argue his behavior has also actively harmed the companies he oversees. Unless, that is, folks genuinely think securities fraud or calling basic life-saving public health measures “fascism” is productive. Tesla has now shown a profit in nine straight quarters, or 11 of the last 13. But the company now faces not only marginally more competent regulatory oversight, but a flood of well-funded competitors and increased criticism of its build quality. It’s not hard to think that Musk’s mouth could, at any moment, completely sabotage efforts to take Tesla to the next level.

Still, I tend to come back to the idea of basic regulatory competency. Even if regulators aren’t going to always take action, they need to give the impression that they actively could at any moment. The threat of regulatory repercussion is sometimes as useful as regulation itself. During the Trump era (again, see telecom) and the Obama era (see: Google) the message sent was pretty clear: you can do pretty much whatever you like with little to no meaningful accountability as long as you’re moderately clever about it. That included running a sloppy open beta of 3,500 pound self-driving automobiles on public streets without public consent or much in the way of safety precautions (see: Uber’s Arizona fatality).

This free for all is likely poised to change, and it seems like Tesla might more easily navigate the coming rocky waters and sensitive legal and regulatory skirmishes with a CEO who isn’t prone toward absolute chaos. While Musk’s behavior is certainly tied to the company’s disruptive brand, it is possible to have executives who are performatively chaotic and disruptive (see: ex-T-Mobile CEO John Legere) without actively shooting the company in the foot every other time they open their mouths.

Filed Under: elon musk, nhtsa, regulatory accountability, self-driving
Companies: tesla

Tesla 'Self-Driving' NDA Hopes To Hide The Reality Of An Unfinished Product

from the I'm-sorry-Dave-I-can't-do-that dept

Mon, Oct 4th 2021 03:37pm - Karl Bode

Hardly a day goes by without Tesla finding itself in the news for all the wrong reasons. Like last week, when Texas police sued Tesla because one of the company’s vehicles, traveling 70 miles per hour in self-driving mode, failed to function properly, injuring five officers.

Five Montgomery County deputy constables were injured Saturday when the driver of a Tesla rear-ended a cruiser during a traffic stop, causing a chain-reaction crash, authorities said. https://t.co/FfteMQQ4zL

— Pooja Lodhia (@PoojaOnTV) February 27, 2021

If you hadn’t been paying attention, Teslas in self-driving mode crashing into emergency vehicles is kind of a thing that happens more than it should. In this latest episode of “let’s test unfinished products on public streets,” the systems of the Tesla vehicle in “self-driving” mode completely failed to detect not only the five officers, but their dog, according to the lawsuit filed against Tesla:

“The Tesla was completely unable to detect the existence of at least four vehicles, six people and a German Shepherd fully stopped in the lane of traffic,” reads the suit. “The Tahoes were declared a total loss. The police officers and the civilian were taken to the hospital, and Canine Officer Kodiak had to visit the vet.”

Of course for Musk fans, a persecution complex is required for club membership, resulting in the belief that this is all one elaborate plot to ruin their good time. That belief structure extends to Musk himself, who can’t fathom that public criticism and media scrutiny in the wake of repeated self-driving scandals is his own fault. It’s also extended to the NDAs the company apparently forces Tesla owners to sign if they want to be included in the Early Access Program (EAP), a community of Tesla fans the company selects to beta test the company’s unfinished self-driving (technically “Level 2” driver-assistance system) on public city streets.

The NDA frames the press and transparency as enemies, and urges participants not to share any content online that could make the company look bad, even if it’s, you know, true:

“This NDA, the language of which Motherboard confirmed with multiple beta testers, specifically prohibits EAP members from speaking to the media or giving test rides to the media. It also says: “Do remember that there are a lot of people that want Tesla to fail; Don’t let them mischaracterize your feedback and media posts.” It also encourages EAP members to “share on social media responsibly and selectively…consider sharing fewer videos, and only the ones that you think are interesting or worthy of being shared.”

Here’s the thing: you don’t need to worry about this kind of stuff if you’re fielding a quality, finished product. And contrary to what Musk fans think, people concerned about letting fanboys test 5,000 pound automated robots that clearly don’t work very well are coming from a valid place of concern. Clips like this one, for example, which show the Tesla self-driving system failing to perform basic navigational functions while in self-driving mode, aren’t part of some elaborate conspiracy to make Tesla self-driving look bad and dangerous. There’s plenty of evidence now clearly showing that Tesla self-driving, at least in its current incarnation, often is bad and dangerous:

Ever since the 2018 Uber fatality in Arizona (which revealed the company had few if any meaningful safety protocols in place) it’s been clear that current “self-driving” technology is extremely undercooked. It’s also become increasingly clear that widely testing it on public streets (where other human beings have not consented to being used as guinea pigs) is not a great idea. Especially if you’re going to replace trained testers with criticism-averse fanboys you’ve carefully selected in the hopes they’ll showcase only the most positive aspects of your products.

We’ve been so bedazzled by purported innovation we’ve buried common sense deep in the back yard. Wanting products to work, and executives to behave ethically, is not some grand conspiracy. It’s a reasonable reaction to the reckless public testing of an unfinished, over-marketed product on public streets.

Filed Under: cars, nda, self-driving, transparency
Companies: tesla

Breaking: Self-Driving Cars Avoid Accident, Do Exactly What They Were Programmed To Do

from the I-can-and-will-do-that,-Dave dept

Fri, Jun 26th 2015 11:34am - Karl Bode

We just got done talking about how, after logging 1,011,338 autonomous miles since 2009, Google’s automated cars have had just thirteen accidents — none of which were the fault of the Google vehicles. By and large the technology appears to be working incredibly well, with most of the accidents the fault of inattentive human drivers rear-ending Google’s specially-equipped Lexus SUVs at stop lights. But apparently, the fact that this technology is working well isn’t quite interesting enough for the nation’s technology press.

A Reuters report making the rounds earlier today proclaimed that two self-driving cars from Google and Delphi Automotive almost got into an accident this week in California. According to the Reuters report, Google’s self-driving Lexus “cut off” Delphi’s self-driving Audi, forcing the Audi to take “appropriate action.” This apparently got the nation’s technology media in a bit of a heated lather, with countless headlines detailing the “almost crash.” The Washington Post was even quick to inform readers that the almost-crash “is now raising concerns over the technology.”

Except it’s not. Because not only did the cars not crash, it apparently wasn’t even a close call. Both Delphi and Google spokespeople told Ars Technica that both cars did exactly what they were programmed to do and Reuters apparently made an automated mountain out of a molehill:

“I was there for the discussion with Reuters about automated vehicles,” she told Ars by e-mail. “The story was taken completely out of context when describing a type of complex driving scenario that can occur in the real world. Our expert provided an example of a lane change scenario that our car recently experienced which, coincidentally, was with one of the Google cars also on the road at that time. It wasn’t a ‘near miss’ as described in the Reuters story.”

Instead, she explained how this was a normal scenario, and the Delphi car performed admirably.

“Our car did exactly what it was supposed to,” she wrote. “Our car saw the Google car move into the same lane as our car was planning to move into, but upon detecting that the lane was no longer open it decided to terminate the move and wait until it was clear again.”

In other words, as Twitter’s Nu Wexler observed, the two cars did exactly what they were programmed to do, though that’s obviously a notably less sexy story than Reuters’ apparently hallucinated tale of automated automotive incompetence.

Breaking: Self-driving cars avoid accident, doing exactly what they are programmed to do

— Nu Wexler (@wexler) June 26, 2015
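The behavior Delphi described is a textbook abort-and-retry rule: plan a merge, but if the target lane becomes occupied before the move completes, terminate it and wait for the lane to clear. A minimal sketch of that rule, with all names hypothetical (nothing here comes from Delphi's or Google's actual software):

```python
# Illustrative sketch (all names hypothetical) of the lane-change behavior
# Delphi described: plan a merge, but abort and wait if the target lane
# becomes occupied before the move completes.

def plan_lane_change(target_lane_clear: bool, currently_merging: bool) -> str:
    """Decide the next maneuver based on whether the target lane is open."""
    if not target_lane_clear:
        if currently_merging:
            return "terminate_merge_and_wait"   # what the Delphi Audi did
        return "wait_for_opening"
    return "begin_merge"
```

The point of the sketch is how unremarkable the logic is: aborting a merge because another car (autonomous or not) took the lane first is the expected behavior, not a near miss.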

Filed Under: accidents, autonomous vehicles, cars, driving, near miss, self-driving
Companies: delphi, google

DailyDirt: Autonomous Vehicles

from the urls-we-dig-up dept

Autonomous vehicles are getting better all the time as their software learns to navigate all kinds of terrain. Commercial airlines have been using autopilot systems for years, but nowadays more and more autonomous cars are driving next to humans. It’s either a really scary idea or a brilliant new way to commute. Here are just a few more links on robot vehicles being set loose.

Filed Under: 24 hours of lemons, autonomous, cars, k-max, self-driving, unmanned helicopters, vehicles, x ceedingly bad idea prize
Companies: kaman aerospace