‘AI’ Is Supercharging Our Broken Healthcare System’s Worst Tendencies

from the I'm-sorry-I-can't-do-that,-Dave dept

“AI” (or, more accurately, large language models nowhere close to sentience or genuine awareness) has plenty of innovative potential. Unfortunately, most of the folks actually in charge of the technology’s deployment largely see it as a way to cut corners, attack labor, and double down on all of their very worst impulses.

Case in point: “AI’s” rushed deployment in journalism has been a Keystone Cops-esque mess. The fail-upward brunchlord types in charge of most media companies were so excited to get to work undermining unionized labor and cutting corners that they immediately implemented the technology without making sure it actually works. The result: plagiarism, bullshit, a lower quality product, and chaos.

Not to be outdone, the U.S. healthcare industry is similarly trying to layer half-baked AI systems on top of a very broken system. Except here, human lives are at stake.

For example, UnitedHealthcare, the largest health insurance company in the US, has been using AI to determine whether elderly patients should be cut off from Medicare benefits. If you’ve ever navigated this system on behalf of an elderly loved one, you likely know what a preposterously heartless shitwhistle the whole process already is, long before automation gets involved.

But a recent investigation by STAT showed the AI consistently made major errors and cut elderly folks off from needed care prematurely, with little recourse for patients or families:

“UnitedHealth Group has repeatedly said its algorithm, which predicts how long patients will need to stay in rehab, is merely a guidepost for their recoveries. But inside the company, managers delivered a much different message: that the algorithm was to be followed precisely so payment could be cut off by the date it predicted.”

How bad is the AI? A recent lawsuit filed in the US District Court for the District of Minnesota alleges that the AI in question was reversed by human review roughly 90 percent of the time:

“Though few patients appeal coverage denials generally, when UnitedHealth members appeal denials based on nH Predict estimates—through internal appeals processes or through the federal Administrative Law Judge proceedings—over 90 percent of the denials are reversed, the lawsuit claims. This makes it obvious that the algorithm is wrongly denying coverage, it argues.”

Of course, the way that the AI is making determinations isn’t particularly transparent. But what can be discerned is that the artificial intelligence at use here isn’t particularly intelligent:

“It’s unclear how nH Predict works exactly, but it reportedly estimates post-acute care by pulling information from a database containing medical cases from 6 million patients…But Lynch noted to Stat that the algorithm doesn’t account for many relevant factors in a patient’s health and recovery time, including comorbidities and things that occur during stays, like if they develop pneumonia while in the hospital or catch COVID-19 in a nursing home.”

Despite clear evidence that the AI was making incorrect determinations, company employees were increasingly mandated to strictly adhere to its decisions. Even when patients successfully appeal these AI-generated determinations and win, they’re greeted with follow-up AI-dictated rejections just days later, starting the process all over again.

The company in question insists that the AI’s rulings are only used as a guide. But it seems pretty apparent that, as in most early applications of LLMs, the systems are primarily viewed by executives as a quick and easy way to cut costs and automate systems already rife with problems, frustrated consumers, and underpaid and overtaxed support employees.

There’s no real financial incentive to reform the very broken but profitable systems underpinning modern media, healthcare, or other industries. But there is plenty of financial incentive to use “AI” to speed up and automate these problematic systems. The only guardrails for now are competent government regulation (lol), or belated wrist-slap penalties won by class action lawyers.

In other words, expect to see a lot more stories exactly like this one in the decade to come.

Filed Under: ai, automation, chat-gpt, coverage denied, healthcare, language learning models, medicare
Companies: unitedhealthcare