healthcare – Techdirt

Feds Have Warned Medicare Insurers That ‘AI’ Can’t Be Used To (Incompetently And Cruelly) Deny Patient Care

from the I'm-sorry-I-can't-do-that,-Dave dept

“AI” (or, more accurately, large language models nowhere close to sentience or genuine awareness) has plenty of innovative potential. Unfortunately, most of the folks actually in charge of the technology’s deployment largely see it as a way to cut corners, attack labor, and double down on all of their very worst impulses.

Case in point: “AI’s” rushed deployment in journalism has been a keystone-cops-esque mess. The brunchlord types in charge of most media companies were so excited to get to work undermining unionized labor and cutting corners that they immediately implemented the technology without making sure it actually works. The result: plagiarism, bullshit, a lower quality product, and chaos.

Not to be outdone, the U.S. healthcare industry is similarly trying to layer half-baked AI systems on top of an already very broken system. Except here, human lives are at stake.

For example, UnitedHealthcare and Humana, two of the largest health insurance companies in the US, have been using “AI” to determine whether elderly patients should be cut off from Medicare benefits. If you’ve navigated this system on behalf of an elderly loved one, you likely know what a preposterously heartless shitscape it already is long before automation gets involved.

Not surprisingly, neither Humana’s nor UnitedHealthcare’s implementation of “AI” was done well. A recent STAT investigation of the system they’re using (nH Predict) showed the AI made major errors roughly 90% of the time, cutting elderly folks off from needed care prematurely, often with little recourse for patients or families. Both companies are facing class actions.

Any sort of regulatory response has, unsurprisingly, been slow to arrive, courtesy of a corrupt and incompetent Congress. The best we’ve gotten so far is a recent memo by the Centers for Medicare & Medicaid Services (CMS), sent to all Medicare Advantage insurers, informing them they shouldn’t use LLMs to determine care or deny coverage to members on Medicare Advantage plans:

For coverage decisions, insurers must “base the decision on the individual patient’s circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient’s medical history, the physician’s recommendations, or clinical notes would not be compliant,” the CMS wrote.

There’s no indication yet of any companies facing actual penalties for the behavior. The letter notes that insurers can still use AI tools to evaluate compliance with plan rules, but they can’t be used to sever grandma from essential care. Because, as we’ve seen repeatedly, LLMs are prone to error and fabulism, and executives of publicly traded companies are prone to fatal corner cutting.

Granted, this is only one segment of one industry where undercooked AI is being rushed into deployment by executives who see the technology primarily as a corner-cutting, labor-undermining shortcut to greater wealth. The idea that Congress, regulators, or class actions will lay down some safety guard rails before this kind of sloppy AI results in significant mass suffering is likely wishful thinking.

Filed Under: ai, healthcare, insurance, llm, medicare, medicare advantage, nh predict
Companies: humana, unitedhealthcare

‘AI’ Is Supercharging Our Broken Healthcare System’s Worst Tendencies

from the I'm-sorry-I-can't-do-that,-Dave dept

Tue, Nov 21st 2023 05:26am - Karl Bode

“AI” (or, more accurately, large language models nowhere close to sentience or genuine awareness) has plenty of innovative potential. Unfortunately, most of the folks actually in charge of the technology’s deployment largely see it as a way to cut corners, attack labor, and double down on all of their very worst impulses.

Case in point: “AI’s” rushed deployment in journalism has been a keystone-cops-esque mess. The fail-upward brunchlord types in charge of most media companies were so excited to get to work undermining unionized labor and cutting corners that they immediately implemented the technology without making sure it actually works. The result: plagiarism, bullshit, a lower quality product, and chaos.

Not to be outdone, the U.S. healthcare industry is similarly trying to layer half-baked AI systems on top of an already very broken system. Except here, human lives are at stake.

For example, UnitedHealthcare, the largest health insurance company in the US, has been using AI to determine whether elderly patients should be cut off from Medicare benefits. If you’ve ever navigated this system on behalf of an elderly loved one, you likely know what a preposterously heartless shitwhistle this whole system already is long before automation gets involved.

But a recent investigation by STAT showed the AI consistently made major errors and cut elderly folks off from needed care prematurely, with little recourse by patients or families:

“UnitedHealth Group has repeatedly said its algorithm, which predicts how long patients will need to stay in rehab, is merely a guidepost for their recoveries. But inside the company, managers delivered a much different message: that the algorithm was to be followed precisely so payment could be cut off by the date it predicted.”

How bad is the AI? A recent lawsuit filed in the US District Court for the District of Minnesota alleges that the AI’s denials were reversed roughly 90 percent of the time when appealed:

“Though few patients appeal coverage denials generally, when UnitedHealth members appeal denials based on nH Predict estimates—through internal appeals processes or through the federal Administrative Law Judge proceedings—over 90 percent of the denials are reversed, the lawsuit claims. This makes it obvious that the algorithm is wrongly denying coverage, it argues.”

Of course, the way that the AI is making determinations isn’t particularly transparent. But what can be discerned is that the artificial intelligence at use here isn’t particularly intelligent:

“It’s unclear how nH Predict works exactly, but it reportedly estimates post-acute care by pulling information from a database containing medical cases from 6 million patients…But Lynch noted to Stat that the algorithm doesn’t account for many relevant factors in a patient’s health and recovery time, including comorbidities and things that occur during stays, like if they develop pneumonia while in the hospital or catch COVID-19 in a nursing home.”
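To make the critique concrete, here’s a minimal, entirely hypothetical sketch of the kind of population-level estimator being described: it predicts a rehab stay by averaging “similar” historical cases, and by construction it never sees the patient-specific factors Lynch lists, like comorbidities or a mid-stay bout of pneumonia. This is not nH Predict’s actual method, which has not been made public; it’s just an illustration of why a big-data average is not an individualized determination.

```python
# Hypothetical illustration only: a population-level length-of-stay estimator
# in the spirit of what's described above. This is NOT nH Predict, whose
# internals are not public.
from statistics import mean

# Toy "database" of historical cases: (age, diagnosis, rehab_days)
HISTORICAL_CASES = [
    (78, "hip fracture", 18),
    (81, "hip fracture", 21),
    (75, "hip fracture", 16),
    (83, "stroke", 30),
    (79, "stroke", 27),
]

def predict_rehab_days(age: int, diagnosis: str, age_window: int = 5) -> float:
    """Average the stays of 'similar' historical patients.

    Note what's missing: comorbidities, complications during the stay
    (pneumonia, COVID-19), the physician's notes -- exactly the individual
    factors the article says the algorithm ignores.
    """
    similar = [
        days
        for (case_age, case_dx, days) in HISTORICAL_CASES
        if case_dx == diagnosis and abs(case_age - age) <= age_window
    ]
    return mean(similar) if similar else float("nan")

# A patient who develops pneumonia mid-stay gets the same cutoff estimate as
# one who recovers smoothly, because the model never sees that information.
print(predict_rehab_days(80, "hip fracture"))  # -> 18.333...
```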

Despite these obvious shortcomings, company employees were increasingly mandated to strictly adhere to the algorithm’s decisions. Even when users successfully appeal these AI-generated determinations and win, they’re greeted with follow-up AI-dictated rejections just days later, starting the process all over again.

The company in question insists that the AI’s rulings are only used as a guide. But it seems pretty apparent that, as in most early applications of LLMs, the systems are primarily viewed by executives as a quick and easy way to cut costs and automate systems already rife with problems, frustrated consumers, and underpaid and overtaxed support employees.

There’s no real financial incentive to reform the very broken but profitable systems underpinning modern media, healthcare, or other industries. But there is plenty of financial incentive to use “AI” to speed up and automate these problematic systems. The only guard rails for now are competent government regulation (lol), or belated wrist slap penalties by class action lawyers.

In other words, expect to see a lot more stories exactly like this one in the decade to come.

Filed Under: ai, automation, chat-gpt, coverage denied, healthcare, language learning models, medicare
Companies: unitedhealthcare

The US Healthcare Scam Illustrated In The Impossibility Of Getting A Bill For Five Stitches

from the us-healthcare-is-a-giant-scam dept

Let’s just start off by noting that if you’re not in the US and you live anywhere with some form of single-payer/universal healthcare, we know. You don’t need to tell us. We know. The US healthcare system is a fucking mess.

A decade ago I wrote an article about how the US healthcare system wasn’t a free market, but rather that it was a giant economic scam. The key point made there was that it wasn’t even health insurance companies that were the issue, which is what many people assume. Rather it’s the hospitals. The hospitals, many of them non-profits in name only, present themselves as caring organizations there to help you out when you’re in trouble, when the reality is that these “non-profit” hospitals are basically eating up the entire American economy while always looking for ways to charge more.

Health care spending is currently 18.3% of our GDP. It’s $4.3 trillion.

This is why you hear stories of people being charged $50 for a single Tylenol.

Hospitals can basically charge whatever the hell they want, and they effectively never have to tell you. Every time you make use of the US healthcare system, no one will tell you ahead of time what it costs. Only later will you start receiving a random series of unclear bills.

I recently experienced a bit of this after getting into a minor bicycle accident, and needing to get five stitches. I had gone to an urgent care facility that had cleaned up all the basic bruises, but I had a gash in my chin that required stitches. The urgent care facility told me they wouldn’t stitch faces because the liability risks were too big, and they charged me nothing at all. So I was mostly cleaned up (for free) except for needing the stitches. I went to the local ER and was in and out in about an hour (it was Super Bowl Sunday and not crowded).

Over at the Daily Beast I’ve written a longish article about my experience trying to get an itemized bill from the hospital, to understand why they charged over $8,500 for five stitches. Note, this is not for the stitches that were done by an ER physician; the doctor charged me $65 for his time. It’s just the hospital, which sent a bill many, many months later. The total bill was over $8,500, and my insurance agreed to pay $6,500 of it, leaving me with a $2,000 bill.

US law requires them to give me an itemized bill, but Dignity Health aka CommonSpirit Health, one of the largest non-profit healthcare facility providers in the country, has pulled out every stop possible to not deliver what they are required to provide by law. The tricks they pulled are described in detail in the article.

There’s more in the article, but it seems clear that CommonSpirit Health, whose CEO, Wright Lassiter III, is likely making well over $5 million (remember, non-profit!), and which recently reported $34.6 billion in revenue for 2023, is basically designed to deliberately squeeze as much money out of its patients as it can get away with. And, it’s kind of hilarious, given that the history of the hospital chain is that it was founded by some Catholic nuns, and the chain plays up over and over again how it’s focused on “health justice” and “human kindness” based on their religious beliefs.

Those religious beliefs seem mainly focused on robbing its patients blind any way they can, and then ignoring them when they simply ask for the hospital to provide an itemized bill they’re required to provide under the law.

But, one of the more interesting things that I discovered in reporting out the article was in talking to two executives from the company Goodbill. Goodbill is one of a bunch of companies that have sprung up to basically negotiate with hospitals for you, because of this mess. Just the fact that there’s an industry necessary to do this should make it pretty clear just how broken the system is.

Even more eye-opening, though, was that while I was on a video call with Goodbill execs, they were able to pull up the information I had been trying to get about my bill (the details are in the Daily Beast story), and we discovered exactly what trick the hospital was pulling (in this case, what’s known as “upcoding”).
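For readers unfamiliar with the term, “upcoding” means billing a visit under a higher-intensity code than the documentation supports. The article doesn’t say which codes were involved in this particular bill, so the sketch below is generic: the ER visit levels (CPT 99282–99285) are real codes, but every dollar figure is made up, since actual chargemaster prices vary wildly by hospital.

```python
# Illustrative only: CPT 99282-99285 are real ER visit-level codes, but all
# prices here are hypothetical; this is not taken from any actual bill.
CHARGEMASTER = {
    "99282": 900,    # low-complexity ER visit (hypothetical price)
    "99283": 1_800,  # moderate complexity
    "99284": 3_600,  # higher complexity
    "99285": 7_200,  # highest-complexity ER visit
}

documented_level = "99283"   # the level the clinical notes arguably support
billed_level = "99285"       # the level that actually shows up on the bill

difference = CHARGEMASTER[billed_level] - CHARGEMASTER[documented_level]
print(f"Upcoding from {documented_level} to {billed_level} "
      f"adds ${difference:,} to the facility fee.")
# -> Upcoding from 99283 to 99285 adds $5,400 to the facility fee.
```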

Goodbill had access to the information because it’s been able to set up API access to various large insurance providers, including mine.

In other words, the information is there. It’s even accessible in seconds.

Just not to me.

The patient.

The reality is that there’s no reason at all that US hospitals couldn’t tell you upfront what everything costs. They just don’t. Because, then when they send you an $8500 bill months later, they can make it close to impossible to figure out what games they’re playing with that bill.

I’m sure that earns Wright Lassiter III a bit of an extra bonus each year when his hospital chain reports its billions in revenue, but at the very least, he should stop pretending that he runs a non-profit hospital with a focus on “humankindness and health justice.” Or one that is named “Dignity” Health. He should just admit that his company’s “mission” is to rob people blind at their worst moments.

Filed Under: healthcare, itemized bill, scams, stitches, wright lassiter iii
Companies: commonspirit health, dignity health

Illinois Hospital First To Shut Down Completely After Ransomware Attack

from the this-seems-bad dept

Fri, Jun 16th 2023 05:29am - Karl Bode

You may have noticed that for-profit healthcare in the U.S. is already a hot mess, especially in the most already marginalized parts of the country. Giant, mismanaged health care conglomerates have long pushed their underfunded staffers to the brink, while routinely under-investing in necessary technical upgrades and improvements. It’s getting consistently worse everywhere, but in particular in rural or poor regions of the U.S.

And that was before COVID. Not too surprisingly, it doesn’t take much for this kind of fragile ecosystem to topple completely. Like St. Margaret’s Health in Spring Valley, Illinois, which this week was forced to shut down completely because it simply couldn’t recover from a 2021 ransomware attack:

A ransomware attack hit SMP Health in 2021. The attack halted the hospital’s ability to submit claims to insurers, Medicare or Medicaid for months, sending it into a financial spiral, Burt said.

Such attacks can have a chain reaction on already broken hospitals and health care systems. Health care workers are sometimes forced to resort to pen and paper for patient charts and prescriptions, increasing the risk of potentially fatal error. Delays in care can also prove fatal. And ransomware is only one of the problems that plague dated medical IT systems, whose repair is being made increasingly costly and difficult by medical equipment manufacturers keen on monopolizing repair.

When hospitals like St. Margaret’s shut down, they create massive health care vacuums among the already underserved. In this case, with St. Margaret’s closed, locals have to travel at least half an hour for emergency room and obstetrics services. Which, for many, will be fatal:

Kelly Klotz, 52, a Spring Valley resident with multiple medical issues, said she was concerned the drive could lead to medical complications for her and her parents.

“I need access to good medical care at any given time,” she said. “It’s not like I can say I’ll schedule my stroke six months from now. It’s devastating to this area.”

“If you’re having a heart attack or a stroke, may the odds ever be in your favor, because you’re not going to make it there in time,” Klotz said.

Data from the University of North Carolina indicates that 99 rural U.S. hospitals have shuttered since 2005. Many hospitals are hit with dozens of such attacks on dated IT infrastructure every day. St. Margaret’s is being deemed the first to be shut down over a ransomware attack (probably not true), but it’s certainly not going to be the last.

Filed Under: er, healthcare, medical, privacy, ransomware, right to repair, security

Horrifying: Google Flags Parents As Child Sex Abusers After They Sent Their Doctors Requested Photos

from the scanning-has-problems dept

Over the last few years, there has been a lot of attention paid to the issue of child sexual abuse material (CSAM) online. It is a huge and serious problem. And has been for a while. If you talk to trust and safety experts who work in the field, the stories they tell are horrifying and scary. Trying to stop the production of such material (i.e., literal child abuse) is a worthy and important goal. Trying to stop the flow of such material is similarly worthy.

The problem, though, is that as with so many things that have a content moderation component, the impossibility theorem rears its head. And nothing demonstrates that quite as starkly as this stunning new piece by Kashmir Hill in the New York Times, discussing how Google has been flagging people as potential criminals after they shared photos of their children in response to requests from medical professionals trying to deal with medical conditions the children have.

There is much worth commenting on in the piece, but before we get into the details, it’s important to give some broader political context. As you probably know if you read this site at all, across the political spectrum, there has been tremendous pressure over the last few years to pass laws that “force” websites to “do something” about CSAM. Again, CSAM is a massive and serious problem, but, as we’ve discussed, the law (namely 18 USC 2258A) already requires websites to report any CSAM content they find, and they can face stiff penalties for failing to do so.

Indeed, it’s quite likely that much of the current concern about CSAM is due to there finally being some level of recognition of how widespread it is thanks to the required reporting by tech platforms under the law. That is, because most websites take this issue so seriously, and carefully follow the law, we now know how widespread and pervasive the problem is.

But, rather than trying to tackle the underlying problem, politicians often want to do the politician thing, and just blame the tech companies for doing the required reporting. It’s very much shooting the messenger: the reporting by tech companies is shining a light on underlying societal failures, and that reporting is being used as an excuse to blame the tech companies rather than the societal failings themselves.

It’s easier to blame the tech companies — most of whom have bent over backwards to work with law enforcement and to build technology to help respond to CSAM — than to come up with an actual plan for dealing with the underlying issues. And so almost all of the legal proposals we’ve seen are really about targeting tech companies… and, in the process, removing underlying rights. In the US, we’ve seen the EARN IT Act, which completely misdiagnoses the problem, and would actually make it that much harder for law enforcement to track down abusers. EARN IT attempts to blame tech companies for law enforcement’s unwillingness to go after CSAM producers and distributors.

Meanwhile, over in the EU, there’s an apparently serious proposal to effectively outlaw encryption and require client-side scanning of all content in an attempt to battle CSAM. Even as experts have pointed out how this makes everyone less safe, and there has been pushback on the proposal, politicians are still supporting it by basically just repeating “we must protect the children” without seriously responding to the many ways in which these bills will make children less safe.

Separately, it’s important to understand some of the technology behind hunting down and reporting CSAM. The most famous of these is PhotoDNA, initially developed by Microsoft and used by many of the big platforms to share hashes of known CSAM to make sure that material that has already been discovered isn’t more widely spread. There are some other similar tools, but for fairly obvious reasons these tools have some risks associated with them, and there are concerns both about false positives and about who is allowed to have access to the tools (even though they’re sharing hashes, not actual images, the possibility of such tools being abused is a real concern). A few companies, including Google, have developed more AI-based tools to try to identify CSAM, and Apple (somewhat infamously) has been working on its own client-side scanning tools along with cloud-based scanning. But client-side scanning has significant limits, and there is real fear that it will be abused.
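To make the mechanics a bit more concrete, here’s a rough sketch of what hash-list matching looks like in principle. It deliberately uses a plain cryptographic hash (SHA-256) as a stand-in: PhotoDNA and similar tools use proprietary perceptual hashes that survive resizing and re-encoding, which an exact hash does not, so treat this only as an illustration of the list-matching step, not of how the real systems work.

```python
# Simplified illustration of hash-list matching. Real systems use perceptual
# hashes robust to re-encoding; SHA-256 only matches byte-identical files, so
# this understates what the real tools catch.
import hashlib
from pathlib import Path

# In practice the hash list is distributed by clearinghouses, never the images
# themselves. These placeholder values are obviously not real hashes.
KNOWN_HASHES = {
    "0" * 63 + "1",
    "0" * 63 + "2",
}

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def flag_for_review(path: Path) -> bool:
    """Return True if the file matches the known-hash list.

    A list match can only catch *previously identified* material. The false
    positives in this story came from classifiers guessing at brand-new
    images, which is a much harder and more error-prone problem.
    """
    return file_hash(path) in KNOWN_HASHES
```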

Of course, spy agencies also love the idea of everyone being forced to do client-side scanning in response to CSAM, because they know that basically creates a backdoor to spy on everyone’s devices.

Whenever people talk about this and highlight the potential for false positives, they’re often brushed off by supporters of these scanning tools, saying that the risk is minimal. And, until now, there weren’t many good examples of false positives beyond things like Facebook pulling down iconic photographs, claiming they were CSAM.
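Part of why the “risk is minimal” answer doesn’t hold up is simple base-rate arithmetic: even a tiny false-positive rate turns into a large absolute number at platform scale. The figures below are round numbers chosen purely for illustration, not Google’s actual volumes or measured error rates.

```python
# Back-of-the-envelope only: both numbers are illustrative, not real metrics.
photos_scanned_per_year = 10_000_000_000   # assume 10 billion photos scanned
false_positive_rate = 1 / 1_000_000        # assume one error per million photos

wrongly_flagged = photos_scanned_per_year * false_positive_rate
print(f"{wrongly_flagged:,.0f} innocent images flagged per year")
# -> 10,000 innocent images flagged per year, each one potentially a closed
#    account and a police referral, as happened here.
```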

However, this article (yes, finally we’re talking about the article) by Hill gives us some very real world examples of how aggressive scanning for CSAM can not just go wrong, but can potentially destroy lives as well. In horrifying ways.

It describes how a father noticed his son’s penis was swollen and apparently painful to the child. An advice nurse at their healthcare provider suggested they take photos to send to the doctor, so the doctor could review them in advance of a telehealth appointment. The father took the photos and texted them to his wife so she could share with the doctor… and that set off a huge mess.

Texting them counted, in Google’s terms, as taking “affirmative action,” which caused Google to scan the material, and its AI-based detector flagged the image as potential CSAM. You can understand why. But the context was certainly missing. And it didn’t much matter to Google, which shut down the guy’s entire Google account (including his Google Fi phone service) and reported him to local law enforcement.

The guy, just named “Mark” in the story, appealed, but Google refused to reinstate his account. Much later, Mark found out about the police investigation this way:

In December 2021, Mark received a manila envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was provided, had asked for everything in Mark’s Google account: his internet searches, his location history, his messages and any document, photo and video he’d stored with the company.

The search, related to “child exploitation videos,” had taken place in February, within a week of his taking the photos of his son.

Mark called the investigator, Nicholas Hillard, who said the case was closed. Mr. Hillard had tried to get in touch with Mark but his phone number and email address hadn’t worked.

“I determined that the incident did not meet the elements of a crime and that no crime occurred,” Mr. Hillard wrote in his report. The police had access to all the information Google had on Mark and decided it did not constitute child abuse or exploitation.

Mark asked if Mr. Hillard could tell Google that he was innocent so he could get his account back.

“You have to talk to Google,” Mr. Hillard said, according to Mark. “There’s nothing I can do.”

In the article, Hill highlights at least one other example of nearly the same thing happening, and also talks to (former podcast guest) Jon Callas, about how it’s likely that this happens way more than we realize, but the victims of it probably aren’t willing to speak about it, because then their names are associated with CSAM.

Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, called the cases “canaries in this particular coal mine.”

“There could be tens, hundreds, thousands more of these,” he said.

Given the toxic nature of the accusations, Mr. Callas speculated that most people wrongfully flagged would not publicize what had happened.

There’s so much in this story that is both horrifying and a very useful illustration of the trade-offs and risks with these tools, and of the process for correcting errors. It’s good that these companies are making proactive efforts to stop the creation and sharing of CSAM. The article already shows how these companies go above and beyond what the law actually requires (contrary to the claims of politicians, some in the media, and, unfortunately, many working for public interest groups trying to protect children).

However, it also shows the very real risks of false positives, how they can create very serious problems for people, and how few people are willing to publicly discuss the issue for fear of the impact on their own lives and reputations.

If politicians (pushed by many in the media) continue to advocate for regulations mandating even more aggressive behavior from these companies, including increasing liability for missing any content, it is inevitable that we will have many more such false positives — and the impact will be that much bigger.

There are real trade-offs here, and any serious discussion of how to deal with them should recognize that. Unfortunately, most of the discussions are entirely one-sided, and refuse to even acknowledge the issue of false positives and the concerns about how such aggressive scanning can impact people’s privacy.

And, of course, since the media (with the exception of this article!) and political narrative are entirely focused on “but think of the children!” the companies are bending even further backwards to appease them. Indeed, Google’s response to the story of Mark seems ridiculous as you read the article. Even after the police clear him of any wrongdoing, it refuses to give him back his account.

But that response is totally rational when you look at the typical media coverage of these stories. There have been so many stories — often misleading ones — accusing Google, Facebook and other big tech companies of not doing enough to fight CSAM. So any mistakes in that direction are used to completely trash the companies, saying that they’re “turning a blind eye” to abuse or even “deliberately profiting” off of CSAM. In such a media environment, companies like Google aren’t even going to risk missing something, and its default is going to be to shut down the guy’s account. Because the people at the company know they’d get destroyed publicly if it turns out he was involved in CSAM.

As with all of this stuff, there are no easy answers here. Stopping CSAM is an important and noble goal, but we need to figure out the best way to actually do that, and deputizing private corporations to magically find and stop it, with serious risk of liability for mistakes (in one direction), seems to have pretty significant costs as well. And, on top of that, it distracts from trying to solve the underlying issues, including why law enforcement isn’t actually doing enough to stop the actual production and distribution of actual CSAM.

Filed Under: content moderation, csam, false positives, healthcare, law enforcement, parents
Companies: google

The Pandemic And The Evolution Of Health Care Privacy

from the tradeoffs-are-everywhere dept

When I teach privacy law, I try to make the issues real for the students. It often isn’t that hard: privacy issues remain in the news almost every day. The evolution of the pandemic has made more of these issues real and is leading to a series of critical questions for the future of health care privacy. These issues are not new, but the focus of the attention on pandemic issues has made the need for discussion and resolution of these issues even more critical.

We are seeing four distinct categories of issues arising from the pandemic.

The differing interests of patients

We have seen over the past several years a variety of health care policy goals where there is a tension between an individual’s interest in privacy and their interests in some other aspect of the operation of the health care system.

For example, in the recent federal debate over “information blocking,” there was a substantial and visible (and mostly pre-pandemic) discussion about whether the interest of patients in having access to their medical information should take precedence over the protection of those records under the U.S. Health Insurance Portability and Accountability Act Privacy and Security rules. A variety of relevant stakeholders tried to find a “win-win” in this situation, but the eventual result is that, because of the limited scope of the HIPAA rules, there will be situations in which a patient’s interest in receiving access to their medical records will mean that those records, once released, will not be subject to the full protections of the HIPAA Privacy and Security rules.

The primary choice in this situation was to favor a patient’s interest in access to their records over their privacy and security interests (although the regulations tried to balance these the best they could).

A similar issue has played out with the recent Department of Health and Human Services enforcement guidance related to telehealth. As part of its pandemic response, HHS has made clear that it will not be taking enforcement action involving telehealth visits; this means that health care providers interested in providing telehealth services did not need to be concerned about the details of the HIPAA Security Rule in conducting these visits. Whether this enforcement waiver was required is a different question, but the clear intent is to provide support for telehealth visits at a time when telehealth visits are critical to the interests of patients in receiving health care.

Through this health care enforcement waiver, the government selected the benefits to consumers (and the health care system) from enhanced telehealth opportunities over the more specific privacy and security interest of the HIPAA rules.

Balance between privacy interests and health care system interests

HHS also has issued other HIPAA guidance stemming from the pandemic. While the justification for these actions is less clear, the goal is to facilitate the operation of the health care system at a time when the system is stressed, by reducing otherwise applicable HIPAA obligations.

This has led to a waiver of certain HIPAA requirements (including the obligation to provide a privacy notice and an opportunity for a request for restrictions or confidential communication). This was a policy choice, but why this choice actually helped the system, at a clear detriment to privacy interests, is less clear.

Similarly, HHS has announced that business associates now can make disclosures of patient information for public health purposes, increasing the sources of public health disclosures beyond what the Privacy Rule previously seems to have permitted.

How to address non-HIPAA health data issues (e.g., employee health data)

We also are seeing a focus on health care privacy interests during the pandemic where HIPAA is largely irrelevant. This is not a new issue. I have been writing about this issue of “non-HIPAA health data” for almost 10 years.

Here, however, the focus has been on health care information of employees and others in connection with access to business locations and business activities. This employee information is not subject to HIPAA (for most employers, HIPAA applies only through their health insurance benefits plan), but other laws, such as the Americans with Disabilities Act, clearly apply.

For site visitors, guests, service workers and others, there may be no generally applicable privacy law, at least in the United States, regulating how personal health information can be collected and used. This means that when companies in the U.S. think about how they can share specific health information about specific individuals, the current primary health care privacy law is irrelevant.

How to address non-health data relevant to the health care system (e.g., location data for health monitoring)

Last, we also are seeing the evolution of a related health care issue: the increasing recognition in a variety of circumstances that information that isn’t clearly about health does, in fact, matter when operating the health care system.

In the pre-pandemic HIPAA context, there was a regulatory proceeding where HHS was exploring whether to modify the HIPAA rules to permit, for example, the sharing of protected health information with social service organizations, even though these organizations do not fit cleanly into the HIPAA framework.

The inquiry reflects a recognition that social issues (food or housing needs, for example) can play an important role in the overall health of an individual. In the pandemic situation, we are focused now on location data and how it can be used for public health purposes. This data doesn’t, by itself, say anything about your health, but it will be used to identify the movements of individuals affected by the coronavirus and identify others for whom there also are health-related risks.

This is both a health care privacy and a civil liberties issue. It is exactly the kind of issue that is addressed throughout the HIPAA rules, where the smooth operation of the health care system was incorporated as a means of modifying otherwise applicable privacy interests.

But this is a different order of magnitude, and one in which the full attention of society is focused on these issues in a way that HIPAA itself seldom commands.

I raise these issues not because there is a clear or obvious answer. These clearly are difficult times, and we must take advantage of the opportunity presented by these pandemic challenges to evaluate the issues, but we must also be careful not to let the emergency circumstances dictate bad choices.

In the national privacy law debate, the role of the health care system has taken a back seat to the larger privacy debate. This is both understandable and problematic. The health care industry has viewed privacy law as relatively settled for many years, but we are increasingly recognizing that this is not really the case.

The HIPAA rules often work well where they apply, but there are both more situations in which they don’t apply, and a broader range of events where the rules may not work well. The pandemic has led to the immediate need to address some of these complications in real time, but we will need to ensure that these issues remain in the public debate and that the increasing complexities of health care privacy can be addressed appropriately in any future U.S. privacy law.

Kirk Nahra is a Partner with WilmerHale in Washington, D.C. where he co-chairs their global Cybersecurity and Privacy Practice.

Filed Under: covid-19, healthcare, hipaa, pandemic, privacy

Kushner's COVID Task Force Is Looking To Expand The Government's Surveillance Of Private Healthcare Companies

from the move-fast-and-break-privacy dept

Jared Kushner’s shadowy coronavirus task force is still at work behind the scenes, bringing this country back to health by leveraging Kushner’s innate ability to marry into the right family. Very little is known about it and very little will be known about it, thanks to the task force’s decision to run communications through private email accounts.

Kushner’s focus appears to be the private sector — the same area his father-in-law appears to be most worried about. The curve has yet to flatten, but Trump and Kushner want to make sure companies remain healthy even if their employees aren’t.

It appears Kushner is now branching out into the public sector. The private sector will be involved, but as the target for a new strain of surveillance, as Adam Cancryn reports for Politico.

White House senior adviser Jared Kushner’s task force has reached out to a range of health technology companies about creating a national coronavirus surveillance system to give the government a near real-time view of where patients are seeking treatment and for what, and whether hospitals can accommodate them, according to four people with knowledge of the discussions.

This information will be used to determine where resources might need to be allocated. It will also be used to make judgment calls for social distancing and “stay at home” orders, with an eye on getting companies back up and running as quickly as possible.

What the task force is pushing for is relaxed rules on data sharing by private health companies.

[T]he Trump administration has sought to ease data-sharing rules and assure health data companies they won’t be penalized for sharing information with state and federal officials — a move driven in part by Kushner’s push to assemble the national network, according to an individual with knowledge of the decision.

To do this, the administration is likely to lean on its favorite weapon against privacy: national security. There are exceptions built into health privacy laws that make it easier for the federal government to demand access to this data. If the task force can sell the pandemic as a national security crisis, the government will be able to peer into multiple databases and do whatever it wants with that data. And it will be able to do so for as long as it wants, so long as it can claim the threat is still present.

The thing is there’s no need to reinvent the surveillance wheel… unless the additional layer of surveillance is actually what the administration wants, rather than a targeted response to health care needs.

Some public health experts, meanwhile, suggested that the administration might instead build out and reorient an existing surveillance system housed within the Centers for Disease Control and Prevention that aided the response to prior epidemics. The system, called the National Syndromic Surveillance Program, is a voluntary collaboration between the CDC and various state and local health departments that draws data from more than 4,000 health care facilities.

While there may be some short-term gains from adding another level of health care surveillance, the inherent problem is rolling back that surveillance once it’s no longer needed. Americans may be more agreeable to additional government snooping during a short-term crisis, but they’re less willing to look the other way when the threat to the nation has passed. The government, however, generally doesn’t care what the people want. If it has found some self-serving uses for this increased access, it will keep the access and say enough stuff about national security threats to defeat attempts to scale things back to their pre-COVID levels. We saw this with the 9/11 attacks. And we may see it again with this unprecedented pandemic.

Filed Under: covid-19, healthcare, jared kushner, surveillance

from the not-very-sweet,-not-very-clever dept

One of the most important recent developments in the world of diabetes has been the arrival of relatively low-cost continuous blood glucose monitors. These allow people with diabetes to take frequent readings of their blood sugar levels without needing to use painful finger sticks every time. That, in turn, allows users to fine-tune their insulin injections, with big health benefits for both the short- and long-term. The new devices can be read by placing a smartphone close to them. People use an app that gathers the data from the unit, which is typically placed on the back of the upper arm with an adhesive.

One of the long-awaited technological treatments for diabetes is the “closed-loop” system, also called an “artificial pancreas”. Here, readings from a continuous glucose device are used to adjust an insulin pump in response to varying blood sugar levels — just as the pancreas does. The idea is to free those with diabetes from needing to monitor their levels all the time. Instead, software with appropriate algorithms does the job in the background.
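As a rough illustration of what “software with appropriate algorithms” means here, the sketch below is a toy proportional controller: it nudges a basal insulin rate up or down based on how far a glucose reading sits from a target, with hard clamps. Real closed-loop systems, whether commercial or the DIY projects described in the next paragraph, are far more sophisticated (they model insulin already on board, meals, and glucose trends), and nothing here should be read as dosing logic.

```python
# Toy illustration of the closed-loop idea only. This is NOT how any real
# artificial-pancreas system doses insulin; do not use for anything medical.

TARGET_MG_DL = 110                 # illustrative glucose target
GAIN = 0.005                       # illustrative gain (U/hr per mg/dL of error)
MIN_BASAL, MAX_BASAL = 0.0, 2.0    # clamp the pump to a fixed range

def adjust_basal(current_basal_u_per_hr: float, glucose_mg_dl: float) -> float:
    """Proportional adjustment: raise basal when glucose is above target,
    lower it when below, and never step outside the clamps."""
    error = glucose_mg_dl - TARGET_MG_DL
    new_basal = current_basal_u_per_hr + GAIN * error
    return max(MIN_BASAL, min(MAX_BASAL, new_basal))

# Every few minutes, the latest CGM reading drives a small correction:
print(round(adjust_basal(0.8, 180), 2))  # above target -> 1.15 U/hr
print(round(adjust_basal(0.8, 70), 2))   # below target -> 0.6 U/hr
```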

Closed-loop systems are still being developed by pharma companies. In the meantime, many people have taken things into their own hands, and built DIY artificial pancreas systems from existing components, writing the control code themselves. One popular site for sharing help on the topic is Diabettech, with “information about [continuous glucose monitoring] systems, DIY Closed Loops, forthcoming insulins and a variety of other aspects.”

A few months back there was a post on Diabettech about some code posted to GitHub. A patch to Abbott Laboratories’ LibreLink app allowed data from the same company’s FreeStyle Libre continuous monitor to be accessed by other apps running on a smartphone. In particular, it enabled the blood-sugar data to be used by a program called xDrip, which provides “sophisticated charting, customization and data entry features as well as a predictive simulation model.” Innocent enough, you might think. But not according to Abbott Laboratories, which sent in the legal heavies waving the DMCA:

It has come to Abbott’s attention that a software project titled “Libre2-patched-App” has been uploaded to GitHub, Inc.’s (“GitHub”) website and creates unauthorized derivative works of Abbott’s LibreLink program (the “Infringing Software”). The Infringing Software is available at https://github.com/user987654321resu/Libre2-patched-App. In addition to offering the Infringing Software, the project provides instructions on how to download the Infringing Software, circumvent Abbott’s technological protection measures by disassembling the LibreLink program, and use the Infringing Software to modify the LibreLink program.

The patch is no longer available on GitHub. The original Diabettech post suggested that analyzing the Abbott app was permitted under EU law (pdf):

Perhaps surprisingly, this seems to be covered by the European Software Directive in article 6 which was implemented in member states years back, which allows for decompilation of the code by a licensed user in order to enable interoperability with another application (xDrip in this case).

As Cory Doctorow points out in his discussion of these events, in the US the DMCA has a similar exemption for reverse engineering:

a person who has lawfully obtained the right to use a copy of a computer program may circumvent a technological measure that effectively controls access to a particular portion of that program for the sole purpose of identifying and analyzing those elements of the program that are necessary to achieve interoperability of an independently created computer program with other programs, and that have not previously been readily available to the person engaging in the circumvention, to the extent any such acts of identification and analysis do not constitute infringement under this title.

Legal issues aside, there is a larger point here. As the success of open source software over the last twenty years has shown, one of the richest stores of new ideas for a product is its user community. Companies that embrace that group are able to draw on what is effectively a global research and development effort. Abbott is not just wrong to bully people looking to derive greater benefit from its products by extending them in interesting ways, it is extremely stupid. It is throwing away the enthusiasm and creativity of the very people it should be supporting and working with as closely as it can.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

Filed Under: artificial pancreas, blood sugar, blood sugar data, copyright, data, diabetes, diabettech, diy, dmca, healthcare, librelink, reverse engineering, xdrip
Companies: abbott labs, github

Another Way In Which Patents Contributed To The Opioid Crisis: Hospitals Ordered Not To Use Better, Less Problematic Medicines

from the this-is-fucked-up dept

Two years ago, we wrote about a stunning (and horrifying) study that explained how patents deeply contributed to the opioid crisis. It described the lengths that drug companies — including OxyContin maker Purdue Pharma — went through to block any and all generic competition. It was quite a story.

However, on a recent episode of Terry Gross’s “Fresh Air” she interviewed medical bioethicist Travis Rieder about his new book, In Pain. It tells the story of how, even as a “medical bioethicist,” Rieder himself got addicted to opioids after being in a severe motorcycle accident — and then was shocked to find that none of his doctors either knew how or cared enough to help him get off the painkillers. The story is fascinating — and harrowing.

Deep into the discussion, however, one part caught my attention. Rieder tells a story about how, rather than putting him on opioids, his doctors could have just given him acetaminophen:

GROSS: One of the pain killers that you were given when you were in the hospital was intravenous acetaminophen. And you thought that that was really, surprisingly effective as a painkiller, but you were only given a few doses, even though you kind of begged for more because it was effective and not habit-forming. So why couldn’t you get more of it?

RIEDER: Yeah, this is such a wild story. I didn’t know for a long time, and so all I had was this immediate experience where, after that fifth surgery, when I was really behind the pain, the pain management team upped all of the doses of everything I was on, but then also gave me three doses over 24 hours of IV acetaminophen. And for me, the way I described it at the time, it was as good as morphine in the short term, but it didn’t knock me out. It didn’t sedate me. I didn’t have to worry about my breathing. And so I really liked it for that reason, and I asked for more.

And I remember one of the residents being kind of hesitant – you know, one of these young doctors in training – and kind of mumbling something about, I don’t think you can have more because of your liver, or something. I didn’t question it.

But, turns out, it’s got nothing to do with his liver:

Months later, I’m an invited speaker at an anesthesiology conference, and I’m hanging out with some of the docs over a coffee break. And I’m telling them the story because I’m like, hey, I’ve got these, you know, really smart people. I’m going to pick their brains. And I get to the point where the resident mumbles this excuse to me, and they all chuckle. And I look at them, and I say, what? Is that not the reason? And I can tell in that moment that they all know something and that they all know that they all know. And one of them looks up at me and says, they’re not giving it to you because it’s too expensive.

(Laughter) And my mind was blown. I was like, wait a minute – what do you mean it’s too expensive? It’s just Tylenol, right? They said, yeah, but the IV form is still on patent. And so once it goes off patent, it’ll be standard of care because it works great. But, you know, for now, it’s too expensive, so most of us have hospital orders not to use it.

So, let’s get everyone hopped up on addictive and destructive opioids, because this form of Tylenol is still on patent. That’s just great. He continues:

I think what it started for me was a dive down the rabbit hole of, how does money play a role in how we treat pain and how we overutilize opioids for pain, right? Because what it made really clear is that opioids are dirt-cheap because a bunch of them have been off patent for decades, and that these other sorts of therapies can be really expensive.

For all the talk of how patents create incentives for new life-saving medicines, it’s important to recognize that they create some pretty fucked up incentives at times as well.

Filed Under: acetaminophen, drug prices, healthcare, opioids, pain medicine, patents, travis rieder

Why Is The US Government Letting Big Pharma Charge Insane Prices On Patents The US Owns?

from the big-questions dept

As we’ve discussed plenty of times in the past, when the federal government creates something that could be covered by copyright law, US copyright law requires it to be put into the public domain for the benefit of the public. I’ve never quite understood why the same is not true for patents. Instead, the US government does big business licensing its patents. While some may argue that this is a good revenue generation scheme for the US government (which theoretically should lower taxes elsewhere), it has significant downstream effects. And that’s especially true in the healthcare market.

As we’ve discussed before, you’ll often hear big pharma insisting it needs patents because it takes some ungodly sum to research and bring a drug to market. That number goes up every year. By a lot. In the early 2000s, the number was clocked at $800 million. Last year, drug companies were claiming $2.7 billion. But much of that is a total myth. Indeed, research shows that big pharma often adds up the costs that the federal government itself spends on encouraging new drug development and includes them in the total cost as if that cost is borne by the pharmaceutical industry, rather than the taxpayer.

And yet, even though the US taxpayer tends to pay for a significant share of the research and development in new drugs, big pharma companies which take over the project down the road get to keep 100% of the profits — and, thanks to a totally broken patent system that gives them a literal monopoly, they jack up the prices to insane levels (and this works because of our idiotic healthcare setup in which no one ever knows the cost of what we’re buying, and insurance companies act as weird middlemen).

I’m reminded of all this in reading a new piece by Dr. Eugene Gu, talking about the absolute insanity of Truvada, an important drug for HIV patients, which is controlled by pharma company Gilead Sciences. Gu outlines a story that reflects exactly what we discussed above. Gilead charges impossibly high fees for Truvada even though most of the development was paid for by US taxpayers:

While the generic version of Truvada is available in many countries outside the United States for around $840 annually per patient, Gilead uses its patent on the drug to charge Americans around $24,000 annually per patient. That’s for the exact fixed dose combination of tenofovir and emtricitabine that costs around $60 annually per patient to produce.

[….]

What’s infuriating is that American taxpayers funded much of the research and development for Truvada. So much, in fact, that according to the Yale Global Health Justice Partnership it’s the CDC that actually owns the patent for the drug. So Gilead has basically been making $3bn a year selling a drug that actually belongs to Americans themselves.

And, as Gu notes, the situation gets even more ridiculous and more corrupt:

And that’s not all. Gilead recently partnered with Secretary of Health and Human Services Alex Azar and President Donald Trump to roll out a public relations scheme to fool the public. During this, Gilead declared that it would be donating enough Truvada to treat 200,000 patients each year until 2030. While it sounds great on the surface, that basically means it will donate around $12m a year while making billions in profits and getting a tax break.
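A bit of arithmetic on the figures quoted in this piece puts that donation in perspective. One caveat: the assumption that the $12m-a-year figure values the donated doses at roughly their production cost, rather than their US list price, is my reading of the numbers, not something Gu states explicitly.

```python
# Rough arithmetic on the figures quoted above; the valuation assumption
# (production cost) is mine, not stated in the article.
us_price = 24_000          # approximate annual US price per patient
generic_price = 840        # approximate annual generic price abroad
production_cost = 60       # approximate annual production cost per patient
donated_patients = 200_000

print(f"Markup over production cost: {us_price / production_cost:.0f}x")        # -> 400x
print(f"Markup over generic price:   {us_price / generic_price:.0f}x")          # -> 29x
print(f"Donation at production cost: ${donated_patients * production_cost:,}")  # -> $12,000,000
print(f"Same doses at US list price: ${donated_patients * us_price:,}")         # -> $4,800,000,000
```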

There are all sorts of reasons why our healthcare system is truly messed up, but the fact that taxpayers pay for the development of critical life saving drugs, but then the government allows big pharma companies to effectively control the patent, extract massive monopoly rents, and then give them tax breaks for donating a tiny percentage… seems particularly fucked up.

Filed Under: drug development, extortion, funding, healthcare, monopolies, monopoly rents, patents, pharmaceuticals, truvada, us government
Companies: gilead sciences