More than a year before he was killed, UnitedHealthcare CEO Brian Thompson had put in place an artificial intelligence (AI) system that automatically denied claims from sick elderly customers, prompting death threats against him, a class-action lawsuit, and widespread concern over the ethical use of AI in health care.
The use of AI by UHC and other leading insurers to automatically deny claims has sparked a debate that has only intensified since the killing of Thompson and the national reaction to it. Within the field, there is a pitched dispute over what health care professionals call an AI arms race, one that has produced the so-called “slow-motion HAL 9000” effect, named after the devious computer in the film “2001: A Space Odyssey” that methodically shuts off the life-support systems of hibernating astronauts, killing them, and that has pitted hospitals and doctors against insurers.
“We are right on the cusp of a lightning rod issue with the intersection of AI and health care transformation,” Sherri Douville, CEO of Medigram, said in an interview. “When you introduce tech, there is a lot of despair and frustration with the issue of (higher rates of) denials” because of AI, she said.
Indeed, Thompson had been receiving death threats about “lack of coverage,” his wife said, according to a Wall Street Journal report.
A November 2023 lawsuit charged UnitedHealthcare with using an AI algorithm, known as nH Predict, that not only denied elderly patients’ claims for care their doctors had approved, overriding those physicians’ judgment, but also carried a staggering 90% error rate.
AI is going to “amplify what is already working well and it is going to exacerbate what is not working well” in health care and insurance, Sheila Phicil, CEO of Phicil-itate Change LLC, said in an email. “UnitedHealthcare has used AI to do some good as it relates to identifying and addressing social determinants of health. Those same AI tools resulted in mass denials of patient care, overriding medical professionals’ judgment. It is truly a double-edged sword.”
In addition to paying attention to regulation around the uses of AI, she said, there must also be an understanding and discussion of the systems that are already designed to produce “unjust results.”
“There are two sides to the use of AI in health care, as it is in most industries. The opportunity lies in AI’s ability to enhance diagnostics to improve health outcomes, bridge gaps in clinician shortages that are expected to worsen through the end of the decade, and facilitate a transition to a new care delivery paradigm,” Rohan Kulkarni, executive researcher at HFS Research, said in an email.
“The challenge, however, is that health plans are using AI in a perverse way to delay and deny care, while health systems leverage it to enhance medical coding aimed at maximizing reimbursement rates, often biased toward upcoding,” Kulkarni said. “AI has become the new weapon in the battle between payers and providers, prioritizing their gains and leaving consumers in a precarious position — one that reduces access to care, denies coverage, and ultimately worsens health outcomes.”
The motive of the killer is unclear; a person of interest was detained Monday afternoon and identified as Luigi Mangione. But several pieces of evidence suggested a grievance against the insurance industry’s aggressive denials of coverage to sick patients. Shell casings and bullets engraved with the words “delay,” “deny,” and “depose” were found at the scene of the shooting in Manhattan, according to the New York City Police Department. A manifesto criticizing health care companies was found on the suspect when he was detained.
UHC’s Use of AI
Two years ago, the nation’s largest health insurer explored using AI to predict which post-acute care denials were likely to be appealed and which of those appeals were likely to be overturned.
Nearly a year later, UnitedHealthcare was hit with a class-action lawsuit that accused it and its subsidiary, NaviHealth, of illegally deploying AI in place of medical professionals to wrongfully deny elderly patients care owed to them under Medicare Advantage Plans.
The suit claimed that, despite a 90% error rate, UnitedHealthcare and NaviHealth “continue to systemically deny claims using their flawed AI model” because they know that only 0.2% of policyholders will appeal denied claims, and that most will pay out-of-pocket costs or forgo the remainder of their prescribed post-acute care.
At least two health care professionals said UHC’s use of AI reflects an industrywide practice in the pursuit of higher profits. “I have no doubt other insurance companies are using AI to find new ways to deny claims. The reality is that insurance companies deny things to see if anyone appeals,” Daniel Lynch, CEO of Medical Bill Gurus, said in an interview. “And a lot of denied claims are due to easily solved errors. That is the reality of health care in 2024-25.”
Democrats on the U.S. Senate Permanent Subcommittee on Investigations released a report in October claiming UnitedHealthcare’s prior authorization denial rate for post-acute care jumped to 22.7% in 2022 from 10.9% in 2020. UnitedHealthcare said the report “mischaracterizes the Medicare Advantage program and our clinical practices, while ignoring CMS (Centers for Medicare & Medicaid Services) criteria demanding greater scrutiny around post-acute care.”
Health care professionals believe the issue with AI can be resolved through policy choices, and possibly will be, as fallout from the lawsuit and Thompson’s death continues. But in the interim, the ethical use of AI may prove to be more than just an issue for health care executives.
Fearful for their personal safety, some Silicon Valley leaders, many of whom have ambitious AI plans, are turning their homes into fortresses and beefing up security details with drones, facial recognition and high-tech sensors. In recent days, Amazon.com Inc. and BlackRock have posted job listings for a vice president of executive protection.