
Pharma Case


By Ben Penn

January 29, 2024



Justice Department investigators are scrutinizing the healthcare industry’s use of AI embedded in patient records that prompts doctors to recommend treatments.


Prosecutors have started subpoenaing pharmaceutical makers and digital health companies to learn more about generative technology’s role in facilitating anti-kickback and false claims violations, said three sources familiar with the matter. The inquiries come as electronic health record vendors integrate more sophisticated artificial intelligence tools to match patients with particular drugs and devices.


It’s unclear how advanced the cases are or where they fit in the Biden administration’s initiative to spur innovation in healthcare AI while regulating for safeguards. Two of the sources, speaking anonymously to discuss ongoing investigations, said DOJ attorneys are asking general questions, suggesting they may still be formulating a strategy.


“I have seen” civil investigative demands “that ask questions about algorithms and prompts that are being built into EMR systems that may be resulting in care that is either in excess of what would have otherwise been rendered, or may be medically unnecessary,” said Jaime Jones, who co-leads the healthcare practice at Sidley Austin. DOJ attorneys want “to see what the result is of those tools being built into the system.”


A Justice Department spokesman declined to comment.

The technology relies on algorithms that mine health data, spot trends, and identify patients who may have certain conditions and be eligible for remedies physicians might not otherwise consider. That can help save lives and make healthcare delivery more efficient, while also making it easier for profit-seekers to abuse AI in peddling their products to doctors.
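To make the mechanism concrete, here is a minimal sketch of the kind of record-mining screening rule described above. The field names and record layout are hypothetical, invented only for illustration; real EHR tools are vendor-specific and far more complex.

```python
# Minimal sketch of a rule-based screening pass over patient records.
# The field names ("diagnoses", "latest_a1c") and record layout are
# hypothetical, chosen only to make the example concrete. (An A1c of
# 6.5% or higher is the standard diagnostic threshold for diabetes.)

def flag_candidates(patients, a1c_threshold=6.5):
    """Return IDs of patients whose records suggest undiagnosed type 2 diabetes."""
    candidates = []
    for p in patients:
        already_diagnosed = "type_2_diabetes" in p.get("diagnoses", [])
        latest_a1c = p.get("latest_a1c")  # most recent hemoglobin A1c result, if any
        if not already_diagnosed and latest_a1c is not None and latest_a1c >= a1c_threshold:
            candidates.append(p["id"])
    return candidates

patients = [
    {"id": "pt-001", "diagnoses": [], "latest_a1c": 7.1},
    {"id": "pt-002", "diagnoses": ["type_2_diabetes"], "latest_a1c": 7.4},
    {"id": "pt-003", "diagnoses": [], "latest_a1c": 5.6},
]

print(flag_candidates(patients))  # ['pt-001']
```

An alert layer built on top of a screening function like this is what turns passive analytics into the kind of prescriber prompt now drawing prosecutors’ attention.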


At least three publicly traded pharma giants—GSK Plc in 2023, AstraZeneca Plc in 2020, and Merck & Co. in 2019—disclosed to shareholders that they had been served DOJ subpoenas related to electronic medical records. The department hasn’t announced resolutions with the companies.


Purdue Model

The probes bring fresh relevance to a pair of 2020 criminal settlements with Purdue Pharma and its digital records contractor, Practice Fusion, over their collusion to design automated pop-up alerts pushing doctors to prescribe addictive painkillers.


The theory behind the kickback scheme, which led to a $145 million penalty for Practice Fusion, was pioneered by an enterprising federal prosecutor in the Vermont US attorney’s office, one of the smallest in the nation. He found that marketers from Purdue, which pleaded guilty and paid $8.3 billion, worked in tandem with Practice Fusion to build algorithm-driven clinical decision alerts.


Four years later, the AI tools now on the market can produce far more problematic results, even as they hold potential for diagnostic breakthroughs, attorneys say.


“The risk of harm is greater because it can metastasize quite a bit quicker without any checks on it,” said Owen Foster, who spearheaded investigations against Practice Fusion and four other EMR vendors—all of which ended with steep penalties—before leaving the Vermont US Attorney’s office in 2022.


“Even in Practice Fusion, there were some levels of effort at compliance, whereas if you have AI rewriting code and putting out different alerts, that can happen without any review, and that’s really where harm can happen fast and deep,” added Foster, who now represents whistleblowers.


Today, Foster still battles a healthcare defense bar with whom he agrees on at least one issue: Practice Fusion is a harbinger of where US prosecutors are likely headed in grappling with AI’s ability to assess patients—and the legal liability for companies that benefit.

“That seems to me to be like two cars just slowly crashing into each other, because I don’t know how generative AI can work and thrive under the current applications of the False Claims Act and, more squarely, the Anti-Kickback Statute,” said Michael Shaheen, a former DOJ civil fraud attorney who’s now a partner at Crowell & Moring. “Practice Fusion is kind of the poster child for how it could go down.”


Bigger Challenge

The 1972 anti-kickback law forbids the exchange of anything of value for the purpose of inducing healthcare business, and is frequently used as a predicate in civil FCA cases, which carry treble damages and allege the government was billed for fraudulent claims.


The automated nature of AI can make it challenging for investigators to trace criminal willfulness on the scale Foster found at Practice Fusion and Purdue—which included prompts informed by inaccurate data inputs. But the vendor’s civil violations are seen by industry lawyers as more fertile ground for enforcement.


Even in civil cases, it may be difficult for the department to establish that companies and individuals were responsible. DOJ lawyers can look for internal emails discussing the AI’s design, or for evidence that vendors ran return-on-investment projections. They’re also likely to conduct statistical analyses of the AI’s impact on scripts, or count on coders to step forward as whistleblowers, former prosecutors say.
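One simple form such a statistical analysis could take, sketched here with invented counts rather than data from any actual case, is a two-proportion z-test comparing prescribing rates before and after an alert went live:

```python
from math import sqrt, erf

# Two-proportion z-test comparing prescribing rates before and after an
# alert's rollout. All counts below are invented for illustration.

def two_proportion_z(x1, n1, x2, n2):
    """Return the z statistic and two-sided p-value for p2 - p1."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Hypothetical: 300 of 10,000 eligible visits ended in a prescription before
# the alert went live; 450 of 10,000 did afterward.
z, p = two_proportion_z(300, 10_000, 450, 10_000)
print(f"z = {z:.2f}, two-sided p = {p:.2g}")
```

A sharp, statistically significant jump in prescriptions coinciding with an alert’s rollout wouldn’t prove intent on its own, but it’s the kind of signal that could focus an investigation.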


“Where would you find the fingerprints?” asked Nathaniel Mendell, a partner at Morrison & Foerster and the former acting US attorney in Boston. He’s been gaming out with clients how a Practice Fusion-modeled investigation would apply to current AI.


“As opposed to even a sophisticated algorithm, AI makes it more difficult to trace those breadcrumbs,” Mendell said.


For instance, the AI could study how physicians respond to the alerts and get smarter, adjusting the wording to drive different desired outcomes.
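As a rough illustration of that feedback loop, here is a minimal sketch of an epsilon-greedy bandit choosing among hypothetical alert wordings based on how often physicians act on them. The wordings, counters, and acceptance rates are all assumptions for illustration, not a description of any deployed system.

```python
import random

# Sketch of an alert engine that "gets smarter": an epsilon-greedy bandit
# that learns which of several hypothetical alert wordings physicians act
# on most often. Everything here is invented for illustration.

WORDINGS = [
    "Consider screening this patient for condition X.",
    "Guidelines suggest therapy Y may be appropriate here.",
    "Similar patients often benefit from treatment Z.",
]

shown = [0] * len(WORDINGS)     # times each wording was displayed
accepted = [0] * len(WORDINGS)  # times the physician acted on the alert

def choose_wording(epsilon=0.1):
    """Mostly show the best-performing wording; occasionally explore others."""
    if random.random() < epsilon or not any(shown):
        return random.randrange(len(WORDINGS))
    rates = [a / s if s else 0.0 for a, s in zip(accepted, shown)]
    return max(range(len(WORDINGS)), key=rates.__getitem__)

def record_response(choice, physician_accepted):
    """Update the counters after observing the physician's response."""
    shown[choice] += 1
    if physician_accepted:
        accepted[choice] += 1

# Simulated run: wording 2 is (hypothetically) accepted most often, so the
# bandit gradually converges on it.
for _ in range(1_000):
    i = choose_wording()
    record_response(i, random.random() < [0.10, 0.15, 0.30][i])

print(shown, accepted)
```

Run inside an EHR, a loop like this would steadily shift toward whichever wording drives the most prescriptions, with no human reviewing the variants along the way, which is precisely the unchecked adaptation Foster warns about.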


Growth Capacity

Prosecutors said in a court filing that Practice Fusion entered into unlawful agreements with multiple other pharmaceutical manufacturers to develop clinical decision tools. Purdue is the only one that’s been identified, leaving plenty of room for further enforcement.


“Historically, the focus of enforcement was on the EHR vendors themselves, but as this industry grows and there are more and more sponsored digital health programs by pharmaceuticals and medical device manufacturers, there is room for DOJ’s enforcement to really evolve in this area,” said Samantha Badlam, a partner at Ropes & Gray.


Although some cases have been in the pipeline for several years, history suggests that whenever DOJ enforcers start responding to new technology, it takes time to see results, said Manny Abascal, a partner at Latham & Watkins who represented one of the EHR vendors that settled with the Vermont office.


“I think the AI false claims investigations will be like a 2027, 2028 problem,” Abascal said.


‘Absolute Boon’

That’s not stopping Abascal and others from counseling healthcare companies to carefully structure their AI—and their relationships with business partners—to avoid becoming targets of prosecutors.


“It is very ripe for enforcement because any slight manipulation of the inputs or the outputs of AI that have a consequence for clinical decisionmaking” can be “an absolute boon for you as a manufacturer of pharmaceuticals or medical devices,” said Kyle Faget, who co-chairs Foley & Lardner’s health care practice.


“The predictive AI may be able to tell you, ‘hey, these patients are all at risk for x, y, and z, and that should influence their care plan in the following ways.’ I think that’s the world we’re headed toward,” Faget said. “But again, you have to be so careful about the inputs—what are the questions that you’re asking to get that end result, and are the assumptions correct?”


Purdue’s sponsorship of Practice Fusion alerts delivered what Foster called the “holy grail” for Big Pharma: a manufacturer effectively standing over doctors’ shoulders in the exam room as they considered what to prescribe.


It’s a cautionary tale today, even as AI’s use in medical records software is touted in medical research for its potential benefits to doctors and patients.


“If in fact AI’s a good faith effort to improve medicine, that’s great and it’s probably very effective,” said Daniel Anderson, who retired in 2019 as deputy director of DOJ’s civil fraud section.


“If there are incentives being paid to favor one pill over another pill,” added Anderson, who coordinated some of the EHR investigations with Foster, “then a red flag immediately goes up.”


What is the digital divide? How can it impact your health? Why?







