At a Glance

  • AI is increasingly used to diagnose and treat disease, but it still has a long way to go to win public acceptance and understanding.
  • Health insurance, health care, digital health, and pharmaceutical companies are facing scrutiny from litigators, government agencies, and Congress for their use of AI in determining coverage and matching patients with drugs and devices.
  • International legal frameworks are being drawn up to weigh the risks and benefits of AI use.
  • Health sector stakeholders need to educate themselves about AI oversight, law, and regulation, and will need to engage regulators, policymakers, and patients to explain their AI use, their privacy safeguards, and the benefits and risks to patients.


The Issue

Parties using newly emerging technologies, such as AI or gene-based medicine, still have a long way to go in building public trust and acceptance, according to Edelman’s 2024 Trust Barometer. The recently released report found respondents “at a crossroads,” enthusiastic about and resistant to these innovations in roughly equal measure. Perhaps nowhere is this dynamic more evident than at the intersection of tech and health: the growing use of AI to help diagnose disease, determine eligibility for care, and guide treatment.


Cases of Litigation

A recent story in Bloomberg Law notes a wave of new litigation against US health insurance companies “for allegedly deploying advanced technology to deny claims.” The lawsuits center on private Medicare Advantage plans that use AI or algorithmic software to help determine coverage. These plans are under fire from lawmakers, regulators, and patients “for using advanced approval—or ‘prior authorization’—to deny coverage that’s typically granted in fee-for-service Medicare.” The issue predates AI, but lawmakers say the technology may have exacerbated it.


DOJ Joins the Scrutiny

The Department of Justice is now scrutinizing health care companies for their use of AI. In a separate story, Bloomberg reports on the industry’s use of AI embedded in patient records to prompt doctors with recommended treatments, an effort by electronic health record vendors to match patients with particular drugs and devices. The article notes: “Prosecutors have started subpoenaing pharmaceutical and digital health companies to learn more about generative technology’s role in facilitating anti-kickback and false claims violations.”

These practices use AI to help determine and direct patient care. Done well, such strategies can produce better outcomes by matching patients’ medical needs to state-of-the-art treatments and ensuring the most appropriate care is chosen; they may also reduce health care costs.

At the same time, as Edelman’s Trust Barometer data shows, the public is bound to have questions about how the technology is used and will need to be convinced that the benefits outweigh the costs and risks. The close attention now coming from litigators, government health care agencies, and Congress not only poses legal threats to such uses of AI but may also create a negative narrative around them, shaping public attitudes that are still being formed.


Global Implications

Other global players are already moving on AI regulation. The recently released draft EU Artificial Intelligence Act, for one, creates a broad legal framework for the appropriate use of AI technologies, identifying areas of risk and weighing them against benefits (more EGA analysis of the EU AI Act here). Notably for the health care sector, the draft EU AI Act proposes categorizing “automated insurance claims” as a “high-risk” application of AI and subjecting it to greater scrutiny and regulatory requirements. In the US, meanwhile, the Food and Drug Administration has yet to approve any device featuring generative AI or large language models, but new frameworks are coming and the agency’s role is set to grow more active, even as it calls for increased transparency into a “product’s intended use, development, performance, and logic.”

The key takeaway is that a broad range of health sector stakeholders who are already using AI in their business operations, or are just considering it, especially where they interface with patients, need to educate themselves about current trends in AI oversight, law, and regulation. They will also need to engage regulators, policymakers, and, not least of all, patients to explain how they are using the new technology, how they are safeguarding privacy and health in doing so, and what the potential benefits and risks to patients are. Their challenge is to leverage the potential of AI while addressing concerns around trust.

EGA’s health experts can help companies and other stakeholders navigate emerging legal, policy, and public trends to earn trust and be responsible shapers of both policy and public attitudes toward AI technologies. 


Joe Damond leads Edelman Global Advisory, Health Policy and Public Affairs, an international team of professionals dedicated to helping clients achieve their ambitions for a healthier world and a stronger economic stake in the strategic health care, life sciences, and innovative biopharmaceutical sectors. Our team is at work in advanced markets as well as in newly emerging regions of the global biopharma ecosystem, including the Middle East, Asia Pacific, Africa, and Latin America.