
FEATURE

Establishing Healthcare AI Governance

Onboard AI Tools and AI Use with Policies, Procedures, and Formal Vetting Processes


By Amy Buttell


As the use of artificial intelligence (AI) expands in healthcare, health systems, insurers, and other stakeholders that implement robust AI governance programs will reap the benefits of aligning AI with organizational objectives, strengthening cybersecurity, and reducing risk. Adoption of AI—especially generative AI (gen AI)—is running ahead of governance, creating risks that could result in decreased patient safety, treatment errors, security breaches, and more. An intentionally developed AI governance policy is designed expressly to mitigate these risks.

AI adoption is rising in virtually all healthcare settings and among a wide variety of practitioners. At health systems, 65% of hospitals reported utilizing AI-assisted predictive models, according to an American Hospital Association survey. Hospitals use these models to predict inpatient health trajectories, identify high-risk outpatients, and facilitate scheduling. In clinical practice, two-thirds of physicians use AI, according to the American Medical Association. A survey conducted by McKinsey and the American Nurses Foundation revealed that more than 60% of nurses want to see more AI tools incorporated into their work.

The potential use cases for AI in hospitals continue to rise and include medical imaging and diagnostics, robotic surgery, personalized treatment plans, automated scheduling, patient flow, claims processing, billing, patient monitoring, medication management, and more. In clinical practice, physicians are employing AI to generate clinical notes, assign billing codes, monitor patients, educate patients, provide virtual assistants, record dictation, and support clinical decisions.

As AI use cases grow, federal and state regulations—and industry standards—are evolving around the use of AI in healthcare. In this article, we’ll discuss the risks of AI and how creating a governance structure around the use of AI in your organization can not only mitigate risk, but also create a positive environment to leverage AI for efficiency.

Potential AI Risks in Healthcare

The rapid changes in the technology underpinning AI and the rapid adoption of AI in healthcare create risks that need to be understood and mitigated through the creation and adoption of AI governance policies. These risks include:

Bias in AI models: Biased and inaccurate data that is fed into AI models has the potential to perpetuate systemic inequalities in healthcare. Because the information that AI produces is only as good as the data fed into it, biases based on sex, gender, race, ethnicity, age, socioeconomic status, geographic location, and more can lead to inaccurate and inappropriate diagnosis and treatment.

Privacy concerns: AI use in healthcare can lead to several different types of privacy risk. If healthcare administrators, providers, and staff bring their own AI, they may input patient or other proprietary data into public AI models. Within AI models provided by hospitals, data breaches can expose sensitive and personal information, while data sharing without full patient consent or knowledge also creates privacy risk.

Cybersecurity risks: Hospitals, healthcare providers, and insurers already create, receive, store, and transmit large amounts of data; adding AI into the mix not only increases the amount of data, but also potentially attracts bad actors who seek to exploit vulnerabilities within AI data pipelines.

Diagnostic and treatment errors: AI treatment recommendations and results are produced by algorithms that, while often highly accurate, are not perfect. Reliance on AI alone, without human interpretation, could result in diagnostic and treatment errors. Even with human judgment added, AI recommendations essentially come from a black box, which means providers are unlikely to understand the reasoning behind them. The tendency of generative AI models to hallucinate, or fabricate data, also creates risk.
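To make the bias risk above concrete, a governance team might routinely audit a model's error rates across patient subgroups. The following is a minimal sketch in Python; the group labels, toy predictions, and 5% gap threshold are all hypothetical, and a real audit would use the organization's own data and fairness criteria.

```python
# Hypothetical bias audit: compare a model's false-negative rate (FNR)
# across patient subgroups. All data and thresholds are illustrative.

def false_negative_rate(records):
    """Fraction of truly positive cases the model missed."""
    positives = [r for r in records if r["actual"] == 1]
    if not positives:
        return 0.0
    missed = sum(1 for r in positives if r["predicted"] == 0)
    return missed / len(positives)

def audit_subgroups(records, group_key, max_gap=0.05):
    """Flag subgroups whose FNR exceeds the overall FNR by more than max_gap."""
    overall = false_negative_rate(records)
    flagged = {}
    for group in sorted({r[group_key] for r in records}):
        subset = [r for r in records if r[group_key] == group]
        fnr = false_negative_rate(subset)
        if fnr - overall > max_gap:
            flagged[group] = round(fnr, 3)
    return overall, flagged

if __name__ == "__main__":
    # Toy predictions: the model misses more positive cases in group "B".
    data = (
        [{"group": "A", "actual": 1, "predicted": 1}] * 9
        + [{"group": "A", "actual": 1, "predicted": 0}] * 1
        + [{"group": "B", "actual": 1, "predicted": 1}] * 6
        + [{"group": "B", "actual": 1, "predicted": 0}] * 4
    )
    overall, flagged = audit_subgroups(data, "group")
    print(overall, flagged)  # group "B" is flagged for a higher miss rate
```

A check like this does not prove a model is fair, but it surfaces the kind of subgroup disparity a governance committee would want investigated before and after deployment.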



Mitigate AI Risks with an AI Governance Framework

AI governance is a framework of policies, procedures, and ethical guidelines to ensure the responsible development, deployment, and use of AI systems in healthcare. The framework is designed to mitigate risk, promote ethical use, and align AI with organizational mission, vision, and values. It is also designed to address legal and practical concerns, including medical professional liability.

Because AI regulation is neither uniform nor clear in the US, developing governance frameworks appropriate to your organization is imperative. Many clinicians, administrators, staff, and patients are distrustful of AI, which is another reason a transparent governance framework is important for building trust and creating an environment where AI tools contribute positively to administrative functions and clinical outcomes.

The first step in any framework is the essential question: Is AI even the right tool to solve this problem? A clear understanding of the use cases in which AI will be deployed, and of the potential benefits and costs, is an important initial step in your AI journey. Creating an AI governance committee composed of leaders from different areas, both administrative and clinical, tasked with answering that essential question and exploring all viable options before committing to an AI solution is highly recommended.

Once it is determined that AI is the right answer in any given situation, a thorough review of the options within the AI marketplace must follow to make sure that the product ultimately selected will align with the organization’s mission, vision, and values. Any system reviewed should be subject to data governance standards that ensure data completeness, representativeness, and freedom from bias.

As use cases are built for an AI system, the committee should predefine key performance indicators so everyone understands what results are expected once the system is installed and operational. An implementation framework should also be created that establishes AI usage protocols and ensures that human oversight is built into all operational workflows.
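The KPI step above can be sketched as a simple threshold check. In this Python sketch, the metric names and targets are invented for illustration; a real program would define KPIs with its clinical and administrative stakeholders and pull observed values from the deployed system.

```python
# Hypothetical predefined KPIs for an AI documentation tool, checked
# against observed metrics. Names and targets are illustrative only.

KPIS = {
    "note_turnaround_minutes": {"target": 10.0, "direction": "below"},
    "clinician_override_rate": {"target": 0.15, "direction": "below"},
    "documentation_accuracy": {"target": 0.95, "direction": "above"},
}

def evaluate_kpis(observed, kpis=KPIS):
    """Return a dict of KPI name -> 'pass'/'fail' against predefined targets."""
    results = {}
    for name, spec in kpis.items():
        value = observed[name]
        if spec["direction"] == "below":
            ok = value <= spec["target"]
        else:
            ok = value >= spec["target"]
        results[name] = "pass" if ok else "fail"
    return results

if __name__ == "__main__":
    observed = {
        "note_turnaround_minutes": 8.2,
        "clinician_override_rate": 0.22,  # above its target, so it fails
        "documentation_accuracy": 0.97,
    }
    print(evaluate_kpis(observed))
```

The value of even a trivial harness like this is that the targets are written down before go-live, so "success" is not redefined after the fact.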

There are many options in the marketplace, including off-the-shelf solutions that organizations can buy and implement quickly, and proprietary solutions that organizations can build internally, which are more expensive and labor intensive but likely to align more closely with organizational needs. Different AI solutions may suit different needs within an organization. There is also the option of buying an off-the-shelf solution and then customizing it for specific needs.

After a product has been selected and customized or created for those specific needs, the clinicians, administrators, or staff who will use the application should test it in simulated circumstances to make sure that it functions appropriately and that they know how to use it effectively. Validating the AI model’s performance will facilitate identifying hidden risks and potential performance degradation.
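One simple way to watch for the performance degradation mentioned above is to periodically compare a model's accuracy on recent labeled cases against the baseline recorded during validation. This Python sketch uses made-up numbers and a hypothetical 5% tolerance; actual monitoring would use the organization's own validation baseline and clinically meaningful metrics.

```python
# Hypothetical degradation monitor: flag a model whose accuracy on recent
# labeled cases drops more than a tolerance below its validation baseline.

def accuracy(pairs):
    """pairs: list of (predicted, actual) labels for recent reviewed cases."""
    if not pairs:
        raise ValueError("no cases to score")
    correct = sum(1 for predicted, actual in pairs if predicted == actual)
    return correct / len(pairs)

def degraded(baseline, recent_pairs, tolerance=0.05):
    """True if recent accuracy falls more than `tolerance` below baseline."""
    return baseline - accuracy(recent_pairs) > tolerance

if __name__ == "__main__":
    baseline = 0.92  # accuracy recorded at validation time (illustrative)
    recent = [(1, 1)] * 17 + [(0, 1)] * 3  # 17 of 20 recent cases correct
    print(degraded(baseline, recent))
```

Running a check like this on a schedule turns "revalidate periodically" from a policy statement into an operational routine with a defined trigger for review.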

The testing of an AI solution or solutions should be iterative as new capabilities are introduced within the AI platform and new uses are planned in the hospital environment. The process around testing should be transparent and technically sound, so that all of those affected by the incorporation of the AI can be sure of its purpose and benefit.

Within an AI governance process, an independent and centralized reporting system for AI-related issues should be developed. An independent and centralized system creates more accountability than reporting problems to individual department managers, because AI issues are likely to be systemic. Such consolidated reporting also supports compliance with regulatory updates and faster responses to unexpected problems. With a robust reporting process in place, it will be easier to assure providers, patients, and business partners of the AI's safety and efficacy.

Once a process has been established and refined, it should be subject to ongoing re-evaluation due to the ever-changing nature of the AI landscape and organizational needs. New AI models and platforms are coming online in the marketplace continuously. In addition, AI companies will be acquired by others and may also go out of business. Organizational needs are evolving at the same time as the technology, and the players in those technological spaces are changing. By establishing a framework and timeline to re-evaluate AI use cases and technology partners, users can ensure that AI remains aligned with their needs and risk tolerances.

Mitigate AI Risk with Robust Governance Policies

If undertaken correctly, AI governance policies can mitigate AI risk by safeguarding patient safety, maintaining ethical standards, fostering regulatory compliance, promoting trust through transparency and accountability, and managing privacy and legal concerns. Ethical use of AI in healthcare demands appropriate protections, which robust governance policies provide.


 


Amy Buttell is the editor of Inside Medical Liability Online.