For nearly 50 years, artificial intelligence (AI) has been making inroads into healthcare. Since the 1970s, AI systems have been used to screen for, detect, diagnose, and predict diseases, illnesses, and other healthcare events and conditions. AI is also used to improve workflow efficiency and handle routine, repetitive tasks within healthcare.
At its core, AI is a field of knowledge within computer science. Its objective is to simulate human intelligence with computers—hence the name, artificial intelligence. AI is widely used today by scientists, healthcare providers, insurers, consumers, and many others to perform tasks and obtain information. Most Americans interact with AI on a daily basis as they rely on a variety of applications to generate directions, receive product recommendations, obtain information from customer service chatbots, categorize email, and more.
While AI in general is widely embedded in our daily lives and in healthcare, a newer subset of AI known as generative AI, driven by large language models, is much more recent. When OpenAI released ChatGPT in November 2022, a generative AI boom ensued. More than one million users signed up in the first week after the app’s release; by the fall of 2023, there were an estimated 96 million visitors to the ChatGPT website per month. A blizzard of generative AI apps has already been developed or is in development within the healthcare space.
This article, the first in a three-part series, is designed to explain the various types of artificial intelligence; their uses, or potential uses, in healthcare; and the risks of AI in healthcare. The series will attempt to demystify AI—in particular generative AI—while offering insights into how this technology will affect healthcare and the MPL industry. In part two, we’ll offer analysis of the potential impact on healthcare and MPL; part three will explore how MPL insurers can use generative AI to create efficiencies.
Types of AI Used in Healthcare
There are many subsets of AI. Within healthcare, at least five major types are currently in use:
- Generative AI: Generates original content, including text, images, music, code, and more, using large language models trained on vast amounts of data. Healthcare applications include automating scribing, pre-authorizations, and appointment scheduling; accelerating drug development; improving post-visit compliance; organizing, retrieving, and synthesizing complex medical facts, notes, and research; streamlining revenue cycle management; and more.
- Machine learning: Enhances statistical modeling to allow models to learn through training with data. Healthcare applications include recognizing cancer in radiology imaging and determining the likelihood that a patient will contract a particular illness.
- Natural language processing: Allows computers to analyze and process human language to perform repetitive tasks. Healthcare applications include preparing reports on examinations and analyzing clinical notes.
- Rule-based expert systems: Apply sets of “if-then” rules within a particular domain. Healthcare applications include electronic health record systems and clinical decision support applications.
- Robotic process automation: Performs structured administrative digital tasks involving information systems. Healthcare applications include scheduling appointments, managing claims, and self-service check-ins.
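The “if-then” logic behind rule-based expert systems can be sketched in a few lines of code. The thresholds, drug names, and alert wording below are hypothetical, chosen purely for illustration and not as clinical guidance:

```python
# Minimal sketch of a rule-based clinical alert system: each rule is an
# explicit "if-then" check over structured patient data. Real clinical
# decision support systems encode hundreds of vetted rules.
def check_rules(patient):
    alerts = []
    # Rule 1: flag dangerously high systolic blood pressure (illustrative threshold).
    if patient.get("systolic_bp", 0) >= 180:
        alerts.append("Hypertensive crisis: notify physician")
    # Rule 2: flag an order that conflicts with a recorded allergy (illustrative pairing).
    if "penicillin" in patient.get("allergies", []) and \
       "amoxicillin" in patient.get("orders", []):
        alerts.append("Allergy conflict: amoxicillin ordered for penicillin-allergic patient")
    return alerts

print(check_rules({"systolic_bp": 185,
                   "allergies": ["penicillin"],
                   "orders": ["amoxicillin"]}))
```

Because every rule is written out by hand, this kind of system is transparent and auditable, but it cannot generalize beyond the rules its developers anticipated.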
What Is Generative AI?
Generative AI is distinct from other types of AI in that it establishes a conversational interface between a large language model and the user, generating new outputs such as text, pictures, code, music, audio, video, and simulations in response to user questions. While traditional AI can perform narrow sets of tasks with intelligence based on data, generative AI can actually create new content based on user queries. In other words, “generative AI models are trained on a set of data and learn the underlying patterns to generate new data that mirrors the training set,” according to Forbes; the difference between traditional AI and generative AI is that “traditional AI excels at pattern recognition, while generative AI excels at pattern creation.”
Any consumer with a device and an internet connection can, for example, log onto ChatGPT or Bard and ask a question. No other type of AI, to date, has this capability. This capability derives from the content used to train a specific generative AI. What does that mean? Developers feed gigantic amounts of existing content to one of these models. The model then analyzes all of that data to the point where it can generate predictions based on the words used in a user question. Essentially, the models learn from the content they are trained on to respond—more or less accurately—to user questions or prompts.
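The prediction idea described above can be made concrete with a deliberately tiny sketch. Real large language models use neural networks with billions of parameters, not simple word counts, but the core intuition is the same: learn from training text which words tend to follow which, then use those learned patterns to continue a prompt. The toy corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows each word in a
# tiny training corpus, then "predict" the most frequent continuation.
corpus = "the patient was seen by the doctor and the doctor ordered a scan".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Return the word most often observed after `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "doctor" follows "the" more often than "patient" does
```

Scaled up from one sentence to terabytes of text, this learn-the-patterns-then-predict loop is what lets a generative AI produce fluent answers, and also why its answers are only as good as the data it was trained on.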
Generating accurate word patterns, as in the case of text-based generative AI, requires an immense amount of data. For example, experts estimate that GPT-3, the model behind the original ChatGPT, was trained on 45 terabytes of text data, which is equivalent to a quarter of the entire Library of Congress or one million feet of bookshelf space. This requires a lot of raw computing power, which is a major reason why OpenAI teamed up with Microsoft and why most other tech behemoths are involved in generative AI to one degree or another.
How Generative AI Is Employed
Generative AI models have many business and consumer applications. Organizations are rushing to monetize generative AI for the potentially lucrative B-to-B market. For example, Amazon Web Services offers three applications designed to help companies leverage generative AI for their own purposes. Llama 2, a family of large language models offered by Meta and Microsoft, is designed for developers to build chatbots and other generative AI applications. In other words, you, as a consumer, wouldn’t hop on Llama 2 to ask for a dinner recipe the way you would with ChatGPT or Bard.
Within healthcare, Google’s large language model, Med-PaLM 2, is trained specifically on medical data. In fact, the model achieved a score of 86.5% on USMLE-style medical licensure exam questions. It won’t be used in patient-facing situations, and personal patient data won’t be used to train it. The Mayo Clinic, HCA, and Meditech are among the organizations testing the application.
Notable start-ups, among many in the marketplace, capitalizing on generative AI’s large language models in healthcare include:
- Abridge, which helps providers write notes
- Syntegra, which creates privacy-preserving synthetic patient data
- Atropos Health, which provides evidence for clinical decisions and research
- Navina, which organizes patient data from multiple sources and presents it to providers
- Subtle Medical, which improves the clarity and speed of medical imaging
Essentially, generative AI has applications in many healthcare delivery domains, including consumer, continuity of care, network and market insights, clinical operations, clinical analysis, quality and safety, value-based care, reimbursement, and administrative functions. For example, generative AI applications could improve operating room management by predicting operating room use, strengthening risk management, and delivering real-time analytics to increase utilization.
On the consumer side, generative AI apps offer shortcuts over traditional search engines. For example, while you can google a recipe for chicken for dinner tonight, you could instead list the ingredients available in your kitchen right now and ask ChatGPT or Bard to generate a recipe for you. Need an Excel formula for a specific purpose? Ask generative AI and you’ll get one. Don’t know how to code? No worries—you can get code from consumer generative AIs. Wary of being made redundant by generative AI, search engine providers are already adopting generative AI features.
Risks of Generative AI in Healthcare
No conversation about AI—or generative AI—in healthcare can be complete without a discussion about risk. The transformational nature of generative AI, as well as its popularity, means that it’s potentially a lawsuit waiting to happen. Part two of this series will explore many of these risks and include commentary from industry-leading experts.
The risks are potentially so profound that the World Health Organization (WHO) issued a statement calling for “safe and ethical AI for health.” “While the WHO is enthusiastic about the appropriate use of technologies, including [large language models] LLMs, to support healthcare professionals, patients, researchers, and scientists, there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with LLMs. This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.”
“Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world,” the statement continued.
In a look ahead to our next article on AI, here is a list of potential risks specific to generative AI in healthcare:
- Cyber risks: From the disclosure of sensitive company information to generative AI chatbots, to the potential for bad actors to inject malicious information into training data, generative AI can introduce new cyber risks within your organization.
- Privacy risks: Large language models interfacing with real-world patient data could be vulnerable to cyber-attacks or reverse engineering that could disclose or reconstruct patient data.
- Credibility and accuracy risks: Large language models may generate inaccurate information, known as hallucinations, undermining their credibility and supplying potentially damaging inaccurate healthcare information.
- Bias risks: Data used to train large language models may be biased, leading to perpetuating and even amplifying bias in treatment, potentially widening healthcare disparities.
- Sourcing risks: Large language models often fail to cite sources for the information they offer, and the citations they do supply are frequently hallucinated; as a result, users can’t determine where the information came from or whether it is credible.
- Accountability risks: As a new technology, generative AI has no established norms for assigning fault among the user, the owner, and the developer when negative consequences occur.
Next AI Frontier
Clearly, the era of generative AI is just beginning, with many opportunities and challenges for healthcare and related fields. Still, generative AI is not the last word. Apple, in its latest generation of iPhones, has promoted intuitive AI. Intuitive AI is designed to augment and support human intelligence in a more unobtrusive way than generative AI.
Part two in the Inside Medical Liability Online generative AI series will explore the implications of generative AI in healthcare for medical professional liability.