Medicine Meets AI: Artificial Intelligence Has Potential in Patient Care
By Alisa Pierce | Texas Medicine | October 2023


Artificial intelligence (AI)-driven technology isn’t a futuristic concept for League City family physician Priya Kalia, MD.

In fact, as an assistant clinical professor of family medicine at The University of Texas Medical Branch at Galveston, Dr. Kalia often experiments with ChatGPT – the AI chatbot built on the latest and largest language models from the San Francisco-based company OpenAI – and even uses it to draft recommendation letters for medical students headed to residency.

“Writing a recommendation letter would usually take me hours,” she said. “But ChatGPT can write one in under a minute. And, considering that, I often wonder how efficient this program could make health care if we figured out how to use it to our advantage.”

ChatGPT is an AI chatbot built on a family of large language models known as “generative pretrained transformers,” or GPTs. These models are trained on data from the internet – including articles, websites, and social media – to understand and generate human-like responses to text prompts.

Following its release in November 2022, ChatGPT became the fastest-growing consumer application in history, according to a February 2023 article in the Journal of Medical Internet Research, gaining more than 1 million users in its first few days and 100 million within its first two months.

And for good reason, says Dr. Kalia.

“ChatGPT is free, easy to use, and trained to follow instructions and provide detailed responses, enabling it to converse on a variety of topics, including medicine,” she said.

In health care, ChatGPT and platforms like it have the potential to automate daily tasks, like generating patient records, writing letters of medical necessity, or drafting home care instructions for patients.

AI technology has been integrated into multiple electronic health record (EHR) systems for those very tasks, including Epic, Doximity, eClinicalWorks, and Athenahealth, whose GPT-based platforms can format clinical documentation and common medical correspondence, among other benefits.

And while still in the early stages of development and use in medicine, ChatGPT and similar programs may feature diagnostic applications, says a May 2020 Elsevier study.

ChatGPT’s training enables it to generate responses that, in its own words, emulate a “friendly and intelligent robot” informed on many topics.

But can the program rival the knowledge of physicians?

The answer is not just no, but “of course not,” said San Antonio radiologist Zeke Silva, MD.

Dr. Silva uses AI language platforms in his practice under his own direct supervision and that of the health care professionals on his staff – training the AI technology on how to improve health care and training clinicians on how to improve the AI.

“Much like ChatGPT, our system uses natural language processing to look at text,” he said. “We use voice recognition software to dictate a study and then use my dictations to train our algorithm to predict how I will comment on certain findings.”
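The general idea Dr. Silva describes – using a physician’s prior dictations to predict how he will comment on a given finding – can be illustrated with a minimal sketch like the one below. This is not his practice’s actual system; the report snippets and function names are invented, and the sketch simply retrieves the prior comment attached to the most similar past finding.

```python
# A minimal sketch, not the actual system described above: suggest a comment
# for a new finding by retrieving the most similar previously dictated finding.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical (invented) pairs of findings text and the dictated comment.
past_reports = [
    ("small acute subdural hematoma along the right convexity",
     "Small right subdural hematoma; recommend short-interval follow-up CT."),
    ("no acute intracranial hemorrhage or mass effect",
     "No acute intracranial abnormality."),
    ("ground-glass opacity in the right lower lobe",
     "Nonspecific ground-glass opacity; recommend clinical correlation."),
]

vectorizer = TfidfVectorizer()
finding_matrix = vectorizer.fit_transform([text for text, _ in past_reports])

def suggest_comment(new_finding: str) -> str:
    """Return the comment attached to the most similar past finding."""
    query = vectorizer.transform([new_finding])
    scores = cosine_similarity(query, finding_matrix)[0]
    return past_reports[scores.argmax()][1]

# The physician reviews the suggestion before it enters the report.
print(suggest_comment("acute subdural hematoma over the left convexity"))
```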

By using physician-prompted AI, Dr. Silva can analyze large volumes of data more efficiently, potentially helping him to identify health risks and craft personalized treatment plans sooner.

Dr. Silva stresses, however, that these gains do not speak to the medical intelligence of AI. Rather, ChatGPT’s medical success is a result of physicians training the technology. Without that training, he says, AI would not be able to discern randomly pulled information from applicable medical data.

“ChatGPT might be capable of regurgitating information from the internet, but it does not take individual experience or real-life medical training into account,” he said. “This technology cannot yet ensure it’s representative of patients with a wide range of personal needs.”

As a member of TMA’s Council on Legislation and an American Medical Association alternate delegate, Dr. Silva recognizes the need for further testing of ChatGPT and other AI systems before their use becomes widespread in health care. And organized medicine has urged lawmakers to adopt policy addressing the technology.

“AI as a technology and as a science is evolving,” he said. “Policy must evolve with it. What are the guardrails for potential downsides, like misinformation, poor outcomes for patients, or ethical violations? I think policymakers should proactively answer some of those questions.”

Physician inventors needed

Dr. Kalia began experimenting with AI as a project within TMA’s Leadership College. She used ChatGPT to draft treatment protocols for herself and medical professionals to follow when prescribing medication and intervention options for patients with obesity and type 2 diabetes.

She did this by feeding prompts on obesity management to ChatGPT so it could generate treatment guidance for staff. From there, she entered that guidance into her EHR so staff unfamiliar with obesity protocols could follow it when determining which symptoms to look for and which treatments to recommend.
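As a rough illustration of that kind of workflow, the sketch below sends an obesity-management prompt to OpenAI’s API and prints the resulting draft for physician review. The model name, prompt wording, and surrounding workflow are assumptions for illustration only – not Dr. Kalia’s actual setup – and any output would still need physician verification before reaching staff or an EHR.

```python
# A minimal sketch of drafting protocol guidance with the OpenAI API.
# Model name and prompt are illustrative assumptions, not a clinical tool.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Draft step-by-step guidance for clinic staff on the initial evaluation "
    "of an adult patient with obesity and type 2 diabetes, including which "
    "symptoms and labs to review before the physician visit."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You assist a family medicine clinic."},
        {"role": "user", "content": prompt},
    ],
)

draft = response.choices[0].message.content
print(draft)  # the physician reviews and edits this draft before any clinical use
```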

Although the project was experimental, Dr. Kalia says having those guidelines available allowed staff to work more autonomously without needing repeated guidance and, in turn, gave her more time with her patients.

“The extra time AI technology provides could drastically reduce physician burden and create more meaningful interactions with patients,” she said. “Imagine not having to spend hours on documentation or replying to a heavy inbox. That’s the luxury ChatGPT can provide with the right oversight.”

However, AI is notoriously bad at understanding “two critical elements for safe and effective patient care,” according to a February 2023 study published in the interdisciplinary journal PLOS Digital Health. Those elements – context and nuance – require the use of medical knowledge, concepts, and principles in real-world settings.

Furthermore, a June 2023 research letter in JAMA Network Open established that when it comes to medical crises, ChatGPT provides critical resources (like a 1-800 helpline number) to users only 22% of the time – suggesting the software is far from ready for patient-facing use.

“AI extracts data that has the highest chance of being correct,” Dr. Kalia said. “It pulls from statistics but not from actual patient experiences. The most common diagnosis for a young patient with chest pain may be sore muscles, but what if they’re an outlier? What if they’re experiencing a pulmonary embolism or heart attack? ChatGPT may not consider those options due to internet data suggesting they’re less frequent.”

Instead of relying on AI to grow on its own, Dr. Silva suggests physicians tweak AI algorithms with their medical knowledge and fix bugs along the way – almost like an inventor.

“How do we build confidence in our patients so that they’re trustful of an algorithm?” he said. “We first monitor its accuracy and its precision ourselves.”


Augmented, not artificial

With that balance in mind, TMA developed policy in 2022 outlining why AI should be used as an augmented tool set. Whereas “artificial intelligence” implies a system that drives decisions on its own, augmented intelligence is positioned as an enhancement aid that defers to human knowledge and judgment.

“We use the term ‘augmented intelligence’ versus ‘artificial intelligence’ to ensure that physicians know this is supposed to be a tool rather than a replacement,” said Manish M. Naik, MD, chair of TMA’s Committee on Health Information Technology. “We still want physicians, with their clinical training, to review and verify the validity and the accuracy of the automated response.”

This distinction is especially important in health care.

The Food and Drug Administration’s (FDA’s) 2021 action plan for AI-based medical devices requires “real-world performance monitoring” for AI and other machine learning-based software when used as a medical device. FDA policy outlines its position that the technology should complement the opinions of specialists rather than substitute their knowledge.

This can mean that if a physician uses an AI-enabled medical device for the diagnosis or treatment of a patient and the use deviates from an established standard of care, the physician could be held liable for improper use, says the multi-disciplinary journal Milbank Quarterly in a September 2021 article.

And according to a July 2021 survey in the Journal of the American Medical Informatics Association, 66% of patients believe that physicians should be held responsible for errors that occurred during AI-instructed care as opposed to the manufacturer (which would be subject to product liability, if applicable).

“If an algorithm has a 70% threshold of being correct, 30% of the time it fails,” Dr. Silva said. “Now imagine I listen to AI without addressing that lack of knowledge. Who’s liable, me or the machine?”

The answer remains unclear. For now, TMA’s augmented intelligence policy warns:

“Sufficient safeguards should be in place to assign appropriate liability inherent in augmented intelligence to the software developers and not to those with no control over the software content and integrity, such as physicians and other users.”

TMA policy also cautions that clinicians should be aware of the privacy implications of using ChatGPT and other AI software.

OpenAI says it accepts business associate agreements to support physicians’ HIPAA compliance, but additional criteria must be met.

Without HIPAA compliance, there is a risk of a patient’s protected health information being accessible to OpenAI employees or being used to train ChatGPT further, TMA policy cautions.

“Sellers and distributors of augmented intelligence should disclose that it has met all legal and regulatory compliance with regulations such as, but not limited to, those of HIPAA, the U.S. Department of Health and Human Services, and the U.S. Food and Drug Administration,” TMA policy states. “Use of augmented intelligence, machine learning, and clinical decision support has inherent known risks. These risks should be recognized and shared among developers, distributors, and users with each entity owning responsibility for its respective role in the development, dissemination, and use of products used in clinical care.”

AMA also adopted new policy in 2023 supporting the use of AI systems if properly regulated by physicians and health care professionals with AI expertise, and if those systems comply with federal, state, and medical practice licensure laws.

The policy calls for individuals or entities with knowledge of an AI system’s risks to avert or mitigate harm through “design, development, validation, and implementation,” and for developers of AI systems with clinical applications – such as screening, diagnosis, and treatment – to accept potential liability.

“Where a mandated use of AI systems prevents mitigation of risk and harm, the individual or entity issuing the mandate must be assigned all applicable liability,” the policy states.

Dr. Kalia believes such policy is a step in the right direction.

“I would not use ChatGPT as a tool to diagnose just as I wouldn’t use Google,” she said. “AI is in its infantile phase. For now, augmented use is what I recommend.”

Room for upgrades

Nevertheless, the software provides a unique opportunity for both physicians and technology to grow, she says.

“This is why we call it practicing medicine. We practice with new technology and find solutions to make things better for everyone in medicine.”

Data are starting to agree.

An April 2023 study in JAMA Internal Medicine found that evaluators rated ChatGPT-generated responses to patient questions as more empathetic than physicians’ responses, with good-quality responses roughly 3.6 times as prevalent for the chatbot.

While those findings may sound upsetting at first, physicians can learn from ChatGPT to be more “human-like” instead of relying on medical jargon, says Dr. Kalia.

“Medical terminology used by doctors sometimes isn’t translated to language a patient can understand,” she said. “That’s where ChatGPT comes in to explain those confusing terms. That’s the beauty of this large language model.”

For Dr. Silva, a radiologist, AI presents an opportunity to examine images more efficiently. He can, for example, use imaging studies he has already interpreted to teach the technology what to look for in a given body part.

“I can take CAT scans that I’ve read in the past that have bleeds present and teach the algorithm what that looks like,” he said. “Now, when I feed it new data, it can look at that imaging and point out future bleeds to the interpreting physician.”
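A supervised approach of the kind Dr. Silva describes – teaching an algorithm what a bleed looks like from scans he has already read – might be sketched as follows. The network, image sizes, and training data here are invented placeholders rather than the actual system; real use would require curated, de-identified imaging, rigorous validation, and regulatory clearance.

```python
# A minimal sketch, not the actual system: a small CNN that flags CT slices
# likely to contain a bleed, trained on slices labeled from prior reads.
import torch
import torch.nn as nn

class BleedClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 64 * 64, 1),  # assumes 256x256 single-channel slices
        )

    def forward(self, x):
        return self.head(self.features(x))  # raw logit; > 0 suggests a bleed

model = BleedClassifier()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a fake batch standing in for labeled slices.
slices = torch.randn(8, 1, 256, 256)          # placeholder for preprocessed CT slices
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = bleed present in the prior read
optimizer.zero_grad()
loss = loss_fn(model(slices), labels)
loss.backward()
optimizer.step()
```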

According to a September 2019 study in the Journal of the National Cancer Institute, AI also can ease the burden of organizing the many medical images radiologists track by identifying vital pieces of a patient’s history and presenting only the relevant images to physicians, saving time for both the physician and the patient.

While those advances are impressive, Dr. Silva says the most important aspect of ChatGPT and other AI software is ease of use.

“AI is one piece of a broad and exciting digital health expansion,” he said. “And you don’t have to be a computer programmer to use ChatGPT. You can make programs and modules without learning code.”

Dr. Kalia says physicians may be using such technology every day, whether they know it or not.

“It has some growing pains. But technology that could help both us and our patients … [w]ell, isn’t that worth trying?”

 

Last Updated On: December 20, 2024

Originally Published On: September 29, 2023

Alisa Pierce

Reporter, Division of Communications and Marketing

(512) 370-1469

Alisa Pierce is a reporter for Texas Medicine. After graduating from Texas State University, she worked in local news, covering state politics, public health, and education. Alongside her news writing, Alisa covered up-and-coming artists in Central Texas and abroad as a music journalist. As a Texas native, she enjoys capturing the landscape on her film camera while hiking her way across the Lone Star State.
