Generative AI in Healthcare

What are some examples of generative AI/what is it?

Generative AI is any AI that uses real-world data to generate new content, whether that content is text for patient EHRs, enhanced medical images, or synthetic data used to train other models. Popular examples include the chatbot ChatGPT and the text-to-image generator DALL-E 2. Generative AI has existed for years, but recent advances in machine learning algorithms have expanded its capabilities and potential use cases, driving an explosion in popularity.

What makes generative AI appealing to healthcare?

Generative AI is appealing in healthcare primarily because of its ability to process “big data” that would otherwise take a human months or years to study. The AI “brain,” in this sense, can be trained very differently (and far more efficiently) than the human brain. This was already evident during the Covid-19 pandemic, when researchers and doctors used AI and machine learning algorithms to rapidly review constantly changing data, identify geographic hotspots, track spread, and distinguish Covid-19 pneumonia from common pneumonia. This capacity to rapidly process large amounts of data is what makes AI so attractive to healthcare.

How does generative AI differ from other AI/ML solutions?

The HHS “Trustworthy AI Playbook” outlines five primary concerns with AI: data privacy, impartiality and bias, transparency in algorithms, data safety and security, and robust and reliable results. The last of these is especially troubling for generative AI because of its “hallucinations”: outputs that are fabricated and often plainly false. Such hallucinations have already made news, as when a lawyer faced sanctions for using ChatGPT to write a brief that cited case law that simply did not exist. A related failure occurs when generative AI trained on certain datasets is used to evaluate different populations, reducing its efficacy for groups underrepresented in the training data. This tendency to “invent facts in moments of uncertainty,” as OpenAI researchers have put it, poses a significant danger to AI’s application in medicine. In a high-risk field like healthcare, such hallucinations can prove ineffective at best and deadly at worst.

What could be potential use cases for generative AI in healthcare in the next three to five years?

In the past, AI has been used most often in radiology and imaging, as well as in administrative roles. As AI tools improve, however, they are moving from this assistive role into more advanced and direct capabilities, including digital pathology, surgical assistance, the prevention of unnecessary surgery, and much more. As these technologies advance, AI plays an increasingly central role in patient care, and we may soon see machine learning algorithms shift from assistive technologies to primary care technologies.

How can generative AI become integrated into a complex healthcare environment?

Aside from legal barriers such as health information privacy laws, which make it difficult to train AI tools on large datasets, and FDA regulation of these technologies as medical devices, the potential shift from assistive tools to primary care instruments raises numerous concerns. Today, the doctor who uses generative AI may be preferred to the doctor who does not, being better equipped; but a time may come when generative AI surpasses doctors’ capabilities, having trained on hundreds of thousands of datasets that a human mind could never begin to absorb. When and if that day comes, who will be practicing medicine: the doctor or the machine? Who will then hold liability for medical mistakes, which some studies rank among the leading causes of death in the United States? While doctors now supervise AI tools, will AI tools come to supervise doctors’ decisions? These are but a few of the ethical and legal questions raised by AI’s enormous potential in medicine. Given the pre-existing complexity of the healthcare system, these questions must be answered before AI can be fully integrated into diagnosis and treatment; otherwise, its implementation would only further obfuscate already murky questions of responsibility and liability.

About

Health Bites is a newsletter aimed at keeping you up to date about all the most important health information, at every level of analysis. Read all about it here!