Generative AI, which can create and analyze images, text, audio, video and more, is increasingly making its way into healthcare, promoted by both big tech companies and startups.
Google Cloud, Google’s cloud services and products division, is partnering with Highmark Health, a Pittsburgh-based nonprofit healthcare company, on generative AI tools designed to personalize the patient intake experience. Amazon’s AWS division says it’s working with unnamed customers on a way to use generative AI to analyze medical databases for “social determinants of health.” And Microsoft Azure is helping to build a generative AI system for Providence, the nonprofit healthcare network, to automatically triage messages sent to care providers by patients.
Prominent generative AI startups in healthcare include Ambience Healthcare, which is developing a generative AI application for clinicians; Nabla, an ambient AI assistant for practitioners; and Abridge, which creates analytics tools for medical documentation.
The broad enthusiasm for generative AI is reflected in the investment flowing into generative AI efforts aimed at healthcare. Collectively, generative AI healthcare startups have raised tens of millions of dollars in venture capital to date, and the vast majority of healthcare investors say that generative AI has significantly influenced their investment strategies.
But practitioners and patients alike are divided on whether healthcare-focused generative AI is ready for prime time.
Generative AI may not be what people want
In a recent Deloitte survey, only about half (53%) of U.S. consumers said they believe generative AI could improve healthcare, for example by making it more accessible or shortening appointment wait times. Fewer than half said they expected generative AI to make medical care more affordable.
Andrew Borkowski, chief AI officer at the VA Sunshine Healthcare Network, the U.S. Department of Veterans Affairs’ largest health system, doesn’t think the cynicism is unwarranted. Borkowski warned that generative AI’s deployment could be premature because of its “significant” limitations and the concerns about its efficacy.
“One of the key issues with generative AI is its inability to handle complex medical queries or emergencies,” he told TechCrunch. “Its finite knowledge base (that is, the absence of up-to-date clinical information) and lack of human expertise make it unsuitable for providing comprehensive medical advice or treatment recommendations.”
Several studies suggest there’s credence to those points.
In a paper in the journal JAMA Pediatrics, ChatGPT, OpenAI’s generative AI chatbot, which some healthcare organizations have piloted for limited use cases, was found to make errors diagnosing pediatric diseases 83% of the time. And in tests using OpenAI’s GPT-4 as a diagnostic assistant, physicians at Beth Israel Deaconess Medical Center in Boston observed that the model ranked the wrong diagnosis as its top answer nearly two times out of three.
Today’s generative AI also struggles with the medical administrative tasks that are an integral part of clinicians’ daily workflows. On the MedAlign benchmark, which evaluates how well generative AI can perform tasks like summarizing patient health records and searching across notes, GPT-4 failed 35% of the time.
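To make that concrete, here is a minimal Python sketch of the kind of task MedAlign scores: asking a model to summarize a synthetic patient note through OpenAI’s Python SDK. The model name, prompt wording and note are illustrative assumptions, not MedAlign’s actual evaluation harness.

```python
# Sketch: LLM-based summarization of a synthetic clinical note.
# Assumes OPENAI_API_KEY is set; the note and prompt are invented for illustration.
from openai import OpenAI

client = OpenAI()

SYNTHETIC_NOTE = (
    "58yo M presents with exertional chest pain x2 weeks. "
    "PMH: HTN, T2DM. Meds: lisinopril, metformin. "
    "ECG: nonspecific ST changes. Troponin negative x2."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You summarize clinical notes for clinicians."},
        {"role": "user", "content": f"Summarize this note in two sentences:\n{SYNTHETIC_NOTE}"},
    ],
)
print(response.choices[0].message.content)
```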
OpenAI and many other generative AI vendors warn against relying on their models for medical advice. But Borkowski and others say they could do more. “Relying solely on generative AI for healthcare could lead to misdiagnoses, inappropriate treatments or even life-threatening situations,” Borkowski said.
Jan Egger, who leads AI-guided therapies at the University of Duisburg-Essen’s Institute for AI in Medicine, which studies applications of the emerging technology for patient care, shares Borkowski’s concerns. He believes the only safe way to use generative AI in healthcare for now is under the close, watchful eye of a physician.
“The results can be completely wrong, and it’s getting harder and harder to maintain awareness of that,” Egger said. “Sure, generative AI can be used, for example, to pre-write discharge letters. But physicians have a responsibility to check it and make the final call.”
Generative AI can perpetuate stereotypes
One particularly harmful way that generative AI in healthcare can get things wrong is by perpetuating stereotypes.
In a 2023 study out of Stanford Medicine, a team of researchers tested ChatGPT and other generative AI-powered chatbots on questions about kidney function, lung capacity and skin thickness. Not only were ChatGPT’s answers frequently wrong, the co-authors found, but the answers also reinforced several long-held, untrue beliefs that there are biological differences between Black and white people, untruths that are known to have led medical providers to misdiagnose health problems.
The irony is that the patients most likely to be discriminated against by generative AI for healthcare are also those most likely to use it.
People who lack healthcare coverage (people of color, by and large, according to a KFF study) are more willing to try generative AI for things like finding a doctor or mental health support, the Deloitte survey found. If the AI’s recommendations are tainted by bias, it could exacerbate disparities in treatment.
However, some experts argue that generative AI is improving in this regard.
In a Microsoft study published in late 2023, researchers said they achieved 90.2% accuracy on four challenging medical benchmarks using GPT-4. Vanilla GPT-4 couldn’t reach that score. But, the researchers say, through prompt engineering, designing prompts to steer GPT-4 toward particular outputs, they were able to boost the model’s score by up to 16.2 percentage points. (Microsoft, it’s worth noting, is a major investor in OpenAI.)
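Notably, the researchers changed only the prompt, not the model. As a toy illustration of that general idea (a hedged sketch, not the study’s actual pipeline), the snippet below contrasts a vanilla prompt with an engineered one that adds a worked example and asks for step-by-step reasoning before the final answer; the question and exemplar are invented for illustration.

```python
# Toy contrast between a vanilla prompt and an "engineered" prompt.
# Both would be sent to the same model; only the prompt text differs.
QUESTION = "Which electrolyte abnormality classically causes peaked T waves on ECG?"

vanilla_prompt = f"Answer the question: {QUESTION}"

engineered_prompt = (
    "You are answering medical board-style questions.\n\n"
    "Example:\n"
    "Q: Which vitamin deficiency causes scurvy?\n"
    "Reasoning: Scurvy results from impaired collagen synthesis, "
    "which requires vitamin C as a cofactor.\n"
    "A: Vitamin C\n\n"
    f"Q: {QUESTION}\n"
    "Reasoning: think step by step, then give the final answer on a "
    "line starting with 'A:'."
)

print(engineered_prompt)
```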
Beyond chatbots
But asking a chatbot a question isn’t the only thing generative AI is good for. Some researchers say that medical imaging could benefit greatly from the power of generative AI.
In July, a team of scientists unveiled a system called complementarity-driven deferral to clinical workflow (CoDoC) in a study published in Nature. The system is designed to figure out when medical imaging specialists should rely on AI for diagnoses versus traditional techniques. CoDoC outperformed specialists while reducing clinical workflows by 66%, according to the co-authors.
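The deferral idea itself can be stated in a few lines of code. The sketch below is a simplified illustration under assumed inputs (a confidence score from the diagnostic model and a fixed threshold), not CoDoC’s actual deferral model, which is learned from data rather than hand-set.

```python
# Sketch: complementarity-style deferral between an AI model and a specialist.
# The threshold and Case fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Case:
    ai_confidence: float         # diagnostic model's confidence in its read
    ai_prediction: bool          # the model's finding (e.g., lesion present)
    specialist_prediction: bool  # the human specialist's read

def final_read(case: Case, defer_threshold: float = 0.85) -> bool:
    """Accept the AI's read when it is confident; otherwise defer to the specialist."""
    if case.ai_confidence >= defer_threshold:
        return case.ai_prediction
    return case.specialist_prediction

# Example: low confidence, so the decision defers to the specialist.
print(final_read(Case(ai_confidence=0.6, ai_prediction=True, specialist_prediction=False)))
```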
In November, a Chinese research team demoed Panda, an AI model used to detect potential pancreatic lesions in X-rays. A study showed Panda to be highly accurate in classifying these lesions, which are often detected too late for surgical intervention.
Indeed, Arun Thirunavukarasu, a clinical researcher at the University of Oxford, said there is “nothing unique” about generative AI that precludes its deployment in healthcare settings.
“More mundane applications of generative AI technology are feasible in the short and medium term, and include text correction, automatic documentation of notes and letters, and improved search features to optimize electronic patient records,” he said. “There is no reason why generative AI technology, if effective, couldn’t be deployed in these sorts of roles immediately.”
“Rigorous science”
But while generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to the technical and compliance hurdles that must be overcome before generative AI can be useful, and trusted, as an all-around assistive healthcare tool.
“Significant privacy and security concerns surround the use of generative AI in healthcare,” Borkowski said. “The sensitive nature of medical data and the potential for misuse or unauthorized access pose severe risks to patient confidentiality and trust in the healthcare system. In addition, the regulatory and legal landscape surrounding the use of generative AI in healthcare is still evolving, with questions about liability, data protection and the practice of medicine by non-human entities yet to be resolved.”
Even Thirunavukarasu, bullish as he is on generative AI in healthcare, says there needs to be “rigorous science” behind patient-facing tools.
“Particularly without direct clinician oversight, there should be pragmatic randomized controlled trials demonstrating clinical benefit to justify deployment of patient-facing generative AI,” he said. “Proper governance going forward is essential to capture any unanticipated harms following deployment at scale.”
Recently, the World Health Organization released guidelines that advocate for this kind of science and human oversight of generative AI in healthcare, as well as auditing, transparency and impact assessments of this AI by independent third parties. The goal, the WHO explains in its guidelines, is to encourage participation from a diverse group of people in the development of generative AI for healthcare and give them an opportunity to raise concerns and provide input throughout the process.
“Until the concerns are adequately addressed and appropriate safeguards are put in place,” Borkowski said, “the widespread implementation of medical generative AI may be … potentially harmful to patients and the healthcare industry as a whole.”