A new study from GE HealthCare indicates that while artificial intelligence (AI) technology could have wide-ranging applications for health care, high levels of mistrust in the technology could hamper immediate adoption.
The double-blinded quantitative survey of 5,500 patients and patient advocates, along with 2,000 clinicians, found that while most clinicians believe AI can support clinical decision making (61 percent), distrust and skepticism around AI in medical settings (without reference to specific products) is prevalent among all stakeholders.
Only around a quarter (26 percent) of clinicians in the U.S. said they believe AI can be trusted, compared with 42 percent globally, and clinicians worldwide expressed concern that the technology is subject to built-in biases.
Despite these concerns, clinicians also said they believe use of AI could enable faster health interventions (54 percent) and help improve operational efficiency (55 percent).
One use case for generative AI, such as a chatbot, could be to assist with creating visit notes based on conversations between health care providers and patients.
AI-powered voice assistant technology can support clinicians’ documentation processes by allowing them to navigate charts, place orders or document visit notes through voice commands instead of manual input.
The next step in this development could be to combine ambient AI (voice technology) with generative AI to automatically generate a visit note from the conversation between the provider and patient.
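To make that pipeline concrete, here is a minimal sketch of the generative step that turns an ambient transcript into a draft visit note. The transcript structure, function name and note sections are illustrative assumptions, not any vendor's actual pipeline; in production, speech recognition would supply the transcript and a generative model would do the summarization.

```python
# Illustrative sketch only: a simple heuristic stands in for the
# generative model that would summarize the conversation in practice.

def draft_visit_note(transcript):
    """Build a draft note from (speaker, utterance) pairs.

    Patient statements feed the subjective section; provider
    statements feed the assessment and plan.
    """
    patient_lines = [text for speaker, text in transcript if speaker == "patient"]
    provider_lines = [text for speaker, text in transcript if speaker == "provider"]
    return (
        "Subjective: " + " ".join(patient_lines) + "\n"
        "Assessment/Plan: " + " ".join(provider_lines)
    )

# Hypothetical ambient-AI transcript of a short encounter.
transcript = [
    ("patient", "I've had a dry cough for three days."),
    ("provider", "Lungs are clear; likely viral. Rest and fluids."),
]
note = draft_visit_note(transcript)
print(note)
```

The clinician would still review and sign the draft; the sketch only shows where the generative step sits in the workflow.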
Dr. Nele Jessel, chief medical officer for Athenahealth, explains that a key focus of AI in alleviating administrative burnout is to automate tasks, streamline documentation, enhance patient-provider interactions and provide personalized patient outreach and education.
“By leveraging AI in the following ways, health care providers and staff can alleviate administrative burdens and improve efficiency and quality of care,” she says.
However, she notes that the chief concerns health care organizations have about adopting AI include ensuring AI systems’ safety, effectiveness and fairness.
“Organizations need to be wary of accuracy risks due to potential errors and biases in AI-powered chatbot responses, the lack of customization and potential for generic answers, the risk of false statements and the absence of emotional intelligence and empathy,” Jessel says.
Additionally, health care organizations should carefully evaluate physicians’ readiness to use the AI capabilities they purchase and implement.
“By addressing these concerns and implementing appropriate safeguards, health care organizations can leverage AI’s benefits while mitigating potential risks and ensuring patient safety and privacy,” she notes.
Ash Shehata, KPMG national sector leader for health care and life sciences, points out that because the technology is still so new, AI oversight needs to happen at the board level, with internal audits around ethics and governance.
“The C-suite will also need to be included and well engaged in this strategy,” he says. “Finally, the IT organization will need to be involved from a technology roadmap perspective.”
From his perspective, a multi-level approach is necessary in each organization to address the emerging needs, opportunities and risks of AI.
Jessel agrees that implementing an AI strategy within a health care organization typically falls on a multidisciplinary team that includes executive leadership, IT professionals, data scientists, clinicians and administrators.
“Given the regulatory landscape in health care, compliance and legal teams also play a crucial role in ensuring that AI implementations meet regulatory requirements and ethical guidelines,” she says.
She explains that AI can also play a key role in promoting medication adherence and safety by delivering personalized outreach, providing educational information and assisting health care providers in identifying potential risks.
“Through AI-powered automation, practitioners can send personalized reminders and educational messages to patients to reinforce the importance of medication adherence and provide additional information about their prescribed medications,” Jessel says.
AI algorithms can analyze patient data, including medical history, current prescriptions and potential drug interactions, to generate tailored messages addressing each patient’s specific needs and concerns.
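As a simple illustration of that kind of tailored outreach, the sketch below composes a reminder from structured patient data. The field names, message wording and the `with_food` flag are hypothetical; a production system would draw on the EHR and clinically validated content rather than hard-coded strings.

```python
# Illustrative sketch: building a personalized adherence reminder
# from a patient's medication list. Not clinical guidance.

def adherence_reminder(patient):
    """Compose a reminder message tailored to the patient's meds."""
    med_names = ", ".join(m["name"] for m in patient["medications"])
    lines = [f"Hi {patient['first_name']}, a reminder to take: {med_names}."]
    # Attach medication-specific education where the data supports it.
    for med in patient["medications"]:
        if med.get("with_food"):
            lines.append(f"Take {med['name']} with food to reduce stomach upset.")
    return " ".join(lines)

# Hypothetical patient record.
patient = {
    "first_name": "Alex",
    "medications": [
        {"name": "metformin", "with_food": True},
        {"name": "lisinopril", "with_food": False},
    ],
}
msg = adherence_reminder(patient)
print(msg)
```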
Furthermore, AI can help prevent medication errors and enhance patient safety by assisting health care providers in identifying potential drug interactions, contraindications or adverse effects.
By analyzing vast amounts of patient data, including electronic health records (EHRs), AI can flag potential risks and alert health care professionals to take appropriate action.
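At its simplest, this kind of flagging can be sketched as a rule-based check over a patient's active medications, of the sort richer AI models would layer on top of. The interaction table here is a tiny hypothetical sample, not clinical guidance.

```python
# Illustrative sketch: rule-based drug-interaction flagging.
# The pairs below are a hypothetical sample table.

INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}): "increased bleeding risk",
    frozenset({"lisinopril", "spironolactone"}): "risk of hyperkalemia",
}

def flag_interactions(active_meds):
    """Return an alert for each known interacting pair in the list."""
    alerts = []
    meds = [m.lower() for m in active_meds]
    for i, first in enumerate(meds):
        for second in meds[i + 1:]:
            reason = INTERACTIONS.get(frozenset({first, second}))
            if reason:
                alerts.append(f"{first} + {second}: {reason}")
    return alerts

alerts = flag_interactions(["Warfarin", "Ibuprofen", "Metformin"])
print(alerts)  # flags the warfarin/ibuprofen pair
```

A real system would also weigh patient history, dosing and contraindications from the EHR rather than a static pair table.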
Shehata says that when it comes to medication adherence and safety, an important factor is the ability to look at real-world evidence.
“This means being able to evaluate the ways drugs and therapies are actually operating in the real world,” he says. “We examine the impacts of environment, impact on different demographics and analyze how all of these factors impact the efficacy of the drug.”
He says AI could potentially accelerate the pace of drug discovery and improve the quality and effectiveness of medications.