
Artificial intelligence (AI) technologies are becoming a revolutionary force in nearly every area of health care. From front-end patient care to back-office administration, they're being used for tasks such as data analysis, diagnostics and treatment recommendations.
The technologies enhance efficiency, accuracy, outcomes and accessibility while improving access to quality care for a broad range of patients and communities. Thanks to AI, some diseases can be detected earlier, and better diagnostics and more personalized treatment plans are becoming possible.
That said, with such benefits come risks. AI raises significant data security and privacy concerns, and its algorithms carry a bias problem. In addition, rural and other areas that lack the basic high-speed internet needed for AI applications still can't take advantage of the technology the way their urban counterparts can.
As health care organizations adopt the technology, they have a responsibility to ensure that the data and tools they develop, and the way those are used, mitigate bias, promote transparency and foster community engagement, according to the authors of a Brookings Institution report released this week.
“These applications of AI in health amplify the opportunities of its use but also make more transparent the flaws and risks for medically marginalized communities and providers who increasingly rely on these technologies to deliver care,” they wrote.
From the AI Equity Lab
The report, “Health and AI: Advancing Responsible and Ethical AI for All Communities,” was developed in the AI Equity Lab, a Brookings program launched last year that brings together experts from inside and outside the think tank to research the responsible, ethical and inclusive development of AI in areas like health care, journalism, criminal justice and education, examining both the benefits and potential harms.
The AI Equity Lab’s Health and AI Working Group included 14 such experts.
AI Adoption in Health Care
The report comes as health care organizations ramp up adoption and use of AI. Software development firm Vention found in a report last year that 94% of health care companies were using AI and machine learning in some way. HIMSS, a health care-focused nonprofit, wrote that 86% of survey respondents last year were already using AI in their organizations, with 60% noting its ability to uncover health patterns and diagnoses better than humans. However, 72% pointed to data privacy as a significant risk.
Global spending on AI in the health care space is expected to grow from $14.92 billion last year to $164.16 billion by 2030.
Risks and Rewards
Brookings experts overall “agreed that AI has the potential to advance health equity,” the authors wrote. “Still, special consideration should be given to intentional and ethical approaches to enable inclusive AI design, distribution, and regulation. The group addressed these topics through a series of questions, which unearthed specific themes related to health that must lead the debate: Access, trust, and education.”
Regarding access, they noted statistics such as the fact that 89% of U.S. counties are designated by the Health Resources and Services Administration as having shortages of primary care professionals, and that 77 million Americans face provider shortages. The challenges are particularly acute for those in marginalized communities, according to the report.
In addition, underrepresented communities often struggle to trust health care organizations, and physicians in such areas are concerned that the datasets used to train AI technologies lack the diversity needed to improve health care for minorities, making them reluctant to recommend such tools.
Educating About AI
In terms of education, the panel wrote that “building an AI-literate population is vital to enhancing public confidence, national competitiveness, workforce preparedness, privacy and security, and online safety. These educational efforts apply to both broad AI applications and use, and more specifically to health care, where the risks are seemingly higher as the challenges become more clinically oriented.”
A national education campaign “with key stakeholders from underrepresented communities could help address these systemic misperceptions, misinformation, concerns, and fears related to AI,” they added.
Recommendations
The Brookings Health and AI Working Group outlined several strategies that policymakers and others in the industry should consider:
- Build the infrastructure necessary to run AI use cases in health care, helping to measure in more detail the opportunities and risks of the technology.
- Accelerate AI literacy and awareness among patients, clinicians, practitioners and industry professionals by using public health workers to help with messaging, particularly among underserved communities and vulnerable patients.
- Make sure there is wide patient representation in terms of demographics, region and similar attributes at each stage of AI design and deployment.
- Make sure the data used to train AI models account for differences in how patients are treated, to avoid health disparities.
- Ensure explainability and transparency in AI development by disclosing AI's benefits, technical constraints, and the explicit and implicit deficits in training data.
- Use research to gain a comprehensive understanding of the opportunities and risks of AI in health care.
- Use governance frameworks and practices to develop a more comprehensive approach to the issue.
“The development and deployment of AI are complex, nuanced, and present undefined challenges,” the group wrote. “Efforts to ensure equity in health care are still evolving. However … deliberate and ethical approaches must be applied at all stages of the AI lifecycle.”