The U.S. Department of Health and Human Services recently published a final rule clarifying that health care organizations, health insurers, and clinicians who participate in Medicare, Medicaid or any other federally funded program are legally responsible for managing the risk of discrimination.
In focus is algorithmic discrimination, with scrutiny on AI decision-making and how it may affect ethnic minorities, women, older people and people with intellectual disabilities, extending to all protected categories. The rule essentially clarifies the elements of Section 1557 of the Affordable Care Act, which prohibits discrimination based on race, color, national origin, sex, age or disability in certain health care programs and activities.
HHS Secretary Xavier Becerra called the rule “a giant step forward for this country toward a more equitable and inclusive health care system, and means that Americans across the country now have a clear way to act on their rights against discrimination when they go to the doctor, talk with their health plan, or engage with health programs run by HHS.”
Standing up to Discrimination
Secretary Becerra added, “I am very proud that our Office for Civil Rights is standing up against discrimination, no matter who you are, who you love, your faith or where you live. Once again, we are reminding Americans that we have your back.”
In a press release, HHS states, “Given the increasing use of artificial intelligence in health programs and activities, the rule clarifies that nondiscrimination in health programs and activities continues to apply to the use of AI, clinical algorithms, predictive analytics and other tools. This clarification serves as one of the key pillars of HHS’s response to the President’s Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence.”
Office for Civil Rights Director Melanie Fontes Rainer said of the rule, “Traveling across the country, I have heard too many stories of people facing discrimination in their health care. The robust protections of 1557 are needed now more than ever, whether it’s standing up for LGBTQI+ Americans nationwide, making sure that care is more accessible for people with disabilities or immigrant communities, or protecting patients when using AI in health care, OCR protects Americans’ rights.”
The use of AI in health care, and the protections necessary to prevent bias, mirrors federal anti-discrimination efforts in another area: hiring. The Equal Employment Opportunity Commission has devoted comparable energy to ensuring that AI doesn’t discriminate against job candidates.
On January 31st, 2023, EEOC Chair Charlotte A. Burrows led a panel discussion titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier.”
“As a society, we must ensure that new technology is used in ways that protect the basic values that throughout our history have helped make America better, stronger, and fairer,” Chair Burrows said. “Increasingly, automated systems are used in all aspects of employment, from recruiting, interviewing and hiring, to evaluations and promotions, among many others. By some estimates, as many as 83% of employers and up to 99% of Fortune 500 companies now use some form of automated tool to screen or rank candidates for hire. A recent survey of the members of the Society for Human Resource Management found that nearly one in four medium-sized employers use automation, or AI, in their hiring process.”
Lucila Ohno-Machado, Yale School of Medicine’s Deputy Dean for Biomedical Informatics and chair of the Department of Biomedical Informatics and Data Science, said recently that the application of automated tools should be carefully analyzed to prevent decision-making bias based on inputs, as has already occurred.
“Health care algorithms are mathematical models that support clinicians as well as administrators in decision-making about patient care,” she said. “But biased AI is already harming minoritized communities. Experts have identified numerous biased algorithms that require racial or ethnic minorities to be considerably more ill than their white counterparts to receive the same diagnosis, treatment, or resources. These include models developed across a wide range of specialties, such as for cardiac surgery, kidney transplantation, and more.”
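The kind of bias Ohno-Machado describes often enters through the choice of proxy variable rather than through any explicit use of race. A minimal, purely illustrative sketch (hypothetical data and function names, not taken from any real system): a risk model that ranks patients by historical spending can under-prioritize an equally sick patient who, for reasons such as access barriers, has generated lower costs.

```python
# Illustrative sketch of proxy bias in a risk-scoring algorithm.
# All data and names here are hypothetical, chosen only to show the pattern:
# using past cost as a stand-in for health need can rank equally ill
# patients very differently.

def risk_score_by_cost(past_spending):
    """Rank patients for extra care resources by historical spending (a biased proxy)."""
    return past_spending

def risk_score_by_need(chronic_conditions):
    """Rank patients by a direct measure of illness burden."""
    return chronic_conditions

# Two hypothetical patients with the same illness burden, but different
# historical spending (e.g. one faced barriers to accessing care).
patient_a = {"chronic_conditions": 4, "past_spending": 12000}
patient_b = {"chronic_conditions": 4, "past_spending": 4000}

# The cost-based score ranks patient A well above patient B...
assert risk_score_by_cost(patient_a["past_spending"]) > risk_score_by_cost(patient_b["past_spending"])

# ...even though a need-based score rates them identically, so under a
# cost-based cutoff, patient B could miss out on the same care resources.
assert risk_score_by_need(patient_a["chronic_conditions"]) == risk_score_by_need(patient_b["chronic_conditions"])
```

The point of the sketch is that the disparity arises entirely from the objective the model is asked to predict, which is why audits of inputs and proxy choices, of the kind Ohno-Machado calls for, matter even when protected attributes never appear in the model.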