
As AI technology continues to advance, there is an increasing need for regulation to ensure it is developed and used in a responsible and ethical manner.
Multiple state governments have already taken steps toward regulating AI, with a focus on ensuring transparency, accountability, and fairness in the development and deployment of AI systems.
Amid growing global societal and regulatory attention to AI transparency and safety, regulators want to ensure that businesses, governments, and the public can trust that AI algorithms, tools, and products work as claimed, and do so without causing harm, financial or otherwise, to users.
New York City published a final rule last week providing guidance on which automated employment decision tools (AEDTs) are to be regulated and how bias audits of such tools must be conducted.
The regulation is narrow, specific to the use of AI-driven hiring and performance management tools, and the rule is about two things: transparency and potential liability.
Employers in NYC must disclose to candidates and employees that they are using such a tool, and only if the tool meets certain conditions is it required to undergo a bias audit.
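The rule’s bias audit centers on an “impact ratio”: a category’s selection rate divided by the selection rate of the most-selected category. As a rough illustration of that arithmetic, the minimal Python sketch below uses invented counts and placeholder category names; it is not drawn from any real audit.

```python
# Illustrative impact-ratio calculation in the spirit of the NYC rule:
# each category's selection rate is divided by the highest selection
# rate among the categories. All numbers below are made up.

def selection_rate(selected, applicants):
    """Fraction of applicants in a category who were selected."""
    return selected / applicants

# {category: (number selected, total applicants)} -- hypothetical data
outcomes = {
    "category_a": (48, 100),
    "category_b": (30, 100),
    "category_c": (42, 100),
}

rates = {cat: selection_rate(s, n) for cat, (s, n) in outcomes.items()}
highest = max(rates.values())  # selection rate of the most-selected category

for cat, rate in sorted(rates.items()):
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {rate / highest:.2f}")
```

A category whose impact ratio falls well below 1.0 is being selected at a markedly lower rate than the most-selected group, which is exactly the kind of disparity the audit is meant to surface.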
Alla Vallente, senior analyst at Forrester, explains that NYC, along with other public- and private-sector organizations, has come under fire and faced fines, lawsuits, and loss of trust or reputational damage.
“These orgs can’t defend their use of AI for AEDT because they themselves don’t understand the underlying algorithms,” she says. “Requiring an audit is a form of risk transfer, like an insurance policy. The audit provides independent assurance. And yes, at present, these rules and regulations, if they have teeth, are both necessary and good risk management.”
She adds that the law doesn’t ban the use of AEDTs; it merely establishes accountability and defined penalties, which could be enough to limit their use significantly.
“I wish I could say this is groundbreaking and proactive, but it’s not,” she says. “What we know from all regulation is that if it’s not enforced, it might as well be a recommendation.”
Gartner analyst Avivah Litan says these types of rules and regulations are necessary to protect the citizenry and ensure equal rights and equal access to opportunities under the law.
“Without government protection, we can expect profits to drive decision making, leaving many underprivileged citizens behind,” she says.
She notes the industries more prone to AI regulation include healthcare, banking, and financial services, as well as government programs such as education, benefits, housing, and emergency assistance.
“I think the cities and states will lead the way based on the makeup and inclinations of their legislatures,” she says. “I don’t expect federal regulation or legislation for years, given the fractured nature of decision making therein.”
Additional regulations are likely to address issues such as data privacy and security, as well as standards for AI development and testing.
Vallente says the momentum around regulating AI in all its forms is building, thanks in part to the speed of adoption of generative AI.
“If and when that regulation surfaces, I don’t see that it will be industry-specific,” she notes.
Amy Matsuo, KPMG’s principal and national leader for compliance transformation and regulatory insights, says companies utilizing AI, including generative AI, should consider multiple factors during the design, use, and deployment of AI.
These include safety and effectiveness (e.g., protections against unintended or inappropriate use), as well as protections against, and ongoing testing for, bias.
Additional factors include data governance and privacy; transparency, including what information is being used, how it is used, and the potential impacts to the business or consumer; and accountability and oversight.
“Regulators will use existing regulations, for example UDAP [unfair or deceptive acts or practices] and data privacy and safeguards, as they enhance scrutiny over the development and use of AI across all industries, with continued heightened focus on consumer protections, false advertising, data governance, and bias,” she explains.
Josep Bori, thematic research director at GlobalData, notes that AI regulation is in flux and not consistent across jurisdictions, but directionally it is moving toward more regulation everywhere, albeit with different approaches.
“For instance, it is clear by now that the regulatory approaches to facial recognition technologies in the US, Europe, and China are quite different,” he adds.
From his perspective, the New York City regulation on automated employment decision tools seems a positive development, as it will both minimize bias and provide transparency.
“When, as is often the case, training data is incomplete or unrepresentative of the general population, the decisions of the AI models can be biased,” he says.
For instance, a study from the MIT Media Lab showed that facial recognition technology performs differently across genders and races: the darker the skin, the more errors arise, with error rates of up to 35% for images of darker-skinned women.
“A similar bias in automated employment decisions would be utterly unfair and essentially perpetuate the racial discrimination that was built into the training data in the first place,” Bori notes.
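The disparity Bori describes only becomes visible when error rates are broken out by subgroup rather than reported in aggregate. As a hypothetical sketch of that kind of per-group check, assuming invented labels and predictions:

```python
# Hypothetical per-subgroup error-rate check: an aggregate error rate
# can look acceptable while one subgroup's error rate is far higher.
# All records below are invented for illustration.
from collections import defaultdict

# (subgroup, true_label, predicted_label) triples -- made-up data
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += int(truth != pred)

overall = sum(errors.values()) / sum(totals.values())
print(f"overall error rate: {overall:.2f}")
for group in sorted(totals):
    print(f"{group}: error rate {errors[group] / totals[group]:.2f}")
```

Here the overall error rate (0.25) masks the fact that group_b’s error rate (0.50) is far higher than group_a’s (0.00), mirroring the pattern the MIT Media Lab study documented.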
Litan explains that NYC is taking the lead in the U.S. on this type of regulation, adding that the U.S. federal government has taken a very distributed approach to AI risk management and has not created new regulations or regulatory infrastructure.
“Instead they have focused on guidelines and an AI Risk Management framework,” she says, pointing to the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework released last January.
In February, the White House issued an Executive Order directing federal agencies to ensure their use of AI advances equity and civil rights.
“The country needs clear regulations and regulatory infrastructure to enforce fairness and absence of bias in AI models used in various market segments, such as hiring, lending, education, law enforcement and more,” Litan says. “I think we can expect to see more local and state laws emerge in the absence of federal regulation, especially as Generative AI exacerbates these issues at a rapid rate.”