
The White House announced new measures to promote responsible American innovation in AI and protect people’s rights and safety. The government aims to ensure that companies take responsibility for ensuring the safety of their AI products before they are deployed or made public.

Senior Administration officials, led by Vice President Kamala Harris, met with CEOs of leaders in AI innovation, including Alphabet, Anthropic, Microsoft and OpenAI, to discuss responsible, trustworthy and ethical innovation with safeguards that mitigate risks and potential harms to individuals and society.

Among the administration’s actions taken to promote responsible innovation are the Blueprint for an AI Bill of Rights and executive actions, the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource.

Additionally, the National Science Foundation will invest $140 million in funding to launch seven new National AI Research Institutes, bringing the total number of Institutes to 25 across the country.

The government will also release draft policy guidance on the use of AI systems by the U.S. government for public comment to ensure the development and use of AI systems center on safeguarding people’s rights and safety.


Dr. Muddu Sudhakar, CEO and co-founder of Aisera, argues the $140 million investment is “way too small” and says there needs to be a much broader approach to AI.

“The technology is absolutely critical to national security and economic growth of the U.S.,” he says. “The investment should be in the billions.”

He says that while crafting policies for safe AI is certainly important, this should be backed up with funds to encourage innovation and basic research.

“After all, this is what led to the emergence of the Internet during the 1960s – which has been huge for the growth of the U.S.,” he explains. “In fact, the recent semiconductor bill is actually an example of how the federal government should approach AI.”

He also expressed concern about the White House’s committee for AI, advocating for much more participation from the many startups in the AI ecosystem rather than a focus on the mega operators.

Kathleen Featheringham, vice president of AI/ML at Maximus, says she’s happy to see the White House’s statement associated with actions to promote responsible AI innovation.

“We see this as a critical component of AI adoption,” she says. “With the evolving nature of AI, the parameters that we put in place today will need to be continually reviewed and evolved, highlighting the balance of risk and reward, as we learn and the technology advances even further through human ingenuity.”

Joe Jeter, senior vice president of federal technology for Maximus, adds that given the wide-scale implications of AI for national security and citizen protection, the federal government must move rapidly to create regulations, guidelines and compliance standards that act as guardrails for AI, not inhibitors of innovation.

“With the technology evolving at such a rapid speed, I believe these guardrails need to be flexible and adaptive with a focus on the longer strategy and the ethical and responsible use of AI versus specific AI tools,” he explains.

Sudhakar says the federal government’s role will be key in the establishment of guidelines for AI.

“A patchwork of state regulations would probably just make things even more complicated,” he says. “For example, the federal government has done a good job with laws like SOX and HIPAA. They were not too intrusive but did achieve important public goals. However, we need to be careful with the special interest groups that will want to skew the laws.”

He says he also thinks there is an opportunity for the federal government to leverage AI along the lines of a “Citizen AI Co-pilot.”

“It could be a tool to help people get the information they need about government services but also provide feedback,” he says.

Jeter says with the global implications of AI increasing with speed, complexity and societal impacts, the government has a strategic role in ensuring alignment with the values of the nation, and ultimately progress for all.

“AI can enable purposeful and positive innovation for our citizens and continue our ability to be globally competitive,” he says.

He adds that although AI is rapidly evolving in how it’s transforming government, there needs to be a thoughtful and purposeful strategy at the outset, including leveraging a human-centered design approach.

“The protection of Americans’ rights and safety should be our north star, but we can’t get too carried away either,” Sudhakar says. “For example, the EU is having challenges in crafting its AI law. The technology is moving so fast, and the issues are extremely complicated. The difficulty is making sure that the regulation does not choke off the innovation.”

He says if we take the right approach, it will be clear that AI will provide enormous benefits, akin to what was seen with the Industrial Revolution, semiconductor revolution, and Internet revolution.

From Sudhakar’s perspective, the U.S. government can lead by example on mitigating AI risks and harnessing AI opportunities.

“A big part of this is making a massive investment in AI,” he says. “But there should also be regulations that provide safety but not at the expense of allowing the huge benefits of the technology.”

The good news is that the U.S. has a good track record with this — whether it be with the space race or the Internet.

“We need to do the same for AI,” he says.
