
A bipartisan group of U.S. senators is shining another light on the thorny issue of regulating an accelerating AI industry with a 31-page report that outlines policies and guardrails that various Congressional committees can begin focusing on.

The roadmap created by the Bipartisan Senate AI Working Group, formed last year, calls for spending $32 billion a year on non-defense AI innovation and touches on a wide range of areas that AI – particularly as generative AI products and services and the large language models (LLMs) they’re built on proliferate – will affect, from innovation and standards for fairness and safety to national security, jobs, elections, and education.

It’s a step toward creating nationwide regulations for AI at a time when the White House and federal agencies already are putting policies around the technology in place, states like California, Connecticut and Colorado are trying to address it more locally, and other regions of the world are doing the same, as seen in the European Parliament’s passage of the EU AI Act in March.

At the same time, the tech industry also is wrestling with the issue of regulations and guardrails. Some are pushing for more voluntary responses, while others acknowledge that, given the wide-ranging impacts AI will continue to have on society, some form of regulatory oversight is likely, though the details of who will regulate and how still need to be ironed out.

“Last year, Congress faced a momentous choice: Either watch from the sidelines as artificial intelligence reshaped our world, or make a novel, bipartisan effort to enhance but also regulate this technology before it was too late,” Senate Majority Leader Chuck Schumer (D-NY) said on the Senate floor this week when introducing the Bipartisan Roadmap for Artificial Intelligence Policy. “So … I convened a bipartisan working group of senators last year … to chart the path forward on AI in the Senate.”

Input from Everywhere

The group, which also included Senators Mike Rounds (R-SD), Martin Heinrich (D-NM) and Todd Young (R-IN), conducted nine AI Insight Forums last fall that drew more than 150 experts, among them AI developers and users, AI hardware and software makers, researchers, labor unions and civil rights leaders.

Schumer said that AI innovation is what the country should continue to strive for, though he added that included both “transformational” and “sustainable” innovation. The first is driving the societal benefits that AI promises, from curing cancer and addressing climate change to solving world hunger and improving education.

Sustainable innovation calls for guardrails that minimize the potential damage AI could wreak, including worker displacement, bias and intellectual property violations.

“We need both transformational innovation and sustainable innovation, in a sense, to maximize the benefits of AI and minimize the liabilities,” he said. “It’s no easy task.”

How well the roadmap will be implemented in a Congress in which both the Senate and the House are led by party majorities with extremely thin margins remains to be seen. However, Schumer noted that Senate committees such as Commerce, Homeland Security and Governmental Affairs, and Armed Services already are studying AI legislation.

The Biden Administration – which has set policies for how federal offices should handle AI – and various federal agencies, including the departments of Homeland Security, Defense, Treasury, Commerce, and Health and Human Services, as well as CISA, also are putting mandates and guardrails in place.

Applause and Arrows

The AI roadmap drew its share of praise and criticism. The Software Alliance, a software industry trade group also known as BSA, applauded the document, with CEO Victoria Espinel saying in a statement that the roadmap “places a welcome focus on promoting innovation in technology and recognizes the benefits of AI across the economy and society.”

“The publication of the Senate’s AI roadmap should now provide an impetus for action on legislation,” Espinel said. “National technology laws remain the best way to widely spread the benefits of responsible AI and build trust and adoption, especially as US states begin to act on AI legislation.”

Bill Wright, head of government affairs for AI search company Elastic, in a blog post wrote that the roadmap, “while not perfect, reflects a concerted effort to address the myriad opportunities and challenges presented by AI technologies.”

Wright wrote that for the industry, the roadmap puts an emphasis on funding for AI innovation and on enforcing existing laws regarding unintended bias, both of which indicate a lean toward responsible AI development and deployment. On the government side, it also stresses national security concerns, data privacy and threats like deepfakes.

He also agreed with such groups as the ACLU and Legal Defense Fund, which criticized the Senate working group for not doing more to address such issues as algorithmic bias, discrimination or civil rights and liberties.

“For example, algorithmic systems used to screen tenants are prone to errors and incorrectly include criminal or eviction records tied to people with similar names; such errors fall hardest on Black and Latine people,” ACLU Senior Policy Counsel Cody Venzke wrote. “Similarly, algorithmic tools, already used widely in hiring and other employment decisions, have been repeatedly shown to exacerbate barriers in the workplace on the basis of race, disability, gender and other protected characteristics.”

Elastic’s Wright wrote that the lack of specific measures against such AI-related harms “is a significant gap — especially given the increasing use of AI in critical areas like hiring and law enforcement. … The Roadmap focuses heavily on AI’s potential benefits but is noticeably light on measures to mitigate its risks.”

An International Issue

Other countries and organizations like the United Nations are looking at ways to regulate AI. The Japanese government this week issued a draft of its basic policies for AI, while the European Union begins implementing the EU AI Act, which passed two months ago. Cranium AI, Microsoft and KPMG this week unveiled what they call the EU AI Hub to help organizations stay in compliance with the new regulation, even though it could be as long as two years before many of the requirements take effect.

Meanwhile, AI vendors are putting their own safe AI policies in place and working with the government in a voluntary fashion. In an episode of the All-In podcast, OpenAI CEO Sam Altman said he is “super-nervous about regulatory overreach here. I think we could get this wrong by doing way too much or even a little too much. I think we can get this wrong by not doing enough.”

That said, there will need to be some regulatory oversight, particularly as LLMs get more powerful, according to Altman. However, that oversight likely will need to reach across borders.

“There will come a time, in the not-so-distant future … where the frontier AI systems are capable of causing significant global harm,” he said. “For those kinds of systems – in the same way that we have global oversight of nuclear weapons or [bioweapons] or things that can have a very negative impact way beyond the realm of one country – I would like to see some sort of international agency that is looking at the most powerful systems and ensuring reasonable safety testing.”