Six months ago, the Future of Life Institute, with backing from tech gurus Elon Musk and Steve Wozniak, released an open letter calling for a halt to advanced AI development until guidelines for generative AI use could be assembled. A halt is the one thing that didn't happen.

If nothing else, AI development appears to have accelerated since then, while regulations governing its use remain in the discussion stage, with some critics dismissing that discussion as "virtue signaling." Or as Captain Barbossa of the Pirates of the Caribbean films might put it: "The code is more what you'd call 'guidelines' than actual rules."

The AI rules conversation is happening on several fronts at once. At the just-concluded United Nations General Assembly in New York, AI was a talking point for just about everybody, as many view the UN as the only institution capable of addressing the issue on a global scale. The UN plans to convene an advisory panel this fall in the hope that a common understanding will emerge on how governance might work so that risks are minimized and opportunities for good are maximized. The panel would deliver its recommendations by year's end.

Some leaders, like Icelandic foreign minister Thordis Kolbrun R. Gylfadottir, worry that AI could become "a tool of destruction," a prospect that might require some type of rapid-response mechanism in the event AI goes disastrously askew. Other countries, meanwhile, have seized the opportunity to carve out an AI leadership position. The United Kingdom is hyping its upcoming "AI Safety Summit," for example, with deputy prime minister Oliver Dowden telling the UN General Assembly that the UK's AI task force could develop a framework applicable internationally. Spain, for its part, is vying to host any international AI agency that might emerge. One potential stumbling block for UN involvement is that while its membership is made up of nations, AI development is driven principally by private companies.

Just how thorny the AI regulation issue can be is evident in competing visions in the United States Congress. Senate minority whip John Thune of South Dakota is reaching across the aisle to Democratic senator Amy Klobuchar to propose a "light touch" legislative approach that would allow tech companies to self-certify the safety of their AI systems. Among its supporters is Ryan Hagemann, an AI policy executive at IBM. This stands in contrast to the more "heavy-handed" approach favored by Senate majority leader Chuck Schumer of New York, who has been holding a series of intensive hearings on AI and its potential impact. Senator Klobuchar is also sponsoring legislation to prevent deceptive AI use from influencing elections.

Perhaps the place to watch is the European Union, which has a history of putting teeth into its rules for tech companies. The EU's AI Act, now in the works, would sort AI systems into tiers depending on risk level. The riskiest would be prohibited outright, while limited-risk systems would be subject to certain levels of transparency and oversight. One key provision is that liability would be shared between the purchaser of an AI system (a hospital, for example) and its developer in the event the AI offers advice it shouldn't have.
