
A regulatory framework for AI continues to take shape with the passage of the United Nations’ first AI-related resolution, which calls upon member states to ban the use of AI systems that risk violating human rights. The move comes on the heels of a report commissioned by the U.S. State Department suggesting that radical regulatory steps may be required now to rein in “superhuman” artificial general intelligence well before its expected arrival.

While the UN recognized the potential of AI to accelerate and enable progress across a wide range of endeavors, the resolution passed by the General Assembly calls upon member nations to “refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights.”

The UN resolution emphasized that “the same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems.”

U.S. Ambassador to the UN Linda Thomas-Greenfield underscored the resolution, noting that it is imperative “to govern this technology rather than let it govern us.”

While the global impact of AI appears to be encouraging multi-national responses from entities like the UN and the European Union (EU), which just passed the AI Act as governance legislation, it’s worth noting that not every country may be subject to international AI regulations due to political considerations. Taiwan, for example, is among the more technologically advanced yet is not a UN member. Likewise, countries like South Ossetia, Abkhazia and Transnistria, perceived by many as Russian client states, are not UN members. Other territories’ independence remains in dispute, recognized only partially, if at all, by other countries.


The UN resolution may be so much prep work for the coming of more advanced artificial general intelligence (AGI). According to the newly released Gladstone AI report funded by the U.S. State Department, AGI is a potentially “extinction level” technology akin to weapons of mass destruction (WMD). While estimates of AGI’s arrival vary, the consensus is that it is on the near horizon and may even arrive before it is widely recognized as such. AGI may come much sooner than anticipated because AI scaling by OpenAI and others has quickly yielded improvements in AI of orders of magnitude, and those same “scaling laws” are now being applied to AGI development. The 284-page Gladstone AI report, titled “Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI,” is the product of over 200 conversations with AI developers in various areas, including those at “frontier AI” companies like Google, OpenAI, DeepMind, Anthropic and Meta.
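To make the “scaling laws” idea concrete, here is a minimal Python sketch of the kind of power-law relationship researchers have reported between training compute and model loss. The constants a and alpha below are hypothetical, chosen only to illustrate the shape of the curve; they are not drawn from the Gladstone AI report or any published fit.

# Illustrative sketch of an AI "scaling law": loss falls as a power law
# of training compute. The constants are hypothetical, chosen only to
# show the curve's shape, not values from any published measurement.

def loss(compute_flops: float, a: float = 30.0, alpha: float = 0.05) -> float:
    """Hypothetical power-law loss: L(C) = a * C^(-alpha)."""
    return a * compute_flops ** -alpha

# Each 10x increase in compute yields a steady multiplicative drop in
# loss, which is why labs expect capability to keep climbing with scale.
for exp in range(20, 26):  # 1e20 .. 1e25 FLOPs
    c = 10.0 ** exp
    print(f"compute = 1e{exp} FLOPs -> loss ~ {loss(c):.3f}")

Run as written, the sketch prints a loss that shrinks smoothly as compute grows by five orders of magnitude, mirroring the empirical trend that leads the report’s sources to expect AGI-relevant capability gains from scale alone.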

The Gladstone AI report details two levels of catastrophic risk. The first is the weaponization of AI into something akin to a WMD, one that might power mass cyberattacks that autonomously discover zero-day vulnerabilities, mount disinformation campaigns or access bioweapon designs. To some extent, AGI weaponization resembles long-standing concerns surrounding nuclear energy and is relatively familiar, though still scary, ground in terms of safeguards, regulations and treaties.

The second catastrophic risk, however, is uncharted territory: loss of control due to “AGI alignment failure.” Basically, this describes the moment when the interests of an AGI and humanity diverge and are no longer in sync. The AGI may no longer be controllable as it engages in “power-seeking” behaviors that might include preventing itself from being shut off or having its goals modified, deception, control over its environment, self-improvement, and hoarding of resources. The Gladstone AI report notes that while frontier labs loudly emphasize the importance of alignment, there is currently no known method to accomplish it.

“This unsolved technical challenge is known as the alignment problem and it is believed to be central to the safe development of and operation of future, superhuman AI systems,” says the Gladstone AI report.

Perhaps crucially, an AGI may not require sentience or consciousness to suffer alignment failure. An AGI’s ability to optimize internal strategies in pursuit of its own goals means that it may employ dangerously creative strategies to achieve them. If an AGI escapes the control of its developer, then the developer’s goals or intent are no longer relevant. A rogue AGI may precipitate “unpredictable and uncontrollable technological change that could itself destabilize society in ways we cannot anticipate.” Additional risks include workforce displacement, data security, reputational harm, social instability, privacy, and even small-scale accidents.

Self-regulation by frontier labs is mostly ineffective, with the Gladstone AI report citing the labs’ own admission that they lack the necessary safeguards in a business environment that prioritizes development speed over safety. The report sees the need for a variety of U.S. governmental actions in the short term to get out ahead of AGI in the long term. Key among them would be:

— the creation of a “Frontier AI Systems Administration (FAISA)” with rulemaking and licensing authority that would oversee AI development and deployment with necessary safeguards in place. This might include a national security FAISA tasked with oversight of AGI activities associated with national security agencies;

— outlawing the open-sourcing of advanced AI models;

— creating a framework for criminal and civil prosecution for dangerous use of AI across the supply chain, with stiff financial penalties imposed even for negligence;

— granting the president emergency powers in the event that “unrecoverable catastrophic risks” caused by an AGI are likely to occur without swift government intervention;

— supporting an international AI regulatory agency that would ensure compliance and secure the international AI supply chain. This would include hardware-based tracking of AI-enabled chips and chip usage to ensure AI models are not being trained clandestinely.

Many will see the Gladstone AI report as overzealous, and its recommendations face significant political hurdles; outlawing the open-sourcing of advanced AI models, in particular, would likely meet major resistance. Still, the report’s authors argue that unless some AGI regulatory action is taken now, the time will come when there is no alternative to simply accepting the damage, however bad it is.
