As governments and global regulatory and security bodies ramp up efforts to examine the risks inherent in the development of artificial intelligence (AI), the United Nations (UN) is also taking stock of the situation.

The UN Security Council is planning an initial round of talks in New York this week on the opportunities and risks for international peace and security when it comes to AI. The initiative is being supported by Great Britain, which this month holds the rotating presidency of the Security Council.

This week’s discussion will be chaired by British Foreign Secretary James Cleverly, according to a Reuters report.

There are numerous risks and concerns around AI, from personal data privacy, discrimination and accountability for decisions to the replacement of workers in the job market and an overall lack of understanding and transparency.

While all of these are important, it will be crucial for global powers to align on priorities for AI applications in the global economy and keep the focus on how standards around those priorities will support security and peace in the world.

Sounil Yu, chief information security officer at JupiterOne, explains that advanced forms of AI can pose an existential threat to humanity, raising the need to address their development, application and proliferation, much as was done with weapons of mass destruction (WMD).

“The UN Security Council can serve a critical role in establishing norms that limit the use of AI in certain contexts, such as autonomous weapons,” he says. “However, the ability of the Security Council to control the spread and use of AI in violation of these norms will be limited.”

He adds that unlike physical WMDs, the tools to build more powerful AI systems are likely to proliferate unabated in a manner similar to digital WMDs, such as ransomware.

“Even if the timeframe for the creation of super intelligent machines is decades away, many AI researchers believe that action is needed now to create the necessary safeguards that may help us avoid potentially catastrophic consequences,” Yu says. “More attention is needed to this, not less.”

Erik Gaston, vice president of global executive engagement at Tanium, points out that AI stands to impact the global economy as advancements and adoption in the space continue.

“The effects could be both positive and negative depending on the ability of major global powers to ensure that AI proliferation is done in a safe, secure and responsible way,” he says. “The UN plays a critical role in bringing world powers together to understand the potential impacts and future effects of AI on global security.”

Much as in the early days of cloud, there are currently no real standards or governance established for AI.

“As we see the appetite for AI in the global market grow, it is important that we see governing organizations begin to align on standards to ensure AI is used responsibly and as predictably as possible,” Gaston says.

If standards are established and led by global watchdog agencies, and fully aligned with and adopted by other councils and governing bodies, much of the angst that currently exists around AI could be reduced.

“Also, with global standards in place, it will be easier to identify when guidelines and common patterns are not being followed and in turn, identify bad behavior and actors,” he adds.

From Gaston’s perspective, it is important for there to be transparency as global leaders go through this process and establish new standards for AI usage.

He adds that it is also important they do not over-rotate on regulation, as this could limit adoption of AI by good actors and create an imbalance favoring organizations that intend to use the technology to disrupt global security.

“This is an opportunity to drive down fear and misconceptions around AI and establish standards that will help drive responsible behavior,” he says. “Without aligning on the priorities at a global level first we could see a lot of dissonance between the parties involved, resulting in this not having the level of global impact it intends.”
