AI news

The United Kingdom hosted the AI Safety Summit on November 1 at Bletchley Park, northwest of London, where 28 countries, including the U.S., China, six EU member states, Brazil, Nigeria, Israel and Saudi Arabia, signed the Bletchley Declaration.

The agreement establishes shared responsibility for recognizing the opportunities and risks of AI, and for taking global action on systems that pose urgent and dangerous risks.

“Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation,” a public statement published by the UK Department for Science, Innovation and Technology noted.

The declaration lays out the first two steps of the signatories’ agenda for addressing “frontier AI” risk.

The first is to identify shared concerns for AI safety risks by building a “scientific and evidence-based understanding of the risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.”

The second focuses on building respective risk-based policies to ensure safety considering identified risks, collaborating “while recognizing our approaches may differ based on national circumstances and applicable legal frameworks.”

This includes increased transparency by developers, tools for safety testing and evaluation metrics, and developing relevant public sector capabilities and scientific research.

“This is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren,” British Prime Minister Rishi Sunak said.

The highest-profile element of the summit was a nearly one-hour interview Sunak conducted with Elon Musk, CEO of Tesla, who predicted AI applications could create a “future of abundance” in which “jobs are no longer needed.”

“You can have a job if you want to have a job, but AI will be able to do everything. I don’t know whether people will feel comfortable or uncomfortable with that,” the X (formerly Twitter) boss told the British prime minister.

Ultimately, it would then be about how to “find meaning in life” when “you have a magical mind that can do anything you want.”

Jobs are not the only area where Musk expects AI to take over: together with Sunak, he outlined a scenario in which supercomputers could offer a convenient substitute for human friendships.

By Musk’s assessment, around 80% of current AI developments are likely to have a positive impact, while 20% could have negative consequences. He conceded later in the interview that certain rules are needed when dealing with AI.

“I agree with the vast majority of the rules,” he said. “An arbitrator is a good thing.”

Ted Miracco, CEO of Approov Mobile Security, says the Bletchley Declaration demonstrates a more proactive approach by governments, signaling a possible lesson learned from past failures to regulate social media giants.

“By addressing AI risks collectively, nations aim to stay ahead of tech behemoths, recognizing the potential for recklessness,” he explains. “This commitment to collaboration underscores some determination to safeguard the future by shaping responsible AI development and mitigating potential harms.”

He added that while there are widespread doubts regarding the ability of governments and legal systems to match the speed and avarice of the tech industry, the Bletchley Declaration signifies a crucial departure from the laissez-faire approach witnessed with social media companies.

“We should applaud the proactive effort of these governments to avoid idle passivity and assertively engage in shaping AI’s trajectory, while prioritizing public safety and responsible governance over unfettered market forces,” Miracco says.

Emily Phelps, director of Cyware, says that because AI-driven risks cross borders, it is imperative for countries to join forces, ensuring that advancements in AI are accompanied by safety measures that protect all societies equally.

“The focus on a scientific and evidence-based approach to understanding these risks will enhance our collective intelligence and response capabilities,” she says.

She explains that while the nuances of national circumstances will lead to varied approaches, the shared commitment to transparency, rigorous testing and bolstered public sector capabilities is a reassuring move toward a safer AI-driven future for everyone.