Facebook’s CEO, Mark Zuckerberg, used to boast about the company’s ability to “move fast and break things.” Not, apparently, with artificial intelligence (AI).

A new policy document from Facebook parent Meta Platforms Inc., “Frontier AI Framework,” outlines a cautious approach amid advancements in AI that have security experts frazzled over potential abuse of the technology.

In the 30-page document, Meta identifies scenarios in which “high risk” or “critical risk” AI systems are considered too perilous to make available to the public in their current state. Those threats run the gamut from an “automated end-to-end compromise of a best-practice-protected corporate-scale environment” to the “proliferation of high-impact biological weapons.”

“This Framework is one component of our wider AI governance program. It deals with catastrophic outcomes that could arise as a direct result of the development or release of the frontier AI model,” the company explained. The framework was first reported by news site TechCrunch.

Meta, a leader in the open-source AI field with Llama, says it will evaluate whether a system poses a threat, limit access to it internally, and withhold its release until it puts in place safeguards that “reduce risk to moderate levels.”

Should a model drift into “critical risk” status, Meta says, it will halt development and install security measures to stop exfiltration into the wider AI market. “Access is strictly limited to a small number of experts, alongside security protections to prevent hacking or exfiltration insofar as it is technically feasible and commercially available,” according to the framework.

Meta’s decision to publicly discuss AI safety policy comes as it plows $65 billion into AI development, copes with the recently enforced EU AI Act on responsible AI use (which it supports), and responds to fallout over the debut of DeepSeek. The Chinese AI startup’s open-source models have titillated the market while terrifying security experts. A series of reports has pinpointed security flaws at DeepSeek, including user data left exposed on a public database.

Meta’s family of Llama AI models has been downloaded hundreds of millions of times but has reportedly been exploited by at least one U.S. adversary to create a defense chatbot. The company’s Llama 3.2 model, like DeepSeek’s, can be used by others to build AI tools that cull personal information from billions of Facebook and Instagram users. Meta says it will revise and update its framework as AI continues to rapidly evolve.

“[We] believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI, it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk,” the document says.
