
At LlamaCon 2025 today, Meta announced it is making available in limited preview an application programming interface (API) and associated tools for its open source Llama models, which promise to make it simpler for developers to explore and fine-tune artificial intelligence (AI) models.

In addition, Meta is making available a set of AI protection tools, including Llama Guard 4, LlamaFirewall and Llama Prompt Guard 2, along with updates to the CyberSecEval 4 framework it has defined for security operations teams.

At the same time, Meta also revealed it is launching a Llama Defender Program through which it is providing partners such as Zendesk, Bell Canada, AT&T and CrowdStrike with tools to detect and prevent threats created using AI technologies, such as phishing attacks and various types of online fraud.

The Llama API enables one-click API key creation and provides interactive playgrounds for exploring different Llama models, including the Llama 4 Scout and Llama 4 Maverick models that were made available earlier this month. As part of that initiative, Meta is also sharing tools for fine-tuning and evaluation that make it simpler to create custom versions of the Llama 3.3 8B model, which organizations can opt to deploy wherever they see fit.

It also provides access to a lightweight software development kit (SDK), available in both Python and TypeScript, that is compatible with the SDK developed by OpenAI. That capability promises to make it simpler to migrate applications built on proprietary models to an open source Llama model.
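That kind of compatibility typically means the request shape stays the same and only the endpoint and model identifier change. The sketch below illustrates the idea with plain dictionaries; the base URLs and model names are illustrative placeholders, not confirmed values from Meta's or OpenAI's documentation.

```python
# Sketch: when two SDKs share the OpenAI-style chat-completion request
# shape, switching backends can reduce to swapping the base URL and
# model name. All URLs and model identifiers here are hypothetical.

def build_chat_request(backend: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion request for a given backend."""
    backends = {
        "openai": {
            "base_url": "https://api.openai.com/v1",
            "model": "gpt-4o",  # illustrative model identifier
        },
        "llama": {
            "base_url": "https://llama-api.example.com/v1",  # hypothetical endpoint
            "model": "llama-4-scout",  # illustrative model identifier
        },
    }
    cfg = backends[backend]
    return {
        "base_url": cfg["base_url"],
        "payload": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# The payload is identical in structure for both backends; only the
# endpoint and model name differ.
req = build_chat_request("llama", "Summarize this release note.")
print(req["payload"]["model"])
```

Because the payload shape is shared, an application written against one backend can be pointed at the other without restructuring its request-building code, which is the lock-in-avoidance argument the article makes.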

Finally, Meta today also announced alliances with Cerebras and Groq to provide early access to AI inference infrastructure directly via the Llama API. Eventually, the Llama API will become a core element of Llama Stack, a set of building blocks that Meta is defining to accelerate the building of AI applications.

Chris Cox, chief product officer for Meta, told conference attendees that despite initial doubts about their viability, open source AI models are gaining traction because, in addition to the level of accuracy and performance that can now be attained, they can be improved and audited in the clear light of day. “Open source is here to stay,” Cox said.

There is, of course, no shortage of open source AI models that are helping to significantly drive down the total cost of building AI applications. In fact, most cloud service providers make available a wide range of open source and proprietary AI models that can generally be invoked using any number of APIs.

It’s not clear to what degree the robustness of those APIs might ultimately determine who will win the battle for AI model supremacy. However, as the SDKs built on those APIs become more widely available, it’s becoming much less likely that organizations will find themselves locked into any one set of AI models. In fact, it may soon become easier than ever to simply mix and match models based on the capabilities provided at any given point in time, using a set of compatible APIs and SDKs that now have the potential to become a de facto standard.
