
Postman this week added the ability to build artificial intelligence (AI) agents to its platform for designing, testing and deploying application programming interfaces (APIs).
The Postman AI Agent Builder leverages the APIs and workflow tools Postman already makes available to invoke large language models (LLMs) and build workflows that incorporate AI agents, says Postman CEO Abhinav Asthana.
That approach eliminates the need for organizations to integrate an API platform with the tools they are using to build AI agents, he adds.
At the core of that capability is Postman Flows, a no-code tool that Postman created to enable application developers to collaboratively create and deploy APIs. That framework has now been extended to enable those teams to also create AI agents.
Already more than 35 million developers spanning a half million organizations use Postman to create and deploy APIs using the Postman API Network, a service that can now be used to discover both APIs and LLMs.
That network also provides access to a range of APIs exposed by software-as-a-service (SaaS) application providers, most of which are building their own APIs. Ultimately, organizations will need to integrate AI agents they build with the ones being developed by SaaS application providers to automate workflows on an end-to-end basis.
It’s not clear to what degree organizations will prefer to build their own AI agents versus rely on ones provided by app vendors, but the latter will incur additional costs. Moreover, many organizations are going to prefer to maintain control over how the multiple AI agents that will soon be strewn across the enterprise are managed.
In fact, many of those AI agents will be built by business professionals who have a better understanding of how those workflows need to be managed, says Asthana. “New types of developers will emerge,” he says.
The rate at which AI agents are adopted will naturally vary from one organization to the next, depending on the scope of the tasks that need to be automated. The narrower the task, the more likely an agent is to perform it consistently.
The challenge is that LLMs are probabilistic, so an AI agent based on an LLM may not execute a task the same way every time. In fact, in many instances, what is described as an AI agent actually consists of multiple agents verifying the output each one creates.
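That verification pattern can be illustrated with a minimal sketch. The agent names and stub functions below are hypothetical stand-ins for LLM-backed components (a real system would call a model API); what matters is the control flow, in which a verifier agent checks the probabilistic worker's output and triggers a retry when it fails.

```python
def worker_agent(task: str) -> str:
    """Produce a candidate answer for the task.

    Hypothetical stub: in practice this would invoke an LLM, whose
    output can differ from one run to the next.
    """
    return f"result for {task!r}"

def verifier_agent(task: str, answer: str) -> bool:
    """Check the worker's output against the task requirements.

    Hypothetical stub: a real verifier might be a second LLM call
    or a deterministic rule check.
    """
    return answer.startswith("result for")

def run_with_verification(task: str, max_attempts: int = 3) -> str:
    """Retry the worker until the verifier accepts its output."""
    for _ in range(max_attempts):
        answer = worker_agent(task)
        if verifier_agent(task, answer):
            return answer
    raise RuntimeError("verifier rejected all attempts")

print(run_with_verification("summarize Q3 report"))
```

The retry loop is what distinguishes a multi-agent setup from a single model call: no single output is trusted until a second component has accepted it.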
Regardless of how AI agents are constructed, as LLMs continue to add more advanced reasoning capabilities, many organizations will soon find they need to replace one LLM with another. That becomes simpler to do using a network to centrally manage LLMs that are exposed via APIs, notes Asthana.
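The idea of swapping one centrally managed LLM for another can be sketched as a thin registry behind a common interface. The `ChatModel` protocol, the adapter classes, and the registry below are illustrative assumptions, not Postman's actual API; the point is that agents address models by name, so replacing the underlying LLM is a configuration change rather than a code change.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Minimal interface every registered LLM endpoint must satisfy."""
    def complete(self, prompt: str) -> str: ...

# Hypothetical adapters; a real deployment would wrap vendor APIs here.
class ModelA:
    def complete(self, prompt: str) -> str:
        return f"A: {prompt}"

class ModelB:
    def complete(self, prompt: str) -> str:
        return f"B: {prompt}"

# Central registry: agents look models up by logical name only.
REGISTRY: dict[str, ChatModel] = {"default": ModelA()}

def ask(prompt: str, model: str = "default") -> str:
    """Route a prompt to whichever LLM currently backs the name."""
    return REGISTRY[model].complete(prompt)

print(ask("hello"))            # served by ModelA
REGISTRY["default"] = ModelB()  # swap the LLM behind the same name
print(ask("hello"))            # now served by ModelB; callers unchanged
```

Because callers never import a concrete model class, the swap is invisible to every agent built on top of the registry.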
Ultimately, it may now be only a matter of time before organizations are managing hundreds, possibly thousands, of AI agents. The issue then becomes how to orchestrate the tasks being assigned to these AI agents in a way that the humans responsible for managing any given process can easily verify.