
Salesforce Inc. has arguably been among the most aggressive companies, if not the model, in relentlessly hammering home its messaging on artificial intelligence (AI) agents.
Last week, it announced plans to acquire Convergence.ai, a maker of AI agents that perform complex, human-like tasks in digital environments, and it introduced a new AI pricing structure for customers. Earlier in the month, it also enhanced the capabilities of its AI agents to handle more complex tasks as part of an expanding enterprise general intelligence (EGI) initiative. The company is also deploying the large action models (LAMs) it has developed, called xLAMs, which drive those AI agents, across multiple platforms.
Like its tech brethren, who’ve flooded the enterprise market with AI offerings, especially around AI agents and generative AI, Salesforce has preached the importance of data security, compliance and governance. But most of all, it is trying to get a read on where the technology is going as digital workforces (its phrase) grow into what it believes will be a $6 trillion market by 2030, citing a Futurum Group report.
Indeed, the debate over AI and its use, FOMO (fear of missing out) versus FUD (fear, uncertainty and doubt), consumed the agendas of security experts at the RSAC 2025 conference in San Francisco a few weeks ago. They discussed security gaps around Model Context Protocol (MCP) servers, among other newfangled cyberthreats.
“Two years ago, the feeling about ChatGPT was, ‘This is scary and new. How do we stop it?’” Salesforce chief trust officer Brad Arkin recalled in a video interview on Monday. “Last year, it was curiosity. We need to learn it. This year [at RSAC], the feeling was this is absolute magic and it will change everything. There was excitement and fear. But the fear is not of the tool but that we aren’t moving fast enough.”
“I spent the whole week with other security executives discussing how we adopt things safely,” added Arkin, who joined Salesforce in February 2024 after serving as chief security & trust officer at Cisco Systems Inc.
Where the technology is going is both predictable and unpredictably organic, Arkin acknowledged. He does not doubt that AI agents will infiltrate nearly every organization in nearly every capacity, tackling repeatable tasks and documenting actions while greatly increasing productivity.
But what is unclear, he said, is how individual AI agents will shake out. It is conceivable that some agents will prove more effective than others, and that a super agent of sorts will act as a glorified traffic cop, managing other agents much as human supervisors did before it.
What intrigues him most, however, is the concept of an AI agent that functions as an apprentice, or “watch-and-learn agent,” as he puts it. It would presumably absorb encyclopedic knowledge about a particular task or division, and then pass that on to humans and other agents.
“That could be the next phase,” he said. “Every company has someone who has been there forever and knows everything about their particular corner of the organization. This agent would just sit and learn everything in that department [albeit instantly].”