
SAN FRANCISCO — FOMO vs. FUD.
Those were the dueling acronyms in a technological and financial tug-of-war at the RSAC show in San Francisco this week.
The allure of artificial intelligence (AI) has put enterprises and individuals in an existential crisis over the dazzling benefits of AI agents (in particular) and the stark realities of an expanded attack surface. Should companies jump all-in like everyone else (fear of missing out) and pursue AI gold as fast as possible — and quite possibly risk unforeseen security risks (fear, uncertainty, doubt)?
The underlying debate is teeming with tension and trepidation, and represents a delicate balancing act of assessing benefits vs. security holes, as Ronan Murphy, chief data strategy officer at Forcepoint, put it.
“If you don’t embrace AI, you fall behind your competition. But if you plunge into it, there is a security risk that exposes you,” he said in an interview. On Tuesday, the company announced Forcepoint Data Security Cloud, an AI-powered platform that oversees and controls data wherever it may be. “The positives of AI far outstrips the negatives, but you need the proper guardrails for data.”
In recent months, as the market for AI agents has grown with a bombardment of solutions from Microsoft Corp., Alphabet Inc.’s Google, Salesforce Inc., Adobe Inc., ServiceNow Inc., and others, so have potential security headaches. A common conversation at the industry’s premiere security show this week here was a reminder of what AI agents can do, and how they can be compromised. But unlike previous tech waves, the AI wave comes steeped in hype, speed, and a blinded sense of urgency for adoption from the C-suite.
“AI is a challenge and an opportunity,” said Lou Fiorello, vice president and general manager of security products at ServiceNow, which has aggressively introduced AI agents over the past year. “In terms of productivity, we’re seeing improvement of more than 50% internally already. But governance and compliance, and expanded attack surfaces, are challenges. More AI agents are additional elements to secure.”
Indeed, another acronym was repeatedly uttered at RSAC — this time out of fear: MCP. The relatively new development of Model Context Protocol that lets AI models fetch real-time data, execute commands, and integrate with third-party applications.
While breathtaking in their ability, MCP servers can wreak havoc if they lack robust security measures, becoming easy targets for attackers to launch unauthorized access, data breaches, or system takeovers. The most insidious threat, arguably, comes from prompt injection attacks, where malicious actors insert carefully crafted inputs designed to manipulate AI model behavior.
MCP has quickly been treated as a “de facto standard,” adding to the sense of urgency and risk, according to Hao Yang, head of AI at Splunk Inc. “But doing things right is more important than doing something quickly,” he cautioned.
Growing adoption of AI agents dominated the discussion at RSAC but with a dual-edged sword of “lots of optimism and fear,” said Mandy Andress, chief information security officer at Elastic. “It’s not the threat of new risks, but the volume and speed in which agents operate and add to the complexity of cyberdefense.”
AI agents are likely to exacerbate old problems as security professionals get directives from impatient executives to transition as quickly as possible to the latest innovation, which also happens to be evolving at a blur-fast speed. This is nothing new in the security world, but it could force some harried practitioners to hastily bolt on AI features rather than methodically leverage the power of AI.
To that end, Non-Human Identity (NHI) management and governance is one of the fastest-growing security blind spots and misunderstood challenges in cybersecurity today. With the rise of cloud computing, SaaS, APIs, microservices, and AI agents, non-human identities now outnumber human identities conservatively by a 50-to-1 ratio and as high as 90-to-1. This is likely to lead to stolen or orphaned service accounts.
The race to develop agentic AI systems is also likely to raise security headaches over unstructured data such as social posts, customer feedback, and images. New research from Precisely found 60% of data and AI professionals are evaluating unstructured data, which agentic AI depends on to make autonomous decisions.
“Enterprises are weighing the risk. Are they willing to take some risk to find vulnerabilities and patch them?” Christina Cravens, chief marketing officer and chief growth officer at Redjack, said in an interview. “With agentic AI, it is crucial to have a disaster-recovery plan and DR testing.”
FOMO or FUD? It’s hardly a debate. For now and the foreseeable future, it’s FOMO in a landslide. But the panacea comes with a potential painful price.
“There is a tremendous amount of trust we have to build” in agentic AI and AI in general, Lee Klarich, chief product officer at Palo Alto Networks Inc., said Tuesday in announcing a new AI security platform. “To make this work requires a fusion of AI and automation.”
Added Protect AI CEO Ian Swanson, whose startup was acquired by Palo Alto Networks for an estimated $700 million on Monday: “AI agents are harder to monitor than generative AI because they are autonomous and require a comprehensive end-to-end solution.”