Synopsis: In this Techstrong AI video interview, David Brauchler, technical director at NCC Group, explains the inherent cybersecurity risks of adopting artificial intelligence (AI) agents, many of which organizations have yet to appreciate.

In this Techstrong AI interview, Mike Vizard speaks with David Brauchler, technical director at NCC Group, about agentic AI and the security risks associated with these evolving systems. Brauchler explains that while agentic AI builds on existing technologies like chatbots and language models, the key difference is that these AI agents can autonomously execute tasks, such as making purchases or managing accounts. This shift raises significant security concerns, as compromised AI agents could be manipulated to perform unintended actions, making them a new and attractive attack vector for cyber threats.

Brauchler highlights a major challenge in AI security: the more functionality an AI agent has, the greater the risk it poses if compromised. Organizations must rethink traditional security approaches, as AI agents behave more like overconfident interns than predictable software. He suggests an approach called “dynamic capability shifting,” which limits an AI agent’s permissions to only what is necessary for the specific task at hand. This reduces the risk of an agent being exploited by attackers who inject malicious input into its processing flow. Additionally, he warns that organizations often neglect proper security controls, leaving AI agents overprivileged and vulnerable to exploitation.
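The idea behind dynamic capability shifting can be illustrated with a deny-by-default permission wrapper: before each task, the agent is granted only the tools that task requires, so an injected request for anything else fails outright. This is a minimal sketch, not any real framework's API; the `ScopedAgent` class, task names, and tool names are all hypothetical.

```python
# Minimal sketch of "dynamic capability shifting": an agent is granted only
# the tools a specific task requires; everything else is denied by default.
# All class, task, and tool names here are illustrative assumptions.

TASK_CAPABILITIES = {
    "check_order_status": {"read_orders"},
    "issue_refund": {"read_orders", "write_refunds"},
}

class ScopedAgent:
    def __init__(self, tools):
        self.tools = tools          # name -> callable
        self.active = frozenset()   # capabilities granted for the current task

    def begin_task(self, task):
        # Grant only the capabilities mapped to this task; unknown tasks get none.
        self.active = frozenset(TASK_CAPABILITIES.get(task, set()))

    def call_tool(self, name, *args):
        # Deny-by-default: even if the model is manipulated into requesting a
        # tool, the call fails unless the current task explicitly allows it.
        if name not in self.active:
            raise PermissionError(f"tool '{name}' not permitted for this task")
        return self.tools[name](*args)

agent = ScopedAgent({
    "read_orders": lambda oid: {"id": oid, "status": "shipped"},
    "write_refunds": lambda oid: {"id": oid, "refunded": True},
})

agent.begin_task("check_order_status")
print(agent.call_tool("read_orders", 42)["status"])  # allowed for this task
try:
    agent.call_tool("write_refunds", 42)  # injected request is blocked
except PermissionError as err:
    print(err)
```

The key property is that the permission check lives outside the model: the agent's language model can be tricked into *asking* for a dangerous tool, but the deterministic wrapper, not the model, decides whether the call executes.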

Looking ahead, Brauchler anticipates major AI security breaches in the near future, similar to past vulnerabilities seen in cloud computing and IoT. While some propose using AI to secure AI, he argues that AI should not be relied upon as a primary security control, as it remains a probabilistic system that can be manipulated. Instead, he emphasizes the need for strong architectural safeguards to ensure that even if an AI agent misbehaves, it lacks the necessary access to cause harm. As AI adoption accelerates, security professionals must proactively integrate risk mitigation strategies to prevent the exponential expansion of attack surfaces.
