Synopsis: Jim Richberg, head of cyber policy and global field CISO for Fortinet, dives into the challenges organizations will encounter as they build and deploy artificial intelligence (AI) agents.

In this Techstrong AI interview, Fortinet’s Global Field CISO Jim Richberg discusses the evolving AI landscape, highlighting the distinction between predictive (discriminative) and generative AI. Predictive models have long been used to classify and analyze data, while generative models, like large language models (LLMs), create new content based on patterns learned from data. Richberg notes that generative AI introduces more uncertainty because of its probabilistic nature, making it better suited to creative and knowledge-driven tasks than to deterministic applications like automation or safety-critical systems.
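The contrast Richberg draws can be made concrete with a toy sketch: a discriminative model maps an input to a label and gives the same answer every time, while a generative model samples new content from learned patterns, so its output varies. Everything below (the keyword rule, the Markov-chain generator, the sample text) is illustrative only and not from the interview.

```python
import random

# Predictive (discriminative): maps an input to a label, deterministically.
def classify_message(text: str) -> str:
    """Flag a message as 'spam' or 'ham' with a fixed keyword rule."""
    spam_words = {"winner", "prize", "urgent"}
    hits = sum(word in spam_words for word in text.lower().split())
    return "spam" if hits >= 1 else "ham"

# Generative: samples new sequences from learned patterns, probabilistically.
def generate_text(corpus: str, length: int = 8, seed=None) -> str:
    """Emit a new word sequence from a first-order Markov chain over corpus."""
    rng = random.Random(seed)
    words = corpus.split()
    transitions = {}
    for prev, nxt in zip(words, words[1:]):
        transitions.setdefault(prev, []).append(nxt)
    current = rng.choice(words)
    out = [current]
    for _ in range(length - 1):
        current = rng.choice(transitions.get(current, words))
        out.append(current)
    return " ".join(out)

print(classify_message("URGENT you are a winner"))   # same input, same label
print(generate_text("the model learns the data and the model samples"))
```

The classifier is auditable and repeatable, which is why discriminative models suit automation; the generator is only as predictable as its sampling, which is the uncertainty Richberg flags.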

Richberg emphasizes the importance of selecting the right AI model for specific use cases. While ChatGPT-like models excel at creative, freeform tasks, logic-based models such as DeepSeek are better for structured problem-solving, like drug discovery. He anticipates the rise of AI agents capable of dynamically selecting the appropriate model for each task and sees orchestration infrastructure evolving to support this flexibility. However, he warns that this complexity also introduces new security challenges, especially when switching between different models, data sets, and security protocols.
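The orchestration idea Richberg anticipates can be sketched as a simple router that inspects a task and picks a model family for it. The task keywords, model names, and routing rule here are assumptions for illustration, not any real agent framework.

```python
from dataclasses import dataclass

@dataclass
class ModelChoice:
    name: str       # hypothetical model identifier, not a real product
    family: str     # "generative" or "predictive"
    rationale: str

def route_task(task: str) -> ModelChoice:
    """Pick a model family from coarse task keywords (illustrative rule)."""
    creative = {"draft", "summarize", "brainstorm", "write"}
    structured = {"optimize", "classify", "simulate", "screen"}
    words = set(task.lower().split())
    if words & creative:
        return ModelChoice("freeform-llm", "generative",
                           "open-ended language task")
    if words & structured:
        return ModelChoice("logic-model", "predictive",
                           "structured, verifiable problem")
    return ModelChoice("freeform-llm", "generative", "default fallback")

print(route_task("draft a press release"))
print(route_task("screen candidate drug compounds"))
```

Even this toy router shows where the security complexity enters: each branch can imply a different model, data set, and security protocol, and the hand-off between them is a new surface to defend.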

Security, Richberg asserts, must be built in by default, much like the shared responsibility model developed in cloud computing. Currently, users bear too much of the burden of understanding and implementing AI security layers, which can leave exploitable gaps. He advises organizations to clearly define whether a use case requires generative or predictive AI, ensure the model is fit for purpose, and prioritize responsible, efficient implementation. As AI adoption accelerates, he stresses the continued relevance of foundational wisdom: choose tools carefully and deliberately to avoid costly inefficiencies and risks.