Synopsis: In this Techstrong AI video, Trevor Welsh, vice president of products for WitnessAI, dives into what will be required to deploy, manage and govern artificial intelligence (AI) agents.
In this interview, Welsh discusses how enterprises are moving past their early enthusiasm about generative AI into the more complex realities of implementation. While initial adoption was driven by excitement around tools like ChatGPT, organizations are now confronting challenges around model accuracy, hallucinations, ethical development, and predictability. Welsh emphasizes the importance of structured AI systems, comparing ideal models to well-trained Marines who can think creatively while still delivering predictable outcomes, and points out that relying solely on general-purpose models often creates friction in business workflows that demand consistency.
Welsh highlights how organizations can improve outcomes by combining multiple AI agents in a layered system: one model interprets the input, another manages the logic, and a final model verifies outputs for issues such as bias and hallucination (a pattern illustrated in the sketch below). He also underscores the growing importance of data governance, citing real-world examples such as Microsoft Copilot surfacing sensitive documents, and notes that many companies are only now realizing they need better control and visibility over their data. Welsh warns that as AI tools integrate across platforms, data exposure risks grow, even when users believe their information is isolated or private.
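Welsh describes this layering only at a conceptual level. As a rough illustration of the idea, the Python sketch below chains three model calls, with the final one acting as a verifier; the call_model helper, the role prompts, and the APPROVE convention are hypothetical placeholders, not a WitnessAI or vendor API.

# Rough sketch of the layered pattern Welsh describes: one model interprets the
# request, a second applies the business logic, and a third reviews the draft
# for hallucination or bias before anything is returned to the user.
# call_model is a hypothetical stand-in for whatever LLM API an enterprise uses.

def call_model(role_prompt: str, content: str) -> str:
    """Placeholder for a real model call (vendor SDK, internal gateway, etc.)."""
    raise NotImplementedError

def layered_agent_pipeline(user_input: str) -> str:
    # Layer 1: interpret the raw request into a precise task description.
    task = call_model("Restate the user's request as an unambiguous task.", user_input)

    # Layer 2: apply business logic to produce a draft answer.
    draft = call_model("Answer the task consistently, following company policy.", task)

    # Layer 3: verify the draft for unsupported claims or biased wording.
    verdict = call_model("Reply APPROVE if the text has no unsupported or biased claims.", draft)

    # Only release output that clears the verification layer.
    return draft if "APPROVE" in verdict else "Held for human review."

In practice the verification layer is what gives the workflow the predictability Welsh argues business processes require, since a draft that fails review never reaches the user.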
Finally, the conversation turns to security threats such as AI model poisoning, IP theft, and disinformation. Welsh shares concerns raised by Department of Defense officials about subtle manipulation of AI outputs, which could have long-term effects on users' beliefs and behaviors. He also describes how malicious actors might exploit AI interactions to extract valuable proprietary information, especially in industries like automotive design or pharmaceuticals. These risks, he notes, extend beyond the digital realm, since misinformation can quickly spread offline through casual conversation, underscoring the need for better oversight, ethical standards, and secure AI deployment strategies.