Synopsis: In this Techstrong AI video, Liran Hason, VP of AI at Coralogix, explains how the acquisition of Aporia will improve observability of AI applications.

In this Techstrong AI interview, Mike Vizard speaks with Liran Hason, VP of AI at Coralogix, about the company’s recent acquisition of Aporia and the launch of Coralogix AI. Hason explains that Aporia was founded to address the risks associated with AI by introducing observability and guardrails, ensuring AI applications operate responsibly. The integration of Aporia’s technology into Coralogix AI aims to provide organizations with real-time monitoring and governance tools to detect and mitigate AI issues such as hallucinations, inaccuracies, and unethical behavior.

Hason highlights the challenge of integrating AI into deterministic IT workflows, as AI operates probabilistically. To address this, Coralogix AI employs a network of small language models (SLMs) specializing in different AI risks, ensuring more precise oversight. He emphasizes the importance of setting clear boundaries for AI use cases, preventing models from making unreliable decisions outside their intended scope. This approach is particularly crucial for applications like customer service chatbots, where AI must remain confined to verified knowledge domains to avoid misleading users.
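The guardrail pattern described above can be sketched in code. This is a minimal, hypothetical illustration only: the checker functions below use simple keyword rules as stand-ins for the specialized small language models Hason describes, and all names (`Verdict`, `check_scope`, `run_guardrails`) are invented for this example rather than taken from Coralogix AI's actual API.

```python
# Hypothetical sketch: route a model response through several specialized
# checkers (stand-ins for risk-specific SLMs); block it if any checker fails.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Verdict:
    risk: str      # which risk category the checker covers
    passed: bool   # True if no issue was detected
    detail: str = ""


def check_scope(response: str) -> Verdict:
    # Keep a support chatbot inside its verified knowledge domain.
    off_topic = ["stock tip", "medical advice", "legal advice"]
    hit = next((w for w in off_topic if w in response.lower()), None)
    return Verdict("scope", hit is None, f"off-topic: {hit}" if hit else "")


def check_grounding(response: str) -> Verdict:
    # Flag unhedged certainty as a crude proxy for hallucination risk.
    suspicious = ["guaranteed", "definitely the case"]
    hit = next((w for w in suspicious if w in response.lower()), None)
    return Verdict("grounding", hit is None, f"ungrounded: {hit}" if hit else "")


def run_guardrails(response: str,
                   checkers: List[Callable[[str], Verdict]]) -> List[Verdict]:
    """Run every specialized checker; the caller blocks or rewrites on failure."""
    return [c(response) for c in checkers]


verdicts = run_guardrails(
    "Our premium plan is definitely the case for you, and here is a stock tip.",
    [check_scope, check_grounding],
)
blocked = [v for v in verdicts if not v.passed]
```

In a production system each checker would be backed by its own small model, which is what allows per-risk specialization instead of asking one large model to police itself.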

Looking ahead, Hason discusses the broader implications of AI governance, stressing the need for collaboration between government regulators, corporate leadership, and engineers. He also highlights the growing role of open-source AI, particularly with advancements like DeepSeek’s cost-reducing models, which make AI more accessible. However, he cautions that companies must still implement rigorous evaluation and observability frameworks before deploying AI at scale. Without proper oversight, many AI projects stall before reaching production, reinforcing the critical need for structured governance and continuous monitoring to ensure reliability and compliance.