Synopsis: SPLX CEO Kristian Kamber dives into the state of artificial intelligence (AI) model accuracy and security that, despite the arrival of new offerings, isn't improving.
The assumption was that as technology matured, safety would improve. But recent tests suggest otherwise — GPT-5, for example, doesn’t appear to be any more secure than GPT-4.
Kamber argues the core issue is focus. Most companies are pushing for bigger, faster, and more multimodal systems. Security often takes a back seat to data scale and feature sets. That creates risks, from hallucinations that undermine trust to vulnerabilities that adversaries can exploit. “If we’re not prioritizing safety, we’re simply building more powerful systems with the same flaws,” he warns.
The problem is compounded by the lack of industry standards. While some organizations invest in red-teaming and adversarial testing, others leave security as an afterthought. The result is a patchwork of approaches and inconsistent defenses. Kamber stresses that we need systematic, repeatable frameworks — not just one-off fixes — to measure and improve model reliability.
He highlights three areas demanding urgent attention:
-
Robust evaluation to ensure models can withstand malicious inputs.
-
Transparency and explainability so organizations can understand why models behave the way they do.
-
Alignment with human values, ensuring safety isn’t sacrificed for speed to market.
The takeaway is clear: scale without security is a recipe for disaster. As models get more capable, the stakes get higher — whether in finance, healthcare, or critical infrastructure. Kamber urges industry leaders to view security not as a competitive disadvantage but as the foundation for long-term trust and adoption.
In short, if AI is to be transformative, it must also be safe. And right now, that progress is far from guaranteed.