Synopsis: Manifest CTO Daniel Bardenstein explains why and how software bills of materials (SBOMs) will be extended to artificial intelligence (AI) applications to create AIBOMs that provide greater transparency.

In traditional application development, SBOMs are gaining traction as a way to bring transparency and accountability to the software supply chain. But Bardenstein argues that the AI ecosystem introduces entirely new challenges. Unlike conventional software, AI models are shaped not just by code but also by massive datasets, model architectures, training pipelines, and fine-tuning processes. All of these layers influence outcomes, yet most remain invisible to end users.

That lack of visibility has real consequences. Without a clear record of how an AI model was built, organizations risk deploying systems they don’t fully understand, exposing themselves to bias, compliance issues, and even security vulnerabilities. Bardenstein stresses that “trust without transparency” is unsustainable in AI adoption.

He outlines what an AI-specific SBOM, or AIBOM, might include: not only dependencies and frameworks but also dataset lineage, model weights, evaluation metrics, and the context in which a model was trained. This level of detail would allow organizations to validate claims, reproduce results, and meet emerging regulatory requirements.
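To make that list concrete, the sketch below shows one way such a record could be expressed and serialized. It is a minimal illustration under assumptions, not an existing standard: the field names (dataset lineage, weight hashes, evaluation metrics, training context) are drawn from the elements Bardenstein describes, and real AIBOM formats emerging in the SBOM ecosystem may structure this information differently.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetRecord:
    """Provenance for one training or fine-tuning dataset (illustrative fields)."""
    name: str
    source_uri: str
    license: str
    collected_on: str  # ISO date; a real standard might require richer lineage

@dataclass
class AIBOM:
    """Hypothetical AI bill of materials covering the layers described above."""
    model_name: str
    model_version: str
    architecture: str                                   # e.g. "transformer encoder"
    weights_sha256: str                                 # integrity hash of released weights
    frameworks: list[str] = field(default_factory=list)          # software dependencies
    datasets: list[DatasetRecord] = field(default_factory=list)  # dataset lineage
    training_context: str = ""                          # hardware, pipeline, fine-tuning notes
    evaluation_metrics: dict[str, float] = field(default_factory=dict)

# Example: emit the AIBOM as JSON so it can travel alongside the model artifact.
bom = AIBOM(
    model_name="example-classifier",
    model_version="1.2.0",
    architecture="fine-tuned transformer encoder",
    weights_sha256="<sha256-of-weights>",
    frameworks=["pytorch==2.3.0", "transformers==4.41.0"],
    datasets=[
        DatasetRecord(
            name="internal-support-tickets",
            source_uri="s3://example-bucket/tickets",
            license="proprietary",
            collected_on="2024-11-01",
        )
    ],
    training_context="fine-tuned for 3 epochs on 8xA100; base model: example-base-7b",
    evaluation_metrics={"accuracy": 0.91, "f1": 0.88},
)
print(json.dumps(asdict(bom), indent=2))
```

Serializing the record to JSON is one design choice that lets the manifest ship with the model and be checked automatically, much as conventional SBOMs are consumed in DevSecOps pipelines today.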

While building such a standard won’t be easy, Bardenstein notes that momentum is building across both industry and government. As AI becomes embedded in critical infrastructure, the need for a transparent supply chain will only grow more urgent.

The takeaway: Just as SBOMs have become a cornerstone of DevSecOps, an AI-native version could become the foundation of trustworthy and secure AI systems. Without it, enterprises risk scaling AI without the safeguards needed to ensure safety, accountability, and compliance.