Synopsis: Purnima Padmanabhan, general manager of Broadcom’s Tanzu Division, highlights a fundamental truth of enterprise AI: the biggest challenge isn’t always the models but the data.

Organizations talk about building smarter AI systems, but actually delivering the right data to those systems is harder than most realize. Padmanabhan notes that enterprises often rely on sprawling infrastructure (data lakes and pipelines, plus data engineers, developers, and scientists all trying to collaborate) just to make data usable. The process is complex, resource-heavy, and difficult to maintain, and updating or scaling those systems adds even more strain.

One of the major hurdles is data quality. AI can’t deliver accurate inferences if the data it consumes is incomplete, siloed, or out of sync. Padmanabhan points out that many enterprises still treat data integration as a one-off project rather than a continuous, evolving process. As a result, models often run on outdated or fragmented datasets, which undermines their effectiveness.
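To make that point concrete, here is a minimal sketch of the kind of completeness and freshness check a pipeline might run before handing data to a model. It is not Broadcom tooling; the record fields, thresholds, and function names are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical record shape: the fields, thresholds, and names below are
# illustrative assumptions, not anything described by Padmanabhan or Broadcom.
@dataclass
class CustomerRecord:
    customer_id: str
    region: Optional[str]
    last_updated: datetime

MAX_STALENESS = timedelta(days=7)            # assumed freshness threshold
REQUIRED_FIELDS = ("customer_id", "region")  # assumed completeness rule

def quality_report(records: list) -> dict:
    """Count incomplete or stale records before they reach a model."""
    now = datetime.now(timezone.utc)
    incomplete = [r for r in records
                  if any(getattr(r, f) in (None, "") for f in REQUIRED_FIELDS)]
    stale = [r for r in records if now - r.last_updated > MAX_STALENESS]
    flagged = {id(r) for r in incomplete + stale}
    return {
        "total": len(records),
        "incomplete": len(incomplete),
        "stale": len(stale),
        "usable": len(records) - len(flagged),
    }

if __name__ == "__main__":
    sample = [
        CustomerRecord("c-001", "EMEA", datetime.now(timezone.utc)),
        CustomerRecord("c-002", None, datetime.now(timezone.utc) - timedelta(days=30)),
    ]
    print(quality_report(sample))  # {'total': 2, 'incomplete': 1, 'stale': 1, 'usable': 1}
```

Even a check this simple surfaces the problem Padmanabhan describes: if a large share of records are incomplete or stale, whatever model sits downstream is already compromised.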

Broadcom’s focus is on helping enterprises make data movement and access more seamless: connecting mainframe and distributed environments, modernizing legacy infrastructure, and keeping data pipelines consistent and secure. Without that foundation, AI projects risk stalling before they even get off the ground.

The message is clear: before chasing new AI capabilities, enterprises need to get serious about building reliable data pipelines. That means aligning business and technical teams, simplifying integration, and treating data as a living resource that must constantly evolve.
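As a rough illustration of what "continuous rather than one-off" integration looks like in practice, here is a minimal incremental-sync sketch. The SQLite tables, the timestamp watermark, and the function name are assumptions made for the example, not details from the article.

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative only: the SQLite schema and the 'last_sync' watermark are
# assumptions for this sketch, not anything Broadcom or Padmanabhan describes.

def incremental_sync(source: sqlite3.Connection, target: sqlite3.Connection) -> int:
    """Copy only rows changed since the last run, so the dataset keeps
    evolving instead of being rebuilt as a one-off project."""
    row = target.execute(
        "SELECT value FROM sync_state WHERE key = 'last_sync'"
    ).fetchone()
    last_sync = row[0] if row else "1970-01-01T00:00:00+00:00"

    changed = source.execute(
        "SELECT id, payload, updated_at FROM records WHERE updated_at > ?",
        (last_sync,),
    ).fetchall()

    target.executemany(
        "INSERT OR REPLACE INTO records (id, payload, updated_at) VALUES (?, ?, ?)",
        changed,
    )
    target.execute(
        "INSERT OR REPLACE INTO sync_state (key, value) VALUES ('last_sync', ?)",
        (datetime.now(timezone.utc).isoformat(),),
    )
    target.commit()
    return len(changed)

if __name__ == "__main__":
    src, dst = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    src.execute("CREATE TABLE records (id TEXT PRIMARY KEY, payload TEXT, updated_at TEXT)")
    dst.execute("CREATE TABLE records (id TEXT PRIMARY KEY, payload TEXT, updated_at TEXT)")
    dst.execute("CREATE TABLE sync_state (key TEXT PRIMARY KEY, value TEXT)")
    src.execute("INSERT INTO records VALUES ('r1', 'hello', ?)",
                (datetime.now(timezone.utc).isoformat(),))
    src.commit()
    print(incremental_sync(src, dst), "rows synced")  # first run picks up 1 row
```

Run on a schedule, a loop like this keeps downstream datasets current instead of letting them drift the way a one-time migration would.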

AI may get the headlines, but as Padmanabhan emphasizes, the real work — and the real bottleneck — lies in preparing the data that makes AI possible. Without it, enterprises can’t scale AI beyond pilots or proofs of concept.