Synopsis: Maggie Laird, president of the Pentaho arm of Hitachi Vantara, explains why organizations in the age of artificial intelligence (AI) need to have a greater appreciation for the art and science of data engineering.

In this Techstrong AI interview, Mike Vizard speaks with Maggie Laird, president of the Pentaho business unit at Hitachi Vantara, about the growing trend of data repatriation in the age of generative AI. Laird explains that as organizations experiment with AI, they're constantly moving data between cloud and on-premises environments to strike a balance among control, cost, and performance. Companies are realizing that while the cloud is effective for training models, operationalizing AI often requires moving data closer to where it's used, in on-premises or edge environments, for better cost predictability and governance.

Laird emphasizes that these shifts are shining a spotlight on long-standing data management challenges. Many organizations are learning that data quality and trust are foundational to AI success: poor outcomes often stem not from flawed models but from flawed data. As AI initiatives scale, businesses must carefully evaluate what data is brought in, whether it's trustworthy, and whether it's suitable for the desired outcomes. This renewed focus is driving companies to revisit the basics, such as real-time streaming, data cataloging, and cleansing processes, to ensure the right data is in the right place at the right time.

The conversation also explores the changing landscape of data roles and governance. Laird notes that data engineering is becoming more critical, blending with automation and AI to democratize access and streamline workflows. With increasing regulatory pressure and growing C-suite interest in AI, transparency into how data is used and how models are trained is becoming essential. Organizations that approach AI with a detailed, strategic mindset, weighing workload placement, cost, security, and long-term scalability, are best positioned to succeed in this new data-driven era.