Synopsis: DataBank COO Joe Minarik and CTO Vlad Friedman discuss how responsibilities for building and deploying artificial intelligence (AI) applications within IT organizations are evolving.
In this episode of Techstrong AI, DataBank COO Joe Minarik and CTO Vlad Friedman join Mike Vizard to discuss the evolution of AI infrastructure. As demand for AI inference engines and high-performance computing grows, enterprises are increasingly shifting workloads from hyperscalers back to private or colocated data centers for greater control, cost efficiency, and data security. DataBank, which operates 70 facilities across 27 U.S. regions, plays a crucial role in providing the power and cooling infrastructure required to support AI deployments. The rise of generative AI is also reshaping traditional IT responsibilities, moving model management from data science teams to core infrastructure teams.
The conversation highlights a growing concern: energy consumption. With data centers already consuming 4% of U.S. electricity and projections suggesting that figure could double or triple, sustainability is becoming a central challenge. AI workloads demand significant power and advanced cooling systems, which strain energy grids, especially in densely populated regions. Joe Minarik notes that while efforts are underway to incorporate renewables and small modular nuclear reactors, those solutions are years away from full deployment. This creates urgency for more efficient compute strategies and infrastructure design as AI becomes a mainstream enterprise tool.
As enterprises move toward practical AI implementation, Vlad Friedman points out that the market is maturing, with models becoming more efficient and development teams leveraging AI to boost productivity by several hundred percent. However, AI's limitations still require skilled oversight, especially to avoid hallucinations or data exposure, and enterprises are investing more effort in understanding and securing how their data is used in training. Looking ahead, Vlad predicts a hybrid model architecture in which large language models (LLMs) use directional routing to access only the relevant sub-datasets, enhancing performance without ballooning infrastructure needs. AI, he stresses, isn't a magic bullet; it's a powerful tool that requires thoughtful application.
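Friedman describes directional routing only at a high level in the conversation. As a rough illustration of the idea, the minimal Python sketch below routes a query to one hypothetical sub-dataset using a toy keyword-overlap score (standing in for whatever classifier or embedding-based router a real system would use) so that only that subset lands in the model's context; the dataset names, documents, and function names are all assumptions for illustration.

```python
# Minimal sketch of "directional routing": match a query against sub-dataset
# descriptions and pull only the most relevant subset into the LLM context.
# All dataset names and documents here are hypothetical placeholders.

SUB_DATASETS = {
    "billing": ["invoice history", "payment terms", "pricing tiers"],
    "infrastructure": ["rack power density", "cooling capacity", "facility locations"],
    "security": ["access controls", "audit logs", "compliance reports"],
}

def route_query(query: str) -> str:
    """Pick the sub-dataset whose documents best overlap the query terms."""
    query_terms = set(query.lower().split())
    scores = {
        name: sum(term in query_terms for doc in docs for term in doc.split())
        for name, docs in SUB_DATASETS.items()
    }
    return max(scores, key=scores.get)

def build_prompt(query: str) -> str:
    """Assemble a prompt containing only the routed subset, not the full corpus."""
    subset = SUB_DATASETS[route_query(query)]
    context = "\n".join(f"- {doc}" for doc in subset)
    return f"Context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    # Routes to the "infrastructure" subset; the other subsets never enter the prompt.
    print(build_prompt("what is the cooling capacity of each facility"))
```

The point of the sketch is the shape of the architecture rather than the scoring method: by narrowing the context to a relevant slice of data before inference, the model handles large corpora without every query paying the compute and infrastructure cost of the whole dataset.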