Synopsis: Kamal Ahluwalia, president of Ikigai Labs, explores the shift from large language models (LLMs) to small language models (SLMs) tailored to specific domains for enterprise applications, where cost, relevance and data privacy are key priorities. Ahluwalia emphasizes that while LLMs offer broad capabilities, SLMs provide tailored, efficient solutions that align better with the specific needs of businesses.

In this Techstrong AI video interview, host Mike Vizard sits down with Kamal Ahluwalia, president of Ikigai Labs, to discuss the evolving landscape of language models, particularly the shift from large language models (LLMs) to more specialized small language models (SLMs). Ahluwalia explains that while LLMs were groundbreaking, their limitations in cost, scalability and specificity have paved the way for SLMs tailored to domain-specific tasks. He notes that SLMs deliver the precision and contextual relevance enterprises need without compromising security, since they can be deployed within a company's own virtual private cloud. This shift also addresses the high computational costs associated with LLMs, making AI more accessible and efficient for a wider range of organizations.

Ahluwalia further describes a future where AI workflows will blend LLMs, SLMs and AI agents, each orchestrated to perform distinct functions. He predicts that this approach will bring greater automation and efficiency to workplaces, with some roles evolving significantly and new ones emerging, such as AI prompt engineering and model observability. As this shift accelerates, companies will increasingly adopt agent-based AI architectures, where agents perform specialized tasks within broader workflows. Ahluwalia concludes by reflecting on how advancements in automation will free people to focus on more meaningful tasks, much as technology has simplified our lives in other areas.