Synopsis: In this AI Leadership Insights video interview, Amanda Razani speaks with Dr. Jignesh Patel, co-founder of DataChat, about how best to train LLMs and the roadblocks business leaders face in selecting and using these models.

In this episode of the AI Leadership Insights series, Amanda Razani interviews Jignesh Patel, co-founder of DataChat and a professor at Carnegie Mellon University, about his extensive experience in the field of data and the evolution of large language models (LLMs). Patel shares his journey through various technological waves, from the internet revolution to the current AI boom, and discusses the importance of high-quality data in training LLMs. He emphasizes the need for robust statistical methods to identify and handle outliers, which can significantly impact the performance of AI models.
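The interview does not spell out which statistical techniques Patel has in mind, but one common robust approach to flagging outliers is the median absolute deviation (MAD), which, unlike mean-and-standard-deviation filtering, is not itself skewed by the extreme values it is trying to detect. The sketch below is purely illustrative; the function name and threshold are assumptions, not anything described by Patel or DataChat.

```python
# Illustrative sketch only: MAD-based outlier flagging, one common robust method.
# The interview does not specify the methods Patel or DataChat actually use.
import numpy as np

def mad_outliers(values: np.ndarray, threshold: float = 3.5) -> np.ndarray:
    """Return a boolean mask marking values whose modified z-score exceeds `threshold`."""
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    if mad == 0:
        # No spread around the median: nothing can be flagged this way.
        return np.zeros(values.shape, dtype=bool)
    # 0.6745 rescales the MAD so the score is comparable to a z-score for normal data.
    modified_z = 0.6745 * (values - median) / mad
    return np.abs(modified_z) > threshold

# A single extreme value is flagged without distorting the estimate of "typical".
data = np.array([10.2, 9.8, 10.1, 10.4, 9.9, 250.0])
print(mad_outliers(data))  # [False False False False False  True]
```

The design point this illustrates is the one Patel raises: a handful of bad records can dominate training data unless the detection method itself is resistant to them.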

Patel also explores the challenges businesses face in training LLMs, particularly the rapid pace of technological change and the complexities of managing data silos within organizations. He highlights the differences between large and small language models, noting that while large models are more generalist and costly to run, small models can be more efficient and specialized. Patel concludes by discussing the future of LLMs and the importance of maintaining human oversight in AI deployments to prevent problems, stressing that collaboration between AI and human decision-makers is crucial for successful implementation.