Synopsis: In this Techstrong AI video, Nicholas Mattei, professor of computer science at Tulane University, explains the limits of the inductive reasoning that generative artificial intelligence (AI) applications depend on.
In this interview, Mike Vizard talks with Dr. Nicholas Mattei, professor of computer science at Tulane University, about the foundational role of inductive reasoning in artificial intelligence (AI). They explore how current generative AI platforms rest on machine learning principles that date back to the 1950s and 1960s. Dr. Mattei explains that these models learn from past data to make predictions, a reliance on historical data that can raise ethical issues because the models may perpetuate past biases. He emphasizes that while the techniques themselves are not new, increased computing power and data availability have amplified their capabilities. The inherent limitations of using past data to predict future outcomes, however, remain a critical concern.
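To make the bias-perpetuation point concrete, here is a minimal, hypothetical sketch (the data and function names are invented for illustration, not drawn from the interview): an inductive model that simply estimates hiring rates per group from historical records will carry any imbalance in that history forward into its predictions.

```python
# Hypothetical historical hiring records: (group, hired) pairs.
# Group "A" was hired 80% of the time in the past, group "B" only 30%.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 30 + [("B", False)] * 70

def learn_rates(records):
    """Inductive step: estimate P(hired | group) from past data alone."""
    counts, hires = {}, {}
    for group, hired in records:
        counts[group] = counts.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + (1 if hired else 0)
    return {g: hires[g] / counts[g] for g in counts}

rates = learn_rates(history)

def predict(group):
    """The 'model' projects the historical rate forward, so any bias
    in the past data is reproduced in future decisions."""
    return rates[group] >= 0.5
```

Real systems use far richer features and models, but the structural issue is the same: the model has no source of truth other than the past it was trained on.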
Vizard and Mattei then turn to the probabilistic nature of AI models and the challenges of applying them to deterministic problems. Dr. Mattei notes that machine learning models, including large language models, can be noisy and are best suited to low-stakes decisions where errors are cheap, such as content recommendations on streaming platforms. He stresses the importance of integrating more structured data and traditional AI techniques to improve the reliability of these systems in high-stakes decisions such as credit eligibility or hiring. The conversation also touches on data security and the need for regulatory frameworks that govern the use of personal data and give people recourse against algorithmic decisions. Dr. Mattei suggests that while the AI models themselves may not need strict regulation, the data inputs and outputs, along with users' rights, should be closely monitored and governed.
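The low-stakes versus high-stakes distinction can be sketched as a simple expected-cost calculation (the numbers here are assumptions for illustration, not figures from the interview): the same error rate that is harmless for a movie recommendation becomes unacceptable when each mistake is a wrongful credit denial.

```python
# Hypothetical setup: a noisy predictor that is wrong 10% of the time.
ERROR_RATE = 0.10

def expected_cost(cost_per_error, decisions=1000):
    """Expected total cost of errors over a batch of automated decisions."""
    return ERROR_RATE * decisions * cost_per_error

# Same model, same error rate, very different exposure:
low_stakes = expected_cost(cost_per_error=0.01)      # a bad recommendation
high_stakes = expected_cost(cost_per_error=10_000)   # a wrongful denial
```

This is why the same level of model noise that is tolerable in a streaming recommender argues for structured data, traditional AI techniques, and human oversight in credit or hiring decisions.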