DeepMind, Google’s artificial intelligence research lab, has recently developed a new AI system called AlphaGeometry. The system has demonstrated the ability to solve geometry problems at a level comparable to the world’s brightest high school mathletes. This development suggests an optimistic path toward artificial general intelligence: combining neural networks, which detect patterns and make predictions, with symbolic deduction engines, which reason rigorously from those predictions.

AlphaGeometry aims to emulate human cognition by integrating two types of intelligence. According to Nobel Prize-winning psychologist Daniel Kahneman in his book “Thinking, Fast and Slow,” the human mind has two systems of thinking – System 1, which is fast and intuitive, and System 2, which is slower and more deliberate. Neural networks act as System 1 by quickly proposing constructive ideas, while symbolic deduction engines resemble System 2 by methodically assessing the validity of those ideas.

System 1 Thinking

  • Fast, Automatic and Effortless: System 1 operates automatically and quickly, with little effort and no sense of voluntary control. It includes the automatic reactions and quick judgments we make without deliberate analytical effort. For example, when you pull your hand back from a hot stove, recognize a friend’s face or understand a simple sentence, you are using System 1 thinking.

  • Intuitive and Emotional: System 1 is often driven by emotions and instincts. It can generate powerful feelings and impressions that influence our decisions and judgments without conscious awareness.
  • Error-Prone: While System 1 is efficient for handling routine tasks and making rapid decisions, it’s prone to biases and errors. It often relies on heuristics (mental shortcuts) that can lead to systematic mistakes in complex situations.

System 2 Thinking

  • Slow, Effortful and Deliberate: System 2 requires attention and mental effort. It’s invoked when we engage in complex computations, focus on a challenging task or deliberately choose between multiple options. System 2 thinking is responsible for solving a math problem, making a budget or planning a vacation.
  • Analytical and Logical: This system is characterized by its ability to analyze and apply logic to a problem. It’s more conscious and rational, capable of reasoning through complex situations, evaluating evidence, and making judgments based on facts and analysis.
  • Lazy Controller: Despite its capabilities, System 2 is often described as “lazy” because it requires significant mental energy. Our brains tend to conserve energy by defaulting to System 1 whenever possible. System 2 will only engage when necessary, such as when a task cannot be solved by the fast, automatic responses of System 1 or when a situation explicitly demands focused attention.

Understanding how System 1 and System 2 thinking interact can help improve decision-making and problem-solving in many areas of life. On its own, System 1 is prone to errors, while System 2 is slow and effortful. Working together, they can be very effective – System 1 provides inspiration and direction, while System 2 applies rigorous checks that keep System 1 grounded. AlphaGeometry takes advantage of this collaboration through a neuro-symbolic architecture, in which the predictive pattern recognition of a neural language model feeds into the deliberate deductive reasoning of a logic-based system.


AlphaGeometry uses a neuro-symbolic approach that enables it to solve complex, olympiad-level geometry problems. This approach combines a neural language model, which acts as the intuition by suggesting new points, lines or circles to add to the diagram as necessary, with a symbolic deduction engine, which reasons step by step from these starting points, uncovering layers of mathematical relationships until the solution is found. This allows AlphaGeometry to create new geometric constructions when necessary and, ultimately, to solve the problem.
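
To make this division of labor concrete, here is a minimal Python sketch of a propose-and-verify loop in the spirit of that architecture. The names used here (solve, propose, deduce) are illustrative placeholders rather than DeepMind’s actual API, and the real system is considerably more sophisticated.

```python
# A minimal sketch of a neuro-symbolic propose-and-verify loop.
# Function and parameter names are illustrative placeholders, not DeepMind's API.

def solve(premises, goal, propose, deduce, max_steps=50):
    """Alternate a neural 'System 1' proposer with a symbolic 'System 2' deducer.

    premises: starting facts read off the problem statement
    goal:     the statement to be proved
    propose:  callable(facts, goal) -> a new construction (point, line, circle) or None
    deduce:   callable(facts) -> set of facts derivable from the current facts
    """
    facts = set(premises)
    for _ in range(max_steps):
        facts |= deduce(facts)               # System 2: exhaustive, rule-based deduction
        if goal in facts:
            return facts                     # the goal now follows from verified steps
        suggestion = propose(facts, goal)    # System 1: intuitive leap to a new construct
        if suggestion is None:
            return None                      # the proposer has nothing left to offer
        facts.add(suggestion)
    return None                              # unsolved within the step budget
```

In AlphaGeometry itself, the proposer is a neural language model and the deducer is a specialized symbolic geometry engine; the sketch simply treats both as interchangeable callables.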

On its own, neither the language model nor the symbolic engine is sufficient for discovering and verifying mathematical knowledge. Used together, however, they can successfully uncover and validate it – a promising approach to advancing AI.

Many experts in artificial intelligence believe that achieving artificial general intelligence (AGI) remains an unsolved problem. This new DeepMind breakthrough is both fascinating and a little unsettling. Although AlphaGeometry is limited to geometry, consider how a deep learning system would handle Kahneman’s famous hypothetical question about Julie’s GPA. The question is as follows:

 Julie is currently a senior in a state university. She read fluently when she was four years old. What is her grade point average (GPA)?

Consider how a neural network, such as a large language model (LLM), might interpret this question. It first parses the text, identifying key elements such as “Julie,” “senior in a state university,” and “read fluently when she was four years old,” to understand its semantic content. It then contextualizes that information against what it learned during training, including general knowledge about educational achievement, early reading ability and their potential implications for academic performance. Finally, the model extracts whatever relevant information might inform an answer; if present in its training data, this could include general trends about early readers and their academic achievement or more direct information about GPA outcomes.

Here’s how GPT-4 answered the question:

 Without specific details about Julie’s coursework, major or academic performance in college, it’s impossible to accurately guess her GPA. However, if we were to make a broad, speculative assumption based on the correlation between early literacy and academic success, we might assume that Julie could have a GPA in the higher range, potentially around 3.5 to 4.0 on a 4.0 scale, which is often considered excellent in many academic institutions.

 This speculation is based on the premise that early reading fluency is an indicator of good learning capabilities and might correlate with higher academic achievement. Nonetheless, it’s important to remember that this is a very general assumption and real-world outcomes can vary widely due to the multitude of factors affecting academic performance.

It is important to note that a response from a large language model (LLM) like GPT-4 (Generative Pre-trained Transformer) is an example of System 1 thinking. This approach, however, ignores what Kahneman calls “shared factors,” such as family support, academic interest and the many other influences on Julie’s GPA. In his book, Kahneman proposes an alternative formula for predicting Julie’s GPA that takes some of these shared factors into account.

(Thinking, Fast and Slow, p. 190) Recall that the correlation between two measures—in the present case reading age and GPA—is equal to the proportion of shared factors among their determinants. What is your best guess about that proportion? My most optimistic guess is about 30%. Assuming this estimate, we have all we need to produce an unbiased prediction. Here are the directions for how to get there in four simple steps:

  1. Start with an estimate of average GPA.
  2. Determine the GPA that matches your impression of the evidence.
  3. Estimate the correlation between reading precocity and GPA.
  4. If the correlation is .30, move 30% of the distance from the average to the matching GPA.
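
Kahneman’s four steps reduce to a single calculation: shrink the intuitive “matching” estimate toward the average in proportion to the estimated correlation. The short Python sketch below works through the recipe; the average GPA and matching GPA are assumed values chosen only for illustration, not figures from the book.

```python
def regressive_prediction(mean, matching_estimate, correlation):
    """Kahneman's corrected prediction: move from the average toward the
    intuitive estimate by a fraction equal to the correlation."""
    return mean + correlation * (matching_estimate - mean)

# Illustrative, assumed inputs (not figures from the book):
average_gpa = 3.0    # step 1: estimate of the average GPA
matching_gpa = 3.8   # step 2: GPA that matches the impression of the evidence
correlation = 0.30   # step 3: estimated correlation between reading precocity and GPA

prediction = regressive_prediction(average_gpa, matching_gpa, correlation)
print(f"{prediction:.2f}")  # step 4: 30% of the way from 3.0 toward 3.8 -> 3.24
```

With these assumed inputs, the corrected prediction of 3.24 lands between the average and the impressive-sounding intuitive guess, which is the point of the procedure.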

The result illustrates regression to the mean, a pattern Kahneman discusses at length: the tendency for extreme estimates to be pulled back toward the average – or, one might say, closer to the truth. This corrected estimate comes from System 2 thinking, which involves more deliberate and effortful reasoning. AlphaGeometry’s neuro-symbolic architecture is an innovative approach to addressing some forms of AI bias in the same spirit: one system generates quick, intuitive ideas, while the other engages in more rational and deliberate decision-making.

Further research in this area has the potential to produce exciting results.
