Something that almost no school or college teaches us, yet we all do organically every day, is make decisions in one of two ways: forming a conclusion in the moment without much thought, or ruminating over the problem before putting together a fitting solution.
In psychology, this is called dual process theory. Experts believe that these inferential reasoning styles, long thought unique to humans, are the hidden formula for making AI agents smarter and sharper.
Giving AI Agency and Autonomy
Over the past few months, companies have poured huge investments into building new versions of AI models that can reason their way through problems using the second mode of thought.
The dual process theory states that human decision-making is either automatic and spontaneous (System 1), or conscious and controlled (System 2).
In his influential book, “Thinking, Fast and Slow,” Nobel laureate and psychologist Daniel Kahneman notes, while exploring the differences between the two styles, that System 1 is the default mode of thinking because it conserves cognitive energy, while System 2, a more deliberate and analytical process, yields the most accurate solutions to complex problems. Unsurprisingly, the latter approach takes more work and focus, and is therefore reserved for the thorniest problems.
In December, when Google announced what CEO Sundar Pichai touted as the company’s “most thoughtful model yet,” and OpenAI followed just a day later with o3, its “most capable” AI model, it became clear that this is the line of pursuit for every big AI company in the second wave of the GenAI revolution.
In the first phase, makers focused on building a volley of general-purpose AI assistants that fielded questions fairly cleverly. But owing to gaps in knowledge engineering, these assistants, though significantly faster and smoother than their predecessors, could only handle frequently asked questions or perform basic, routine tasks without imagination.
One thing the newly released models have in common is that they are domain-specific, and can mimic, extremely well, the intricate decision-making and problem-solving skills observed in humans.
AI is maturing every minute, and its next frontier is agentic AI. These new and improved agents will not require explicit inputs from users to perform tasks; nor will they serve template responses to every query.
Infusing LLMs with genuine reasoning and logic allows the agents to take on more complex problems – and act on their own.
“I see agentic AI as a way to take maybe not very well-formed inputs, and generate much better formed outputs or decisions based on those inputs that don’t necessarily have a very strict structure,” said Stephen Foskett, president of Tech Field Day – a division of The Futurum Group – at a talk on agentic AI at the recent AI Data Infrastructure Field Day event.
Although agentic AI is in its early stages and is currently used in only 1% of applications, Gartner predicts it will be embedded in 33% of enterprise applications by 2028.
Futurum Intelligence projects the AI software market to exceed $238B by 2029, growing at a 13.4% CAGR. Of the companies it surveyed, 82% said they plan to integrate AI agents within the next three years.
The Source of LLMs’ Advanced Reasoning Skills
You might be wondering where this human-level reasoning capability in LLMs comes from. The technology fueling the new generation of agentic AI solutions under the hood is a reasoning engine. Simply put, a reasoning engine is an AI system that gives AI agents their reasoning ability. Its job is to organize models, data, business logic, workflows, and events into one cognitive flow. This allows the agents to perceive nuances in user queries and requests, make logical decisions, and, most importantly, evaluate their own responses. As a result, they are able to take actions and generate results more faithfully and contextually.
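To make that concrete, here is a minimal sketch of what such a cognitive flow could look like. Everything in it, the llm() stub, fetch_records(), and the rule table, is a hypothetical placeholder for illustration, not any vendor’s actual API:

```python
# A minimal sketch, not a real implementation: how a reasoning engine might
# tie a model, data sources, and business logic into one flow. llm(),
# fetch_records(), and BUSINESS_RULES are all hypothetical placeholders.

def llm(prompt: str) -> str:
    """Stand-in for a call to the underlying language model."""
    raise NotImplementedError  # wire in a real model client here

BUSINESS_RULES = {
    "refund": "Refunds over $500 need manager approval.",
    "shipping": "Orders ship within 2 business days.",
}

def fetch_records(topic: str) -> list[str]:
    """Stand-in for a lookup against enterprise data sources."""
    return []

def handle(request: str) -> str:
    # Perceive: classify the request so the right rules and data apply.
    topic = llm(f"Classify this request as 'refund' or 'shipping': {request}").strip()
    # Gather: pull relevant records plus the governing business rule.
    context = fetch_records(topic) + [BUSINESS_RULES.get(topic, "")]
    # Decide and act: answer grounded in that context rather than a script.
    return llm(f"Using only this context {context}, resolve: {request}")
```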
The outputs are a striking departure from the scripted, run-of-the-mill responses produced by general-purpose AI assistants. Contextual awareness gives an AI application the ability to process each request differently based on semantic analysis. The reasoning engine can apply logical rules to the knowledge base, connecting chunks of disconnected data and surfacing new facts and hidden meanings. One popular variety, the rule-based reasoning engine, excels at using rule sets to infer values that are not explicitly stated in the data.
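A toy forward-chaining example shows the idea. The facts and the single rule below are invented for illustration, but the fixed-point loop is the essence of how rule-based engines derive facts that were never stated outright:

```python
# A toy forward-chaining rule engine: it applies an if-then rule to known
# facts until no new facts emerge, inferring values never entered directly.
# The facts and rule are made up purely for illustration.

facts = {("alice", "manages", "bob"), ("bob", "manages", "carol")}

def rule_transitive(fs):
    """If X manages Y and Y manages Z, infer X is senior to Z."""
    new = set()
    for (x, r1, y) in fs:
        for (y2, r2, z) in fs:
            if r1 == r2 == "manages" and y == y2:
                new.add((x, "senior_to", z))
    return new

changed = True
while changed:                       # iterate to a fixed point
    inferred = rule_transitive(facts)
    changed = not inferred <= facts  # stop once nothing new is derived
    facts |= inferred

# facts now includes ("alice", "senior_to", "carol"), a value no one
# explicitly stated -- exactly the kind of hidden fact these engines surface.
print(facts)
```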
With a clearer understanding of the relations between datasets, applications can make intelligent selections of data sources when processing requests.
Many AI applications today falter when faced with novel prompts. Certain problems must be broken down into multiple steps, with each step executed methodically. Reasoning engines give LLMs the ability to deliberate over a query in the System 2 mode of thought, applying the inference mechanisms required to draw logical conclusions.
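In code, that System 2-style deliberation might look something like the following sketch, again assuming a hypothetical llm() interface:

```python
# A sketch of System 2-style deliberation: decompose the query into steps,
# then execute each step in order, feeding intermediate work forward.
# llm() remains a hypothetical stand-in for a model call.

def llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical model call

def deliberate(query: str) -> str:
    # Plan: ask the model to break the problem into ordered steps.
    plan = llm(f"List, one per line, the steps needed to solve: {query}")
    steps = [line for line in plan.splitlines() if line.strip()]

    # Execute: work through the plan methodically, carrying context forward.
    scratchpad = ""
    for step in steps:
        result = llm(f"Problem: {query}\nWork so far:{scratchpad}\nNow do: {step}")
        scratchpad += f"\n{step} -> {result}"

    # Conclude: draw the final logical inference from the accumulated work.
    return llm(f"Given this worked solution:{scratchpad}\nFinal answer to: {query}")
```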
Because of reasoning engines, agentic AI solutions are able to weigh their own outputs, spotting errors and inconsistencies and correcting them before a response is generated.
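A simple critique-and-revise loop captures the pattern; the llm() stub and the two-revision cap here are assumptions for illustration, not a production policy:

```python
# A sketch of the self-evaluation pass: the engine critiques its own draft
# and revises it before anything reaches the user.

def llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical model call

def answer_with_check(query: str, max_revisions: int = 2) -> str:
    draft = llm(f"Answer: {query}")
    for _ in range(max_revisions):
        critique = llm(
            f"Check this answer to '{query}' for errors or inconsistencies. "
            f"Reply OK if sound, otherwise list the problems:\n{draft}"
        )
        if critique.strip().upper().startswith("OK"):
            break  # the draft survived its own review
        draft = llm(f"Fix these problems:\n{critique}\n\nOriginal answer:\n{draft}")
    return draft
```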
The real power of reasoning engines is on full display in the recently launched Salesforce Agentforce, an agentic AI platform that offers out-of-the-box autonomous agents for various business requirements. The platform relies on the Atlas Reasoning Engine, a purpose-built technology that serves as Agentforce’s “chain of thought” for delivering specialized agentic AI solutions. Atlas can chew through raw data and metadata and deliver intelligent actions “mimicking or even surpassing human thinking,” says Salesforce.
As AI companies continue to search for new ways to elevate models’ reasoning capabilities, reasoning engines will be critical to infusing AI copilots with a formal understanding of complex concepts, enabling them to fulfill complex and advanced tasks with greater finesse than their human counterparts.