The launch of ChatGPT in late 2022 unleashed a torrent of varied opinions about artificial intelligence. Everyone from the top experts in the field to politicians to the anchors of the Sunday morning talk shows seemed to have a viewpoint about AI, including what its impact would be on our society, and perhaps most importantly, how it would evolve.

The one thing all of these viewpoints had in common, though, was agreement that ChatGPT (and, for that matter, many other current AI initiatives) displays some very impressive capabilities. While that is absolutely true, I would argue that most current AI solutions are ultimately limited by their dependence on machine learning and its underlying backpropagation algorithm. Because they rely on progressively larger and more complex data sets, the AI systems we encounter today generally possess superhuman capabilities in very narrow areas but lack the common sense of the average three-year-old.

Put another way, the AI that we experience today merely gives the illusion of understanding. In reality, it is simply manipulating symbols and code samples without any real knowledge of what those symbols and samples mean. And because there is no true understanding, today’s AI is little more than a tool – albeit a very impressive tool – that can be wielded by humans to achieve certain outcomes, depending on the intentions of its users.

If that is the case, can AI ever evolve to the point where it is no longer dependent on massive data sets but is actually capable of thinking and understanding on its own? The answer is a resounding yes – but not if we continue to chase after bigger and better training sets. Bottom line: data sets simply cannot store enough examples to cover every conceivable situation that might present itself. Moreover, despite the best efforts of numerous researchers, data sets do not integrate information from multiple applications in the same seamless way that humans can merge vision, language and hearing.

For today’s AI to advance to something that more closely resembles human-like intelligence – better known as artificial general intelligence, or AGI – it must be able to explore, experiment and learn about real objects, interpreting everything it knows in the context of everything else it knows, just like that average three-year-old does. In short, it must rely not on datasets but on a more biologically plausible system that is modelled after the human brain, with algorithms that enable it to build abstract “things” with limitless connections and context.

If we examine the human brain, we find that it has a volume of less than 1.5 liters and runs on only about 12 W of power – proof that such a thinking system can physically exist. We also know that the structure of the neocortex, the part of the brain that thinks, is governed by a small portion of our DNA – an amount of information significantly smaller than many existing AI programs. The remaining obstacle to modelling AI on the human brain is that we still don’t know precisely how the brain works, although advances in brain-mapping technology may remove that obstacle in the not-too-distant future.

That limitation aside, if we presume that generalized intelligence is a direct outgrowth of the structure defined by our DNA, and that this structure could be defined by as little as one percent of it, then the emergence of AGI depends not on computing power or data sets but on discovering the fundamental AGI algorithms. With that in mind, it is feasible that a broader capacity for understanding and learning could emerge if today’s AI systems were built on a common, underlying data structure that let their algorithms gradually work with one another, eventually producing a more general intelligence that approaches the threshold of human-level intelligence.

How do we get to that common, underlying data structure? Let me begin with an example. If you know that red and green are colors, I can ask you to name some colors and you can include red and green on your list – the inverse of the initial information. Then, if you are told that “foo” and “bar” are also colors, you can immediately respond to the directive “Name some colors” with “red, green, foo and bar.”

The fact that you can learn such information in a single presentation, and immediately produce its inverse, using neurons so slow they could perform only a few operations in that timeframe, is evidence that much of the knowledge in your brain is stored in some sort of graph structure – a collection of nodes connected by edges. You could imagine a “parent” node for color with “children” including red, green, foo and bar.
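
To make that concrete, here is a minimal Python sketch – the names and structure are hypothetical, purely for illustration – of the parent/child graph just described, showing how a single presentation of a fact supports both the forward query and its inverse:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Knowledge as a graph: nodes joined by edges (illustrative sketch)."""

    def __init__(self):
        self.children = defaultdict(set)  # parent -> child nodes
        self.parents = defaultdict(set)   # child  -> parent nodes

    def learn(self, child, parent):
        # One-shot learning: a single presentation stores the fact.
        self.children[parent].add(child)
        self.parents[child].add(parent)

    def name_some(self, parent):
        # The inverse query: "Name some colors."
        return sorted(self.children[parent])

    def what_is(self, child):
        # The forward query: "What is red?"
        return sorted(self.parents[child])

g = KnowledgeGraph()
for word in ("red", "green", "foo", "bar"):
    g.learn(word, "color")

print(g.what_is("red"))      # ['color']
print(g.name_some("color"))  # ['bar', 'foo', 'green', 'red']
```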

With that in mind, think of nodes as analogous to neurons and edges as analogous to synapses. (Simulation suggests that these nodes and edges must actually be clusters of neurons and synapses.) Retrieving information in your brain is then simply a matter of firing the color neuron and seeing which neurons fire in response through their synaptic connections; storing information is a matter of strengthening specific synapses so they become significant.
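
A sketch of that firing-and-strengthening idea, assuming weighted edges and an arbitrary significance threshold (both assumptions are mine, not drawn from any particular system):

```python
SIGNIFICANT = 1.0  # assumed threshold above which a "synapse" matters

weights = {}  # (source, target) -> synaptic strength

def strengthen(src, dst, amount=1.0):
    """Storing: strengthen the synapse from src to dst."""
    weights[(src, dst)] = weights.get((src, dst), 0.0) + amount

def fire(src):
    """Retrieving: fire src and collect the nodes that fire in response."""
    return [dst for (s, dst), w in weights.items()
            if s == src and w >= SIGNIFICANT]

strengthen("color", "red")
strengthen("color", "green")
weights[("color", "blue")] = 0.2  # a weak synapse: present but not yet meaningful

print(fire("color"))  # ['red', 'green'] -- the weak edge stays silent
```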

It is important to add that in your brain, strengthening a synapse can be very quick – just a few milliseconds – while growing new synapses takes hours. Given that, anything you learn in a reasonable timeframe must be based on the strengthening of existing synapses, implying that your brain maintains a huge number of synapses that have no meaning yet but are simply waiting to be used. A computer, in contrast, has no such limitation, because edges can be added to a graph nearly as quickly as they can be modified. This means a computer implementing an identical neural graph could be built with 10,000-fold fewer synapses than a biological brain.

With that in mind, a biologically plausible AI system modelled after the human brain would likely contain a “self-adaptive” graph structure, capable of handling incoming information, determining where that information might be placed in the graph and then putting it there.
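
As a rough illustration of what “self-adaptive” could mean in practice – the statement format and placement rule below are my own assumptions for the sake of the example – incoming information is parsed, checked against what the graph already holds and inserted where it fits:

```python
graph = {}  # child -> parent (a deliberately tiny toy graph)

def ingest(statement):
    """Accept statements like 'foo is a color' and place them in the graph."""
    child, sep, parent = statement.partition(" is a ")
    if not sep:
        return             # not a form this toy parser understands
    if graph.get(child) == parent:
        return             # already known; nothing new to place
    graph[child] = parent  # determine where the fact belongs and put it there

for s in ("red is a color", "green is a color", "robin is a bird"):
    ingest(s)

print(graph)  # {'red': 'color', 'green': 'color', 'robin': 'bird'}
```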

Unlike a machine learning model, in which the meanings of the individual perceptrons of an ANN are not known, this graph structure could determine the meaning of any individual node. That means that once the graph made a decision, it could also explain why it made that decision. The graph structure could also handle multi-sensory information and be designed for very quick interactions. As a result, it could handle incoming information in a more humanlike way, where a great deal of data is stored in the short term and then rapidly forgotten if it proves to be irrelevant or false.
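
Because every node carries a known meaning, a “why” answer can be produced simply by returning the chain of edges behind a conclusion. A small sketch, with illustrative facts of my own choosing:

```python
edges = {
    "robin": [("is-a", "bird")],
    "bird":  [("is-a", "animal")],
}

def explain(start, goal, path=()):
    """Depth-first search for a chain of relations linking start to goal."""
    if start == goal:
        return list(path)
    for relation, nxt in edges.get(start, []):
        found = explain(nxt, goal, path + ((start, relation, nxt),))
        if found is not None:
            return found
    return None

# "Why do you say a robin is an animal?" -- the graph can show its work:
print(explain("robin", "animal"))
# [('robin', 'is-a', 'bird'), ('bird', 'is-a', 'animal')]
```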

Bottom line: such a graph structure would offer one-shot learning, more efficient storage, significantly faster retrieval, better handling of ambiguity, correction of false data and the ability to create generalized relationships between different data types. This, in turn, would allow AI to advance to something that more closely resembles human intelligence, ultimately hastening the emergence of AGI.
