Artificial Intelligence: Friend or Foe?

OpenAI CEO Sam Altman is reportedly hoping that Microsoft, which has already invested $10 billion and taken a stake in the startup that kickstarted the generative AI push a year ago with the release of its ChatGPT chatbot, will give the company more financial support for his plans to build “superintelligence.”

Altman, who also co-founded OpenAI, told the Financial Times that his partnership with Microsoft is working well and that he hopes “to raise a lot more over time” to bring in the compute power needed for artificial general intelligence (AGI): intelligence in machines that enables them to learn, comprehend and perform as well as – or even better than – humans.

“The vision is to make artificial general intelligence, figure out how to make it safe and figure out the benefits,” Altman told the news outlet.

Asked whether he thought Microsoft would continue putting money into OpenAI, Altman said, “I’d hope so. … There’s a long way to go and a lot of compute build out between here and AGI.” He added that the “training expenses are just huge.”

Looking Toward AGI

AGI – also known as strong AI or deep AI – is an idea the scientific and tech communities have been talking about for almost as long as they’ve been researching AI. It’s now at the forefront of those discussions because of the accelerated innovation around generative AI and large language models (LLMs) in the year since OpenAI’s release of ChatGPT.

“Today, public interest in AI is at an all-time high,” the Brookings Institution wrote in a paper in July. “With the headlines in recent months about generative AI systems like ChatGPT, there is also a different phrase that has started to enter the broader dialog: Artificial general intelligence.”

In a blog post earlier this year, Altman wrote that “if AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.”

However, he also cautioned that “AGI would also come with serious risks of misuse, drastic accidents and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”

Generative AI Is a Good Start

Generative AI has given the tech, scientific and business communities a hint of what AI can accomplish in terms of creating content – from text and images to audio, video and simulations – based on the massive amounts of data it was trained on. AGI will take that several steps further, Future AI CEO Charles Simon wrote in Techstrong.ai.

“The AI that we experience today merely gives the illusion of understanding,” Simon wrote. “In reality, it is simply manipulating symbols and code samples without any real knowledge or understanding of what those various symbols and samples mean.”
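Simon’s point can be illustrated with a deliberately tiny sketch: a bigram model that generates text purely from co-occurrence counts, with no representation of what any word refers to. The toy Python example below is an illustration of the general idea, not how production LLMs work (they use neural networks over vastly larger contexts), but the symbols-in, symbols-out character he describes is the same.

```python
import random
from collections import defaultdict

# Toy training corpus; a real LLM is trained on trillions of tokens.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which symbol follows which: pure co-occurrence statistics,
# with no notion of what "cat" or "mat" actually mean.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Emit plausible-looking text by repeatedly sampling the next
    symbol from whatever followed the current one in training."""
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

The output can read as fluent English, which is exactly the “illusion of understanding” Simon describes: The fluency comes from pattern statistics alone.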

Generative AI and other forms of artificial intelligence are reliant on datasets, he noted. AGI “must be able to explore, experiment and learn about real objects, interpreting everything it knows in the context of everything else it knows… It must rely not on datasets but on a more biologically plausible system that is modelled after the human brain, with algorithms that enable it to build abstract ‘things’ with limitless connections and context.”

When Could We See AGI?

AGI seems to be getting closer to reality. In a research paper posted earlier this year to arXiv, the preprint server run by Cornell University, Microsoft researchers said that the GPT-4 model OpenAI made generally available in July showed “sparks of artificial general intelligence” in early experiments. (The paper was published before the model was generally released.)

“We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google’s PaLM for example) that exhibit more general intelligence than previous AI models,” they wrote.

According to Reuters, SoftBank CEO Masayoshi Son said at the SoftBank World conference in October that AGI will be a reality within 10 years and that “artificial super intelligence,” exceeding human intelligence 10,000-fold, will be in place within 20 years. However, Arm CEO Rene Haas offered a more conservative prediction at the same conference, saying he thought AGI would come to fruition within his lifetime.

Steps Along the Way to AGI

In his blog post, Altman wrote about steps that can be taken now to prepare for AGI, including creating and deploying increasingly powerful systems to make the transition to an AGI world more gradual, giving policymakers and institutions time to adapt to the technology and put regulations in place.

An open question is whether the arrival of AGI will be good or bad for humans. Altman noted that the technology comes with the risk that it could be misused by bad actors and cause “drastic accidents” and “societal disruption.”

He added that the possible benefits of creating AGI are so great that it would behoove society to continue developing it, but admitted that “at some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans around continuous deployment.”

In a peer-reviewed paper published in August in Frontiers in Artificial Intelligence, Giorgio Buttazzo, with the Department of Excellence on Robotics and AI at the Sant’Anna School of Advanced Studies in Italy, weighed the opportunities and risks of AGI, outlining recent gains in such areas as neural networks and deep learning and sorting through challenges like the risk to jobs and a range of ethical issues.

Buttazzo also looked at what a future with AGI could mean.

He noted that, extrapolating AI’s exponential evolution so far, the singularity – the point at which AI matches human intelligence in all fields – could be reached around 2030, though it may be delayed by 10 to 20 years given the shortcomings of current machine learning models.
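Forecasts of that kind rest on extrapolating an exponential trend, and the projected date is extremely sensitive to the assumed doubling time. The sketch below is a hypothetical back-of-the-envelope calculation, not taken from Buttazzo’s paper; the doubling time, the remaining capability gap and the start year are all illustrative assumptions.

```python
import math

# Illustrative assumptions, NOT figures from Buttazzo's paper:
# capability (however measured) doubles every 2 years, and roughly
# a 32x improvement still separates today's AI from human level.
DOUBLING_TIME_YEARS = 2.0
REMAINING_GROWTH_FACTOR = 32.0
START_YEAR = 2023

# Under exponential growth, crossing the gap takes
# log2(remaining factor) doubling periods.
doublings_needed = math.log2(REMAINING_GROWTH_FACTOR)
crossover = START_YEAR + doublings_needed * DOUBLING_TIME_YEARS
print(f"Projected crossover year: {crossover:.0f}")  # 2033 under these assumptions

# If progress slows and the doubling time stretches to 4 years
# (the kind of delay attributed to shortcomings in current machine
# learning models), the same gap lands a decade later.
delayed = START_YEAR + doublings_needed * 4.0
print(f"With slower doubling: {delayed:.0f}")        # 2043 under these assumptions
```

Halving or doubling the assumed doubling time shifts the projected date by decades, which is why such forecasts span such a wide range.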

However, he wrote, if the exponential growth of compute power and AI continues and AI surpasses human intelligence, “two extreme and opposite scenarios” emerge for humankind. One is extinction: A superintelligence emerges that strives to sustain itself in ways that conflict with human interests, ultimately leading to humanity’s extinction.

The other is immortality: Humans adapt by embracing advances in such areas as nanotechnology, robotics and AI to gradually transform “into a new species with synthetic bodies and digital brains,” essentially becoming “an immortal superintelligence destined to expand into the universe.”