Artificial Intelligence: Friend or Foe?

As far back as 2014, Professor Stephen Hawking told the BBC of his concerns about the development of full artificial intelligence (AI). The theoretical physicist, who himself used a system developed by Intel to speak because he suffered from amyotrophic lateral sclerosis (ALS), so feared the consequences of creating something that could match or surpass humans that he warned, “The development of full AI could spell the end of the human race.”

Pace of Development

How far AI has come since that interview in 2014 is simply mind-blowing. The pace of development continues to pick up speed.

But have we just begun to create something wonderful and world-changing, or something akin to gothic horror and Frankenstein’s monster? Should we embrace AI or fear its very existence? Are we simply standing by while Hawking’s fears are being played out?

Let’s look at the impact of AI: one Australian reporter who may have got more than he bargained for, the good and bad sides of the technology, and forecasts for the future of life with AI.

Exciting or Frightening?

Ameca is intelligent, charming, honest and a little bit sarcastic. Ameca lives in Falmouth on the south coast of England. Ameca is also a robotic humanoid. Not just any old robot, but the most advanced one ever built.

Recently, the Australian 60 Minutes reporter, Tom Steinfort, traveled to meet and interview Ameca. Charmed by the long motorized limbs and realistic facial expressions, he complimented Ameca on ‘her’ physical appearance and asked whether ‘she’ thought he was handsome. While Steinfort’s charm may — or more likely may not — have worked on a person of flesh and blood, Ameca merely replied, “It’s not my place to judge your appearance, but I think you have a great personality and that is always important.”

Not only did the robotic humanoid side-step Steinfort’s question; it then went on to discuss feelings and the possibility of love. Is that frightening? A little too Stepford Wives?

Ameca appears to think for itself, but how? Generative AI built on very large language models allows Ameca to respond: mathematically, the system predicts, word by word, a statistically likely continuation of the conversation.
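As a loose illustration of that idea (not Ameca’s actual software), a language model repeatedly picks a likely next word given the words so far. The tiny probability table below is invented purely for illustration:

```python
# Toy next-word model: every probability here is invented for illustration.
# Real large language models learn billions of such statistics from text.
NEXT_WORD = {
    "you": {"have": 0.6, "are": 0.4},
    "have": {"a": 1.0},
    "a": {"great": 0.7, "good": 0.3},
    "great": {"personality": 1.0},
}

def generate(word, steps):
    """Repeatedly append the most probable next word."""
    out = [word]
    for _ in range(steps):
        choices = NEXT_WORD.get(out[-1])
        if not choices:
            break  # no continuation known for this word
        out.append(max(choices, key=choices.get))
    return " ".join(out)

print(generate("you", 4))  # → you have a great personality
```

Replace the hand-written table with probabilities learned from vast amounts of text, and sample rather than always taking the maximum, and you have the core loop behind chatbots and conversational robots alike.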

The Good Side of AI

There are plenty of newsworthy examples of how AI is used for good, to collect data on a global scale and take action in ways that were previously impossible. Here are three recent good news AI stories from around the globe:

  • We hear a lot about the declining bee population and its serious repercussions. Along comes AI to save the bees. Together with Oracle, the World Bee Project gathers data through sensors and cameras placed on hives, uploads that data to the cloud, and analyzes it to identify patterns and direct early interventions that aid the survival of bees. AI-powered hives are the latest development: in Australia they are being used in the fight against the varroa mite, a deadly parasite. The Observer newspaper interviewed Vignesh Murugan, from the Melbourne startup Vimana Tech, who said the hives have two cameras “top and bottom that scan each bee going in and out”. Whenever a bee enters the hive, it is scanned to see if it is carrying the mite, Murugan said. “And then once it has found a mite it will send an SMS alert.”
  • Healthcare is being radically improved by AI and deep learning-based algorithms, from drug development, clinical trials, and data collection and sharing to predictive technology that assists in decision-making. A global TV news story in the past week saw a paralyzed man — Gert-Jan Oskam, a 40-year-old Dutch man — walk after receiving electronic brain implants. He had been paralyzed for 12 years, and this is a paradigm shift in what was previously available, according to Prof Grégoire Courtine of the École Polytechnique Fédérale in Lausanne (EPFL), who led the project.
  • World hunger could become a thing of the past as we see breakthroughs in data monitoring systems, the Dalili app that helps refugees find food sources, efficient agricultural production systems, and the overall reduction of food waste using machine learning (ML) algorithms. A study carried out by researchers at the UK’s University of Birmingham investigates how ‘precision agriculture’ would allow farmers to respond in real time to changes in crop growth using technology.
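The hive-monitoring pipeline in the first story (scan each bee, check it for a mite, send an SMS alert) can be sketched in a few lines. Every name below is hypothetical, standing in for Vimana Tech’s real camera classifier and SMS gateway:

```python
def check_bee(image, classifier, send_sms):
    """Hypothetical sketch of the hive-entrance loop described above.

    `classifier` returns True if a varroa mite is detected in a bee image;
    `send_sms` delivers the alert. Both are illustrative stand-ins, not
    Vimana Tech's actual code.
    """
    if classifier(image):
        send_sms("Varroa mite detected at hive entrance")
        return True
    return False

# Usage with toy stand-ins for the camera feed and the SMS gateway:
alerts = []
infected = check_bee(
    "bee_042.jpg",
    classifier=lambda img: "042" in img,  # pretend bee 042 carries a mite
    send_sms=alerts.append,               # record the alert instead of texting
)
print(infected, alerts)  # → True ['Varroa mite detected at hive entrance']
```

In the real system the classifier would be a trained image-recognition model running on each scanned frame; the point of the sketch is only the detect-then-alert structure.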

The Darker Side of AI

So, are we happy to embrace AI for the greater good, or is there a darker side? Recent news stories show one of the ‘godfathers’ of AI, Professor Yoshua Bengio, quoted in an interview with the BBC as feeling “lost” over his life’s work. The Canadian has joined with others to call for strict regulations and recently signed two statements urging caution on the possible risks of AI. All companies that build powerful new AI tools need to be registered, he said. The issue seems to be the sheer pace of development, as mentioned earlier, and where it’s going. This pace could, say industry experts, result in the rise of bad actors inflicting harm through the deployment of malicious AI.

In light of this, Professor Bengio warns, “It might be military, it might be terrorists, it might be somebody very angry, psychotic. And so if it’s easy to program these AI systems to ask them to do something very bad, this would potentially be very dangerous.”

I threw out the question on LinkedIn, asking whether any of my fellow members of the Data Scientist and Analyst group are apprehensive about the impact of AI on our future. The general consensus is a fear of AI falling into the wrong hands and being used in war and terrorism. The hacking of government databases is another worry. We should tread with caution, said Adejoke Biobaku, and develop mitigations for checkmating the adverse use of AI innovations.

Fears for the Next Generation

On June 1, ahead of Apple’s AR/VR headset release on June 5, a short film entitled Privacy Lost was released at the Augmented World Expo (AWE) in Silicon Valley to highlight the dangers of AI-powered mixed reality. “The risks to children are the most concerning,” said Dr. Catriona Wallace, founder of the Responsible Metaverse Alliance. “AI-powered characters like the teddy bear depicted in Privacy Lost could easily influence and manipulate kids. The technology to achieve this already exists, and without regulation, these tactics are likely to be deployed in the near future.”

The Future can be Bright with AI

Back to Professor Hawking and his fear that the development of full AI would lead to the end of the human race. Did he predict the future? Can AI really replace the human race? Perhaps ironically, CTV News decided to ask ChatGPT the same question. The answer? AI could replace some humans in some industries, particularly in the case of manual, time-consuming and repetitive tasks. The displaced workers can then be promoted to oversee the repetitive work, train the robotic workers and ensure they are efficient. Overall, higher-level, creative and strategic work requires a human brain to achieve results, so for now we seem to be safe. Think of an industry, finance for example, and how AI is used for good: AI can detect fraud, and while that’s great, it still requires the analytical human mind to interpret those results and decide on the next move.
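That division of labor can be sketched simply: an algorithm flags statistically unusual transactions, and a human analyst decides what to do with each flag. The threshold, data, and function name below are invented for illustration; real fraud systems use far richer models:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Toy fraud screen: flag transactions more than `threshold`
    standard deviations above the customer's mean spend. The output
    is a shortlist for a human analyst, not an automatic verdict."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if a > mu + threshold * sigma]

# A customer's recent transactions, with one that looks out of place:
history = [12.50, 9.99, 14.20, 11.75, 13.40, 950.00]
print(flag_anomalies(history))  # → [950.0]
```

The machine does the tireless, repetitive scanning; the judgement call — was the $950 charge fraud, or a holiday booking? — stays with a person, which is exactly the human-in-the-loop arrangement ChatGPT’s answer describes.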

The crux of AI becoming a bad thing is the machines growing more intelligent than us, or bad actors giving robots the ability to create sub-goals of their own; at that point they become effectively sentient. We need to fully understand all aspects of AI and remain totally in control at all times. Google CEO Sundar Pichai recently admitted that he did not fully understand what Bard — Google’s chatbot — does. But we are all still learning, and our ability to keep doing that and improve our understanding will get better as time goes on.

Importantly, we must regulate artificial intelligence so that we can trust it. AI tools and products need to deserve that trust and not exploit it. Profits should not come before principles. It has to be a two-way relationship, appropriately regulated and used ethically for a safe and secure future.