
In late 2023, Business Insider published an article exploring the three biggest fears people report having about artificial intelligence. At the top of the list was an “AI takeover,” which some believe will involve AI using military systems to start a world war.

While fearing that AI will take over the world — or destroy it — might be seen as extreme, many AI-related fears are more reasonable. The following is a short list of some reasons why you might want to fear the AI revolution, along with some steps to keep those fears at bay.

The Fear of AI Leading to Job Displacement

Job displacement made the number two spot on the Business Insider list, where it was described as “AI causing mass unemployment.” This fear stems from the belief that AI can render certain positions obsolete, doing away with the need for entire categories of workers. Recent reports predict AI could replace as many as 2.4 million jobs by 2030, suggesting that the fear is not unfounded.

Repetitive and predictable tasks are those with the highest likelihood of being replaced. Essentially, if a task can be automated, AI can do it faster and with fewer errors than a human. AI also brings the benefit of 24/7 availability to the equation, providing a worker that never calls in sick, takes a vacation, or asks for a raise.

Those who fear AI will displace them from their positions should begin developing uniquely human skills now, including empathy, creativity, critical thinking, and complex communication. Workers who cultivate these skills will most likely find that AI-driven platforms augment their work rather than displace it.

The Fear of AI Causing Ethical Dilemmas

Ethical concerns quickly came to the forefront of public discussion as AI usage increased. One of the top concerns in this area is that biases present in training data can become embedded in AI models and perpetuated by them. This has sparked a fear that AI-driven programs in areas like hiring, banking, law enforcement, and health care could drive discriminatory practices by amplifying those biases.

The potential for privacy violations involving personal data is another ethical concern that has arisen with the increased use of AI. Generative AI platforms like ChatGPT store conversations that take place between users and AI-driven chatbots, meaning that if the accounts where data is stored are breached, personal data can fall into the wrong hands.

The Fear of Over-Dependence on AI and Resulting Vulnerabilities

The latest generation of AI tools has quickly attracted users in a variety of fields, with ChatGPT alone having amassed more than 180 million users since launching in November 2022. Recent studies show more than 80% of companies have already adopted AI to support their operations.

For some, this rapid mass adoption has led to the fear that our society is becoming over-dependent on AI. At best, over-dependence could lead to users making poor decisions based on inaccurate information provided by AI-driven platforms. At worst, it could lead to AI-driven catastrophes triggered by the malfunction or misuse of the technology.

The Fear of Socioeconomic Disparities Arising From AI

When properly managed, AI has the potential to make life safer, healthier and easier. Taking advantage of that potential, however, requires being on the connected side of the digital divide.

Presently, there are 2.7 billion people in the world — or 33% of the global population — who do not have internet access. For these people, accessing the benefits of AI is virtually impossible, a fact that has led some to fear AI will deepen socioeconomic disparities. If left unaddressed, this divide could lead to AI being developed primarily to benefit the wealthy, restricting access even further.

In late 2023, Scientific American published an article that reported “AI Anxiety” is on the rise. The article explored the fear that has arisen in our culture as AI has rapidly advanced, highlighting many of the fears mentioned above.

For those suffering from AI anxiety, knowledge is power. Exploring AI-driven tools to better understand their capabilities and limitations empowers us to contribute effectively to the ongoing AI debate. It also encourages developers and regulators to take the steps necessary to ensure AI contributes to a future that is safe and beneficial for all.


Ed Watal is an AI thought leader and technology investor. One of his key projects is BigParser, an ethical AI platform and data commons for the world. He is also the founder of Intellibus, an Inc. 5000 “Top 100 Fastest Growing Software Firm” in the USA, and the lead faculty of AI Masterclass, a joint operation between NYU SPS and Intellibus. Forbes Books is collaborating with Ed on a seminal book on our AI future. Board members and C-level executives at the world’s largest financial institutions rely on him for strategic transformational advice. Ed has been featured on Fox News, QR Calgary Radio, and Medical Device News.