
The AI revolution is well underway. In the past two years, generative AI has proven capable of outperforming most humans on a range of cognitive tests, and its text and image generation capabilities have improved rapidly. That explains why OpenAI’s ChatGPT became such a phenomenon, setting a record as the fastest-growing consumer application in history by reaching 100 million active users within months of launch.  

The cybersecurity implications of the AI era are pronounced, as cybercriminals are increasingly using tools like large language models and AI-powered image generators to launch a barrage of cyberattacks — from phishing schemes that steal sensitive data and access credentials to deepfakes that spread misinformation and manipulate victims. Cybercriminals also use AI to conduct surveillance on potential targets and probe an organization’s cyber defenses for weak points to exploit. These developments have fueled an AI arms race in which the technology is simultaneously used to launch increasingly sophisticated attacks and to detect threats, protect data and control access.  

While the AI arms race will continue to escalate, human intelligence is still vital for keeping organizations safe. This is because the majority of cyberattacks rely on social engineering, which means they can be prevented by educating employees on how to identify and report suspicious activity. CISOs and other security leaders can’t afford to wait for the next AI-powered cybersecurity breakthrough — they must ensure that their workforces are prepared for a new wave of cyberthreats.  

Don’t Overlook the Human Element of Cybersecurity

Despite major shifts in the cyberthreat landscape, one fact has remained constant: Most cyberattacks still rely on human error. Each year, Verizon releases its Data Breach Investigations Report, which consistently finds that the “human element” is implicated in the majority of breaches. The latest report found that this is the case for 68% of breaches — a testament to cybercriminals’ reliance on social engineering, as well as alarming evidence that these attacks remain extremely effective. 

According to IBM, the average cost of a data breach hit $4.88 million in 2024 — the highest ever recorded, and a 10% jump in a single year. As with previous reports, phishing was among the most frequent and costliest initial attack vectors. Social engineering is integral to many phishing attacks, which often trick employees into clicking on malicious links, attachments and other fraudulent content in an effort to infect their devices or steal login credentials. IBM found that two-thirds of organizations are using AI for cybersecurity, and that doing so significantly decreases the cost of data breaches. However, employee training is the top factor that drives down the average cost of a breach — followed closely by AI insights.  


While AI is a valuable cybersecurity resource, it’s clear that employee training remains critical for keeping organizations safe. By developing a culture of cybersecurity at every level of the organization, CISOs and other security leaders will minimize the damage caused by successful attacks and limit the attack vectors cybercriminals can exploit in the first place.  

Why Social Engineering Attacks Are So Destructive

In January, Allianz published its 2024 Risk Barometer — a survey of over 3,000 risk management experts from around the world. These experts cited cyberattacks as the top global business risk for the “first time by a clear margin,” across “all company sizes.” According to PwC, 79% of executives are increasing their cybersecurity expenditures this year — up from 64% last year. Despite the growing awareness of the immense financial, operational and reputational damage cyberattacks can cause, employees are still regularly falling for social engineering schemes.  

One reason social engineering remains so stubbornly effective is that it exploits universal and deeply ingrained psychological vulnerabilities, such as fear, obedience and curiosity. For example, a fear-based phishing scheme might be a fake phone call from the IRS that accuses the victim of tax fraud and demands sensitive information (such as a Social Security or bank account number). AI has made attacks like these even more powerful. As Google explains in its 2024 Cybersecurity Forecast: “Generative AI and large language models will be utilized in phishing, SMS and other social engineering operations to make the content and material (including voice and video) appear more legitimate.” Microsoft echoes this concern, noting that companies should prepare themselves for a “new era of phishing schemes” as AI becomes more advanced.  

Ninety percent of cybersecurity professionals believe AI is making cyberattacks harder to detect. LLMs allow cybercriminals to create compelling and error-free phishing messages regardless of their native language or technical ability, while deepfakes are becoming more convincing. It’s clear that cybercriminals will continue to leverage AI to hack employees’ minds, which is why awareness training is integral to an effective cybersecurity strategy.  

AI Cybersecurity and Awareness Training Are Complementary

Recall that the top two mitigating factors in data breaches are employee training and AI-driven insights. These factors outrank encryption, insurance, identity and access management, and many other cybersecurity measures. It’s no surprise that one of the top factors that increase the average cost of a data breach is a cybersecurity skills shortage. IBM reports that the share of organizations confronting a cyber skills shortage rose from 42% in 2023 to 53% this year. While companies are using “AI and automation solutions to close the skills gap,” the lack of cybersecurity skills is a major (and growing) contributor to the cost of breaches.  

As AI-powered cybersecurity tools continue to improve, they will remain an important contributor to companies’ security posture. But the AI era has also created unprecedented cyberthreats, and existing AI security tools haven’t proven capable of preventing the relentless surge in data breaches and other cyberattacks. This is partly because AI gives cybercriminals more ways than ever to exploit the psychological weaknesses of their victims. While AI cybersecurity tools are more powerful than ever, companies are still only as safe as the weakest link in their defenses — which is often their employees.  

This is why CISOs and other security leaders must develop a culture of cybersecurity at every level of the company. Nothing can stop the AI arms race, but security leaders can’t just rely on the latest technology to navigate the evolving cyberthreat landscape. As long as cybercriminals continue to use employees’ own minds against them, awareness training will be indispensable for keeping the company safe. 
