With AI in the picture, cyberthreats have grown dramatically more sophisticated, and the number of incidents is rising across the globe.
In the first half of 2024, 1,571 cases of data compromise resulting from targeted cyberattacks were reported, a 14% rise over the same period the year before.
Check Point's Q2 2024 research found that cyberattacks have increased 30% year over year globally, with a 25% spike in the volume of corporate breaches between Q1 and Q2.
The same research found that the most-targeted sector today is education and research, where troves of sensitive information and weak cybersecurity practices draw intense interest from bad actors.
According to the report, education and research institutions globally saw a 53% rise in attacks in Q2, averaging 3,341 attacks per organization per week.
The second most frequently attacked sector is government and military, with healthcare a close third.
Piggybacking on AI
The sophistication of these attacks underscores the potency of technologies like generative AI. The massive adoption of GenAI across organizations has created opportunities for hackers to systematically exploit vulnerabilities and coordinate malicious attacks.
The FBI's San Francisco field office issued a press release in May warning about the rising risk of AI-enabled attacks.
“Attackers are leveraging AI to craft highly convincing voice or video messages and emails to enable fraud schemes against individuals and businesses alike,” said FBI Special Agent Robert Tripp.
Several factors are at play behind the sudden surge in AI-led attacks. The democratization of AI tools has significantly lowered the barrier to entry: amateur hacking groups that lack advanced skills can now execute malicious campaigns using the technology.
Additionally, opportunistic use of AI has allowed attackers to evolve and refine their tactics, producing stealthy operations that even the most advanced cybersecurity scanners fail to detect in the early stages.
“If you remember the ‘script kiddies’ of the early years of cyberattacks, this new way of AI-assisted attacking just amplifies the problem,” says Tom Hollingsworth, a former network engineer and the current event lead for the Security Field Day series.
Data shows that attackers are leveraging AI tools across the attack chain, from automating vulnerability discovery with AI-based reconnaissance, to streamlining individual phases of an attack, to enabling faster and more efficient lateral movement.
The growing spread of misinformation through deepfakes is another prime example of AI-enabled attacks.
Deepfakes are synthetic media (video, audio, and images) that realistically clone the faces and voices of real people to defraud and victimize unsuspecting audiences.
Deepfake technology has become the force behind some of the most nefarious adversary operations in the world; countries like Russia, China, and Iran have frequently been linked to deepfake political campaigns on social media.
AI is also being used to carry out social engineering scams, a practice that tools like ChatGPT have made far easier.
Creating convincing, personalized, human-sounding messages and emails is one of the chatbot's biggest strengths. Hackers across the world are leveraging this capability to boost the effectiveness of their phishing campaigns.
Awareness and Communication Top of Mind
There's no denying AI's enormous potential to enable malicious cyber operations. But one thing is certain: the least digitally literate users are always the first to fall.
As governments in all parts of the world forge and enforce new regulations, a large share of the responsibility falls on the organizations facing these threats.
The secret to defeating these operations lies in educating the workforce about the urgency of good cyber hygiene and best practices. Communication and training play a key role in getting staff to understand the risks, care about them, and work to improve the situation.
And from a technology standpoint, integrating AI security tools that proactively analyze risks and potential impacts, and baking security into every layer, makes defenses considerably stronger.
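To make the idea of proactive risk analysis concrete, here is a minimal, self-contained sketch of the kind of scoring such tools perform, reduced to a toy phishing-risk scorer. Every name, signal, and weight in it is an illustrative assumption, not the API of any real product; production AI security tools rely on trained models and far richer telemetry.

```python
import re

# Hand-picked, illustrative signals -- a real AI security tool would use
# trained models and far richer telemetry than these toy heuristics.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "password"}

def risk_score(subject: str, body: str, sender_domain: str, link_domains: list[str]) -> float:
    """Return a 0-1 phishing-risk score for one email (toy heuristic)."""
    text = f"{subject} {body}".lower()
    score = 0.0

    # Signal 1: urgency language, a staple of (AI-generated) phishing lures.
    hits = sum(word in text for word in URGENCY_WORDS)
    score += min(hits * 0.15, 0.45)

    # Signal 2: links that point somewhere other than the sender's domain.
    if any(domain != sender_domain for domain in link_domains):
        score += 0.35

    # Signal 3: generic greeting, common in bulk phishing campaigns.
    if re.search(r"\bdear (customer|user)\b", text):
        score += 0.20

    return min(score, 1.0)

if __name__ == "__main__":
    # A message like this scores high and would be flagged for review.
    print(risk_score(
        subject="Urgent: verify your password immediately",
        body="Dear customer, your account is suspended. Click the link below.",
        sender_domain="example-bank.com",
        link_domains=["login.example-bank.xyz"],
    ))
```

The design point this sketch illustrates is combining many weak signals into a single risk score before a human ever sees the message, which is the same proactive posture, at vastly greater scale, that modern AI-driven defenses take.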
Hollingsworth says, “The availability of AI has made sophistication of attacks increase significantly. The key to preventing them is to leverage the most up-to-date tools that understand how AI is iterating on attack methods and anticipating where the attackers will go next.”