Generative AI is a double-edged sword in the cybersecurity world, just as it is in many other business and tech sectors.
The advanced technology behind popular applications such as ChatGPT is being used by security pros to enhance their capabilities, but their adversaries are doing the same, using generative AI to hone their attack tactics and techniques.
A recent survey released by Deep Instinct – a cybersecurity firm that provides a deep learning-based framework – found that security teams are seeing positive effects from generative AI in their organizations, including improved employee productivity, collaboration and morale.
However, a jump in the number of cyberattacks and growing vulnerability to such attacks illustrate the dangers of generative AI, particularly when it’s in the hands of bad actors. It also lays bare the need to advance cybersecurity tools beyond endpoint detection and response (EDR) products to a more proactive strategy that stresses prevention, according to Deep Instinct CIO Carl Froggett.
“The best way to combat AI-powered cyber threats is with a more advanced form of AI, specifically deep learning,” Froggett told Techstrong.ai. “What we need now, more than ever, is a prevention-first approach to securing the business.”
A Mixed Bag
For its report, “Generative AI and Cybersecurity: Bright Future or Business Battleground,” Deep Instinct surveyed more than 650 senior security operations pros in the United States, including CIOs and CISOs. The survey was conducted by Sapio Research.
What the survey found was SecOps teams with a mixed view of generative AI – good for business operations, a challenge for security and – for a smaller percentage – concerning for their own job prospects.
According to the study, 69% of respondents said their organizations already have adopted generative AI tools, with the most adoption coming from the finance sector (80%). About 70% said it was enhancing employee productivity and collaboration, with 63% seeing it improve employee morale.
That said, 75% of security pros saw a jump in cyberattacks over the previous 12 months, with 85% attributing the increase to cybercriminals using generative AI in their malicious operations.
Numerous reports have shown that the technology can help bad actors create more realistic phishing emails and launch more sophisticated business email compromise (BEC) campaigns as they adopt such tools as WormGPT and FraudGPT. The use of deepfakes – fabricated audio, video or imagery created with deep learning techniques – is concerning enough that federal agencies like the FBI, National Security Agency (NSA), and Cybersecurity and Infrastructure Security Agency (CISA) issued an information sheet this month about the threat.
Generative AI itself poses a significant cybersecurity threat: 46% of survey respondents said it will make their organizations more vulnerable to attack, with the top concerns being privacy (39%), undetectable phishing attacks (37%), and an increase in both the volume and velocity of attacks (33%).
A Need for Deep Learning
The key going forward will be ensuring the SecOps teams have access to technologies like deep learning tools to counter what hackers are doing with generative AI, Deep Instinct’s Froggett said.
“We’re in a new era of generative AI where security professionals have the power to benefit from the technology,” he said. “But issues arise when they don’t have the proper tools and resources to do so. Traditional AI can’t keep pace with the sheer volume of data and, as a result, only leverages a portion of the data to infer decision-making.”
Froggett said there is a shift underway in the cybersecurity world from detection – reacting to an intrusion – to prevention, as SecOps teams look to stay ahead of increasingly sophisticated attacks. Deep learning is a key technique for this.
“With deep learning, none of the data is discarded, making it much more accurate and enabling it to predict and prevent known and unknown threats,” he said.
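To make the "raw data in, verdict out" idea concrete, here is a minimal, hypothetical sketch of how a deep learning classifier can score a file directly from its bytes rather than from a handful of hand-picked features. The network, its weights and the `score` function are illustrative assumptions, not Deep Instinct's actual model, which is proprietary and vastly larger.

```python
# Hypothetical sketch: a tiny feed-forward network that scores a file's
# raw byte distribution as malicious or benign. Illustrative only --
# production deep-learning EDR models are far larger and are trained
# on millions of labeled samples.
import numpy as np

rng = np.random.default_rng(42)

def byte_histogram(data: bytes) -> np.ndarray:
    """Normalized 256-bin histogram of raw bytes -- no input data is discarded."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

# Randomly initialized weights stand in for a trained model's parameters.
W1 = rng.normal(0.0, 0.1, (256, 32))  # input layer -> hidden layer
W2 = rng.normal(0.0, 0.1, (32, 1))    # hidden layer -> output score

def score(data: bytes) -> float:
    """Forward pass: byte histogram -> ReLU hidden layer -> sigmoid score in (0, 1)."""
    h = np.maximum(byte_histogram(data) @ W1, 0.0)
    return float(1.0 / (1.0 + np.exp(-(h @ W2))))

# A toy sample: a PE-like header followed by padding bytes.
verdict = score(b"MZ\x90\x00" + bytes(100))
```

The point of the sketch is the shape of the pipeline: every byte contributes to the histogram the model sees, so nothing is thrown away before inference, which is the property Froggett contrasts with traditional feature-based approaches.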
Along with enabling attackers to launch complex campaigns that are more difficult to detect and mitigate, generative AI also is adding to an already burdensome workload for security pros. About 55% of those surveyed noted an increase in stress levels, with 42% blaming limited staffing and resources.
In addition, 51% said the stress will likely lead them to leave their jobs in the coming 12 months.
Job Security Is a Worry
At the same time, despite a growing skills gap in cybersecurity, some SecOps pros worry that AI is threatening their jobs. About 14% said they worried their jobs would soon become irrelevant, and 11% were unsure whether their jobs would exist five years from now.
Froggett said cybersecurity operators understand the skills shortage – particularly given that it is adding to already high stress levels – and that generative AI is leading to a larger volume of more sophisticated attacks. However, the use of AI in their jobs will be a positive for them.
“I don’t believe that AI will take jobs away from cybersecurity professionals, but rather, augment them, helping with tedious tasks like data analysis and highlighting threats that are truly deserving of investigation versus the alert storm of suspect alarms that SOC [security operation center] teams are fighting against today,” he said.
Froggett noted that Deep Instinct’s report found that false positives from legacy cybersecurity tools lead to more than two working days of lost productivity per week, adding that “with AI’s assistance, cyber defenders can focus on real, often unknown threats to the business versus wasting time on false positives.”