The accelerated development and adoption of generative AI tools such as ChatGPT and Google’s Bard are giving boards of directors, already nervous about the expanding cyberthreat landscape, one more thing to worry about.

A report by cybersecurity firm Proofpoint found that 73% of the 659 board members surveyed feel their companies are at risk of a significant cyberattack, a jump from the 65% who answered the same way last year. About 53% feel unprepared to address a targeted attack, up from 47% in 2022.

The board members surveyed by Proofpoint came from companies with 5,000 or more employees, spanning 12 countries and a range of industries.

A number of issues are driving these worries, from geopolitical tensions emanating from Russia’s illegal war against Ukraine to increases in ransomware and supply-chain attacks. In addition, many companies’ shift to hybrid and remote work has created a much larger attack surface spread across a more widely distributed workforce.

The rise of generative AI is of particular concern, with 59% of board members believing the technology already poses a security risk to their organizations. The worry is not only that threat actors will use the technology but also that employees will inadvertently upload sensitive or proprietary corporate information while using tools like ChatGPT.

“Board directors are rightly concerned about the evolving threat landscape, and that may be driving their fear that threat actors could use AI to develop more effective attacks on people,” Ryan Witt, vice president of industry solutions at Proofpoint, told Techstrong.ai. “Accessing user credentials is the nirvana state for most threat actors. Once they have credentials, they get access to the network, can survey their target organization, and look for the best way to launch an exploit.”

Fight AI with AI

Generative AI in the cybersecurity world is an issue that Proofpoint and many other vendors are tracking, outlining both the risks to organizations and how those companies can use the technology to bulk up their own protection.

Threat actors are already using generative AI to create their own chatbots, such as WormGPT, which help hackers craft even more legitimate-looking phishing emails and run other malicious campaigns. The technology also democratizes cybercrime, enabling less-skilled bad actors to launch sophisticated attacks.

“For bad actors, formulating well-written phishing and business email campaigns just became much easier and scalable,” Brian Reed, Proofpoint’s senior director of cybersecurity strategy, told Techstrong.ai. “Gone are the days of advising end users to look for obvious grammatical, context and syntax errors. Simply asking your generative AI platform to generate a convincing and urgent email to update your bank routing and account information is now a thing that any non-technical bad actor can easily understand and execute.”

The worry about leaking data into generative AI tools is also legitimate, Reed said, particularly when organizations feed in their own corporate data or, more seriously, customer data, regulated data or intellectual property. That said, a range of tools and controls can help mitigate the risk, including data loss prevention (DLP) products, insider risk platforms, cloud access security brokers (CASBs) and cloud web gateways.
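As a rough illustration of the first of those controls, the sketch below shows the kind of pre-prompt redaction a DLP product performs before text ever reaches a third-party AI service. The regex patterns and the redact() helper are simplified assumptions for the example, not any vendor’s actual detectors.

```python
import re

# Illustrative patterns only; real DLP products ship far richer detectors.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern before it leaves the network."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize: contact jane.doe@example.com, card 4111 1111 1111 1111."
    print(redact(raw))
    # Summarize: contact [REDACTED-EMAIL], card [REDACTED-CARD].
```

A real deployment would sit inline at a web gateway or CASB rather than in application code, but the redact-before-send flow is the same idea.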

The Economy Increases the Risk

What feeds into the problem of generative AI-based cybercrime – and almost any kind of attack – is a turbulent economic environment that has squeezed cybersecurity resources, Witt said. The downturn not only puts pressure on security budgets; the worldwide economic stress can also take a financial and emotional toll on employees and their families, opening the door even wider for threat groups.

“Employees become distracted and unhappy at work, making it much easier for threat actors to exploit human weaknesses,” he said. “Threat actors thrive on such worries.”

Just as tools like ChatGPT can make hackers more dangerous, they can also make defenders more effective. They’re particularly useful behind the scenes, including in cases where an organization is using large language models (LLMs), Reed said. Since OpenAI introduced ChatGPT in November 2022, the focus has been on building AI chatbots, which he called an “obvious initial step.”

“A better approach would be for organizations and vendors who have a deep and wide visibility into the global threat landscape to utilize both a platform and analytics to use AI to tell a real-time story about what is currently happening within an organization, instead of just simply feeding in plain-text queries about security threats,” he said.
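A minimal sketch of the pattern Reed describes might look like the following: assemble structured telemetry from the security platform into a single narrative request, rather than typing ad-hoc questions at a chatbot. The alert schema and the build_briefing_prompt() helper here are hypothetical; a real pipeline would pull from its own detection platform and LLM client.

```python
import json

def build_briefing_prompt(events: list[dict]) -> str:
    """Turn structured alert records into a single narrative-summary request."""
    payload = json.dumps(events, indent=2)
    return (
        "You are a SOC analyst assistant. Given the alert records below, "
        "describe in plain language what appears to be happening in this "
        "environment right now and which hosts or users need attention.\n\n"
        + payload
    )

if __name__ == "__main__":
    # Hypothetical telemetry pulled from a detection platform.
    events = [
        {"time": "2023-09-20T14:02Z", "type": "credential_phish",
         "user": "j.smith", "source_ip": "203.0.113.7"},
        {"time": "2023-09-20T14:09Z", "type": "impossible_travel",
         "user": "j.smith", "source_ip": "198.51.100.23"},
    ]
    print(build_briefing_prompt(events))  # hand this to your LLM client of choice
```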

Address Current Threats

At the same time, Witt cautioned against worrying too much about the threat of ChatGPT in phishing and other attacks, saying the “concern may be overblown.” For now, traditional phishing, ransomware and other attacks remain the primary focus of most threat actors, and that will likely stay the case until defenses against them are better hardened.

“ChatGPT [and] AI-based attacks, like attacks on IoT [internet of things] devices, should still be a concern for company boards, but their energy is better focused on where today’s threat activity is most prevalent,” he said.

However, both threat actors and security professionals will continue to push emerging AI technologies as far as they can, which will drive further spending on them. AI can help security teams defend organizations through faster and more accurate threat detection, while hackers will use it to launch increasingly complex attacks that can learn and adapt on the fly, Witt said.

“This may devolve into a cat and mouse game, where AI-enhanced defenses are persistently challenged by AI-improved threats, and vice-versa,” he said. “All of this will require continued investment in AI technology so that cybersecurity defenders can match their aggressors on the virtual battlefield.”
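As one hedged illustration of what those AI-enhanced defenses can look like in practice, the sketch below flags anomalous login behavior with an unsupervised model. The features, values and training data are invented for the example rather than drawn from any production system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, failed_attempts, megabytes_downloaded]
normal_logins = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 15],
    [11, 0, 10], [16, 1, 9], [13, 0, 11],
])
model = IsolationForest(contamination=0.1, random_state=42).fit(normal_logins)

# A 3 a.m. login with repeated failures and a huge download should stand out.
suspect = np.array([[3, 7, 900]])
print("anomaly" if model.predict(suspect)[0] == -1 else "normal")
```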