
A global survey of 2,307 senior decision makers, including 1,500 C-level executives, published this week finds nearly all (99%) are planning additional investments in generative artificial intelligence (AI) technologies, with 89% of CEOs identifying it as the top technology they need to ensure their competitiveness and profitability.
The survey, conducted by Jigsaw Research on behalf of NTT DATA, found that more than two-thirds (67%) of the 187 CEOs surveyed described those investments as significant.
At the same time, 95% of the 413 CIOs and CTOs who responded to the survey reported that generative AI has already driven, or will drive, greater cybersecurity investments, with organizations ranking improved security as one of the top three business benefits realized from GenAI deployment in the last 12 months.
However, close to half (45%) of the 63 CISOs surveyed expressed negative sentiments toward generative AI adoption. More than half (54%) of them noted internal guidelines or policies for generative AI are unclear, a concern that only 20% of CEOs shared.
Overall, nearly three-quarters of respondents noted their organization still lacks a formal generative AI usage policy, and just 24% of CISOs strongly agree that their organization has a robust framework for balancing risk with value creation. Additionally, only 38% of CISOs say generative AI and cybersecurity strategies are aligned, compared to 51% of CEOs.
Nevertheless, 81% of senior IT security leaders with negative sentiments still agree generative AI will boost efficiency and impact the bottom line.
Stefaan Hinderyckx, senior vice president for cybersecurity at NTT DATA, said the survey makes it clear that, when it comes to operationalizing AI, all the traditional issues involving people, process and technology are once again coming to the fore. The major difference now, however, is that data security has become a much bigger concern in the age of AI, he added. As such, organizations should, for example, be doubling, or sometimes even tripling, the number of schemes used to encrypt data, said Hinderyckx.
Organizations are not only concerned about inadvertently sharing sensitive data with AI models, they are also becoming more aware of how adversaries might use AI to launch more sophisticated social engineering attacks, he added.
In effect, organizations are now locked in an AI arms race as they look to secure AI applications and platforms from attacks that increasingly leverage AI to craft more believable phishing lures, said Hinderyckx. Cybercriminals are also using AI to learn how to exploit vulnerabilities, with code generated for them by any number of AI tools, he added.
As a result, the amount of time between when an exploit is discovered and a breach occurs continues to narrow in ways that require cybersecurity teams to be able to respond in near real time to limit the blast radius of a breach, noted Hinderyckx.
It’s not clear just yet at what pace organizations are successfully operationalizing AI, but the one thing that is certain is the cybersecurity stakes have never been higher for everyone concerned. What remains to be seen is what types of breaches will inevitably manifest themselves simply because basic best practices for data security were never followed in the first place.