Security teams are tapping into the power of generative AI (GenAI) applications: the top use cases within IT are automated code generation and debugging (35%) and enhanced cybersecurity (30%).
These figures come from a report by Everest Group, which also found virtual assistance and support (20%) and software testing and quality assurance (15%) rounding out the top uses for GenAI.
According to the survey findings, security teams are using GenAI for threat detection and response (38%), phishing detection and prevention (32%), anomaly detection (20%) and AI security chatbots (10%).
Callie Guenther, senior manager of cyber threat research at Critical Start, explained IT and security teams are “naturally positioned” at the forefront of GenAI adoption due to their core responsibilities, which include continuous monitoring, threat detection and adaptation to emerging threats.
“GenAI offers significant advancements in these areas by providing enhanced capabilities for intelligence-driven threat hunting, automated code generation, debugging and anomaly detection,” she said.
Guenther pointed out the inherent need for IT and security teams to stay ahead of sophisticated cyber threats drives their rapid adoption of cutting-edge technologies like GenAI.
“This adoption is essential for maintaining robust security postures and ensuring resilience against evolving cyber threats,” she said.
Jasson Casey, CEO at Beyond Identity, explained GenAI tools are “amazing” at recognizing patterns as well as generating examples from patterns.
“Adversaries have already harnessed GenAI to produce targeted phishing lures that read, sound and even look indistinguishable from a victim’s colleague,” he said.
Security and IT teams have taken note and have started to use these tools in similar fashion for internal pen-testing.
Casey added that GenAI gives IT and security teams the ability to summarize vast amounts of data, helping them understand how their organization operates, uncover the specifics behind high-level patterns and accelerate forensics during typical incident response.
While a shortage of skilled talent, combined with the complexity and volume of alerts that lead to alert fatigue, makes security teams open to using GenAI tools to augment their staff, the Everest Group report also revealed that talent shortage is the top challenge CIOs face with GenAI adoption.
The report found financial services, pharmaceuticals and retail to be the industries leading GenAI adoption: key applications include fraud detection and personalized advice in finance, drug discovery and protein structure prediction in pharma, and personalized marketing and supply chain management in retail.
Financial service organizations are leveraging GenAI primarily for fraud detection and management, as well as personalized financial advice.
Guenther explained GenAI models can analyze vast amounts of transactional data in real-time to identify unusual patterns indicative of fraudulent activities, thereby enhancing security and compliance measures.
“GenAI’s capabilities in natural language processing and machine learning allow financial institutions to provide tailored advice and services to customers, improving customer experience and engagement,” she said.
She added the critical nature of financial data security and the competitive advantage provided by personalized services drive these organizations to be leaders in GenAI adoption.
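As a simplified illustration of the transactional anomaly detection Guenther describes, a fraud monitor can flag payments that deviate sharply from an account's running baseline. The sketch below uses a classical streaming z-score (Welford's online algorithm) rather than a GenAI model, and all names and thresholds are hypothetical; production systems layer far richer models on the same idea.

```python
from dataclasses import dataclass
import math

@dataclass
class AccountBaseline:
    """Running mean/variance of transaction amounts (Welford's algorithm)."""
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0  # sum of squared deviations from the running mean

    def update(self, amount: float) -> None:
        self.n += 1
        delta = amount - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (amount - self.mean)

    def zscore(self, amount: float) -> float:
        if self.n < 2:
            return 0.0  # not enough history to judge
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(amount - self.mean) / std if std > 0 else 0.0

def flag_transaction(baseline: AccountBaseline, amount: float,
                     threshold: float = 3.0) -> bool:
    """Flag amounts more than `threshold` standard deviations from the
    account's history, then fold the amount into the baseline."""
    suspicious = baseline.zscore(amount) > threshold
    baseline.update(amount)
    return suspicious
```

For example, after seeing a handful of transactions in the $40-$62 range, a $5,000 charge on the same account would score hundreds of standard deviations out and be flagged for review:

```python
baseline = AccountBaseline()
for amt in [40.0, 55.0, 48.0, 62.0, 51.0, 45.0, 58.0]:
    flag_transaction(baseline, amt)   # all within the normal band
flag_transaction(baseline, 5000.0)    # → True
```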
From Casey’s perspective, one of the main benefits is AI augmentation of worker capabilities beyond what employees can realistically achieve without it.
“A key manifestation of this is automating repetitive, administrative tasks that do not require labor-intensive human reviews,” he said.
He noted another benefit of GenAI is worker training, pointing to a Gartner study predicting AI augmentation will reduce the need for specialized education for entry-level cybersecurity positions by 50% by 2028.
Narayana Pappu, CEO at Zendata, said GenAI provides “tremendous opportunities” for efficiency improvements.
“Developers reported anywhere between a 10%-25% increase in speed when writing unit tests, completing methods or functions, or working on routine tasks,” he said.
He added that on the security side, although there is excitement and usage, the effectiveness of foundation models is not as well proven.
“The training data needed for training a great foundation model focused on security is siloed within individual companies, and this could potentially be addressed by small language models,” he explained.