GenAI, cybersecurity

Lemongrass CISO Dave Manning recently shared his insights regarding the intersection of GenAI and cybersecurity, and how he foresees cybersecurity further shifting left. Below are his thoughts, as shared with Amanda Razani, the managing editor for Techstrong.

AI has been at the center of copious conversations over the past year or two. But is AI in the realm of cybersecurity actually new?

Not at all. For years, security tools have included “Artificial Intelligence” features – although the industry didn’t always label them as such.

For instance, take Security Information and Event Management (SIEM) platforms. The basic job of these tools is to ingest tons of security data and look for anomalies. AI is a great way to do this because it can automatically recognize patterns and deviations from them – such as login events that are anomalous based on a user’s login history.  This kind of anomaly detection and correlation has been a pretty standard feature of SIEMs for some time.

The big change that has taken place recently is the advent of generative AI, which has not yet been a large part of cybersecurity tooling.

OK, so let’s talk about generative AI in cybersecurity. Does GenAI create new capabilities for security teams that wouldn’t be possible using other types of AI?

Yes, plenty.

Some obvious and basic use cases for GenAI in cybersecurity are automatically drafting reports and summarizing alerts. In these scenarios, AI can save time and reduce toil for security personnel.

But going deeper, I foresee a variety of more sophisticated GenAI use cases in security. One with which I have some experience is using GenAI to interpret and translate documented security policies – meaning high-level security standards that we define for the organization – into actual security control objectives and procedures that we can implement within our systems. In this way, my organization has begun using GenAI to better-align our policies and practices, which is always a goal of security practitioners.

I also envision using GenAI to help us shift security even further left: Less human error through greater automation, more-advanced fuzzing in the development pipeline, and a rapid and cheaper alternative to traditional pentesting by having GenAI tools evaluate an organization’s IT environment and describe – or simulate – likely attacks against it. There are some promising attack-path simulation tools available today, and this seems like the logical extension to that.

What are the benefits to using AI to support use cases like those you’ve just described – as opposed to doing things the old-fashioned way, using human staff?

To me, the fundamental advantages of AI compared to humans are speed and scalability: AI can handle vast quantities of data with ease, and brings what were prohibitively-expensive high-end capabilities within the reach of everyone.

For instance, you could look for vulnerabilities in an application by hiring a team of skilled developers and asking them to pore over source code. After many days or weeks they’d produce a report detailing risks within an app. If you’re lucky, they’d also find time to suggest how to remediate the vulnerabilities.

Soon you’ll be able to deploy an AI tool that scans your source code and/or simulates attacks against your application automatically. Within minutes you’d likely have identified most or all of the same vulnerabilities that those developers would have found through manual analysis. At the same time, your AI tool might also have rewritten the code to fix those vulnerabilities.

That’s just one example, but hopefully the point is clear: With AI, you can work much faster and at a greater scale. Plus, you can free your human engineers and analysts to focus on tasks that most find more interesting than the repetitive manual work they often find themselves having to do.

Speaking of keeping humans happy, should cybersecurity staff worry that AI will take their jobs?

That question brings to my mind the famous Latin phrase “quis custodiet ipsos custodes” – who watches the watchmen?

How much do we trust AI security tools to always represent our best interests? Putting appropriate controls around them will remain a human role for a long time. AI security is so new that, as an industry, we’re still learning what questions to ask – before we even get to working out the right answers.

AI-driven approaches to cybersecurity have the potential to expose an organization to new types of risk, making human oversight all the more important. For instance, imagine that you deploy an AI-based Security Orchestration, Automation and Response (SOAR) platform to automate security operations. Using something like prompt engineering, a threat actor could convince the SOAR to carry out malicious activity in the environment it’s supposed to be protecting. It is the role of the humans to ensure that the potential blast-radius has been minimized through appropriate controls.

What other new challenges does AI bring to the realm of cybersecurity?

The risk that GenAI tools or services will accidentally expose your organization’s sensitive data is a high-profile one. So is the risk that threat actors will find a way to poison the data that an AI model uses for training in ways that cause the model to behave maliciously.

These aren’t fundamentally new challenges. That second example is a variation of supply-chain security, because it involves ensuring that the AI tools and services supplied by external vendors – and the components they’re built from – are well-understood. But they certainly add a new dimension to this challenge.

In a more philosophical sense, I’d add that generative AI is challenging because it breaks a cardinal rule of cybersecurity, which is that you should never mix the data and control channels. This is precisely what today’s Large Language Models (LLMs) do when they make it impossible to differentiate between the data a model was trained on and the algorithm that powers the model. This is another reason why GenAI requires new types of controls, such as the ability to filter model inputs and outputs to reduce their susceptibility to malicious prompts.

Now that AI is a well-established part of cybersecurity workflows, what comes next? Which innovations should we watch for in coming years?

I can’t predict the future with any greater guarantee of accuracy than an AI model, but I think that one big change to watch is how society adapts to AI technology.

Even today’s AI technology is quite powerful and, in many ways, reliable. Arguably, what’s holding things back is societal and cultural concerns about its safety and trustworthiness.

For instance, take self-driving cars. They have accidents, but so do human drivers – and when you crunch the numbers, you discover that self-driving cars have lower accident rates than humans. Yet when a self-driving vehicle does crash, you can bet it will be all over the headlines, whereas human car crashes are so routine that they rarely even make the local news.

I suspect this will eventually change as society becomes more comfortable with the idea of outsourcing driving to AI. And I think we’ll see something similar pan out with AI’s incorporation into other domains, including cybersecurity.

Today, we are hyper-alert to the risks and challenges that AI presents precisely because it’s so new and, to be frank, not particularly well understood even by AI developers.  We’ve learnt to adapt and thrive with every previous technology wave and have become more comfortable with it until it’s an invisible part of our everyday lives. There’s no reason to expect AI to be any different.

TECHSTRONG TV

Click full-screen to enable volume control
Watch latest episodes and shows

Qlik Tech Field Day Showcase

TECHSTRONG AI PODCAST

SHARE THIS STORY