Synopsis: In this AI Leadership Insights video interview, Amanda Razani speaks with Cody Cornell, co-founder and chief strategy officer of Swimlane, about how business leaders can harness generative AI and automation to improve SecOps.
Amanda Razani: Hello, and welcome to the AI Leadership Insights series. I’m Amanda Razani with Techstrong AI, and I’m excited to be here today with Cody Cornell. He is the Co-founder and Chief Strategy Officer for Swimlane. How are you doing today?
Cody Cornell: Great. Happy to be here. Thanks.
Amanda Razani: Can you share a little bit about Swimlane and the services you provide?
Cody Cornell: Yeah. Swimlane, at its core, is an AI-enabled security automation tool from a product perspective. That’s a bit of a mouthful, but what we really do is help organizations deliver security operations, which is historically a very human-intensive process, by providing automation. We’ve been doing that for almost a decade now for some of the largest companies and government agencies in the world, and, along with a lot of other folks, AI has come in and really been a good contributor to helping us achieve that outcome, which is: How do we get more done without as many people? Because there’s never enough people to do the work.
Amanda Razani: Absolutely. So with that being said, I believe you recently unveiled a tool called Hero AI. Can you share a little bit about that?
Cody Cornell: Yeah. Hero AI is a set of AI innovations that we developed. They’re pretty unique in the sense that we’re using a combination of generative AI and other capabilities, like natural language processing, to help security analysts. There aren’t enough security people in the world to do all the work that’s needed. So automation has historically been the backbone, a force multiplier for those teams, and AI is really another capability layer on top of that to help them do things like summarize cases, determine next steps, and start answering questions more quickly and efficiently than they could by hand.
Amanda Razani: Wonderful. So from your experience in working with business leaders, how can they best harness generative AI and automation for their SecOps?
Cody Cornell: Yeah. I think security has been using artificial intelligence and machine learning for a long time. Right? So I’d like to say that in some senses we’re ahead of the curve, in the fact that we’ve leveraged it for a long time. The thing that’s new, obviously, is the use of generative AI. LLMs have become very prominent, almost synonymous with AI, in the last year, year and a half. So I think you need to bucket the tasks and outcomes you’re trying to achieve so that they match the technology you’re trying to leverage. Right?
Generative AI is exactly that: it’s generative. It can create text, it can create images, it can help people do things they typically would have to do either manually or through a lot of process. Other AI is about things like detecting deviations from a baseline, which is more like threat detection. So one of the things to think about, before you wave the magic AI wand at everything, is the outcome you’re trying to achieve, and realize that AI, much like cybersecurity, is not one-size-fits-all. There are different technologies and different techniques depending on the outcome you’re trying to create and the problem you’re trying to solve.
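To make that distinction concrete, here is a minimal sketch of the “deviation from baseline” style of AI Cornell contrasts with generative models. It is illustrative only; the metric (hourly failed-login counts), the sample data and the z-score threshold are assumptions, not Swimlane’s implementation.

```python
# Minimal sketch of "detecting deviations from baseline" -- the non-generative
# kind of AI described above. Data and threshold are illustrative assumptions.
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag `latest` if it deviates from the historical baseline by more
    than `z_threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# Hypothetical failed-login counts per hour for one account.
baseline = [2, 1, 3, 2, 2, 4, 1, 2, 3, 2]
print(is_anomalous(baseline, 40))  # True: a spike worth investigating
print(is_anomalous(baseline, 3))   # False: within normal variation
```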
Amanda Razani: So when business leaders are trying to implement this technology, what are some of the roadblocks that you see them encounter, and what is your advice to get past those?
Cody Cornell: Yeah. I mean, I think one of the biggest things, and one of the reasons why we developed Hero AI, is that a lot of organizations, especially enterprises and government agencies, want to leverage AI very aggressively. There are lots of good things they can do with it. But unfortunately that means they’re sending really sensitive, proprietary and sometimes security data to public sources. Right?
So it’s not always obvious how something like OpenAI or Gemini is leveraging that data for its own training and reinforcement learning and things like that. One of the things organizations run into is: How do I use this technology without exposing sensitive, proprietary information?
And that’s really what Hero is about. It’s a private and secure LLM that isn’t tied to a public LLM, which means your data is safe in the sense that we’re not training with it, and we didn’t train with your data to start with, so there’s no way to leak your information. We use our own data and public data, but not customer data. That gives our customers a lot of confidence, so they can leverage AI aggressively without some of the downside risks of exposing sensitive information.
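As a rough illustration of the pattern Cornell describes, keeping prompts inside your own boundary, here is a sketch that posts a case-summarization request to a self-hosted model endpoint rather than a public API. The URL, payload shape and response field are hypothetical; this is not Hero AI’s actual interface.

```python
# Sketch: send sensitive case data to a privately hosted model endpoint so
# nothing leaves the internal network. Endpoint and schema are assumptions.
import requests

PRIVATE_LLM_URL = "https://llm.internal.example.com/v1/generate"  # hypothetical

def summarize_case(case_notes: str) -> str:
    """Ask a privately hosted model to summarize an incident case. The host
    does no training on inputs, per the deployment's own policy."""
    resp = requests.post(
        PRIVATE_LLM_URL,
        json={"prompt": f"Summarize this security case:\n{case_notes}",
              "max_tokens": 200},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # response field name is an assumption
```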
Amanda Razani: You said that you work with government agencies. So what is your opinion: Do we need better regulations when it comes to AI technology and cybersecurity, and if so, what do you think the timeline is for producing them?
Cody Cornell: Yeah. That’s a pretty politically loaded question, to be honest. I think there are a lot of things people can do with AI, and there are a lot of positive outcomes and a lot of negative outcomes. Right? Take deepfakes as they relate to political influence, or the invasion of privacy in impersonating an individual; we’ve already seen those things happen. And those, in and of themselves, are crimes, right? We already have laws for those, and those laws should be enforced.
Obviously, AI regulation might be another way to do that, so there might be some improvements to the laws we have. But if you think about the things people are doing with AI, they’re already illegal, and implementing regulation for AI is really difficult because it moves so quickly. Right? It’s one of the fastest-moving technology innovations, probably faster than the internet, faster than mobile, faster than the cloud. So the idea of regulating it is really difficult. I’m not going to say it’s not needed. I’m not a lawyer or a politician, so I’m not an expert on either, but there are probably things that need to be done, and there probably needs to be some type of body.
Maybe you compare it to the Food and Drug Administration, which is constantly looking at new things and evaluating them for risks to the public. But the idea of setting a law and implementing it in a way that deals with changes coming at the pace they’re coming is really hard to comprehend. And also, just remember, we do have laws on the books. If people are using AI to break the law, steal money, steal information or slander people, those are already crimes. So we should enforce them as such.
Amanda Razani: Yeah, definitely. So this technology is advancing extremely quickly. What do you foresee as far as how businesses will incorporate AI moving forward?
Cody Cornell: I think it’s kind of the classic. I think it’s Gates’ Law, right? We overestimate what will happen in the short term and underestimate what will happen in the long term. Much like we probably didn’t understand 20 or 25 years ago what the impact of the internet would be, we probably don’t understand how AI is going to impact our lives 10, 15, 20 years from now. I think the chasm is that big or larger.
So I think we are going to get much more efficient, and I think it’s going to change the occupational landscape. Some of the skills that have been required to create new content, be it copy for blogs, images for whatever they might be needed for, or software development, are going to be augmented, specifically with LLMs. Will it replace unique thought? No, it can’t.
It’s trained on known things. But I do think a lot of the work that is more summarization of other tasks or other pieces of information will quickly become commoditized. From a cybersecurity perspective, there are lots of really interesting applications there. Think about detection engineering and threat hunting.
Both of those require you to come up with ideas: how to detect bad behavior, how to hunt for bad behavior in an environment. Typically that’s an individual coming up with those ideas and then testing them. Generative AI is probably a really interesting technology to apply to those use cases. Again, it’s not a panacea. It won’t solve all security problems, but it definitely will help with some.
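As one hedged example of what that could look like, the sketch below asks a privately hosted model to draft a first-pass detection rule for an analyst to review and test. The endpoint and response shape are the same assumptions as in the earlier sketch; the workflow is an illustration, not a Swimlane feature.

```python
# Sketch: apply generative AI to detection engineering by having a model
# propose a draft rule that a human analyst then validates and tests.
import requests

PRIVATE_LLM_URL = "https://llm.internal.example.com/v1/generate"  # hypothetical

def draft_detection(behavior: str) -> str:
    """Return a model-drafted Sigma rule for `behavior`. The output is a
    starting point for an analyst, not a finished detection."""
    prompt = (
        "You are a detection engineer. Draft a Sigma rule that detects: "
        f"{behavior}. Include the log source and likely false positives."
    )
    resp = requests.post(PRIVATE_LLM_URL,
                         json={"prompt": prompt, "max_tokens": 400},
                         timeout=60)
    resp.raise_for_status()
    return resp.json()["text"]  # field name is an assumption

# Usage: print(draft_detection("PowerShell spawning from Microsoft Word"))
```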
Amanda Razani: Awesome. Do you think that we’re going to come to a point where we see absolutely everyone using generative AI in some form or fashion?
Cody Cornell: I think everybody will be consuming generative AI, whether they realize it or not. I mean, do I expect my parents and grandparents to be using generative AI? No. But right now they use the cloud and they don’t know it. AWS is running a huge percentage of web services, things like Netflix and PayPal and all these other things.
I’m not sure those are actually AWS customers, so I don’t know that’s the case. But when Amazon’s not running, those things are not running. Right? So a lot of people are using the cloud, but they don’t realize it; they’re consuming the end service. And I think there are a lot of things that will be built on top of generative AI capabilities, or augmented with them, that we might not even realize are happening. But we are consumers of those services.
Amanda Razani: Right. So if there was one key takeaway you would like to leave our audience with today, what would that be?
Cody Cornell: I think it’s: Look for opportunities to leverage it, but don’t assume it’s going to solve all your problems. What we’ve seen in the last 12 months is a lot of hype that, “I have a problem, AI will solve it.” But there’s a lot that goes into using AI: gathering the right data, training it, making sure there are guardrails, making sure there aren’t any privacy implications, whatever it might be. So look for opportunities to use it, but don’t assume it’s just going to solve all your problems. Some of your problems are probably not things that can be solved by training a model. We’re not going to get to Utopia through trained LLMs.
Amanda Razani: Right. It’s just one [inaudible 00:09:54] of many.
Cody Cornell: Yeah.
Amanda Razani: Well, I want to thank you, Cody, for coming on our show and sharing your insights with us today.
Cody Cornell: Yeah. No, thanks for having me. I really enjoyed it.
Amanda Razani: Thank you.