Synopsis: Amanda Razani speaks with Greg Notch, chief information security officer of Expel, about the good and bad sides of generative AI.

Amanda Razani: Hello, I’m Amanda Razani with Techstrong.ai, and I’m excited to be here today with Greg Notch. Greg Notch is the chief information security officer at Expel. He’s been doing the security and tech thing for over 20 years, helping companies large and small through all three dotcom booms build high-performing engineering teams and improve their technology processes and security. How are you doing today?

Greg Notch: I’m great. And how are you?

Amanda Razani: Glad to have you on the show. I’m doing well. So can you tell us about Expel and the services you provide?

Greg Notch: Sure. We’re a managed security services provider. We sell a couple of things: a managed detection and response service, a phishing service, and detection and response services for cloud infrastructure, SaaS and Kubernetes.

Amanda Razani: Wonderful. Well, that’s a good segue into our topic of discussion today, which is generative AI. There have been headlines about the good side of AI and the bad side of AI, but according to you, no technology is inherently good or bad; it’s all in how it’s used. Can you share your thoughts on this subject in more detail?

Greg Notch: Sure. I mean, I’ve been doing this a while, and I’ve seen a lot of transformative technology hit the marketplace. I’d say almost all technology is dual use, right? The internet itself, social media, mobile computing, the web and its evolutions. Look, new things change the equilibrium. They change the balance of how we get information, who we get it from, how quickly it shows up and how it’s made. A lot of the back and forth is because generative AI is going to change how content and information is made, how it’s distributed and how you get it. And of course there are both sides of that; it’s dual use. It’s going to be useful for attackers, useful for defenders, useful for all of us, and there are potentially some new harms that come out of it.

Amanda Razani: Absolutely, and I think it all comes down to the human role. So let’s talk about the role of humans in the context of AI. Why is it essential to keep them in the loop?

Greg Notch: Sure. I mean, I saw someone say that your job isn’t going to be taken by AI; it’s going to be taken by somebody using AI. So I think a lot of the concerns about its impact on the job market are a little overblown. I’ll give an example: humans might outsource rough drafts of writing, but they’re certainly not going to outsource the final product. As far as security and other technology goes, it’s essential to keep humans in the decision-making loop. If you look at the output of some of the generative AIs, if you’ve spent time with a customer service chatbot on Amazon or elsewhere, you realize we have a long way to go, and you often need a human in the loop. For security and other risk-based technology, there’s not going to be a human out of the loop for quite some time, if ever. You want ethical decision-making done by human beings.
Generative AI is a step change in the ability of technology to assist those folks, but these things aren’t thinking for themselves. The way I think about it is that decisions are the frontier here. How much autonomy you allow an AI, and in what context, is going to depend on the scope of the consequences. If it’s something small and not that important, sure, let the AI handle it. But if I’m making decisions about the future of a human being, I think we still need to keep humans in the loop.

Amanda Razani: Absolutely. Can you share some real-world examples where generative AI has been both the problem and the solution, depending on how it’s used?

Greg Notch: Sure. I mean, the canonical example of weaponization is business email compromise and impersonation. You’ll see targeted emails that are much better written than they have been in the past. They can be automated and are indistinguishable from human writing, even when the attacker isn’t a native speaker. That’s a problem. You also see the rise of impersonation through phishing and voice phishing. Given a few seconds of this video and some audio, someone could impersonate either one of us. I think that presents some very serious challenges.
I’d say from a solution standpoint, though, generative AI can be used to detect and respond to incidents. We experiment with some of this, and you’re certainly seeing it in security technology now: you can use a generative AI to build context about an attack and explain complicated things in simpler terms. That helps response teams know what to do, defend better and remediate problems more easily than before, when a human analyst had to write it all up and explain it in a way someone could act on. We find generative AI helpful there.
Another example is that it can be used to develop software more quickly and reliably. With tools like GitHub Copilot, we have engineers on our detection and response team building detections and preventions into our product faster, and responding to threats in the wild. So there’s a yin and yang to it: phishing is getting more sophisticated, but our responses are getting better, more reliable and more rapid.
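
To make that detection-and-response use concrete, here is a minimal sketch of the pattern Notch describes: handing raw alert data to a generative model and getting back a plain-language triage summary a responder can act on. The alert fields, prompt and model name are illustrative assumptions, and the OpenAI Python SDK stands in for whatever client a team actually uses; this is not Expel’s implementation.

```python
# Minimal sketch: turn a raw security alert into a plain-language triage
# summary with a generative model. Alert fields, prompt, and model name
# are hypothetical; any LLM client could fill this role.
import json

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical alert as it might arrive from a detection pipeline.
alert = {
    "rule": "suspicious_oauth_grant",
    "user": "jdoe@example.com",
    "source_ip": "203.0.113.42",
    "details": "New OAuth app granted mail.read and mail.send scopes",
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a security analyst. Explain this alert in "
                       "plain language and recommend next steps for a responder.",
        },
        {"role": "user", "content": json.dumps(alert)},
    ],
)

print(response.choices[0].message.content)  # human-readable triage summary
```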

Amanda Razani: So from your standpoint, how do you view the ethical concerns of AI at this time, based on its usage?

Greg Notch: Sure. I mean, my main concern is that we aren’t having enough conversations about the ethics yet. We’re in a gold rush, land-grab moment in the industry where everyone’s saying, “Let’s bring these things to market.” And we’ve seen in the past what happens when technologies are rapidly brought to market. Social media is a good example, smartphones another: we didn’t really understand the impact on our kids, on how we interact, on how we get information, on how misinformation spreads. We found all of that out after the fact, and only then did we start to think about the ethics of social media and smartphones. I think this is another case of “We’re going to put this thing out there and hope for the best,” and maybe some better thinking on that is needed.
Specifically, I think we need to be talking about what controls we put in place. And I don’t mean regulation; I mean how we validate truth and how we validate identity. In a world where you can generate copies of people and their likenesses, we need to look at rights around your likeness. Who has the right to use it? The notion of public domain is changing a little bit, right? If I put this video out and someone uses it to generate a video of me saying something different, what are my rights in that scenario? The video I put out was public, but do you have the right to remix it endlessly to make me say something I didn’t mean? That’s especially important in politics and other parts of the public sphere. Those are some heady ethical concerns.
And the last one I would say is that the training data is everything. These generative AIs are trained on whatever data is available, and that might not be the most inclusive or socially forward set. For example, a lot of video training data features mostly white faces, and the results don’t necessarily transfer to people with different backgrounds or skin tones. That’s a problem. So we need to really mind the training set and the social implications of the technologies we’re building. It’s a super hard problem, and folks are working on it, but it’s something to keep an eye on from an ethics perspective.

Amanda Razani: Definitely. I think the technology is advancing so quickly that it’s hard to keep up and handle all these issues. There was just a big meeting at the White House where people flew in to discuss these concerns, the EU is working on some things, and some companies have stepped forward. So I know it’s a concern across the board, and they’re trying to get a handle on it now before it advances even further.

Greg Notch: Or at least build some frameworks for how to think about it. If we understand we have this problem, then everybody is at least aware of it. You know there’s bias in this model; everybody knows that, right? So we’re not going to make life decisions based on it.

Amanda Razani: Right. So what are some common attack trends involving generative AI, and how can organizations defend against them effectively?

Greg Notch: Look, I mean, it is an emerging area. What we’re seeing so far is improved spam that’s evading detection. The usual advice of “look for bad grammar” is not going to fly anymore in business email compromise.
What’s interesting is that I’m starting to see reports among my peers of job interviews where the candidate’s video or audio is fake. So establishing strong identity, especially with remote work, is very difficult. How do you validate that what you’re seeing is actually the person you expect it to be? I think it’s only a matter of time before someone gets their bank account emptied, because if transfers are authorized over the phone, how does the phone system know it’s really you on the other side? All of these are emerging, scary problems, because so much of our identification and identity is based on these kinds of authentications.

Amanda Razani: Yes. So what are some of the key aspects of cyber awareness that individuals and organizations should prioritize, do you think?

Greg Notch: I mean, awareness is just that: letting people know that this is all possible, and teaching them to be inherently skeptical. A lot of what security awareness training comes down to is: ask some questions. If you’re not sure, even if you are sure, ask some questions anyway.
We need to have conversations about how we do that verification, because if you’re talking to a generative AI and can’t tell, surface-level checks won’t matter. That means verifying out of band: pick up the phone and call the person’s known cell number, or use challenges and responses an AI probably won’t get. All of that is emerging. I don’t know that we have good, prescriptive “you must do this” guidance yet, but those are the kinds of things I think you’ll see coming out of security training over the next year or two.
You want to verify that someone is who they say they are and intends to do the thing they’re talking to you about. Confirming identity and intent is something we’ll need to train people on, particularly HR and finance folks, so they can validate that both are true.
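
As a rough illustration of the out-of-band check Notch describes, here is a minimal sketch: generate a one-time code, deliver it over a channel you already trust (a phone number on file, not one the requester supplies), and compare the response in constant time. The function names and the stubbed delivery step are assumptions for illustration, not a prescribed control.

```python
# Minimal sketch of out-of-band verification: a one-time code is sent over
# a pre-verified channel and checked with a constant-time comparison.
# The delivery step is a stub; this illustrates the pattern only.
import hmac
import secrets

def generate_challenge(digits: int = 6) -> str:
    """Return a random numeric one-time code."""
    return "".join(secrets.choice("0123456789") for _ in range(digits))

def deliver_out_of_band(code: str, known_phone: str) -> None:
    """Stub: send the code via a channel verified in advance (SMS, voice)."""
    print(f"[out-of-band] sending code to {known_phone}")

def verify(expected: str, submitted: str) -> bool:
    """Constant-time comparison so timing doesn't leak the code."""
    return hmac.compare_digest(expected, submitted)

# Example flow: a finance clerk gets a wire-transfer request that could be
# a deepfaked call, so they challenge the requester out of band.
code = generate_challenge()
deliver_out_of_band(code, known_phone="+1-555-0100")  # number on file
assert verify(code, code)          # the real requester reads the code back
assert not verify(code, "000000")  # an impersonator guessing fails
```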

Amanda Razani: So last question, with this rapidly advancing technology, what do you see happening in the next year as it relates to AI?

Greg Notch: Look, I think every company is trying to figure out how to leverage this to gain an edge, whether it’s to cut costs, drive efficiency, improve output or bring new products to market. Depending on the business and the landscape, we’re going to see some or all of that from pretty much everyone.
Probably the most exciting part is new products. A year is a pretty short timeline, but I think we’ll see a lot of “Hey, our product, now with generative AI.” You’ve seen Notion and the email products from Google and Microsoft all baking that in. The value depends, though. Like I said before, if you’ve interacted with a chatbot online in the last six months, you have some idea of how far we have to go before this is fully baked. But the promise is exciting. There’s certainly risk, like with any other new technology, but I’d say the future’s bright.

Amanda Razani: Well, you’ve shared some really great points on our show today. I want to thank you for coming on and giving your insights about this really interesting topic.

Greg Notch: Awesome. Thank you.

Amanda Razani: Thank you.