Synopsis: Cloud Security Alliance (CSA) CEO Jim Reavis describes goals for an AI Safety Initiative launched in collaboration with Amazon, Anthropic, Google, Microsoft and OpenAI in collaboration with governments and academic experts.

Mike Vizard: Hello and welcome to the latest edition of the TechStrong.ai video series. I’m your host Mike Vizard. Today we’re with Jim Reavis, who’s CEO for the Cloud Security Alliance, and they have launched an AI safety initiative with a who’s who of everybody in the AI space. And we’re going to dive into what their goal is and why this is a critical issue these days. Jim, welcome to the show.

Jim Reavis: Thank you for having me, Mike.

Mike Vizard: So what is the thought process behind this initiative? It seems like everybody’s talking about AI safety. It’s not clear to me what they’re doing about it per se, but what’s the outcome? What’s the goal?

Jim Reavis: Sure. Well, there are, as you’re saying, a lot of people that are talking about this and governments doing efforts on this, and it’s a lot like when I started Cloud Security Alliance, there was a lot of different efforts. And so that’s great, and hopefully not everything is going to come to fruition, but hopefully a lot of us working together will get a lot done. But the idea, I mean simply is that we see AI as probably the biggest tech trend in history to date, and we can see how far-reaching its impact is going to be. And we can even observe, although AI has been around for decades actually, that all of the effort into generative AI and then combining it with a cloud interface so it’s accessible by anyone is creating this huge viral adoption and we’re just going to see so much change.
And so we all understand this, this is huge topic, it’s going to change the world, what do we do about that? And so the mission of CSA’s AI safety initiative is really to understand that, let’s create the best practices, let’s create the education, let’s create the certifications that are needed for organizations to show that they are using this responsibly and also safely. And I heard a podcast recently that Sam Altman, the CEO of OpenAI was on, because this debate over do we stop AI and figure out how to secure it and restart it, and I’m just sort of paraphrasing, he was saying that we really need to build the guardrails in the context of everyone adopting it, developing it, because they’re finding new use cases, new problems all the time.
And so this is meant to be a real agile effort, a whole of industry approach to create those guardrails, to create those certifications, to create the best practices that are very actionable for all of the different stakeholders, and then be able to iterate this, because AI is moving in a direction where it’s going to get more and more powerful and maybe a lot sooner than we think. And by doing this now, we’re going to be in a lot better position to incrementally improve it and have a strong governance structure for how we deal with those more powerful versions of AI in the future.
So that’s what it’s about. And because it really is something that’s very tightly coupled with cloud computing, we thought we were in a very good position, and with all the work we’ve done over the last 15 years in building out a research framework, a framework of chapters all around the world, we could leverage what we have organizationally to be able to be one of the organizations that are probably in a better position to do this type of work in a very competent and quick way.

Mike Vizard: Also, seems to me if everybody did it themselves, there would be a lot of redundant efforts that would be conflicting. So can we get to some level of consensus about what’s required here and what we should be doing?

AWS

Jim Reavis: Yeah, you’re going to have by nature, take countries for example, countries have a responsibility of their governments to protect their citizenry. So they’re not just going to ignore it and do nothing, and there will be some duplication of effort. But interestingly, they also are collaborating, CISA and the UK Cyber Security Agency and a bunch of other countries put together some high level guidelines on secure development, and so you do see some of that outreach. But yeah, we think as a global organization and working with government, industry, we can provide a consensus around that. That’s going to make it easier, because I think what these different groups will find, we’re all about 80, 90% talking about the same thing. So why not have us use it and create something that’s freely available that everyone can use?

Mike Vizard: Is this going to break up into different working groups? Because from my perspective, there’s a world of difference between trying to protect an AI model from vulnerabilities that exist in the underlying software to making sure that somebody isn’t using it for some sort of nefarious purpose involving say, I don’t know, not to pick on an industry, but redlining neighborhoods in the real estate business?

Jim Reavis: Yeah, we already have a few working groups and we probably need more because you’re absolutely right, that the malicious actors, they’re doing multiple things. They’re trying to understand how they can attack AI systems directly, but they’re also even more so they’re looking at how do we use those to transform our business? How do we use AI the way it was intended to be able to further what we’re doing? And you have organizations, and this is where it’s definitely a lot like cloud computing, you take an enterprise and they’re going to look at, well, do we use these big public large language models? Do we create our own LLMs just with our data internally? So that’s sort of the analogy to public and private cloud. At the same time, they are using tons and tons of applications and the SaaS applications are, there’s this huge effort to put generative AI in those. So the enterprises need to understand how to do risk management over all these different areas, whether it’s building it or it’s buying it. And so there’s that enterprise perspective.
Now there is how do we make these AI systems internally very secure, and how do we make it transparent so people understand how did it come to that answer? How does this explainability of it help us understand the things like you’re talking about, redlining or other sort of analytics that are leading to nefarious results? And so we’ve got it organized into different areas. And also the one area I didn’t mention is cybersecurity as an industry is racing to use this for good, I’m calling it the co-pilotization of cybersecurity and basically taking the Microsoft Copilot what they’ve done. Everybody is doing that, and I don’t think you’re going to recognize cybersecurity in terms of from the solution providers two years from now from what it is right now. So we’ve got a bunch of different working groups to look at it from those different perspectives and we’ll see what sorts of things we feel like in the next few months that we need to add to that as well.

Mike Vizard: What is your sense of what is the regulatory environment going to be like? And I think a lot of organizations are waiting to see. There’s clearly a lot of interest in experimentation and they’re building out AI models, but there’s always going to be a little hesitancy of deploying these things in production environments if I don’t know what it is that I’m going to get beat up for later.

Jim Reavis: Yeah, there’s a strong appetite to regulate here. And what’s interesting is when I spend time with what they call the frontier model companies, those are the largest large language models where the definition of it is those could reach artificial general intelligence faster than the others, when I talk to them, when I talk to some of the very large cloud providers, they’re definitely welcoming that dialogue with regulatory bodies and the idea of regulations, more than I see have seen or can remember from the early days of cloud. So there is a lot of them understanding, boy, this could create some real changes in the world, could be very existential, and so we don’t want to take all the responsibility for all the decisions we’re making. We want governments to be on board with what we’re doing.
So they’re starting to do that. And I haven’t looked into all the details of the EU Act, people tell me it’s pretty reasonable. But definitely taking that privacy of citizens and other sorts of data as being very critical that we protect that here and seeing what new laws versus interpretation of new law of old laws need to happen. You’re seeing different sorts of activity like on the copyright, that’s just a huge area, copyright protection. And it seems like people are leaning more towards these large language models and generative AI need to be pretty darn respectful of our traditional ideas of copyright law, which I think they can do. So it’s going to be real interesting, this mix of old laws being reinterpreted and the new laws we need to see for that next generation of AI.

Mike Vizard: What is your sense of how far are the so-called malicious actors because they’re unencumbered by many of the niceties we just discussed? So are they further down the path or will they be moving faster than the good guys are able to move because they don’t have to worry about what some regulator thinks?

Jim Reavis: Yeah, I think there’s going to be this AI versus AI, when I talk about you not recognizing cybersecurity a couple of years from now, I think that’s going to be the case. You can see some things today where, one example that several of these companies have shared with me is the AI-based scanning of code for vulnerabilities. And if you are writing code and it’s available in some way, maybe in GitHub repos or something like that, that AI scanning is finding vulnerabilities more quickly than anything we could have ever done before. And so hey, you probably got to be using it yourself to scan your same code to find the same vulnerabilities. So there’s some areas there where the malicious actors are ahead.
There’s certain types of attacks you can imagine with prompt injection that could allow malicious attackers to do some pretty bad things right away. And I haven’t seen as many as I thought and so don’t know if they have not really caught up to that yet, or if they are saying, hey, we want more people dependent on these AI systems before we attack them.
So you’d say they’re probably looking at things pretty aggressively, it’s hard to say do they have the overall edge. But I will say that there’s real promise like a couple of years down the road here, that there’s this appropriate governance over these really frontier large-language models that they’re able to prevent some of that malicious activity there while at the same time giving an advantage to enterprises, to the good guys. And so there’s a real opportunity for the balance to shift, because there’s so much capital and so much compute power, so much data training that go over these, they’re just going to be really powerful in all sorts of industries. And so if you can give more access to the good guys than the bad guys, you could see some promise there.
But yeah, right now malicious actors are looking at it, they’re doing some things very aggressively, the phishing and other things like that, and there’s never going to be an end. It’s going to be AI versus AI for quite a long time.

Mike Vizard: What is your sense of, when I look at the companies who were starting this, they’re clearly a lot of the AI providers and that’s clearly necessary, the average enterprise, do they need to be part of this process? I mean, how do they get engaged? Because I think a lot of times they look at these things and they go, well, that’s interesting, but it seems to be the tech bros hanging out. How do they participate?

Jim Reavis: Yeah. So recently, a few months ago, I was meeting with the cybersecurity agency in Singapore about what is their strategy and focus here? And that’s a very advanced country, hey, we’re a small country and we’re not going to have our own versions of OpenAI and Google, and so we’re going to be users in our country and our businesses are going to be users, so we want to very much take that perspective. And it’s not about us really being engaged in the tech bro stuff and understanding the ins and outs of the LLMs, but what do we need to understand in terms of how we use these securely? What are the secure development methods? How do we operationalize that? What sorts of lenses do we maybe need into these companies versus understanding them to some great depth?
So enterprises, they need to understand how do we comply? Largely, what do we need to be doing from a data governance perspective? How’s it different? Which there’s a lot of things that are very common. So it’s that risk assessment, compliance, overall security, data governance. And then some new areas like is what we’re doing as a business, even if we’re using OpenAI or Anthropic or for something, are we doing something that’s ethical with it and is that appropriate? So they need to understand, just like we used to do, are you ready for the cloud, they need to have a process and the right governance and organization say, hey, are we ready to use generative AI for our enterprise applications and the data that we have?
So we’re finding early on there’s a lot of this being aware, and then there’s a big amount of just do we have good overall data governance as an organization.

Mike Vizard: I’m assuming you’re looking for the coalition of the willing, so how do they get involved? Do they just email you and say, hey, sign me up, or is there some more structured process?

Jim Reavis: We’ve created, you wouldn’t be surprised, a domain that goes straight to where you can see the latest activities and sign up. It’s cloudsecurityalliance.ai. And so you go there and you can see some of the tools we’ve already made available, you can sign up for working groups, you can get on the list for this, but it is everyone. It’s CISOs, it’s small businesses, it’s cybersecurity professionals, it’s audit compliance, as well as those that AI is like their whole business right now.

Mike Vizard: So as you look down the pipe, it’s the beginning of 2024, so it’s the obligatory where are we going to be a year from now kind of question, but what’s your crystal ball telling you?

Jim Reavis: So I think we will have a fully baked certification program for organizations to show that they adhere to the guidance and the best practices that we have. And something that’s going to be very different from cloud, where cloud real focus has been are cloud providers compliant, but I think we’re going to find that there’s going to be a level of the checkbox and certification that the users of this are going to need to have. So I think we’re going to have that in place. I think we’re a year from now going to have very strong enterprise readiness frameworks that really kind of the DevOps of AI, a secure DevOps of AI for how you do this holistically in the organization.
On cybersecurity teams, you’re going to find that enterprise professionals have a real strong understanding of prompt engineering, and you’re going to see a lot of gluing together of cybersecurity applications. You’re going to see a lot of sharing of that, it won’t be complete.
And a year from now, I think you’re going to find that there’s been several new announcements of more powerful, more capable, large language models that are going to create some emergent capabilities that we can’t predict. And so about a year from now, there’s going to be a whiplash. We’re not going to know exactly where it’s going to be, but we can just feel that that compute power and the data train that’s going on is going to lead to that in about a year’s time.

Mike Vizard: All right, folks, you heard it here. There’s clearly a lot of excitement, some of it perhaps bordering on the irrational exuberance, but as anybody will tell you, if you work in a field where it’s dangerous and people might get hurt, the most important thing is safety first.
Hey, Jim, thanks for being on the show.

Jim Reavis: My pleasure, Mike. Take care.

Mike Vizard: All right. And thank you all for watching the latest episode of the TechStrong.ai video series. You can find this episode and others on our website. We invite you to check them all out. And until then, we’ll see you next time.