Synopsis: In this AI Leadership Insights video interview Mike Vizard interviews Daniel Rohrer, VP of software security for NVIDIA, about how to secure the large language models people are building.

Mike Vizard: Hello, and welcome to the latest edition of the Techstrong.ai video series. I’m your host, Mike Vizard. Today we’re with Daniel Rohrer, who’s vice president of security for NVIDIA, and we’re talking about how to secure all those large language models that people are building. Daniel, welcome to the show.

Daniel Rohrer: Happy to be here. Thank you for having me.

Mike Vizard: We’ve heard about the poisoning of LLMs and AI models and people are concerned about who gains access to these things, and that sounds to me like we need some basic governance and compliance, but I’m not the expert – but Daniel, what’s going on here? What do we need to be worried about?

Daniel Rohrer: Oh, I think a lot of LLM and enterprise motion today is really just racing to get their solutions into the market. And in some cases there may be, as people moving fast, do sometimes skip steps. I think there’s a lot of security principles that we’ve learned over 20, 30 years. Denial of service, don’t let millions of people send things to your model all at once if you’ve only got one model running. Authentication, authorization, all those basic practices apply to models certainly help. But there’s obviously new classes of attacks in terms of data poisoning, back doors, prompt injection that are new and novel that teams are learning how to develop solutions for as well. So it’s a little of the old school along with some of this new art and the AI space and just helping organizations get to a place where they can do that naturally. And the way they’ve gotten used to deploying cloud solutions today is really what we’re about.

Mike Vizard: They say if you can imagine it in security, somebody’s probably working on it, so.

Daniel Rohrer: Often true.

Mike Vizard: Is there going to be something that feels like malicious code being injected into the model and then I can result with a malicious hallucination? We’re already struggling with making sure everything is right, but will we one day see bad guys trying to deliberately change an outcome of a model by injecting some malicious data?

Daniel Rohrer: I mean, I think prompt injection is that. I mean, they’re specifically trying to influence and elicit unexpected or user controlled responses today. You can see that in image spaces, but you can see that in text phishing, right? If you think about classic phishing defenses, those are models, sometimes machine learning models, sometimes sort of mathematical models, but they’re all about attack and defense. And if you can influence the model to give a yes answer to your phishing email that gets through to the user and you start playing forward those sort of attack patterns that we’re more historically accustomed to, I think. So a lot of those dynamics exist today have existed for decades. I don’t know that they significantly changed with AI, just some new tools for both attackers and defenders to use in that space.

Mike Vizard: So what do I need to put in place in terms of guardrails to make sure that this doesn’t happen or at the very least minimize it?

Daniel Rohrer: For sure. I think maybe speaking in the guardrails context, which I think most commonly comes up in chat, LLM kind of dynamics, which we certainly see a lot in media, guardrails for general intent. A lot of folks we hear on ChatGPT that answers any question for anything that you might want to know about, most applications that people deploy today don’t need to answer about travel locations in Lake Cuomo, Italy when answering facts for their product on their user website. So really tailoring down those solutions to be very targeted I think is a great first step so that you don’t have to deal with the general training dynamics of malicious or ethically biased content coming out of your model because it was never introduced to your model training to begin with.
But certainly if you’re using more off the shelf models that may have some of that data in it, using things like our NeMo guardrails products, there are others in market that shape the inputs measure for toxicity or other things. Building those in at the front, I was having a very rich discussion with our own teams internally of how do you do that efficiently and effectively and at low cost, right? There are many things you can do in old school security that are very cheap and fast. You can do those as a first step, then a more mature model that filters for toxicity, like a language model, and then using some guard railing in between as well. So you can have a much, a very layered defense defense in depth, which you hear about in security a lot, can also be applied to LLMs as well, just to shape those responses and make sure you get a good system for your purpose.

Mike Vizard: And do I also need to secure the vector databases and everything else that we’re using to interact with the LLM? Because it seems like-

Daniel Rohrer: For sure.

Mike Vizard: Move laterally, right?

Daniel Rohrer: And I think that hearkens back to the opening a little bit of securing the system as a whole and some of those classic things that we do of authentication and authorization. You need to authorize to the vector database. Not everyone may need to get all the data out of that, right? How do you make sure requests from user A and user B aren’t mixed maybe appropriately in that database? You can build systems to do that. So securing the data from beginning to end is really important. And in those prompt cases that we referred to earlier, having the prompt to be able to influence what gets accessed or doesn’t get accessed can be a real question. And you invest tools to go do that and make sure only what you’re authorized to see is what you get out of those databases. But again, a lot of common patterns just applied in this new context.

Mike Vizard: You’ve been around security a while and we’re spending a lot of effort trying to teach developers best practices for security, and it’s known as DevSecOps. Do we need MLSecOps? Because it seems to me the data science people know even less about security than the developers do.

Daniel Rohrer: For sure. That, I mean, I think first and foremost is just getting common language. Some of the language data scientists use, even if it’s the same word, doesn’t mean the same thing to a security person or a safety person, some of those ethics and bias folks. So common language I think is really important. I know I spoke about that with a panel in RSA and then closing the gap between security and data scientists. We did some training at Black Hat this year specifically to address some of those gaps that we see in market and find ways for common ground and build those fundamental schools. But training is one way, tools is another.
There’s a lot of great open source contributions and toolkits for securing AI systems, measuring for bad model files that may be a negative impact on your system. So I think those are developing, and a lot of what NVIDIA is doing now, and certainly myself and my team is investing in those things of how do we scale security solutions and take that 30 years of experience that we have, digest it down so it takes three, six months for a team to acquire that same expertise in this AI space rather than taking several decades is it’s always about translating all the things we’ve already learned and hopefully not have to relearn again, just tweaked for this new context.

Mike Vizard: Well, speaking of not relearning things again, do we take the cybersecurity experts and put them into the MLOps workflow, or are we just trying to make the folks who are already managing that workflow more security conscious? Which one of those two things is easier? Or maybe the answer is both.

Daniel Rohrer: I’m always a big fan of both. I think the lovely thing about the MLOps space, certainly for cybersecurity folks is cybersecurity is increasingly a big data problem. And ML folks are used to working with big data. They have the tools, they have the big iron, they have the deep pipelines and fast IO to really lean into those sorts of problems. So bringing cybersecurity folks into that data science space, that MLOps space, and then using them to upskill those folks, I think has a lot of synergies with what they’re already doing. So there’s a lot of opportunity, but again, there’s a little bit of that bridge building that has to happen between both sets. And I found actually, as I engage a lot of data science myself, I learned a lot from them on ways that I can use those tools to actually affect better cyber outcomes. So there’s a big virtuous cycle there in having those relationships established, especially if you have those inside your own company to lean into.

Mike Vizard: So speaking of that virtuous cycle, will I not wind up building LLMs to make sure that the LLMs are secure, so I’m going to use LLMs to police the LLMs that are in production environments?

Daniel Rohrer: I mean, yeah, I think there will be some of that. And again, referring to the conversation before, there’s a defense in depth. I can use a 50 billion model to answer the question, the general question, but I could maybe have a much smaller, much more efficient from a cost per, however you’re managing your TCO model, to defend against certain things, right? Why would I not deploy two models rather than trying to make this really expensive, large model, which is really tasked to a different thing, try and bear the load of something that I could do much more economically through a much more refined model? So I definitely see ensembles playing into that.
I also see data acquisition, you talked about poisoning at the beginning. Building good data sets is hard, right? De-biasing data sets is hard. I think there’s going to be a rich space emergent, and we see this in Omniverse in some of the simulation platforms we do today. Building simulation through AI models generating data sets, right? Finding a way, say in autonomous vehicles of, hey, I’ve got this great data set for driving through a valley, but I don’t have it through that valley in deep snow or rain or heavy winds or things. We can augment and synthesize data using AI models themselves to enrich the final models themselves. So I get there’s a lot of opportunities in using ensembles in this way to do a better overall design.

Mike Vizard: We think that we need to also think through if something malicious gets into one model and that model is the parent to a bunch of child LLMs.

Daniel Rohrer: Absolutely.

Mike Vizard: Or we’ve got a bunch of daisy chained LLMs, then all this stuff can move laterally and we need to start thinking about, I don’t know, microsegmenting LLMs. Does that make sense?

Daniel Rohrer: I don’t know that I’d go to microsegmentation, but I think your observation around classically supply chain attacks, right? If I inject this model and it goes to 300 places, that that’s a supply chain attack is very much on our minds. I know myself and others at NVIDIA are talking with many of our peers in the industry of how do we secure the supply chain? And even if not necessarily secure, having a very clear provenance of this dataset to this model, to this retrained model. So we can track these elements through the ecosystem. So if we do have a discovery of an issue in a model that we can very quickly track those down within the ecosystem and remediate those very directly. So the response is built in much the way we respond to a patch request, open SSH, “Hey, we need an update. We can very quickly move to that.” So that focus on remediation, having strong controls in between and the supply chain, I think bears out an AI just as much as it does the traditional software areas.

Mike Vizard: And if there is an incident of some type, when I talk to a lot of folks, I’m not quite sure how to take data out of an LLM once it gets in there. And so do I need to find some ways to make sure that the data that I’m using to train the LLM is isolated enough to be removed in a way that doesn’t destroy the LLM?

Daniel Rohrer: For sure that I would say is an element of it. Some of these very large language models are very costly to retrain. But again, we talked about guard railing. If I think of a more traditional security context of, hey, I’ve got a web application, it’s got a flaw, I need to remediate it, but man, if I take that down, it’s going to three days to set up a bunch of lost revenue. Or I can update my WAF system to defend against that specific packet. We have tools in that defense in depth we talked about before. There are other places that can potentially affect that remediation to allow for a more graceful transition, a retraining or whatever removal that may be otherwise quite difficult to do quickly, use those other tools to mediate that risk over time, and then spend the time to do that more directly at the model level.
So it’s a little of this, a little of that. I think certainly there is novel research to be made to find better ways to do that at the model level effectively and efficiently. I think the jury’s out on what the best path for that is. Some are easier than others, and again, depends on the application. Very large language models, 50 billion parameters, those are very hard. There are much smaller models where I think those tactics become a lot more trackable as well.

Mike Vizard: This sounds like a similar problem we have with applications today. It’s less costly to fix them before they wind up in a production environment. And the same issue seems to be playing out with the AI model is just at a rather larger exponential curve.

Daniel Rohrer: Yeah, I mean, like I said, a lot of the patterns we see over the last 20, 30 years definitely are playing out here, which is lovely for the security folks who are very comfortable with those. Makes some of the data scientists a little anxious, right? And us helping them navigate that maybe experience that we have in their space is certainly helpful.

Mike Vizard: All right. So what’s your best advice to folks who are starting to dabble in this whole security space around LLMs and the MLOps supply chain? What’s that one thing you see folks doing today that just makes you go, I think we need to be better than that?

Daniel Rohrer: Honestly, manage your data and secure your datasets. I think those will be increasingly large opportunities for folks to influence how your systems behave, right? Know where your data sets are, clean them up, manage them reliably in much the same way as you would source code. You don’t inject arbitrary source code off the web directly in your CI pipeline and production. You generally have some stages of observation before it gets to those things. Start there, that’s a great way. Understand your assets. Make friends with your data scientists who are building these models and help them along that security journey. Many of them haven’t been exposed to these sorts of patterns, but a lot of them are easily done in models once they understand the problem you’re trying to address with them.

Mike Vizard: All right folks, you heard it here. When it comes to securing LLMs, guess what? It’s a data management problem as always, so start in beginning and work your way through. Daniel, thanks for being on the show.

Daniel Rohrer: Absolutely, thank you for having me.

Mike Vizard: All right, back to you guys in the studio.