
Mike Vizard: Hello, and welcome to the latest edition of the Techstrong.ai video series. I’m your host, Mike Vizard. Today we’re with Jonathan Dambrot, the CEO of Cranium, which has a platform for securing your machine learning pipelines. We’re going to be talking about the state of regulation as it applies to AI. Jonathan, welcome to the show.

Jonathan Dambrot: Thanks so much; happy to be here.

Mike Vizard: We saw, recently, the government of Italy react harshly, maybe, to generative AI platforms like ChatGPT over privacy concerns. I’m certain that every regulator on the planet is looking at these issues and scratching their head. What’s your sense of the state of security as it applies to AI? And what should we expect from regulators?

Jonathan Dambrot: Yeah, well, it’s amazing where we are after the course of the last year. As you know, it went from “well, AI, maybe it’s coming, maybe it’s not” to “wow, everybody’s now using these capabilities.” We really have to think about both generative AI and all the AI investments that have been made over the last several years that are going into production now. And the regulators have clearly taken notice. There’s been regulation drafted in the EU with the EU AI Act; there’s been regulation in the US with the AI Bill of Rights out of the White House a few months ago, and frameworks like the NIST AI Risk Management Framework. When you think about what’s happening, it’s really due to two things, I think. One is honest concerns about AI systems and the data that’s going into them, and making sure that the public’s trust and what’s good for the citizens of these countries is being upheld. So that’s certainly, absolutely a discussion we have to have. There’s another side that’s pretty irrational, though, right? Growing up, we’ve all been afraid of AI, and we’ve been told that it’s going to take over and really create a lot of harm. So regulators’ heads are probably somewhere in the middle in thinking about these things, and we need to take a view that allows us to protect these systems meaningfully.

Mike Vizard: As we go down this path, a lot of the focus today is, of course, on ChatGPT. But if I look across the landscape, there are thousands of these large language models that people are building, and not all of them are going to have, shall we say, equal validity. So how do we know what to look for? And how do we work our way through the security and compliance issues that will inevitably arise?

Jonathan Dambrot: Yeah, that’s a great question. We look at it through two lenses. One: ChatGPT has just changed the whole discussion around how people are thinking about using AI, because it’s so easy, right? It’s just like the iPhone was when it came out. It’s easy to ask it to do something; we never had technology that could communicate in that way, and a large language model like that can do so much. I think what you’re mentioning is right on; there’s a whole set of other AI systems and pipelines that are being used and introducing these potential risks into organizations of all sizes, and they’re coming directly into those organizations and through third-party systems, right? So I think we really need to think about how we provide better awareness on both sides of this. If you’re using ChatGPT, we have to have honest conversations about what maybe shouldn’t go into it. You saw some breaches over the last week or so where people started putting in code, patent information, things that end up training these large systems, when in reality those systems don’t forget. That potentially impedes your ability to protect your own intellectual property. We need to think about that; that’s a real harm that could be created for organizations and individuals who are trying to build capabilities and technologies. On the flip side, I don’t think the conversation even occurs today, to be honest. I’ve met with a lot of organizations over the course of the last two years as we’ve been thinking about this from a Cranium perspective. When you talk to data science teams, their mission is different than that of the risk and security teams, whose obligations are to protect users, organizations and countries, right? When you have an obligation to go build something and change how a process works, you’re in the business, you’re going as fast as you can, you’re using what’s available to you; maybe you’re pulling down open source models and synthetic data or datasets you may not have control over into a data pipeline, which goes who knows where, right? That’s really trying to go fast. I think now, given all of the discussions happening and all this oversight, you’re going to start seeing people stepping back and thinking about what adversarial risk means and what they need to think about. And I think that’s a really healthy conversation.

Mike Vizard: We’ve seen some calls for a moratorium on development, possibly to enable folks to catch up on some of those issues. Is that realistic?

Jonathan Dambrot: You know, it’s kind of interesting. The premise of the discussion, the thesis of trying to put safeguards in place and really understand how to produce safe and secure artificial intelligence systems that don’t cause harm: absolutely the right discussion and the right thing to do. I think it’s hard when you’re trying to pinpoint a single organization and say we need to stop them. I don’t necessarily believe you can put the genie back in the bottle, and I think what they’re asking to put a hold on has probably already happened, right? We’ve invested very heavily in trying to agree on that standard and come up with a set of first line, second line and third line capabilities to help address it. I don’t think there’s any going backwards. I mean, you’re seeing other countries going the opposite way, right? You saw other nations basically saying, we’re not putting anything in place; go faster. That’ll be hard competitively. I think we need to do both simultaneously, and we as a country can absolutely do that. We do that in every other case when we’ve seen advanced technologies, emerging technologies and the fear that comes with them. This is not the first time we’re going through it, and I think we have every opportunity to do both. I don’t think we can stop it.

Mike Vizard: To your point, if you’re a smaller country without as large a population, you may look at this as the greatest thing ever, because it levels the playing field against other countries that have massive populations and workforces.

Jonathan Dambrot: Absolutely. I mean, you’re seeing massive macro trends; people are concerned about it, and it’s kind of an interesting thing: We’re saying put a stop to this, but we’re also seeing macro trends of potential population decline, right? So in a lot of ways this is the perfect timing to have AI systems in place, as you have aging populations, things coming where people are going to need more assistance and jobs that are going unfilled. So I think you’re absolutely right; I think it levels the playing field for those that make smart investments, do the right things and take a measured approach to this. We’re in the middle of it, right? There’s an AI revolution; it’s getting embedded everywhere, and people want a dominant position in this. The United States and other countries that have the right mentality around this will absolutely lead.

Mike Vizard: We have seen the Biden administration invite everybody to Washington for a powwow. Is this the first in a series of ongoing conversations? Because it’s not clear to me that this is the kind of thing you’re going to hash out in a couple of hours.

Jonathan Dambrot: One hundred percent, and there’s been a working group; the White House put one together. There have been several other working groups inside of other agencies. We’re working with one with GRF, the ISACs, MITRE and other organizations that have a focus on this problem. This is not a two-hour discussion. This is a marathon; there’s no sprinting here. AI is here to stay. And I think when we look at where we’ve been with AI, we’ve kind of gone through what they describe as a trough of disillusionment, right? The investments of the past are informing what’s happening today. I think the question now is really how we practically put these things in place, get the best returns on the investments made, and do that safely. So yeah, I see these conversations continuing for years, and we’re going to start to see, hopefully, smart regulation that supports the growth of these technologies instead of hampering them.

Mike Vizard: We have also seen orthogonal conversations about increasing accountability and liability for how data and software are built. Do you think that will all get rolled up into this conversation as well? Because ultimately these models are software artifacts, right?

Jonathan Dambrot: Yeah, I mean, I think this is where AI becomes somewhat of a different discussion, right? You have a technology that can start learning, that can itself produce software. Right now, we’re not at a point where applications are fundamentally getting developed, published and shipped without human intervention; I think there’s a fear that that will happen. But the fact is, we’re at the very beginning of AI systems, and I think you’re going to start seeing different voices depending on where people’s interests lie. I think AI needs a separate track. It’s a technology that will be fundamentally a game changer in every industry, and we need to think about the data, the models, the pipelines and the use cases, and ultimately how we maintain competitiveness in this. I think if you start rolling out regulation after regulation, bad things start to happen. We really need to keep this focused. To your question, I don’t know; I think it’ll be interest-driven.

Mike Vizard: Who will be in charge of security? Are we going to see an extension of MLOps? Like there’s DevSecOps, will there be MLOpsSec? Or is this just going to get rolled up into one more set of responsibilities for the cybersecurity team to figure out?

Jonathan Dambrot: Yeah, we’re going to figure that out together. We’ve been asking leaders the same question throughout our journey, and there’s a lot of disagreement about where this should live. In the financial community there’s been a lot of focus on model risk management over time, but those organizations are only focused on a certain set of models. In the health care industry, you’re starting to see dedication to trying to drive an understanding of the datasets and trying to validate those, as well as the models, on a more holistic basis. So I think you’re going to see MLOps, and security for MLOps, becoming a much bigger part of the discussion. And when you look at roles and responsibilities, this has grown up inside the business, so I think risk, security, compliance and the business will need to come together. We are seeing some roles for Director of Data Science Security, and I’m very heartened that there will be somebody who’s responsible. But from a leadership perspective, the governance of this will require all of those stakeholders.

Mike Vizard: So what’s your best advice to business leaders right now, and IT leaders for that matter, as they go forward? Everybody’s kind of feeling the pressure to do something, but they’re not quite clear on the what, and they’re certainly even less clear on the how. So what should they be looking at when they think about security in this context?

Jonathan Dambrot: Yeah, so we’ve been giving some advice around this. Look, even in the discussions in the working groups I’ve been involved in, just understanding what AI means to your organization is a challenge; there have been differences of opinion on just how to define it. So really looking at a framework, whether that’s the NIST AI Risk Management Framework or maybe some of the things coming out of MITRE, the EU or the AI Bill of Rights, and coming up with a way to talk about artificial intelligence in your own organization is super important. Then build a real awareness of what those risks look like, especially from a cyber perspective. From there, start to put a framework together that can inform the second line and, ultimately, capabilities in the first line, where you can get the visibility you need, start monitoring the security of those assets and pipelines, and do that both point in time and over time, so you get that temporal analysis. And then get visibility into your supply chain. There’s been a lot on the software supply chain piece recently with SBOMs and other things; I think we need to look at what that means for learning systems, machine learning and pipelines, and how to give people a view into those so they know fundamentally that the AI they’re using is trusted.
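[Editor’s note: To make that supply chain idea concrete, the CycloneDX SBOM standard has since added machine learning support (version 1.5), including a “machine-learning-model” component type. Below is a minimal, hypothetical sketch of what an ML bill of materials might look like in that format; the model and dataset names are illustrative only and not drawn from Cranium or any real pipeline.]

    {
      "bomFormat": "CycloneDX",
      "specVersion": "1.5",
      "components": [
        {
          "type": "machine-learning-model",
          "name": "churn-classifier",
          "version": "2.3.0",
          "modelCard": {
            "modelParameters": {
              "task": "classification",
              "datasets": [
                {
                  "type": "dataset",
                  "name": "crm-events-2023",
                  "classification": "internal"
                }
              ]
            }
          }
        }
      ]
    }

[Recording a model alongside the datasets that trained it is what enables the point-in-time and over-time analysis Dambrot describes: If a dataset is later found to be poisoned or improperly sourced, a manifest like this shows which deployed models inherited that risk.]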

Mike Vizard: You heard it here first, folks. Nothing ever gets built in the land of software that does not have a security component. So start thinking about it now. Jonathan, thanks for being on the show.

Jonathan Dambrot: Thanks so much, Michael. Pleasure to be here.

Mike Vizard: And thank you all for watching this edition of the Techstrong.ai series. You can find this on our website along with other episodes. We invite you to check them all out and we’ll see you next time.