Synopsis: In this AI Leadership Insights video interview, Mike Vizard speaks with Bryan Kirschner, VP of strategy for DataStax, about the need for AI maturity models.

Mike Vizard: Hello, and welcome to the latest edition of the video series. I’m your host Mike Vizard. Today we’re with Bryan Kirschner, who is Vice President of Strategy for DataStax, and we’re talking about the need for AI maturity models. Bryan, welcome to the show.

Bryan Kirschner: Thank you; glad to be here.

Mike Vizard: There is clearly a lot of gnashing of teeth when it comes to AI these days, and probably for good reason. But it seems like everybody’s just kind of maybe stumbling around. Is there a more structured way of thinking about how we should go about building AI models and when and where we should use them?

Bryan Kirschner: Yeah, that’s a great question. My experience has been, there are a lot of CIOs who this time last year would have said, we’ve got a good plan for AI. They know the use cases they’ve deployed, and they can look ahead, see different use cases and understand the value. And I think the explosion of generative AI really knocked some folks for a loop: oh my gosh, we’re not doing enough, fast enough! How do we adapt to this? Generative AI also brings a new type of AI into the loop that starts to say, “Oh, we’re going to run into issues around explainability a lot faster than we might have if we were just doing traditional ML models and use cases.” And so that really brings in this idea of thinking about maturity. And, to riff off the word, not in terms of age, but in terms of savvy, or sophistication. That means looking across your organization at different types of AI and different competencies, but also at your journey, right? So I love the fact that in the model we’ve put together, which a colleague of mine has done a lot of work on, at the peak of maturity you’re proactively engaging with regulators. What that means in terms of savvy and sophistication is that you’ve got a strategy, right? And I think lots of companies are reformulating their strategy.

But your strategy probably should include generative AI agents for your people, and probably for your customers, or chat with your customers around your business. What kind of regulatory environment is going to be favorable to that? What could actually interfere with your strategy? And how do you affect the balance of risks? A great example: Microsoft has clearly bet big on a copilot in every productivity application, and they are super proactive right now about engaging with regulators, because they’ve got a very sophisticated understanding of how that could impact their strategy, or accelerate it, if there’s a world in which people are very comfortable and confident that the AI is safe and trustworthy, data’s protected properly, and so on. So there’s kind of a side to side: we’re an app dev team building an AI use case; have we thought about the long-term ethical implications? Do we need to go talk to somebody else? Or, I’m a CIO; we had the landscape mapped and plans around different types of AI and how we handled it. Are we able to do generative? Are we stuck in batch, and does the relevance of real time get accelerated by the advent of generative AI? Because maybe it’s about connecting two different experiences together, one that’s traditional and one that’s generative. Having those capabilities in place is now a matter of urgency versus a nice-to-have in the long run.

Mike Vizard: It seems like this spans technical issues, such as whether or not I’ve got bias in the models, and goes all the way up to cultural issues. We have all these MLOps workflows, and then we have business processes. So how do we embed an AI maturity model into those workflows so that we’re thinking about this stuff as we go along?

Bryan Kirschner: Yeah, I think that’s a great observation. If it’s an MLOps workflow or a DevOps workflow, in the moment, you can see when things break. And when you think about embedding this perspective on maturity into the business, you also need to think about things that don’t obviously break, or at least not in the short term. So, thinking about: do you create, as a result of culture or process, the equivalent of technical debt, where AI models are not properly documented, or how a generative AI has been trained is obscure? That goes beyond the technology. Yes, you’re going to have goals for how the actual tooling works and has to work, but also: how does IP generation and retention work? How does ownership of AI models and AI applications get threaded through so that no model walks alone? Because models can drift, and if models drift and no one’s paying attention, you can wind up in trouble. There’s one commentator on generative AI who says you should think about generative AI apps more like hiring a person than deploying a traditional application. And with that comes this whole body of: how do we manage and mentor a person?
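A minimal sketch of what "paying attention" to drift can look like in practice: compare the distribution of a model's recent scores against a baseline captured at deployment, using a population stability index (PSI). The bucket edges, sample data, and thresholds below are illustrative, not anything DataStax describes.

```python
import math

# Hypothetical drift check: compare today's prediction-score
# distribution against the one captured when the model shipped.
# Bucket edges and thresholds are illustrative, not a standard.

def histogram(scores, edges):
    """Bucket scores into the bins defined by edges, as proportions."""
    counts = [0] * (len(edges) - 1)
    for s in scores:
        for i in range(len(edges) - 1):
            if edges[i] <= s < edges[i + 1]:
                counts[i] += 1
                break
    total = max(sum(counts), 1)
    return [c / total for c in counts]

def psi(baseline, current, edges):
    """Population Stability Index between two score samples."""
    eps = 1e-6  # avoid log(0) on empty buckets
    b = histogram(baseline, edges)
    c = histogram(current, edges)
    return sum((ci - bi) * math.log((ci + eps) / (bi + eps))
               for bi, ci in zip(b, c))

edges = [0.0, 0.25, 0.5, 0.75, 1.01]
baseline = [0.1, 0.2, 0.2, 0.4, 0.6, 0.8]        # scores at launch
stable   = [0.12, 0.18, 0.22, 0.42, 0.58, 0.82]  # similar mix later
drifted  = [0.9, 0.92, 0.95, 0.97, 0.99, 0.96]   # everything high now
```

A common (but again, illustrative) reading is that a PSI under 0.1 means no meaningful shift, while anything above roughly 0.25 warrants investigation; the point is that a check like this runs on a schedule, so no model walks alone.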

And how do we make a person whose behaviors are more probabilistic fit into our environment and customer experiences, versus something that’s extremely deterministic and we know does the same thing all the time? So when we say a maturity model, it’s important because you need to think holistically, beyond there just being a few teams operating this way. I’m also a big fan of the observation that the goal is really not for machines to replace people, but for people with machines to replace people without. If you think about the smartphone, we basically got to 85% of the global population, which bumps up against global literacy. So if you’re in a Global 2000 company, the goal for people using AI should be 100%: some type of AI tool, and in particular a generative AI tool for anybody who needs to interact with customers or do knowledge work.

Mike Vizard: Are we going to need to think through what models we’re using? Because there are general-purpose models out there, and they’ve been trained on everything in the world, and some of that stuff is not so good. And then we have various open source models that people have trained on data we’re not quite clear about. So ultimately, are people just going to build their own LLMs? Because I can do that with less data these days, and it actually will become less mysterious.

Bryan Kirschner: Yes, we’re big fans of using retrieval-augmented generation, which basically means pointing a base LLM at known good content. And there’s a maturity level around thinking about how you would use that model and what content is there. So Instacart’s agent is trained on known good content about things like recipes and what foods go together; it’s bounded in what topics it can talk about. We have a support agent for our company. If you ask a raw LLM trained on the whole internet for product support for our company, the answers are terrible. But with ____, it’s pointed to our product documentation and all our customer service chats, and it’s really, really good, and it keeps getting better. So it’s about understanding the architectural patterns and understanding that, yes, AI is going to hallucinate, but that doesn’t mean it can’t be helpful. ChatGPT is a really solid, but somewhat untrustworthy, research assistant. But how do you use techniques and technologies to do this scoping, where it can be really good at a particular scenario and get better all the time? And you’ve bounded the risk, right? Our support agent will not talk with you about current events. And that’s a good thing, because that’s not why people come to it with questions.
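The retrieval-augmented generation pattern described here can be sketched in a few lines. This is a toy: the documents are made up, and a crude bag-of-words similarity stands in for a real embedding model and vector database; only the shape of the pattern (retrieve known good content, then constrain the prompt to it) is the point.

```python
import math
import re
from collections import Counter

# Toy corpus standing in for "known good content" such as
# product docs and support chats. Entirely illustrative.
DOCS = [
    "To create a database, open the dashboard and click Create Database.",
    "Vector search lets you query embeddings stored in a collection.",
    "Billing is usage-based; you pay for reads, writes, and storage.",
]

def embed(text: str) -> Counter:
    """Crude stand-in for an embedding: a bag-of-words count vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list:
    """Rank the corpus by similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the LLM: it may answer only from the retrieved context."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the context below. If the answer is not "
        f"in the context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {query}"
    )

prompt = build_prompt("how do I create a database?")
```

The instruction to refuse questions outside the retrieved context is what bounds the risk he describes: the agent stays on product support and will not wander into current events.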

Mike Vizard: Do you think this will drive us all maybe to focus a little more on data management? Because historically, we’ve not been very good at data management; it’s kind of been a little on the sloppy side. So, do we need to go back to the root cause of the issue?

Bryan Kirschner: Yes. Well, it’s fascinating. There’s a lot of angst about models being trained on open public content: is that copyright violation, and how should the value flow work? I’m a big fan of hoping to get to a world in which, by producing good content, as a worker, an employee, a citizen, you enable models to be trained, in the world or at your company, that make everyone smarter. So, think about data management at a company. McKinsey & Company just announced that they have an intelligent agent pointed at the corpus of work they’ve done, and one scenario is for a new project: now, in seconds, it can find the most expert people across this giant company, because it’s just looking at all the work. That would have been a manual, person-to-person process, or an “oh, we need a knowledge management system with metadata that we have to code as humans.” Now, if we have a central repository, or the ability for APIs to access work product, we have a super smart, very useful assistant. So, long way of saying: for data management, what we might have called exhaust from knowledge work, drafts, meeting transcripts, is actually now a resource for an agent to help your people be more productive. The companies that move to embrace that are going to have an edge, and it’s basically: everybody’s work makes everybody more productive. And so, as a worker, you want to make sure you share your work, and the infrastructure is there to support that.

Mike Vizard: Is that gonna be a sustainable edge? Or are we gonna get to the point where everybody has basically the same capabilities using AI, and if you don’t have it, you’re gonna get left behind? But it’s not like I’m gonna suddenly wake up one morning and put all my rivals out of business, because I have AI; because they’ll have AI too.

Bryan Kirschner: Right. Well, I think yes. Again, it gets to this idea of savvy and sophistication. There’s a great quote from the CIO of Kroger, whose point of view was: when we replace a role with generative AI, we’re losing. His point was, if the best we can come up with is doing the same thing with fewer people plus AI, we’re not expanding the horizons of what our business could be: how do we augment people and enable new scenarios, and maybe radically new lines of business or ways of generating value? Don’t do AI as a way to lose out; as you say, commodity AI at best has you running along trying to keep pace. I think the leaders are going to take, say, the 30% productivity number and ask: if tomorrow all my people could be 30% more productive, how would we grow in a radically different way, versus how could we just bank 30% of payroll? And that’s pretty exciting, because people will do things that we wouldn’t have contemplated as business strategists or as consumers if they embrace that in a sophisticated way. And sophisticated means business thinking linked to technology: what does the regulatory environment need to look like? Even, what does the copyright environment need to look like? If you think about serving consumers, how might we get into a world where there’s more open data to make models smarter versus less? You might want to take a hand in that if you’re a big company.

Mike Vizard: Do you think big companies will collaborate to define those data sources? Because, you know, we’ve seen some of the car folks open source their factory data. So will that become the model: let’s all just pool our data and build something more interesting together?

Bryan Kirschner: Yes. I mean, I am a big fan of that model. There’s always the question about what’s our proprietary data, our crown jewels, our competitive advantage. I remember, this was pre-generative AI, but still talking about the power of aggregated data in AI: I was talking to the CIO of a construction company who has, correctly, the conviction that their data and technology chops are a competitive advantage, and who is very hesitant to compromise on that. But he said: 100%, no one should ever die on a jobsite. If we can federate construction data from my company and other companies to reduce that risk, I’m all in. And I think every industry should really be thinking proactively about this. Reducing immediate harm is, for sure, the floor. But also, do we have a common interest in fairness and eliminating bias, so that our industry doesn’t become the target of “hey, you all just blundered into bad medical advice for millions of people”? That’s not where you want to be.

Mike Vizard: Are business leaders a little too focused on driving cost out using AI, without really having that big-picture conversation yet? Or do you think they are looking at this? Because, to your point, it feels to me like most of the conversation seems to be about, you know, let’s eliminate this task or that thing.

Bryan Kirschner: Yeah, I kind of think about a couple of different buckets, right? If you’re eliminating some tasks, the first question would be: how do you redeploy the time and resources that have been freed up? I would say, if your competitors got a magic pill that made their people 30% more productive overnight, what would you want them to do? The same thing they were doing yesterday, but with 30% fewer people? So again, the idea of sophistication and maturity is that this is a complicated conversation. It’s HR, it’s operations, it’s the CIO and the CEO. And there’s a risk, if you’re siloed and disconnected, that the low-hanging fruit is eliminating tasks, saving some time, saving some money, but you’re maybe not exploring the higher branches of the tree, so to speak.

Mike Vizard: A lot of folks, of course, are worried about security and all kinds of other issues. But it occurs to me that no AI model is going to be deployed in isolation. There’ll be AI models to monitor the behavior of AI tools, and there’ll be AI models to secure AI models. So is this going to be a much more federated universe?

Bryan Kirschner: I absolutely believe so. And you’ll have AI models talking to other AI models to complete transactions, right? Like booking your vacation, or what have you. So if I were in the shoes of any executive in any industry, I’d be really leaning into this idea of transparency: whatever the regulatory environment, there should be no hidden AIs. You can see the chain of AIs, and maybe customers can see the chain of AIs too, both for management and maintenance, but also just to have a clearer picture of how these systems of systems behave. And is it auditable? In the ideal world, audits will be routine, with findings few and far between, because the system behaves as expected. But the last thing you want to do is create the AI equivalent of spaghetti code in the new decade.

Mike Vizard: So what’s your best advice to folks? How do they go about becoming more mature?

Bryan Kirschner: I would say, if I were a CIO, the first thing I would do is go to my CEO and say, “This is your job. This is the transition from a world of sparse AI to super-abundant AI; it’s going to happen once, and it’s going to happen fast.” The next 36 months at our company will be the platform, the springboard, for how we do over the next decade. The CIO needs to be part of that conversation, saying: look, we’ve got this; we understand the vector database, we understand we need this technology and that technology, and here’s what it can do. But I think CEOs need to go to each of their directs, because each function is now a use case, and say: hey, HR, how is HR going to be humans with machines doing the work, instead of just humans without machines? And operations, finance, marketing: how are we going to get to humans with machines doing things in new ways? That’s a CEO-level conversation, while also galvanizing your people to say: look, what we’re all about is new ways to grow and new things to do. Because what’s going to hold your people back from trying to innovate with AI is the specter of “I innovate with AI and eliminate my friend’s job, or my own.” So catalyzing that now is really the way to get an edge, and to literally move up that maturity model in the sense of savvy, sophistication, interconnection, and getting beyond silos.

Mike Vizard: Alright, folks, well, you heard it here. An immature response to anything, whether it’s AI or not, is usually going to lead to a bad outcome. The only difference now is it’ll be a bad outcome at scale. Hey, Bryan, thanks for being on the show.

Bryan Kirschner: My pleasure.

Mike Vizard: Thank you all for watching the latest episode. You can find this one and other episodes on the website. Till then, we’ll see you next time.