Synopsis: In this AI Leadership Insights video interview, Mike Vizard speaks with Andy Campbell, the director of solutions marketing at Certinia, about how to prepare for AI.

Mike Vizard: Hello and welcome to the latest edition of the Techstrong.ai video series. I’m your host, Mike Vizard. Today we’re with Andy Campbell, who is Director of Solutions Marketing for Certinia. And we’re talking about, well, AI of course. But what do you need to do to get ready for AI, and what’s real and maybe not so real? Andy, welcome to the show.

Andy Campbell: Hi, Mike. I’m delighted to be here. Great to have a conversation with you.

Mike Vizard: I think we’re all excited about AI, bordering on irrational exuberance, as they say. But I think a lot of folks are also starting to realize that it’s all dependent upon the quality of the data they have, and we haven’t been so good at managing data all these years. So what are we going to do to get ready for AI?

Andy Campbell: Yeah, you’d have thought we would’ve been good at managing data by now, but yeah, I think you’re absolutely right. As I see it, Mike, there are a number of things that we need to put in place before we can really embrace some of the technology and do it in a way that’s actually going to deliver some tangible outcomes for our companies and for our customers.
Data’s a critical element of that, and we need to make sure that we’ve got the right data, that it’s clean data, and that we’ve got it in the right quantity to be able to do the right things. We’ve got to have the right quality to make sure that it’s not corrupt, it’s not incomplete, anything like that. And we have to make sure that that’s there at the very heart of any decision-making. It’s the oldest rule in computing: rubbish in, rubbish out, and AI just compounds that. It makes even better rubbish.
In addition, though, you’ve got some other things to think about within your organization. Are you really prepared? Have you got the right skills? Have you got the right executive buy-in? Have you got the right mindshare to be able to embrace what this technology is able to deliver? And frankly, at the heart of it, what is the problem that you’re actually trying to fix? Because we don’t want a situation where people just have pet projects where they’re doing stuff for the sake of it. I’m in the business of helping my organization, Certinia, make money. So how can we translate somebody’s pet project into something that’s actually going to deliver tangible improvements within a company?

Mike Vizard: And I think you touched on something that’s very real. One, building these AI models is somewhat expensive. Two, you’ve got a limited amount of resources, so you can’t AI everything all at once. So how do we sit down and figure out what to prioritize?

Andy Campbell: Well, the data is key to that. As I said, you’ve got to have a couple of years’ worth… three years’ worth of good-quality data before you can start drawing any conclusions or building any algorithms around what that might be. And at the heart of that is understanding the dataset. So what are the various components that you need to capture?
And again, that comes back to: you’re only going to be capturing data if it’s going to be of some sort of use. So my take would be, what is the problem that you’re looking to address? Rather than just thinking of all the ethereal capabilities we could get from AI and ML tools, pick a particular problem. Let’s think about, in a professional services organization, what can we do to make sure that we’re using the resources within our organization in a particular way?
And once you’ve got clarity over what the objective is, what the project is, you can then start to think, “Okay, so if this is the problem that we’re looking to address, how can we go about doing it? What do we need to put in place?” We need to get the right team, with the right skills, with the right executive sponsorship, realistic timeframes, et cetera, et cetera.
But start from the outset with what it is you’re trying to achieve. Then, how do we marshal all the resources to make sure that we don’t set off on some project that is endless, looks at interesting stuff, but goes off down a bit of a rat hole and never comes out? How are we going to measure our success? You do that by bringing together and marshaling all the skills, all the capabilities, and the data to achieve that objective. I think having clarity over that is the most important thing.

Mike Vizard: How do we coordinate all this? Because it will involve business people and data scientists and DevOps teams and data engineers and security folks. How do we bring all that together? As one fellow told me not too long ago, his data science team came back with a model showing that sales drop every seven days, and it turned out the seventh day was Sunday. So how do we marshal this in some sort of meaningful way?

Andy Campbell: It’s only the people in development who work seven days a week; salespeople only work five. Yeah, it’s very true. This is why it’s so important that we don’t operate in silos. And it’s not simply the difference between the technology team and the sales team. Salespeople talk in a different language. Finance people talk in a different language. And you have to start off by getting all that cross-functionality together.
You need to understand what the problem is, both in a business context and then described in terms of the technology. What’s the dataset? What are the data elements? And you already have a load of these skills in place. If you think about people who work in marketing, they’ve been doing a lot of this stuff for a long time, thinking about how they can use good old data mining tools to come up with the best kind of promotions and all these kinds of things. There are some really good skills embedded within the organization, but they might not be where you think they are.
So we need to start off by setting the objectives: what is it we want to achieve? Then marshal the team with the right resources and the right knowledge. And then work in an open and collaborative way, almost like a skunkworks, where you’re working together on a particular project, sharing and brainstorming and getting those ideas right. Only then can you make sure that you’re starting off and doing things properly.
I think back to my background in retail and distribution, and I remember the Japanese approach to manufacturing: you spend 80% of your time planning what you’re going to do, and then the last 20%, bang, executing, and it works every single time. It’s pretty much the same here. You get the right team, the right skills, the right dataset, and then you can execute well. If you don’t put in the groundwork at the beginning, it ain’t going to fly.

Mike Vizard: As we think this through, aren’t there also different levels of risk with different kinds of projects that we need to think through? It’s one thing to have an AI model that might hallucinate for, I don’t know, a marketing or sales motion. But it’s quite another if that AI model hallucinates on something that is really mission critical to the business. Or as one man said, “It’s one thing to be wrong, another thing to be wrong at scale.” How do we figure all this out and determine which AI models are associated with what level of risk?

Andy Campbell: The issue is that historically we’ve been very good at measuring what we want to measure, and we use those measurements to make our decisions. And we tend to measure what’s easy and not measure what’s difficult. So we have to really validate that the data items at the very heart of the algorithms are, A, robust, and B, actually relevant, and we need to do a bit of triage around that. So that’s one thing.
Secondly, we need to think about and really validate the data quality. As I said, you typically need about three years’ worth of data in order to apply proper algorithms against it. And you have to make sure that it’s clean, that the data you’re making your decisions on is data actually used by the business, data that’s legitimate. You don’t want a situation where there’s a variety of dropdowns and people just pick the first thing at the top of the list to populate a field, so the data’s incorrect and you’re making decisions around it.
So make sure that the data you’re using is valid, that it’s complete, that it’s authentic. Then you can start to use it in a sort of closed-loop way: you can start throwing your algorithms at it, working with the technical teams to get that worked out, and then apply a bit of old-fashioned common sense.
You talk to the people who’ve been in the industry, who know their customers, and say, “Let’s have a sanity check here. Does this make sense? Is this really the kind of thing that’s relevant within our business or not?” I think that just relying upon the algorithm is dead risky. So we have to have that human input as well, to make sure that things, not so much that they’re legitimate, but that they pass that sanity check.
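A minimal sketch of the kind of data-quality triage Campbell describes, written in Python with pandas; the column names, the 80% threshold, and the sample data are all illustrative assumptions, not anything from Certinia:

```python
# Sketch of the triage above: flag incomplete fields and dropdowns dominated
# by one (likely default) value. Columns and thresholds are illustrative.
import pandas as pd

def triage_report(df: pd.DataFrame, dropdown_cols: list[str]) -> dict:
    """Report missing-data rates and dropdowns people likely left on the default."""
    report = {}
    # Completeness: percentage of missing values per column.
    report["missing_pct"] = (df.isna().mean() * 100).round(1).to_dict()
    # "First item on the list" abuse: if one value dominates a dropdown,
    # people are probably clicking the default rather than answering.
    suspicious = {}
    for col in dropdown_cols:
        top_share = df[col].value_counts(normalize=True).iloc[0]
        if top_share > 0.8:  # illustrative threshold
            suspicious[col] = round(float(top_share), 2)
    report["default_heavy_dropdowns"] = suspicious
    return report

# Example: invoices with a reason-code dropdown everyone leaves on "Other".
invoices = pd.DataFrame({
    "amount": [1200, 950, None, 400, 2200],
    "dispute_reason": ["Other", "Other", "Other", "Other", "Pricing"],
})
print(triage_report(invoices, dropdown_cols=["dispute_reason"]))
```

A check like this is cheap to run before any modeling starts, which is the point of the triage: find out whether the data is fit to make decisions on at all.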

Mike Vizard: What is your sense of the cost of building these things? Do we really understand what the issues are? Because if I look at some of the models, pricing is based around tokens, and there are two kinds of tokens, inputs and outputs, per process or per task. So will that cost spin out of control for folks? Because we need to think long and hard about just how many inputs and outputs there are per thing. And if I start multiplying that out, I’ll be out of my budget within about a month.

Andy Campbell: Yeah, yeah, yeah. It’s the old exponential thing, isn’t it? Stepping back a bit, there is a concern that a lot of money, time, and energy is being spent talking about AI at the moment, but the number of projects actually kicking off is not as high as we would hope. A lot of people are not quite sure what to do. They’re not quite sure how they should be starting off projects. And when we’re in that situation, we end up with a lack of clarity and poor governance over the program.
Whereas if you’ve got… We all use some simple tooling like ChatGPT. Most organizations are using it to some extent, and some people within organizations will be using it an awful lot. Unless you’ve got clarity over what it is you’re trying to achieve and how you’re going to do it, you will end up with a bottomless pit. You’re putting not just cash but also time, energy, and people’s effort into things that aren’t going to deliver value.
So you can do these things in a pragmatic way. You can do them in a way that is not a money drain, but you have to have real focus on what it is you’re trying to achieve and what the outcome and business benefit you want to drive out are. Because otherwise you just get change request after change request, and you go off after the next greatest thing. Salespeople, I work in sales, I know what salespeople like. They love shiny objects. You see something, “Oh look, squirrel! There’s another one!” It’s very easy to do that if you don’t know what it is you’re trying to chase.
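To make the token arithmetic Vizard raises concrete, here is a back-of-the-envelope cost model in Python. The per-token prices, task volumes, and token counts are made-up placeholders, not any vendor’s actual rates:

```python
# Toy cost model: tasks/day x (input + output tokens) x unit price.
# All numbers below are placeholder assumptions for illustration only.

PRICE_PER_1K_INPUT = 0.0010   # assumed $ per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.0020  # assumed $ per 1,000 output tokens

def monthly_cost(tasks_per_day: int, in_tokens: int,
                 out_tokens: int, days: int = 30) -> float:
    """Monthly spend for one workload at the assumed unit prices."""
    per_task = (in_tokens / 1000) * PRICE_PER_1K_INPUT \
             + (out_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return tasks_per_day * days * per_task

# A single "reply to customer emails" workload looks cheap...
one = monthly_cost(500, in_tokens=1500, out_tokens=400)
print(f"One workload:    ${one:,.2f}/month")
# ...but fifty such workloads across the company is where budgets go sideways.
print(f"Fifty workloads: ${50 * one:,.2f}/month")
```

The multiplication is the whole story: per-task costs of fractions of a cent become real money once every team is running its own always-on workload, which is why both speakers keep returning to picking problems deliberately.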

Mike Vizard: Ultimately, what is your sense of what will people actually go do? Will they just kind of use something like ChatGPT as it is? Will they put something in a vector database and try to extend it or will they go customize and build their own LLMs? And what would drive that decision one way or the other?

Andy Campbell: Great question. I think the jury’s still out. I see a few stages. You’ve got people that are just exploring. They’re still trying to find out and think, “What is the best way we can use this in our… What’s the use case within our business?” Secondly, you’ve got people doing tactical stuff. They’re looking at GPT and saying, “Hey, we can send automated responses to emails. We can do it this way. We can reply to customers dead quick.” There are small tactical solutions they can put in place. Again, those might have some value, or they might not.
Then there will be some larger enterprises that are thinking, “Hold on, there’s a strategic issue at play here. We could think about how we can use robotics and AI/ML to massively transform the way in which we engage with our customers.” And if you do that, that’s big-ticket stuff. That’s long-duration stuff. It could be very expensive, it could involve a lot of custom development, and it’s potentially high risk as well. High reward, high risk. That’s the strategic stuff.
But the last thing, which I think is the most appropriate, is what we’ve termed pragmatic AI, which is saying, “What are the things we can do that are going to give us some payback, that have tangible benefits around the organization, show progress, show benefit?”
We can use that to reinvest in the next phase. Take it on in bite-sized chunks, small pieces of capability, show some payback, and use that to grow the way in which you put these kinds of initiatives together. The phrase I tend to use is, “Think big, start small, grow quick.” Think big: what’s the implication of all this stuff? But start small. Start with something that doesn’t require a whole lot of customization, because it’s really easy to chase the next best thing to come along. So think big, start small, grow quick.
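For the middle path Vizard raises, putting your own content in a vector database and extending a model with it, here is a toy retrieval sketch in Python. A real system would use an embedding model and a proper vector store; the bag-of-words vectors, brute-force cosine similarity, and sample documents here are stand-in assumptions:

```python
# Toy retrieval: index internal documents, find the closest one to a query,
# and (in a real system) prepend it to the model prompt to ground the answer.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Bag-of-words "embedding": a stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical internal documents a firm might index.
docs = [
    "invoices over 10k go to the finance director for approval",
    "services engagements are staffed from the resource pool",
    "expense reports are due by the fifth of the month",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

print(retrieve("which invoices go to the finance director"))
```

The appeal of this middle option in Campbell’s framing is that it reuses an off-the-shelf model and only asks the organization to get its own documents and data in order, which is the cheaper, lower-risk piece.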

Mike Vizard: What is your best advice then to IT folks that are confronted with these issues? And it’s a somewhat awkward conversation sometimes, because the business comes in and they read the latest greatest thing on AI, or they engage with something that a Microsoft or a Google built or whoever it may be. And then they expect the internal IT team to go execute something like that, and the probability assessment of that is zero. So how do you kind of have that conversation? Because the business people think that their internal IT team is the equivalent of a Google and perhaps not.

Andy Campbell: Yeah, yeah. And the trouble as well is when the internal IT team think they’re a bit of a Google too, and they think, “Hey, that’s a great idea. Let’s do a bit of that.” They get energized and want to work on new and interesting things. I go back to the Japanese model: I’d make sure that the people in IT really understood what it is the finance director is trying to achieve.
I want to improve my DSO, my days sales outstanding. Okay, so how can I improve my DSO? Well, if I know why people aren’t paying me. If I can understand what the characteristics are. What are the data elements that show that if somebody looks like this, they’re not going to pay their bills? They say they’re going to pay their bills, but last time they didn’t. Oh, funnily enough, the invoices are bigger. They tend to pay them in 38 days rather than 36 days.
If we can really understand what the problem is from the finance director’s perspective, and the IT people can get that, they can work jointly on doing something about it. It’s where we have silos, where the business units and the IT team aren’t having conversations and working collaboratively together, that the problem arises. So my takeaway would be cross-functional teams working in that way. Define the project. Build cross-functional teams. Get your datasets right, and then you can move ahead.
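As a sketch of the DSO conversation Campbell imagines, here is what that finance-and-IT question might look like in pandas. The invoice data, column names, and the 10,000 size threshold are all hypothetical; a real analysis would pull from the billing system:

```python
# Given paid invoices, which characteristics go with slower payment?
# Campbell's observation in miniature: bigger invoices tend to pay slower.
import pandas as pd

invoices = pd.DataFrame({
    "customer":    ["Acme", "Acme", "Blue", "Blue", "Crest", "Crest"],
    "amount":      [2_000, 18_000, 3_500, 22_000, 1_200, 25_000],
    "days_to_pay": [31, 39, 30, 38, 29, 41],
})

# Band invoices by size (threshold is an illustrative assumption).
invoices["size_band"] = pd.cut(invoices["amount"],
                               bins=[0, 10_000, float("inf")],
                               labels=["small", "large"])
print(invoices.groupby("size_band", observed=True)["days_to_pay"].mean())
# small invoices average ~30 days, large ones ~39: chase the large ones first.
```

Nothing here is exotic; the point of the anecdote is that the hard part is the shared question, not the tooling, and a few lines of analysis can follow once finance and IT agree on what "why aren’t people paying me" means in data terms.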

Mike Vizard: All right, folks, you heard it here. Hold hands and look before you leap, right? That’s basically what it comes down to. Andy, thanks for being on the show.

Andy Campbell: Mike, it’s been fabulous to speak with you. Thanks for your time.

Mike Vizard: All right, and thank you all for watching the latest episode of the Techstrong.ai video series. You can catch this episode and others on our website. We invite you to check them all out. Until then, we’ll see you next time.