Synopsis: In this AI Leadership Insights video interview, Mike Vizard speaks with Peter Mulford, chief innovation officer of BTS, about how AI will change the way the C-suite engages in business and with customers.

Mike Vizard: Hello, and welcome to the latest edition of the Techstrong AI video series. I’m your host, Mike Vizard. Today, we’re with Peter Mulford, who’s chief innovation officer for BTS, and we’re talking about how, well, AI is going to change the way the C-suite engages with the rest of the business, and for that matter, customers, and maybe even their families.
We’ll see how this goes, but Peter, welcome to the show.

Peter Mulford: Thanks a lot for having me, Mike.

Mike Vizard: I think most C-level executives are all focused on how they’re going to use AI to drive increased profitability and more revenue, but how will it change the nature of their jobs and the way that they engage with their colleagues, each other, and maybe even their customers?

Peter Mulford: That’s the $64,000 question, Mike. I would say the way you could answer it is really by looking out over different timescales. In fact, most of the work we’ve been doing with chief executives in this space usually involves going out three to five years and working out: “Well, given what we know about the trajectory of the technology, what sort of future states can we expect, with a high degree of confidence, for our businesses?” And then ladder back to the current moment.
And if that sounds kind of confusing or impractical, the main reason we’re doing it is there is still so much uncertainty about the technology itself that trying to start today and look out say, three months, to get a sense of how it’s going to change the C-suite, is more like gambling than planning. But going out three to five years and looking backwards seems a little bit more rational, at least from a probabilistic sense.

Mike Vizard: It seems to me, at least, that we’re heading down a path where we’ll all have our own digital AI assistants, and my assistant will talk to your assistant, and maybe something that appears to be a workflow will emerge from that. But will we lose touch with each other because our assistants are doing all the chatting?

Peter Mulford: Okay, yeah, that’s interesting. Well, I think what you’re describing there is related to a broader concern around technology, and this has been going on for a while, mind you. I’m sure you’ve seen this, that people are increasingly more distracted and interrupted, even at work, by digital technology of one form or another.
And I think what you’re probably pointing a light at there is: Is AI going to make that trend even worse by having us spend even less time sitting and thinking and collaborating with other people, versus time spent interacting with our digital personal assistants? I would say it’s hard to know whether that’s going to happen, but I would also say it’s relatively easy to take care of. By this, I mean it is not actually that hard for leaders to notice when they’re becoming distracted and when technology is really shredding their work day into a series of work moments, and even work moments into staccato yelps of interaction with technology, versus more meaningful conversations with other people.
So all of that is to say, Mike, to your question: You could easily imagine a future in which this happens, in which everyone from the C-suite on down is spending more time with their GPT, their Generative Pre-trained Transformer, than they are with other people. But it’s also a future that is really easy to avoid. I don’t imagine us all being sucked into this out of necessity, so I’m less worried about that.

Mike Vizard: Will it change the relationship between C-suite executives and the machines themselves? And I ask the question because almost every day, somebody in the C-suite starts a sentence with two words, usually “How come?” And then somebody goes diving deep into some sort of analytics tool that hopefully teases out something that looks like a reasonable answer.
As we go forward, though, won’t the machines just tell us what we need to know? When I come in in the morning, I might say, “Tell me the three things that are likely to get me fired,” and then the machines will tell me what I need to figure out.

Peter Mulford: Okay, that’s interesting. What I think I’m hearing you say there is apropos of how we started our conversation. Let’s imagine a future for a moment. Let’s imagine that the technology gets to that level of accuracy, right? I mean, we can have an interesting debate about the degree to which the promises of AI are shiny objects, versus the degree to which the technology really will get so sophisticated that you could get to this kind of future. Let’s imagine for a second that we do get there and that the technology really is that good. You’re asking, “Well, what’s going to happen? How are leaders going to change the way they make decisions, the way they prioritize time and resources, et cetera?”
The way to think about answering that question, Mike, is the following. Whenever we make a decision, it doesn’t matter whether we’re on the front lines or all the way up to the board of directors, what we’re basically doing is a series of steps that we’re all familiar with. You start with, “What’s the decision I need to make?” Check. “What kind of information do I need to make that decision?” Check. Then you analyze that information, in your head or on a computer or with your marketing team or whoever else, and you arrive at a prediction. And then, based on your prediction, you apply judgment to that prediction. You judge, “Well, if there’s a 10% chance of this happening or a 90% chance of that happening, what am I going to do?” That’s where the human judgment comes in, and your judgment is usually driven by your values, your beliefs, your ambitions, whatever it is. Then you take a decision.
Now, the way to think about how AI will change all of this is AI isn’t going to make decisions for you. What AI will do is it will sit comfortably in the space between predictions and judgment. In other words, it will no longer be the case where I’ll sit down and I’ll have to figure out, “Huh, if I increase prices on product Y, what’s going to happen to demand?” And I’m going to sit down, and I’m going to do flip-charting, and maybe I’ll break out Excel. I won’t be doing that any more. What AI will be very good at is improving the quality of my predictions. What AI won’t be good at is telling me how to judge them.
So this is a monologue; I apologize for that. But what I would have you and your listeners understand is that if the technology does get to this place, which it very well may, where it can really start to impinge on executive decision-making, it will do so by improving the accuracy of predictions in any given domain. And as the value of human prediction drops, the importance of human judgment will go up. So chief executives will still be making very important decisions, together with their teams, but they’ll be decisions around, “How should we judge or value what the models have predicted for us?” Whereas in the past, we might have had to do the predictions ourselves.

Mike Vizard: Will we trust the outcomes of the models more than we have traditionally trusted data analytics so far? And I ask the question because a lot of times, IT folks show up with a report, and the business people nod their head, pat them on the head, and send them on their way because they know that the data used to generate the report was junk because they’re the ones that created the data in the first place.
And so they rely more on their gut instinct, and sometimes that’s hard to distinguish between what’s instinct and what’s indigestion. But will we get to the point where the executives do trust what they’re seeing coming out of IT systems?

Peter Mulford: Well, apropos of what you just said, I wish I had an AI to predict that for me. Nobody has a crystal ball, but it’s funny you bring this up. One of the things BTS does, as you probably know, is run executives through business simulations. And the purpose of that is to let leaders take the future for a test drive and to experience, down to the balls of their feet, the mindset shifts and behavioral shifts that will be required for them to flourish in the future. That’s why we use simulations for all of our executive development. And one of the dynamics that we’ve been building into our simulations quite often lately is one in which your data science team brings you a recommendation on an important decision you should make, based on data science, and then we give the leaders the choice: “What are you going to do?”
And it’s so interesting, Mike. What often happens is the leaders just ignore it. Even in a simulation, they’ll ignore the recommendations of the data science team outright if those recommendations run counter to their feelings or their intuitions. And we could have an interesting conversation about this; as you would guess, the degree to which the team accepts the recommendations of the data science team depends on their background, whether they’re numerate or not, whether they have a finance background or not. But in general, what we’ve noticed, and of course, this is what we talk about in these executive development programs, is what sits downstream of these experiences is leaders realizing, “Oh, boy, if we’re going to flourish in this brave new future, we have to make a shift from trusting our gut to testing our gut.”
And what that has to do with your question, Mike: It doesn’t mean that you have to accept the conclusions of a data science team just blindly. What it does mean is you need to sit comfortably in the space between conclusions that make you uncomfortable and an honest interrogation of how those conclusions were derived. And an honest interrogation doesn’t mean, “Well, I don’t like it, so I’m going to ignore it and I’m going to pat you on the head,” or “I’m going to pull out some confirming proof that AI doesn’t work. Here’s what I just read in periodical X, Y, Z.” What it does mean is you make a good-faith effort to understand, “Well, what was the sort of data used? How was it used? And what’s the probability that I’m wrong and the data scientists are right?”
And that requires a lot of energy and training and quite frankly, recognizing some biological default mechanisms you have in your brain to get right.

Mike Vizard: To be fair, though, a lot of times, the data science folks don’t have a lot of business expertise, so they’ll come back with some awesome model that concludes that we noticed that revenue dropped sharply every seven days, only to discover that we’re closed on Sundays. So yes, that is an issue.

Peter Mulford: Well, hang on a second, though, Mike. I got to ask you about that. What I just heard you say there is, “Yeah, so the data scientists working at your company don’t have an understanding of business.” So now whose fault is that, and what’s the easy remedy?
I mean, you can see where this is going. I would be concerned, or I’d have some well-intended advice, for any leadership team that has set their data analytics team up to make recommendations about their business without any understanding of the business. I think you’re going to see less and less of that, fingers crossed, in the days ahead.

Mike Vizard: All right, from your lips to God’s ears, as they say. The next question I have, though is: So ultimately, are we going to have to see, or will there be a new generation of executives who are “AI native,” and they understand how to use this technology to drive decisions, and they understand how to ask the right questions and interrogate systems? And will we see something of a turnover here?

Peter Mulford: Again, no one has a crystal ball, but if history is any guide, you could ask the question: Are there any CEOs left who are, say, computer illiterate, or who don’t know how to use smartphones, or who don’t know the ins and outs of Microsoft Excel? In 1992 or 1993, you could say, “Well, sure.” Back in the day, before Excel was really a big thing and Quattro Pro was still on the market. Remember those good days? People could fairly ask the question, “Are executives going to be able to get behind PCs?” I think we’ve all heard those apocryphal quotes from CEOs who said, “PCs are just a fad, and that’ll go away.” Look where we are now. I think it’s reasonable to assume that it won’t be long before executives are as comfortable using AI tomorrow as we are using a smartphone today.
And I think that shift is going to happen even faster than what we’ve seen in the past. All you have to do is look at that piece of data, which I think everybody’s seen now, that shows how quickly people adopted ChatGPT. Have you seen that chestnut? I think it was what, one million people in five days? Now, you have to ask yourself, “Well, okay, that’s a cute number. What does that actually mean?”
Whenever you get a million people using something in five days, what sits underneath that is two things: a technology that is on the one hand, powerful and useful; and on the other hand, easy to use. That’s the magic. And that’s why when you see those statistics, they’re often compared to things like Netflix or things like Instagram or whatever, and that’s where I think the technology is going, and that’s why it’s got everyone so excited, because on the one hand, it is quite powerful. On the other hand, I don’t need to be a machine-learning engineer or have a degree from Caltech to use the thing.

Mike Vizard: So ultimately, what’s your best advice to C-level executives? What’s that one thing you see in your simulations that continues to just make you shake your head and go, “Folks, we’re better than this?”

Peter Mulford: Yeah, that’s a great question; thanks for that. I think the best way that anyone, not just executives, and not just in business, though particularly in business, can prepare themselves for a future using AI that is exciting rather than anxiety-ridden, is just to understand what it can and cannot do. And honestly, that takes two hours, max. That’s a fancy way of saying that to really lean in and start to get the most out of AI, you don’t need to be able to do a Python implementation in TensorFlow. You don’t need to be a coder and go to a hackathon or any of that.
What you do need to do is understand three things: how it works; what it can and cannot do; and how to think strategically about using it. And what I’ve noticed, just from personal experience and observation, is that something magic happens when business people suddenly realize what AI is. And hint: It’s just math, actually. It’s just computerized prediction with all kinds of balloons, lights and glitter around it.
It’s amazing what happens at that moment, and I get to see this all the time. It’s really a privileged position to watch executives suddenly realize, “Oh, is that it?” It literally goes from this mysterious thing to, “Oh, okay, now I get it,” to, “Ooh, now I get it.” And this can happen in the course of just a few hours, no more.
So that’s the best recommendation. Rather than be hypnotized by all the quotes from the Yann LeCuns and Elon Musks of the world, which can be a bit dizzying, just take two hours to figure out how it really works. And then you’ll see, “Aha, okay, now I’m less anxious about it, and I’m a little bit more excited about it.”

Mike Vizard: All right, cool. Folks, you heard it here. AI: it’s not magic. You will just need to figure out and understand the core concepts, and you may not need to become a data scientist; you just need to be AI-savvy.
Hey, Peter, thanks for being on the show.

Peter Mulford: Mike, thanks for having me. Take care.

Mike Vizard: And thank you all for watching the latest episode of the Techstrong AI Series. You can find this episode and others on the site. We invite you to check them all out. Until then, we’ll see you next time.