Synopsis: In this Techstrong.ai Leadership video interview, Nick Magnuson, head of artificial intelligence (AI) for Qlik, explains why it's important for business and IT leaders to join councils exploring the ethical use of AI.
Mike Vizard: Hello and welcome to the latest edition of the Techstrong AI Leadership series. I’m your host, Mike Vizard. Today, we’re with Nick Magnuson, who’s head of AI for Qlik. And we’re talking about a new AI Council that they have put together. And we’re going to dive into what that means and what it’s all about, but it seems like councils are in vogue these days. Hey, Nick, welcome to the show.
Nick Magnuson: Yeah, thanks for having me.
Mike Vizard: So, what is the thought process behind this particular council? And it seems like there’s a lot of folks talking about everything from AI safety to pick your poison for anything related to AI these days, but what makes this council different and what are you guys trying to achieve?
Nick Magnuson: Well, I think our customers have some real challenges and concerns, and of course there are real opportunities, in how they think about AI, how they’re going to employ it, and what sort of business problems they’re going to solve.
And for some, it’s a little intimidating, so we felt that by bringing together a council of some of the people in the world who are thinking about these very topics at a strategic level, our customers, and we at Qlik, could benefit from their perspective and their leadership in guiding our own thinking around the products that we take to market.
So, that was really the impetus: to make sure that we were challenging our own thinking in this regard so that our customers could benefit from it. I think one of the things we strove to achieve with this particular council was a diversity of perspectives.
A lot of times, you see AI councils built out that are very theoretical, very academic in nature, which is great; you need to cover those bases. But as we put this council together, and as you can see in the backgrounds of the individuals who are now part of it, we’ve got a very diverse set of people thinking about AI from the angles of societal impact, ethics, and responsibility, alongside, of course, academic rigor, where we’re looking for emerging trends. Having that diverse perspective, I think, was one of the major things we were trying to achieve, so that it could guide our strategy and guide our customers in their own thinking.
Mike Vizard: What’s your sense of talking to folks? Have they figured out what to do with AI just yet or are they still casting about use cases or are we moving beyond that to where folks are trying to operationalize this within the context of some business process?
Nick Magnuson: Yeah, I think there’s a spectrum there for sure. There are certainly companies that are fairly far along that journey, and I think that’s the right way to characterize it for any organization: it’s a journey, not a destination in itself. Many customers that we work with are far along in that journey, and they do have AI models that they’ve put into production that are generating real, material business value.
And certainly, they’re looking for the next thing to expand upon that roadmap that they’ve developed. And then, you’ve got folks that are earlier in their journey, and they’re still kind of thinking through where they can apply it and what risks that might pose and making sure they think about it in an appropriate way.
But I think the advent of ChatGPT and all that has made it a bit more of a top-down sort of impetus where every C-suite in the world now knows the power of AI because they had it in their hands, or for my kids, it’s in the classroom and stuff like that. So, there’s a bit more top-down pressure now than there was before, and I think that’s causing organizations to figure this out.
And again, going back to the council, we wanted to make sure that we had thought leaders in the space who could help guide our customers, and again ourselves, so that as that pressure is felt, they have frameworks and people they can look to, to help them understand how to apply AI and what to think about in terms of its governance and the responsibility aspects as well.
Mike Vizard: How do you think it will impact the way we think about analytics? It’s a topic you guys are close to. A lot of times, people have these dashboards. It takes a while to create them, and sometimes I’ve got to hire a specialist just to sit on top of the analytics. Are we going to democratize analytics?
Nick Magnuson: Absolutely. And I think that’s a great thing, right? One of the major challenges with analytics is that it requires a certain amount of literacy. Qlik has long been a big advocate of data literacy as an objective. And I do believe that AI helps lower the burden, lowers the threshold required to engage with data and analytics, and that is very positive. If it means you can engage with data in a way that is more natural to a line leader or a business leader, or somebody who’s not accustomed to trying to build and navigate a dashboard, I think that’s a great thing.
And certainly, we’re investing in ways in which we can make that experience a more natural extension of an individual’s workflow. So yeah, at the end of the day, I think that’s positive. And we’re investing in those technologies to make sure that we’re able to keep pace with the rest of the market.
Mike Vizard: Is the relationship with the applications going to change? And I’m asking the question because historically you’d have to query the data, and that meant you needed to know what question to ask in the first place, and a lot of folks don’t even know what question to ask. And I guess can we get to a point where I’m looking for a feature that basically says, “Danger, Will Robinson, here’s three things that are likely to get you fired.”
Nick Magnuson: Right. Yeah. Well, I don’t think we’re far off from that at all, actually. I mean, if you look at some of the new foundation models or large language models that are out there, you can point them at a certain amount of data they have access to and ask: what are some of the key questions? What are the key insights? And they actually do a really good job of at least getting you started. I’m not going to say they give you the right answer out of the gate, but there’s certainly now a path to creating that proactive type of insight out of data without having to know what you’re looking for.
And again, that’s one of the things AI can do: it can find things that the human mind cannot, because it can identify patterns across so many different inputs. So, I think we’re on that path right now. I think it’s going to be an exciting journey where we can use this evolving technology to make it increasingly efficient to get insight out of data, whereas before you had to know what you were looking for in order to get to it.
Mike Vizard: Do you think folks will trust the outcome? Because historically there have always been two issues with data. One is that the quality of the data is often suspect, and there are a lot of errors. And so, business people look at what IT presents them with, and they’re a little skeptical; sometimes they go, “That doesn’t quite jibe with my experience.”
And sometimes it’s hard to figure out whether or not that’s experience or indigestion, but one way or another, folks look at the data, and they don’t make a data-driven decision anyway. So, when do we get to some sort of deeper faith in the data and what it’s telling us?
Nick Magnuson: Yeah, that’s certainly a challenge. I think having trust in the answers that AI systems provide is something we’ve all got to be thinking very critically about. If, as I interact with an AI system, I can always get to the source of its decision, its recommendation, or its insight, then it’s incumbent upon me as the user to interpret that and use my own judgment. I don’t think that goes away just because we’re using AI to help us elicit those insights.
So, at Qlik, we invest really heavily in making sure that any AI we use, or that our customers are able to use, provides direct auditability of the inputs and source documents that went into it, so that it can be explained. With our AutoML products, we were very early in using Shapley values as a way to explain what those models are predicting.
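To make the idea concrete, here is a minimal sketch of Shapley-value explanations using the open-source `shap` library on a synthetic dataset. It illustrates the technique Magnuson describes under stated assumptions; the data, model, and library choice are not Qlik’s implementation.

```python
# Minimal Shapley-value sketch with the open-source `shap` library.
# The synthetic data and model choice are illustrative, not Qlik's stack.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                          # four synthetic features
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=500)   # two known drivers

model = GradientBoostingRegressor().fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])           # per-feature attributions

# The prediction decomposes as expected_value + sum(contributions),
# so a reviewer can see exactly which inputs drove this answer.
print(explainer.expected_value, contributions[0])
```

The features with the largest positive or negative contributions are the ones a user would audit first when a prediction looks off.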
Similarly, with the work we do around generative AI, there’s always the ability to go back to the source documents. So, someone who gets a response and questions the validity of the answer can go back and do their own assessment. I think that’s critically important, not just for our customers, but for the market in general: that we consciously build this into the way we employ AI.
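A minimal sketch of that “answers carry their sources” pattern might look like the following; the document store, the keyword retrieval, and the `call_llm` stub are all illustrative assumptions, not any particular product’s API.

```python
# Sketch: every generated answer carries the documents it was grounded in,
# so a user can audit the response. All names here are illustrative.
from dataclasses import dataclass

DOCS = {
    "q3_report.txt": "Revenue grew 12% in Q3, driven by subscription renewals.",
    "hr_policy.txt": "Employees accrue 1.5 vacation days per month.",
}

@dataclass
class Answer:
    text: str
    sources: list  # document IDs the answer was grounded in

def retrieve(question: str, k: int = 1) -> list:
    # Naive keyword overlap stands in for a real vector search.
    words = set(question.lower().split())
    ranked = sorted(DOCS, key=lambda d: -len(words & set(DOCS[d].lower().split())))
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would call an actual model endpoint here.
    return "(model response based on the supplied context)"

def answer_with_sources(question: str) -> Answer:
    doc_ids = retrieve(question)
    context = "\n".join(DOCS[d] for d in doc_ids)
    text = call_llm(f"Answer only from this context:\n{context}\n\nQ: {question}")
    return Answer(text=text, sources=doc_ids)  # sources travel with the answer

print(answer_with_sources("How did revenue do in Q3?").sources)  # ['q3_report.txt']
```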
Mike Vizard: The other challenge has always been that as I embed analytics into a process, I’m ultimately trying to automate something, but as one wag put it, the trouble with AI is, “It’s one thing to be wrong; it’s another thing to be wrong at scale.” How do we put some governance around this so that we don’t let something spin out of control? Because it’s possible. It may not happen every day, but when it does happen, it can be quite the event.
Nick Magnuson: Yep. And there is that challenge as you take AI and start to remove the human in the loop: in one model, AI generates an insight and a human questions it, judges it, and then uses it for an action; in the other, AI makes the recommendation and an automated workflow picks it up and automates the action on the other side.
I think there’s certainly a desire, and I think a responsibility, to put monitoring in place so that, yes, you have these AI systems, but you’re constantly governing them by monitoring the quality of the inputs. You mentioned that earlier. Certainly, AI systems can go off the rails, so to speak, if the underlying data shifts. We talk a lot about the data foundation as the key thing to build on top of, so that AI systems rest on something of high integrity, high quality, and a high level of trust.
So, monitoring the inputs is a key thing. As those inputs potentially shift over time, you know that may have implications for the AI systems that sit on top of them. And then, monitoring the quality of the output of those AI systems is another component of that governance framework. Certainly, that has to be in place.
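As a concrete sketch of the input-monitoring idea, the snippet below compares a live feature distribution against a training-time baseline and flags drift; the Kolmogorov-Smirnov test and the threshold are illustrative assumptions, not a prescribed method.

```python
# Sketch: flag input drift by comparing live data to a training baseline.
# The KS test and the alpha threshold are illustrative choices.
import numpy as np
from scipy.stats import ks_2samp

def drifted(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly."""
    _, p_value = ks_2samp(baseline, live)
    return p_value < alpha

baseline = np.random.default_rng(0).normal(0.0, 1.0, 10_000)  # training-era feature
live = np.random.default_rng(1).normal(0.4, 1.0, 1_000)       # shifted production data

if drifted(baseline, live):
    print("Input drift detected: review the AI systems built on this feature.")
```

The same pattern applies on the output side: track the distribution of model predictions over time and alert when it moves.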
I do tend to recommend, especially early in the journey, that you include a human-in-the-loop element in any of these things, so that you’ve got humans who are able to interpret and understand what the AI system is doing before you move to something more programmatic in nature. But yeah, that governance framework and monitoring are really important.
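One way to picture that human-in-the-loop stage is a simple review gate, sketched below; the queue, the approval callback, and the stub actions are hypothetical scaffolding for the pattern, not a recommendation of a specific tool.

```python
# Sketch: AI recommendations queue for human sign-off before anything executes.
# Every name here is hypothetical scaffolding for the pattern itself.
from queue import Queue

review_queue: Queue = Queue()

def propose(recommendation: dict) -> None:
    # The AI system only proposes; nothing runs without approval.
    review_queue.put(recommendation)

def execute(rec: dict) -> None:
    print("executing:", rec)          # stand-in for the real automated action

def review(approve) -> None:
    while not review_queue.empty():
        rec = review_queue.get()
        if approve(rec):              # a human judges the AI's output here
            execute(rec)
        else:
            print("rejected:", rec)   # rejections feed back into model review

propose({"action": "raise_credit_limit", "customer": 42})
review(approve=lambda rec: input(f"Approve {rec}? [y/N] ").lower() == "y")
```

As the organization matures, the `approve` callback can graduate from a human decision to an automated policy check, which is the programmatic stage Magnuson describes.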
Mike Vizard: Clearly, there’s a lot to talk about at these council meetings. Are folks more willing to be open with each other these days because of AI and all the issues they face, or…? Because historically, everybody played their cards a little close to the vest.
Nick Magnuson: That has changed, from my perspective. I think folks realize that this is a bigger societal challenge that we’re all going to face, and if we’re not able to open up about our own specific challenges, how can we expect to address them at scale? So, one of the things I appreciated in talking to the members of the AI Council before we formed it was that they were all very interested in understanding our specific customers’ challenges and trying to get a better understanding of them.
So, as they think about it from their perspective as thought leaders in this industry, they have tangible things they can take to their constituents in how they think about policy and other matters, because a number of them are involved in that side of things. It’s grounded in what’s real to the customers who are trying to grapple with the challenges around AI.
So, I do think there’s been a change where the enormity of the scope of what AI can do has forced a lot of folks to open up about their own thinking on it, and I think that’s fairly healthy. And again, I think the council members that we’ve got to work with are certainly thinking about it from that perspective as well.
Mike Vizard: Do you think we have enough leadership here? And I’m asking this question because if you talk to the average employee, they’re nervous. I mean, I’ve talked to some folks who are at the end of their career, and they just shake their heads and go, “I’m not sure what the youngsters are going to be doing for a living, but it’s going to be fundamentally different from what we did.”
And I wonder if we’re just not seeing enough leadership from the business execs and the IT leaders about what we can do with all this and how it transforms the opportunity for people in general, or are we all just collectively stumbling forward?
Nick Magnuson: Well, I don’t think the average employee is the only one who’s nervous. I think those executives themselves are fairly nervous as well. Like I said before, there’s a lot of top-down pressure in these organizations now because the boards and the C-levels all know the power of AI, because it’s literally in their homes, and that’s got the C-suite saying, “Okay, well, now we’ve got to do something with AI,” and they’re looking for guidance.
But going back to the council, I think this is a perfect opportunity for those that are thought leaders in this space to help build frameworks or suggest ways of working with AI that they can lean on in order to make educated choices about how they use it in their organizations. I do think there is a certain amount of education that every organization has got to invest in.
At Qlik, we’ve invested heavily in upskilling our own teams to be more conversant and aware of what AI can do. We’ve instituted AI policies as a governance framework. So, anytime that we are using AI, whether it’s in our products or to help our business, there’s certain policies that we have to follow in order to make sure that we’re doing that in a safe and sanctioned manner.
And those types of frameworks, I think, are really important for all organizations to be thinking about. OWASP has done, I think, a really good job of thinking through the risks of large language models and their infusion into generative AI. Again, I recommend people look at that as a framework to follow.
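As one small example of the kind of control OWASP’s LLM guidance points toward: treat model output as untrusted input and validate it before anything downstream acts on it. The schema and action names below are invented for illustration, not drawn from OWASP or Qlik.

```python
# Sketch of one OWASP-style control: treat LLM output as untrusted and
# validate it against a strict schema before acting. Names are invented.
import json

ALLOWED_ACTIONS = {"create_ticket", "send_summary"}

def parse_model_output(raw: str) -> dict:
    data = json.loads(raw)  # reject anything that isn't well-formed JSON
    action = data.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"Disallowed action from model: {action!r}")
    return data

print(parse_model_output('{"action": "create_ticket", "summary": "drift alert"}'))
```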
So, I think we’re in the middle of that process where organizations are getting further up to speed and more educated on these topics so they can make good choices. And again, reemphasizing, that’s why we were so set on building an AI Council that had a diverse set of inputs and diverse set of experiences with AI that could help us think about it and help our customers think about it just as much.
Mike Vizard: All right, folks. Well, if you’re looking for a council or a place to meet folks, reach out to Qlik and Nick. I’m sure they’re looking for all the input they can get. At the very least, you’ll get a good therapy session out of it. And at best, in my experience, the best ideas are usually the ones you borrow from somebody else.
Hey, Nick, thanks for being on the show.
Nick Magnuson: Yeah, thank you. Appreciate it.
Mike Vizard: All right, and thank you all for watching the latest episode of the Techstrong.ai Leadership series. You can find this episode and others on our website. We invite you to check them all out. Until then, we’ll see you next time.