Synopsis: In this AI Leadership Insights interview, Amanda Razani speaks with Randy Lariar, practice director of big data, AI and analytics for Optiv, about the risks and concerns surrounding AI in business.

Amanda Razani: Hello, I’m Amanda Razani. And I’m excited to be here today with Randy Lariar, for He is with Optiv and is the practice director of big data, AI and analytics. How are you doing today?

Randy Lariar: I’m great, Amanda, how are you?

Amanda Razani: Good. Can you explain a little bit about your background and experience and what services does Optive provide?

Randy Lariar: Sure thing. So I lead Optiv’s big data analytics and AI practice. We help our clients focus on all sorts of data problems and solutions usually focused around cybersecurity. So that includes a data strategy, data engineering and architecture about consuming massive amounts of machine and application log data to help our clients detect cyber risks – as well as working with, you know, those large stores for data for things like threat hunting and threat intelligence, as well as leveraging a strategic way of using all that investment and data that goes into different parts of the business and bringing that together into a combined set of capabilities. We frequently help our clients with analytics and dashboards, and machine learning and AI, and have been very interested in and engaged in the developments in the large language model generative AI space. I personally came to Optiv two years ago, after spending some time at other consulting firms helping large financial services clients grapple with AI and natural language processing. And they’ve helped to build similar models to what we’re seeing now in the market. You know, the state of the art keeps moving. But some of the core concepts have been available for quite quite some time. But what we’ve seen over the last six months, especially with ChatGPT, and Microsoft and Google making large investments and announcements in the space is a heightened awareness of these capabilities. And it seems a much more democratized access to the tools of AI. Really, anybody now can work with AI through a text prompt. And there’s lots of increased capabilities that go along with that, as well as a hard look at some of the increased risks. So here at Optiv, being a cybersecurity and risk focused organization, we’ve been advising our clients for a long time on the different things that can go wrong in a technology environment. And when you add AI into the mix, there’s kind of two flavors – there’s the traditional types of cybersecurity attacks that if you think of all the benefits that AI is giving, you know, all of us, it’s also giving benefits to the criminals and attackers who were able to attack stronger – kind of more effectively. And, and thus, there’s a lot of just traditional cybersecurity and risk best practices that clients need to be mindful with, as well as building with AI and being aware of the risks that you open up when you expose a large language model to the open internet, for example. Or even if your employees are using it, there’s a number of legal and regulatory and company just reputational and IP risks associated with it. And so we have a number of services, you know, focused in and around AI. We can, you know, just do an overview and executive briefing, if you will, bringing some of our subject matter experts to talk about the different aspects of the problem. We can help with strategy and developing a plan to go from proof of concepts of AI to really wholesale integration and optimization of the different kinds of tools and technology and people process that you’ll need to leverage AI at scale. We have an offering around AI governance to really drill down into – you know, what these models are, how they work, what data they depend on, and how are you staying compliant and aligned with the different legal and regulatory concerns, as well as we have folks from my team who can build models and help you to build a proof of concept or pick a process to start to automate with AI. And then we have various services focused around different technology platforms that we have certified practitioners around in the AI stack. So lots of things, such as Databricks. We do a lot with a bunch of other platforms as well.

Amanda Razani: Wonderful. So let’s go into that with more detail. What are some of the key concerns and risks associated with generative AI models such as ChatGPT? And why has it led to organizations considering banning them actually, or putting a halt on the AI? I know yesterday’s headlines were pretty big with that one Senate statement, putting AI in with pandemics and nuclear war as a risk concern. So can you go into more details?

Randy Lariar: Yeah, well, you know, to separate out some of the kind of the national global concerns about you know, a terminator type AI being able to disrupt you know, massive economic and health concerns – without commenting too much on that, since they’re you know, very smart people working on that – but at the at the corporate level, you know, there’s a couple things to keep in mind. AI models are powerful, because they’re able to generate responses that are inspired by the data that they’re trained on, but not necessarily completely predictable. Oftentimes, they’re like a black box; you don’t know what will come out when you put stuff in. And so there’s a lot of risk around them just providing wrong answers, or potentially, you know, allowing for automation and decision making processes that, you know, are not entirely correct. There’s also the ease with which some of these models can be tricked – the concept of prompt injection to put in instructions that cause it to work differently that was intended. There’s also the concern about the losses and leakage of IP. Right? Samsung was in the news about two months ago, for many of their users. Internally, we are putting in, you know, company IP, in terms of code and company secrets, into a chat box that went out to the intranet. This is not, you know, necessarily an AI risk, but it’s a general awareness of, you know, you shouldn’t be putting company secrets into web boxes that you don’t control. And so there’s definitely concerns around the popularity and the capability of these tools that are so great, that enables people to do some bad behaviors that, you know, need to be monitored and controlled for. There’s additional legal concerns, especially, you know, models like GPT-4, where we don’t know exactly what they were trained on, and we don’t know, if at some point, someone will claim, you know, some stake in the economic output of those models that, you know, might be the subject to a lawsuit or, you know, complaints over IP and copyright. There’s also going to be regulatory concerns from the US Senate, from Europe, from Singapore, from lots of different, you know, governments, large and small. The state of New York has an AI regulation in place regarding decision making around recruiting and hiring. And so, you know, these, these are powerful tools, but with them is going to come a whole host of risks that companies need to be able to navigate, really have a process and a plan, so that they can bring AI capabilities to market quickly, without doing things that expose them to massive, potentially even existential, risk.

Amanda Razani: So there are many concerns. And for the companies that are saying to ban AI technology, what is your stance on this? Is that the right solution moving forward?

Randy Lariar: I think it’s, uh, you can start with a ban. Certainly, with any new technology, you may want to take a beat to determine you know, what your strategy and approach should be, but in the long term, and even in the short term, that’s going to be unpractical. Because, the players like Google and Microsoft are embedding these AI capabilities into the tool sets. You can create AI documents, right now, in Google Docs, and you can use Bing chat, you know, before long there’s going to be Windows type AI capabilities. And they’re just sort of the tip of the spear – every vendor and product out there, and we partner with, you know, hundreds of them have something about AI right now on their website, and more likely than not, a product team is building those kinds of capabilities into the tech stack. So, you know, you will need to have a approach to AI. You can try to block the big ones, but they will pop up, it’s kind of similar, I think into, if you look at how in many organizations, Excel spreadsheets play an important part of the business process. Sometimes that’s okay. Sometimes you don’t want to run your entire, you know, P&L off of just an Excel spreadsheet with no systems and controls, depending on the size of your company. And so really, some transparency and awareness of what your business is doing with AI is important, as well as a sense of, you know, what is and is not an acceptable use, and what are the alternatives. Many organizations have looked to build their own internal large language models, OpenAI, and then a whole host of open source inspirations that draw lineage down from like the Facebook model have really shown that this is possible at a much more, you know, affordable price point. You don’t have to go out and train a GPT-4 level model to get a lot of the capabilities and so many, many companies are looking at internal models, or they’re building, you know, monitoring capabilities; so that if you do interact with the GPT, you know, there’s some visibility into what your employees are doing with it. Some ability to block inappropriate requests that go out, and some ability to, you know, require people to take a training or be aware of best practices before they really start to leverage these tools.

Amanda Razani: Got it. So as companies try to implement this AI technology, my question is: How do they do that? Because it’s advancing so quickly, and in my view, it’s rapidly advancing; so if they implement this technology now, how do they keep up with, or how are they even current, maybe, a year from now?

Randy Lariar: Well, I think it’s gonna be an ongoing process. There’s a tremendous amount of research and development happening in the open source and academic community – some of that is going to be, you know, very powerful for these businesses to take advantage of. And some of that will be, you know, need some time to mature. I think that companies need to approach new capabilities and models and techniques, like they approach new software. You don’t just in an enterprise context, plug in the new tools to your environment, unless you’ve done a pen test, you’ve done a review of the risk, and you understand kind of what what capabilities and risks that’s adding to your stack. Likewise, organizations that want to use new models, especially if those models are based on you know, open source resource – maybe use a code based Python library that has known vulnerabilities, you know – need to be thinking about these kinds of concerns. It’s not to say that you can’t do all that stuff. I think it’s just important to have a process. And, you know, that process needs to include, how do you bring things on? And how do you monitor what you currently have, as new vulnerabilities emerge every day and new new concerns happen every day? It’s just an ongoing process as part of the core mission of a security team. And the good news is, this isn’t something that, you know, companies aren’t familiar with; we’ve been doing this, you know, in cyber for years and years on traditional technology. AI is very powerful, it has some different ways that it can be, you know, messed with. But at the end of the day, it is still technology that can fit into the same framework, and process. And ultimately, that is shown to be an effective first thought. The other thing you have to think about in security is layers of defense, and just assume that whatever you put in, you know, does have a vulnerability; you know, may eventually be misused. So what’s your plan B, especially as you start automating business processes? Suppose your AI goes down, or the model changes, or it stops working, how do you expect it to, you know, if you’ve already reallocated your people resources away from those processes, you could be in big trouble, depending on how critical that process is. And, you know, again, that could be an existential risk that puts you out of business, because you can’t actually do your business. Or it could be something minor, like helping you to write marketing emails, which could be very quickly replaced by a different AI or replaced by other people who could come on to help. So, all of this stuff can’t happen in a vacuum; you can’t just have AI kind of running wild. But on the other hand, it will be a core business function to, you know, use it and get the advantages from it and mitigate the risks.

Amanda Razani: So, for companies just now considering AI and wanting to implement it, where do they begin? Do you have any suggestions? How do they start?

Randy Lariar: I think it’s good to take a look at the potential use cases, and create, like, a backlog or a list of all the things you might want to do with the AI. Survey your company. Certainly encourage your users to experiment with tools like ChatGPT and Google bar, to get some sense of what they’re capable of. But also just be aware of, you know, the different considerations and risks – nothing commercial, nothing that, you know, you could potentially be sued over, because you’re creating economic benefit based on someone else’s IP. But once you have that, that backlog, you can also then start to think about, you know, what are the types of data that are needed to to do these automations and enable these things? If you have a data loss prevention program in place, you should have some, you know, classification of your data of what is, you know, top secret, highly confidential versus, you know, commercial and widely available. I would start to think about processes that use those lower risk data types, I would also look at the business process as well and figure out how critical is – like we just discussed about, you know, enabling ongoing business critical operations that are helping to augment and accelerate things that that are happening, and I’d pick the low risk areas and start to build there. You’ll also want to think about things like machine learning and AI operations. So how does data get into these models? How are they trained? How do they monitor to make sure that they continue to be having results that are useful? You know, large language models are all of the hype right now. But there’s a lot of other AI that is in your environment, that likewise needs a lifecycle. So as your boards investors and executives are asking for more AI, it’s a really great time for data professionals to say, you know, yes, but we also need to do data governance. And we also need to make sure that we have the right tools and monitoring in place so that we can use these responsibly. And then, if and when, a regulator comes asking, or we get, you know, discovery in a lawsuit, we’re able to produce records and documentation that shows we know exactly what our models are doing and how they’re behaving, and we’re monitoring and training them appropriately.

Amanda Razani: That brings another question as far as – let’s talk about these risks, again. And some of the issues. In what ways can generative models be integrated into existing workflows without replacing the human involvement? I know many people are worried that AI is going to be taking their jobs.

Randy Lariar: Yeah, I mean, if you look at some of the AI – in responses with, you know, you ask it a domain specific question, and if you’re a domain expert, it’s usually pretty easy to find the hallucinations in there and find, you know, just just made up facts or things that just aren’t quite right. That’s not to say that, like the GPT-4 has scored, you know, really impressively well, on a number of different, you know, domain tests, and is quite knowledgeable. But at the end of the day, these models are very fallible. And so when I think about advising clients on using AI, it’s really about trying to take out some of that complexity. And try to give it specific tasks that you can monitor or chain together a set of tasks, and each one of those has, you know, guardrails and parameters of what you’re expecting it to do. So for example, using AI to summarize documentation. If you’re not relying on the, you know, the vast corpus of trained information that we don’t know much about. But instead, you’re saying, here’s a specific piece of text, please summarize it – it tends to work much better. If you can give it a couple of examples of what that summary should look like, it even works even better. Or you can do some training of the models with different techniques to make them really highly tuned for a specific task. And so when you start to think about AI, the best way to use it is to give a specific model a specific query. You know, tight guardrails on how it’s supposed to work. And then chain that together – either a set of API’s or a set of other business processes that are kind of inputs and outputs to the API’s. And generally, with process automation, the first step is just to map out your process and say, you know, how do we do this today? What are the steps, and many times you’ll discover there’s a lot of inefficiency in your process that if you can resolve that, you should resolve before you put in AI. Because AI will just automate and accelerate a bad process. So you don’t want to do that. So first and foremost is to understand your process, break it out into those little chunks. And then certain chunks can be replaced with a generative AI or some kind of text driven model. There are other kinds of models and things that might help with other chunks. As it all happens, especially if these are business critical processes, there should be a step where human checks the output, or human, they call human in the loop, and there should really not be a process that involves AI that doesn’t involve a human, if that process is in any way find a critical, what you do then is you give the human the ability to instead of create to check, and that tends to be a much faster, you know, cycle. And so person who could only do one thing in an hour can now do 10 things or 20 things in an hour. So that person has just become a whole lot more effective. I think that’s the model that we will see organizations adapt. So if you’re currently you know, employed and doing things, you know, you have some expertise, you have things that you can contribute, that will make these models work better. A lot of times things in career is all about mindset and taking the opportunity; if you’re a subject matter expert, and you’re part of the business, I can help you be, you know, 10x 20x 100x more effective at that thing. And that’s how you should be, you know, thinking about these technologies in order to be, you know, gainfully employed. If you’re the type of person though – all you’re doing is taking data from one screen and putting it into another screen, without too much thought; that is, I think, a very, very easily replaceable role. And so that’s something to think about in your career is how do you add value as an employee, and then there’s a lot we can do that an AI can’t do, especially if you’re architecting them with narrowly defined inputs and outputs. You know, people obviously, are the ultimate neural network and have a whole lot more capability. These models don’t even scratch the surface of what a human brain can do yet, which is why all the conversations about the world ending events from generative AI – while they’re important to have at the stage, you know, at the corporate level – we don’t have to really worry about that, because these models are not anywhere near that kind of capability yet.

Amanda Razani: Got it. So many benefits. But how do companies weigh the cost factor? Because implementing AI can be pretty expensive right? Now, I know you mentioned a little bit about that earlier. But can you go into detail? How do they weigh that cost and whether or not it’s going to be a good choice for them?

Randy Lariar: Generally, you know, it depends what you want to do with AI. Certainly using tools like OpenAI or another AI service, they’re not prohibitively expensive, especially to get started. And in terms of the number of requests you make, if you’re training an AI, we’re talking thousands of dollars, not millions, or you know, hundreds of millions like the big tools have. So they’re kind of in line or below with any other kind of IT project. And there’s a traditional kind of cost benefit, you know, you want to see a return on investment. You want to look at your overall spend on on FTAs and other kinds of pieces of that process and what you can take out – every company’s got a different, you know, internal rate of return. They tried to hit on projects and you want to show that you can hit that or exceed that. We’re seeing such a frenzy of AI excitement because in most cases that math works out very favorably – that the AI makes sense to implement, and the returns are extraordinary. which then gets, you know, boards and investors excited about, you know, leveraging those benefits and being able to run a more efficient business. You know, certainly there’s a lot of resources out there. A lot of companies have, you know, very little AI and data capabilities today. And that’s where, you know, service providers are able to help, you know, build a strategy or help you to kind of, you know, cost that out very frequently, you know? Our deliverables of a project is not just an AI model, but a spreadsheet, helping you to understand how the cost gets driven down over time as you implement advanced data and analytics capabilities. And so there’s, there’s definitely a lot of resources out there to help you take advantage of it. But generally, what we are finding is that, you know, again, before ChatGPT, machine learning and AI, data science type projects were a very expensive thing. And you required very expensive resources – some people had, you know, PhDs, very high salary requirements. That’s all still true, but there’s, I think, a decrease in the barrier to entry, and an ability to go faster on less budget. And that’s really, you know, I think the economic root cause for a lot of the excitement right now.

Amanda Razani: Absolutely. So last question, what do you envision the future of AI looking like, in two years from now, as it pertains to business?

Randy Lariar: It’s moving so fast, it’s hard to make any prediction that are gonna stand up to time, but I think we’ve obviously seen hype cycles before, right? Like, there’s a lot of excitement, and then things cool off a little bit. I think this is a little different in that the excitement is justified, and there’s a lot of investment and activity going on into the space that will continue. I think we will see organizations that embrace AI will be able to fatten their margins, or potentially keep margins the same and lower their costs, and at that point, become much more competitive than their peers that don’t have AI. I think investors and boards are going to see those results, and then continue to drive more and more AI capabilities into their businesses. I think within the next two years, or even sooner, we’ll see some spectacular failures of organizations that implemented AI, but didn’t think about the risk side. And so we will probably see companies that are taken down because their automations are not properly secured. Or even, in nation state type level attacks on those type of things, depending on different geopolitical events that may occur. So I think that we’re gonna see a lot of – the next few years kind of broad narrative is going to be driven based on the AI, that’s in the process of working its way through a lot of these organizations, I think it’ll be very interesting to see how the AI like technology providers evolve; there’s a lot of really impressive open source capabilities to build models that don’t require the big text, gate holders, all of the concerns, and, you know, back and forth about training an AI to provide, you know, not provide bad answers – like not to teach you how to hack, or teach you how to build a bomb or to do different things. Unfortunately, I think that there will be plenty of models that don’t have those restrictions in, which will give bad guys all sorts of additional tools. But unfortunately, I think that with the open source, there’s a lot of good and a lot of bad that’s going to come out from that. Which means that organizations are going to need to continue to invest in security and making sure that they’re ready for those kinds of attacks. And on the flip side, I think that there’s just going to be a really fascinating time of good and bad change. And it’s hard to predict specifically, but I think broadly, we’re in a different era, a different phase of how we conduct business kind of akin to how we were before and after the internet, you know. The Internet didn’t really end business as we know it, but it definitely – most businesses need an internet presence and an internet capability in order to be relevant today. You can’t not appear on a Google search, right? And so in a similar way, I think that most businesses, not all, but most businesses will need AI and automation to stay competitive against their peers who will have those things, and so it’s gonna be just a fascinating time of transition and investment and some some really cool capabilities that are gonna come out and change how we think about things.

Amanda Razani: Absolutely. It will certainly be interesting watching this all unfold. Thank you so much, Randy, for coming on our show and sharing your insights today.

Randy Lariar: No problem. Thank you.