Synopsis: In this Techstrong AI Leadership Insights video interview, Max Lipovetsky, chief product officer for Cyara, explains why generative artificial intelligence (AI) will eventually supplant existing conversational AI models.

Mike Vizard: Hello, and welcome to the latest edition of the Techstrong AI Leadership series. I’m your host, Mike Vizard. Today, we’re with Max Lipovetsky, who’s chief product officer for Cyara, and we’re going to be talking about, well, conversational AI versus generative AI, because it seems like there’s a big debate going on out there. Hey, Max, welcome to the show.

Max Lipovetsky: Good afternoon, Mike. Pleasure to be here.

Mike Vizard: What is at the core of this debate? Because initially, all the conversational AI people seemed to go out of their way to say that generative AI would not replace conversational AI, and yet, there seems to be a lot of folks who think the opposite. So, what do you think is going on here, and what are the issues?

Max Lipovetsky: I think to understand this issue, it probably helps to clarify the terms. The term conversational AI is generally used to describe the technology that’s been used for roughly the last 10 years: natural language understanding, which parses the structure of human language to derive two things, entities and intents. With generative AI, spoken about colloquially as large language models and things of that nature, the difference is that this technology does not use that same form of understanding human language. Instead, it looks at statistical models and the predictability of elements, and consequently it’s capable of not just understanding much more complicated concepts, but also responding without requiring tightly scripted dialogue afterwards.
Consequently, the debate has really been about whether these large models, trained basically on the internet, are able to provide the same level of conversational experience that the NLU-based models have been providing for the last 10 years. I think that is still somewhat of a debated topic, although with the events of the last year and the advances that generative AI has made overall, the number of proponents of the traditional technology has diminished pretty rapidly.

Mike Vizard: It always felt that the conversational AI solutions were limited in scope in their ability to respond, and a lot of the things you wound up engaging with in the online world felt very canned. So, are we just moving on to something that feels like a much more interactive experience?

Max Lipovetsky: I think that’s only part of it. The reason the traditional technology felt canned is because it was. The only AI part of it really was the identification of the intent, and in some more advanced cases the entities, the intent being what the customer wanted to do. Once the intent was identified, you moved into a scripted dialogue: ask the customer this, then ask that, then do this. It was very, very tightly scripted. Every single conversation using traditional NLU technology effectively looked exactly the same. Identify the intent, follow the script.
When we get to generative AI, that equation changes completely. You now have a technology that is capable of understanding much more complex, much more intricate statements, utterances from humans. And more importantly, it hasn’t got a tightly scripted response. It’s making up the response to each and every utterance, each and every thing that the customer is saying independently. There is no guarantee that when one customer says, “I’d like to pay my bill,” that the response and the dialogue they’ll get afterwards will be the same as a different customer saying exactly the same thing. So, it will feel far more natural. It will feel far more intricate and engaging and human-like, but I think that’s only just the surface of the difference that this technology is going to bring in for businesses and for customer experience more broadly.
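
To make the contrast Max draws concrete, here is a minimal Python sketch of the two patterns: an NLU bot that classifies an intent and then walks a fixed script, versus a generative bot that composes each reply on the fly. The intent table, the keyword matching and the `llm_complete` callable are illustrative placeholders, not any particular vendor’s API.

```python
# Traditional NLU bot: classify the utterance into an intent (plus entities),
# then hand the conversation over to a fixed, pre-written script.
SCRIPTED_FLOWS = {
    "pay_bill": [
        "Which account would you like to pay from?",
        "How much would you like to pay?",
        "Please confirm the payment (yes/no).",
    ],
}

def nlu_bot(utterance: str) -> list[str]:
    # A real NLU engine uses a trained classifier; this stub just keyword-matches.
    intent = "pay_bill" if "pay" in utterance.lower() else "fallback"
    return SCRIPTED_FLOWS.get(intent, ["Sorry, I didn't understand that."])

# Generative bot: no intent taxonomy, no script. The model composes a fresh
# response each time, so two identical requests may be answered differently.
def generative_bot(utterance: str, llm_complete) -> str:
    prompt = f"You are a helpful billing assistant.\nCustomer: {utterance}\nAssistant:"
    return llm_complete(prompt)  # llm_complete stands in for any LLM call
```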

Mike Vizard: How verbal is our interaction with generative AI going to get? And I ask the question because for the most part, the interactions are, here, type some stuff into this box, and we’ll send you back some text. Are we going to get to the point where we are essentially verbalizing what went into the text box and who knows, maybe the machine is talking back to us?

Max Lipovetsky: I absolutely expect that to be the case. In fact, it is the case now. If you look at chatting with any of the large LLM-type bots that are out there, you’ll see that they maintain a conversational thread. They remember the context from one question to the next. You can continue to ask it to refine the content or to ask more probing questions, and it doesn’t just respond with a reiteration of what it said previously. It will generate entirely new content along the way. The closest parallel I can think of is really the Turing test and the idea that you can’t tell you’re talking to a machine, and that is effectively where we are now with generative AI. From a customer point of view, it will feel much more like you are talking to a human. And in fact, there are countless startups and even mature businesses out there that are developing virtual agents with the real goal of replacing human contact center agents in ways that were not previously possible with NLU technology. So, it will be a much more human experience.
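
As a rough illustration of the conversational thread Max mentions, the sketch below keeps a running message history and sends the whole thread to the model on every turn, which is why a follow-up like “now shorten that” can refer back to an earlier answer. The `call_llm` callable is a placeholder for whatever model API is actually in use.

```python
from typing import Callable

# Minimal sketch: the bot's "memory" is just the accumulated history that is
# passed back to the model on every turn.
def chat_session(call_llm: Callable[[list[dict]], str]):
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def ask(user_message: str) -> str:
        history.append({"role": "user", "content": user_message})
        reply = call_llm(history)  # the model sees the full thread, not just the last turn
        history.append({"role": "assistant", "content": reply})
        return reply

    return ask

# Example usage (call_llm would wrap a real model):
# ask = chat_session(call_llm)
# ask("Summarize my last bill.")
# ask("Now shorten that to one sentence.")  # "that" resolves via the shared history
```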

Mike Vizard: Are we going to see, therefore, a proliferation of large language models that are optimized for different kinds of use cases and scenarios? Or is everything going to be a derivation of a handful of large language models that people have customized?

Max Lipovetsky: So, what we are seeing more broadly in this space is the idea that there is a base model, and that base model is then trained on enterprise data. That is, I guess, the whole benefit of large language models: they are trained on very large sets of data. The question then becomes, to what degree is there benefit in training the model on different sets of very large amounts of data? That’s an open question. Whilst there are a number of organizations saying, “Our LLM is particularly good at this, and another one is particularly good at that,” and there is definitely truth to that set of statements, whether that ends up materially impacting the customer experience is, I think, still an open question.
What is happening for sure, though, is that enterprise data is being used to train these large language models to answer specific questions that are relevant to the particular organization. Think of it as a cake: there’s a base layer of understanding and capability that comes from the LLM, and then the enterprise data, the information about the specific organization, sits as a layer on top. Whether changing the layer at the bottom has a material impact on customer experience is not clear right now. If it doesn’t, it’s likely that one or two large players will dominate the space with the very large models they’ve created.
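
One common way the “cake” Max describes shows up in practice is retrieval-augmented prompting: the base model supplies the general language capability, and enterprise documents retrieved at question time supply the organization-specific layer on top. The sketch below assumes that pattern; `search_enterprise_docs` and `call_llm` are hypothetical placeholders, and fine-tuning the base model on enterprise data is the other common way to add the top layer.

```python
# Sketch of the "cake": a general-purpose base model underneath, with the
# organization-specific layer added at question time by retrieving enterprise
# documents and prepending them to the prompt.
def answer_with_enterprise_layer(question: str, search_enterprise_docs, call_llm) -> str:
    # Top layer of the cake: the organization's own knowledge.
    docs = search_enterprise_docs(question, top_k=3)
    context = "\n\n".join(docs)
    # Base layer of the cake: the large model's general language capability.
    prompt = (
        "Answer using only the company information below.\n\n"
        f"Company information:\n{context}\n\n"
        f"Customer question: {question}\nAnswer:"
    )
    return call_llm(prompt)
```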

Mike Vizard: How will this play out from a workflow perspective? Because theoretically, at least, I will have some sort of agent, and you might have an agent, and everybody else will have an agent, and the process will have an agent, and eventually, all these agents may want to talk to each other to perform some task. How do you see that evolving?

Max Lipovetsky: That’s an interesting question in the sense that the idea of consumers having their own agent, their own virtual agent that acts on their behalf, interacting with, say, a B2C organization’s virtual agent really brings us back to the internet of things and machine-to-machine conversations. But now we’ve got an emulation of humans in the middle. To me, that seems somewhat inefficient. I’m not quite sure why you would do that, because we’re just adding a layer of confusion, more things to go wrong, between machines talking to each other. I think fundamentally this technology will solve a problem for consumers and humans interacting with businesses, rather than enable that machine-to-machine type of engagement you envisage there.

Mike Vizard: So, where are we on this journey right now? What’s your sense of our collective level of maturity? Because I feel like it’s a little uneven, and some organizations are still scratching their head about what to do about all this. Others are much further down the road. How quickly is all this going to play out from your seat and…

Max Lipovetsky: That’s an interesting question because you’re right, it’s very uneven. Some organizations are leaping at this technology, seeing the potential to massively reduce the costs that are traditionally associated with conversational AI, whilst others are holding back and seeing how the market evolves. I think the one thing that nobody could have predicted is just how quickly this technology has moved over the course of the last couple of years. And that has given rise to a lot of optimism and concern within organizations. Generally, the sense that we have is that a lot of organizations are experimenting in this space. They’re doing proofs of concept, they’re doing proofs of value, they’re trying the technology out, but there is a significant set of challenges and hurdles in moving from that POC stage to production. And what we are not seeing, at this point in time, is a large overnight migration from NLU-based bots to LLM-based bots. It’s simply not happening.
What we are seeing, though, is the bot vendors introducing LLM elements into their bot technology in a way that’s almost a little hidden from the customers. It makes their bots more effective, it reduces the amount of training that is required, and it deals with generalized conversation, things of that nature. So, the vendors themselves are pushing the technology in, but customers are certainly not making that large-scale leap from “I’ve got an NLU bot with five years of investment in it” to “I’m going to turn that off and turn on an LLM bot tomorrow.” They’d like to. They can see the reasons for doing it, but they’re not doing it.
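
A plausible shape for the hybrid approach Max describes is sketched below: the existing NLU bot stays in charge, and the LLM quietly handles the turns where the NLU classifier is unsure, so the customer never sees the seam. The threshold and the `nlu_classify`, `scripted_flow` and `call_llm` callables are illustrative assumptions, not a description of any specific vendor’s implementation.

```python
# The existing NLU bot stays in charge; the LLM only steps in when the NLU
# classifier is unsure, so the customer never notices the handover.
CONFIDENCE_THRESHOLD = 0.7  # illustrative value

def hybrid_reply(utterance: str, nlu_classify, scripted_flow, call_llm) -> str:
    intent, confidence = nlu_classify(utterance)
    if confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: follow the established, tightly scripted dialogue.
        return scripted_flow(intent)
    # Low confidence: let the LLM handle the general conversation instead of
    # replying with the classic "Sorry, I didn't understand that."
    return call_llm(f"Respond helpfully to this customer message: {utterance}")
```
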
There are a number of reasons why customers are holding back, and the single biggest one is actually around confidence. With an LLM, because of its non-deterministic nature, the fact that it can give different responses to the same inputs, and the fact that it’s trained on a very, very large set of data and has knowledge about all kinds of things that are unrelated to the enterprise, its behaviors are very difficult to predict and understand. These are the things that drive a lack of confidence in the organizations adopting it. So, whilst in a proof of concept or proof of value they can see explicitly what’s going on and that this technology can provide immense value, those challenges are stopping them from putting it into production and saying, “Let’s go and capture that value right now.”
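
The confidence problem Max raises is, at bottom, a testing problem: because sampling is stochastic, the same prompt can produce different answers on different runs. Below is a rough sketch of how a team might probe that, replaying one input many times and checking every response against its own rules; `call_llm` and `violates_policy` are hypothetical stand-ins, not Cyara’s product.

```python
# Replay the same prompt many times and measure both variability and rule breaches.
def probe_consistency(prompt: str, call_llm, violates_policy, runs: int = 20) -> dict:
    responses = [call_llm(prompt, temperature=0.7) for _ in range(runs)]
    return {
        # More than one distinct response means the bot is non-deterministic here.
        "distinct_responses": len(set(responses)),
        # Any breach is a candidate for the kind of embarrassing answer that
        # stops teams from moving a proof of concept into production.
        "policy_breaches": sum(1 for r in responses if violates_policy(r)),
    }
```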

Mike Vizard: Is this another one of those classic scenarios where the immediate impact is overhyped, but the long-term impact is underappreciated?

Max Lipovetsky: I think that’s a Bill Gates quote, isn’t it? Or a paraphrase of one, anyway. Yes, I actually think that is very much the case. I think we’ve only just scratched the surface of what this technology is capable of achieving. And whilst the things it should be able to do immediately are obvious to anyone in this space, achieving them is a little more difficult than everybody would hope, particularly when it comes to adoption and the human factors within this: our confidence in it, the way the consumer market will respond to interacting with virtual agents that sound human but aren’t human. There are a variety of these challenges along the way that we need to solve.
The things I mentioned previously about the non-deterministic, difficult-to-predict and difficult-to-test nature of LLMs, they’re all challenges to adoption. Once these are solved, the uptake will be much more significant and have a much more profound impact than we anticipate. And for generative AI as a technology set overall, it’s very difficult to predict the full set of capabilities and impacts it may have. We’re just scratching the surface.

Mike Vizard: So, what’s your best advice to folks then as we look to navigate this? Clearly, there’s some transition coming up, but a lot of people work for business executives that are thinking the world is going to change overnight, and they can change the entire economics of their organization. And other IT folks are a little more skeptical. How do I sit between these two things?

Max Lipovetsky: I heard a quote from one of the founders of one of the large language model vendors saying that they had a betting pool running on when the first one-person billion-dollar company would arise. You talk about changing the economics of the world; well, that was an unthinkable concept even a few short years ago, and yet it now seems like an inevitable result. So, the idea that this is going to profoundly change the world, I think, is fundamentally correct. If you’re an exec looking at the cost base within your business and thinking about how to optimize it with conversational AI or generative AI, obviously the single biggest opportunity you have is your workforce. That is particularly true in customer experience, the contact center, or the larger B2C elements. I think sitting back and waiting for this space to emerge will leave you behind the competition. Everyone is, at the very least, experimenting here and understanding what the potential is for generative AI to meet their use cases.
In our space of conversational AI specifically, the use case is so obvious that it’s simply silly not to be trying something. I think the more genuine question that execs need to be asking is, how can I have the confidence to know that I will be able to deploy generative AI to my real-life customers and not embarrass myself, not put my job on the line? There have been some really interesting examples in the last couple of months of generative AI being asked to sell a Cadillac for a dollar and to write poems disparaging the very company the bot is there to serve. These are all things that are career limiting, and the question that execs have to ask themselves is, how can I get the benefit without exposing myself to those kinds of risks? That’s the million-dollar question, and the one that Cyara is really working very, very hard to solve.

Mike Vizard: All right, folks. Well, you heard it here. It’s an interesting thing to contemplate: is a company that generates a billion in revenue with two employees a small business or not? Who knows? We’ll find out. Hey, Max, thanks for being on the show.

Max Lipovetsky: Thanks, Mike.

Mike Vizard: Thank you all for watching the latest episode of the Techstrong.ai Video Leadership series. We hope you enjoyed this episode. You’ll find others on our website. Please check them all out. Until then, we’ll see you next time.