Synopsis: In this AI Leadership Insights video interview, Mike Vizard speaks with Chris White, president of NEC Labs America, about the future of AI.
Mike Vizard: Hello, and welcome to this edition of the Techstrong.ai videocast. I’m your host, Mike Vizard. Today, we’re with Chris White, who’s president of NEC Labs America, and we’re talking about AI and the future of AI, because well, there’s more to AI than just generative AI. Chris, welcome to the show.
Chris White: Thank you for having me. I look forward to the conversation.
Mike Vizard: Generative AI is the latest in a series of advancements that have been going on for a number of years. We’ve had machine learning, deep learning, and all kinds of interesting other data science techniques over the years. What is your sense of the current hype, and do you think that maybe this is just the latest waystation in a progression of things that are coming our way?
Chris White: The current hype around AI is at a level that is a bit higher than it’s been in the past, and I think it’s important to partition between the ChatGPT-style, text-generating side and the visual, image-creation side of generative AI. This entire area where we utilize AI to actually create content is very important.
One of the things that I often talk about is this idea that the value of an innovation is bounded from above by the value you pay a human to do a similar task. And therefore, what you want to innovate on or build is things that are superhuman in capability: things that are very hard or impossible for a human to do, or things a human can do, but done much faster or at much greater scale. I think what many of these generative systems do is enable the creation of content at a scale that is much faster than we could have achieved before, and on the image side of things, it’s not going to replace artists or remove their role. I think it’s going to augment the ability of artists to actually create content, and there’s huge value in that generative piece, which has a very positive side. There is a negative side with deepfakes and things like that, but I think the positive side, in terms of content creation, is going to be enormous.
I think if we transfer over to the other side, looking at things like ChatGPT, again, their superpower is the rapid generation of text or programming code. They’re not great knowledge representers, so they often make things up or produce results that feel right, seem right, and make sense in your gut, but aren’t actually right. And that’s not a deficiency in them; it’s what they’re designed to do. They’re designed on a large corpus of data to replicate that large corpus, and that means they’re going to produce results consistent with an average person you grab off the street, not necessarily consistent with an expert.
So if we look at those systems and recognize that they’re not really amazing thinking machines, that they’re not going to be running the world or replacing Google and search, but that they might be the replacement for content creation, then I think you can see a path where they’re going to have a tremendous impact on the world. But it might not necessarily be an impact that is consistent with the sci-fi or movie AI, where they take over the world or change things in a way that is no longer human.
So all the generative tools I see as being augmented intelligence tools that enhance the ability of a human to do something that they would have done anyway, but maybe do it with greater fidelity, greater reach, or greater speed. And therefore, I think they’re a very positive innovation. But to your point, it’s just a plateau on the development of increasingly sophisticated thinking machines.
Mike Vizard: As that occurs, will that all be driven by generative AI, or will there be different types of thinking machines that are using different types of AI to accomplish various tasks?
Chris White: Yeah, I think it has to be different types of thinking machines. One of the problems in innovation is that people, and I mean people who see it from the outside, have a feeling that if someone did something amazing like ChatGPT (and I think it’s an engineering marvel that they got it to do what it does), then making it a little smarter, a little more knowledgeable, or a little more dynamic in its learning should be easy. But in some cases, adding those pieces or modifying something like that might take an enormous amount of time.
It’s like an engineer who builds the biggest skyscraper in New York City. Building one big skyscraper is an engineering marvel, but making the second one or making it a little bit wider or making it glass instead of aluminum or aluminum instead of steel, those are not easy challenges. It’s not easier to do that once you’ve made one. And I think ChatGPT is a similar kind of thing, that it is an amazing engineering marvel that does something great, but it’s just a component of the solutions that we need to build.
So there’s going to be a wide variety of things that are like this that enable better reasoning, better computation, better recall. ChatGPT, I think, clearly has the content creation, the human interactivity piece. I think it will own that space, but the connection of that interactivity to something that’s capable of thought, that’s something capable of really recalling real information, I think those are yet to be solved problems. There are things that are doing that, but building those pieces and putting them together, I think, is going to be the important advance that we see.
Mike Vizard: And even within generative AI, the platform that was built by the OpenAI folks is a general-purpose platform, and they basically collected all the data that they could find in the world and trained their model accordingly. And there’s a lot of garbage that goes into that, so it affects the outcome a little bit.
Do you think that we will start to see large language models building generative AI platforms that are more domain-specific? And the result will be they’ll be more accurate, but there will be a lot more of these AI models all over the place?
Chris White: Yeah, so I think there are a couple of different pieces to it. I think that we will start seeing large language models that are specific to different contexts. That will absolutely happen. I don’t believe that’s going to make them more accurate, and I think that’s a consequence of the way these models are designed. I like to think of ChatGPT as an encapsulation of common sense. It creates content that sounds right, that seems right, that feels right, but it’s not necessarily right. And if you think about it, that’s what it’s trained on. It’s trained on an enormous corpus of data, and therefore, it should be giving you the response that is consistent with all of that data, and by definition, that has to be this every-person common sense. Grab a person off the street, and they will answer a question much the way ChatGPT might answer it.
So you can say, “Rather than the average person and the data of everybody, maybe we’ll take the average financial person, and we’ll create a financial large language model that answers things in that space.” But again, it’s going to represent financial common sense, and I think the key is to recognize that how you actually get a system to produce good knowledge and good information might be fundamentally different from the way we’ve built these ChatGPT kinds of systems. That’s one of the things we’re working on at NEC Labs America: how you use these systems as an interface to data, rather than trying to use them as a storage mechanism for data, where they don’t necessarily do a great job.
Now, one area where I will connect to what you were saying, that I do think is important, is if you look at what a large language model encapsulates, it is a model of the language itself, the creation of content and how those pieces are put together. And therefore, because humans think in a way that is connected to the language that they first learned, it’s an encapsulation of the way a native English speaker might think, as well as the content they create. And for a Japanese company like NEC and a country like Japan, not having a native Japanese large language model is a critical thing. So I think what you will see first is countries building large language models that are in their language, because that’s an encapsulation, not just of the language. It can’t just be translated; it’s an encapsulation of the thought process and the way people actually think.
Similar to what I said, too, if large language models are about content creation, no country wants to be at a deficit because it lacks rapid content-creation tools. So if a large language model exists for English but not for Japanese, that puts Japan at risk, because content creation in the US or other English-speaking areas can be 10 times faster than content creation in Japan.
So there’s a lot of work going on in companies like NEC to understand how we build native-language large language models, how we build domain- or vernacular-specific large language models, and how we can use those to interface and interact with data, rather than have those models store the data directly.
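To make the “interface to data, not storage for data” idea concrete, here is a minimal sketch of what such a pipeline could look like. Everything in it is a hypothetical placeholder rather than anything NEC has described: the authoritative answer comes from a lookup over a small in-memory table, and the language model (stubbed out here) is only asked to phrase a response from the retrieved facts instead of recalling them from its weights.

```python
# Minimal sketch of using a language model as an interface to data rather than
# as the store of record. All names and data here are hypothetical placeholders.

RECORDS = {
    "q3 revenue": "412M (made-up figure for illustration)",
    "q3 operating margin": "11.2% (made-up figure for illustration)",
}

def retrieve(question: str) -> list[str]:
    """Naive keyword lookup standing in for a real database or search index."""
    q = question.lower()
    return [f"{key}: {value}" for key, value in RECORDS.items()
            if any(word in q for word in key.split())]

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to any large language model; a real system would send
    the prompt to a model and return its completion. Here we just echo the facts."""
    facts = prompt.split("Facts:\n", 1)[1].split("\nQuestion:", 1)[0].strip()
    return facts if facts else "I don't have that information."

def answer(question: str) -> str:
    facts = retrieve(question)                 # authoritative data comes from the store
    prompt = ("Answer using only the facts below. If they are insufficient, say so.\n"
              "Facts:\n" + "\n".join(facts) +
              "\nQuestion: " + question + "\nAnswer:")
    return llm_complete(prompt)                # the model only phrases the answer

print(answer("What was Q3 revenue?"))
```

The detail that matters is the division of labor: when the facts change, you update the store, not the model.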
Mike Vizard: So what does that look like? Because one of the challenges with these models today, to your point, is that I have to move a massive amount of data somewhere, and then I train the AI model on it. I have a lot of GPU usage. I then have humans come down and sit there and say, “Yes, no, yes, no,” and then eventually, something good comes out. Can we get to a model where we’re going to be training AI without having to move all this data around? And is that going to be some form of inference, or how do we get there?
Chris White: Yeah, I think what this is starting to hint at is we need to have a much better fundamental understanding of these AI systems and how they accomplish what they do. And I think that fundamental understanding is a requirement for us to get away from moving large amounts of data around and large-scale training and the trial-and-error piece that we have right now. So I think that’s a key next step, and if you look at most industries, they often go this way. There’s an initial creation of something that actually grows, and people push on it really hard, and you get to a point where it doesn’t get to the next level until you build a better fundamental understanding of what’s there.
I’ve made an argument lately that I think we’ve gotten to the end of easy AI. That’s not to say it was not hard to get to where we are right now; a lot of people put in a lot of effort and a lot of time to get here. But if we think about what an easy problem is, an easy problem is a problem that we’ve seen before. You can solve problems you’ve seen before very, very quickly, and I think we’ve been solving problems by increasing the amount of compute, increasing the amount of data, and increasing the number of people working on them, replicating the same kinds of ideas to get to where we are right now. And I think we’ve reached the pinnacle of achievement in that space. We can’t keep increasing the quality of AI systems by increasing the amount of data, compute, or people the way we have over the past 10 years. There’s just not enough data in the world for that to be true.
So I think we have to find new ways of solving these kinds of things, and I think that’s going to be a move to a more fundamental understanding of what’s going on, and then leveraging that fundamental understanding to do things like shifting from one distribution to another, training once and then moving that smaller, compact representation around, or using a large language model to train another large language model, for example, rather than having to go back to the original data. How we use these foundation models as foundations and adapt them, I think, is one of the next big things, the hard problem that we need to solve. And once we solve that problem, we end up with another rapid expansion phase where we replicate that solution in many, many distinct regions.
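As a rough illustration of the “use a large language model to train another large language model” idea, here is a toy knowledge-distillation sketch, assuming PyTorch. The tiny models, sizes, and random token batches are stand-ins; the point is only that the student’s training signal is the teacher’s output distribution, not the original training data.

```python
# Toy knowledge-distillation sketch: a smaller "student" model is trained to match
# a larger "teacher" model's output distribution instead of the original data.
# Models, sizes, and the random token batches are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, SEQ, T = 100, 32, 8, 2.0   # vocab size, hidden size, sequence length, temperature

teacher = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Flatten(), nn.Linear(DIM * SEQ, VOCAB))
student = nn.Sequential(nn.Embedding(VOCAB, DIM // 2), nn.Flatten(), nn.Linear(DIM // 2 * SEQ, VOCAB))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    tokens = torch.randint(0, VOCAB, (16, SEQ))       # stand-in prompts
    with torch.no_grad():
        teacher_logits = teacher(tokens)              # the "labels" come from the teacher model
    student_logits = student(tokens)
    loss = F.kl_div(                                  # match the teacher's softened distribution
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

This is the same pattern behind “train once and move the compact representation around”: the expensive pass over the raw data happens once, and later models learn from the resulting model’s behavior rather than from the original corpus.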
Mike Vizard: Do you think that there’s a lot of awe about generative AI right now because we don’t really understand how it works? And do you think that if we take away some of the mystery around how AI systems are built and how they actually work, more people will start to look at it and say, “Yeah, I get it. It’s science, not magic, and at the end of the day, it’s going to be part of our lives. As long as we understand how it works, we’ll know how to use it”?
Chris White: Yeah, I think that’s right, and I’ll go even one step further. I think if we had a better fundamental understanding of how humans think, we would have exactly the same view: that we’re not amazing, we’re okay, and what we do is unique and novel and gets us through the world, but it’s not nearly as special as we actually think it is. So I think that’s coming.
And then on top of that, yes, I think understanding how that intersects with large language models and what’s really happening at a more fundamental level will re-normalize a lot of the hype and a lot of the scariness that people have with respect to these kinds of things. It’s the way it has always been: people are scared of new technology until they understand how it works, and once they understand how it works, it gets incorporated into the way we live and what we do. It’s no longer something to be afraid of; it’s something we use as a tool to advance our existence. And I think that piece is really coming.
You may have heard me say that I think that invisible AI is the thing that’s going to be much more important than the visible ChatGPTs or the AI systems that we see in movies. And what I mean by that is that we’re training an entire generation of society, starting at age one year old. A one-year-old taps on a tablet and plays with an iPad with the expectation that when they tap, it’s going to change and do something for them. So we’ve trained them to think that they don’t have to change for the world, but the world should be adapting to their needs and to their requests. And I think that that idea of AI systems that are prioritizing and optimizing and predicting the needs of individuals is going to be where the big revolution is going to come in this space.
And you call it invisible AI because it’s the AI that you only notice when it stops working. Instead of people being excited that the stoplights all change to green because the system recognizes they’re in the area and knows where they want to go, they’ll be upset that they have to wait at a stoplight. It’s the same thing with email: AI systems should be able to reduce and remove a lot of the phishing and spam we get today, so receiving something like that in the future will be a failure of this invisible system that is optimizing the world for you. And I think that’s where the big win is, because that’s beyond just creating content. That’s doing an enormous number of different things for us that there’s really no way you could have humans do, and that’s why the value of that innovation goes way, way up.
Mike Vizard: So what is your best advice to folks about how to approach all of this? A lot of organizations are scratching their heads. They don’t know exactly where they’re going to apply these efforts. Some of them have even had some bad experiences with various data science projects. How do you get your arms around this, as a company or an organization, to get on the right path?
Chris White: I think the key is to not believe most of the hype, so that’s one. And to not believe that you or your company are really far behind everybody else because it seems like everybody else is using this new tool and somehow your team is just not smart enough, not good enough, or just not getting it. I think those are the two anxiety-ridden pieces that pop up and cause organizations to freeze like a deer in headlights and not move.
But what they need to do is look at what the superhuman capabilities of a technology are. With ChatGPT, that’s fairly easy to recognize: it can create content much faster than an average human, and it has a breadth of topic areas that it can converse on, in a common-sense way, beyond that of an average human. And therefore, if its strength is as an interface, then you should be looking at your operations and your business for places where you have content-creation needs, or where you need a human interface to data or information, and you should think about how you might apply it inside of that space.
So I think that’s the critical piece, of not assuming that it can solve everything, not assuming that what you need it to do is an easy change on something that was impossibly hard to create the first time. But instead, looking at what it’s capable of doing and then trying to apply that in your own context.
Mike Vizard: All right, folks. Well, a great man once said, “We have nothing to fear but fear itself,” and AI is no different. Hey, Chris, thanks for being on the show.
Chris White: Thank you for having me. It’s been a fun conversation.
Mike Vizard: All right, and thank you all for watching the latest episode of Techstrong.ai. You can find this episode and others on our website. We invite you to check them all out, and until then, we’ll see you next time.