Mike Vizard: Hello, and welcome. I’m your host, Mike Vizard. Today we’re talking with CEO Tom Siebel about, of course, generative AI, but this time in the enterprise. Tom, welcome to the show.

Tom Siebel: Thank you, Mike.

Mike Vizard: It seems like this exploded out of nowhere. But if I look back in time, you guys were pretty much among the first to talk about generative AI in the context of an enterprise environment. What’s your sense of what’s going on, what the impact is going to be, and just how revolutionary all this is?

Tom Siebel: I think the impact of AI writ large on commercial and industrial systems, and on people’s social media systems, is going to be huge. This is medicine, production, automotive, transportation. This is big. The developments we’re seeing in generative AI, going back to 2020, are really quite significant, and the power of these large language models is staggering. This is a breakthrough on the order of importance of, say, the internet, or maybe the smartphone; the iPhone is a big one. The power of these generative pre-trained transformers and these large language models is really significant. It’s going to change a lot of things: it will do a lot of damage, it will cause a lot of harm, and it will do a lot of good. So it’s a little bit scary. But the underlying power of these technologies is awesome: in some ways terrifying, and in some ways very productive.

Mike Vizard: Do you think we’re taking enough of a strategic view of all this? It feels to me, at least, that individual employees are playing around with these tools and creating more efficient documents and emails, and I’m sure that will help with productivity, but that doesn’t seem to be the massive benefit to the corporation that it might be. Do we need to take a giant step back and think about this a little more?

Tom Siebel: Well, I think it’s hard to take a step back when people are spending. This idea that we’re going to put research on pause for six months? Come on. Really? Honestly, there are few people for whom I have more professional respect than the author of that proposal, but the idea is kind of wacky. Do you think the Chinese are going to sit still for six months? You think the Russians are going to sit still for six months? You think the people in academia are going to sit still for six months? No way, no how. The people at MIT, Stanford, Berkeley, Illinois, whatever it might be, they’re not going to sit still. This is going forward, and it’s going forward real fast. We’re spending billions of dollars on hard research and development at Google, at Microsoft, at OpenAI, in academia, at MIT, in China. This is a big deal, and this horse is out of the barn. It’s going to be an enormous force for good and an enormous source of unanticipated problems.

Mike Vizard: As we move down this path, are we going too fast? It’s one thing to slow down for six months, but should we instead accelerate and go find out what these issues are sooner rather than later?

Tom Siebel: Well, I don’t think anybody is going to slow down for six months; this is accelerating. There are going to be lots of malicious applications of this, but I think there are also going to be some highly beneficial applications of generative AI. We’ve been working with generative AI for about three years now, and in September we began very serious work on using generative AI to change the fundamental nature of the human-computer interaction model for enterprise applications. What we’ve accomplished in the human-computer interaction model for enterprise application software, be it ERP, CRM, SCADA systems, manufacturing systems, HR systems, expense management systems, over, say, the last 40 years of development, is pretty dreadful. Look at the version 23 that Oracle released today, or the current version of SAP that’s out there. Behind me there’s a Bloomberg terminal; the Bloomberg terminal is basically unusable, as these other applications are. Now, there is one human-computer interface that billions of people know how to use: the Google search bar. Everybody knows how to use it. The work we’ve been doing applies that Google user-interface model, with natural language processing, the utility of enterprise AI, and reinforcement learning. Reinforcement learning is when humans interact with the machine learning model to help train it; the classic case being “hot or not,” going back to the old Facebook at Harvard, where people were training the machine learning model on all these women at Harvard.
Anyhow, with reinforcement learning and generative AI, we can fundamentally change the nature of the human-computer interface. Rather than dealing with all these menu structures, like pressing Ctrl-Shift-F13 on the Bloomberg terminal to figure out what a stock price is, you just have a search bar, and you ask the question. The question might be: How are we doing against our diversity goals in the Chicago office? How are we doing against our carbon dioxide goals in our Netherlands division? How are we doing against our recruiting targets in North America? You type in whatever the question is, and when you combine these technologies, enterprise AI, reinforcement learning and generative AI, it will give you the answer: the GPT-generated answer over on the right, along with all the other places in the enterprise where there is relevant information you might search, just like Google works. So we use the large language model not to search the internet; the large language model searches the enterprise, be it the airport, be it Shell, be it Koch Industries, whatever it might be. Now we have a human interface that at least 2 billion people on the planet know how to use. Every kid knows how to use it, every housewife knows how to use it, every CEO already knows how to use it. Why not use that for all enterprise applications? I think this is a non-obvious use of what’s going on with generative AI, but this is how we’re applying it: to fundamentally change the nature of the human-computer interaction model.
So these applications become much more accessible, and we don’t have a black-box problem with the AI, because it explains how it’s making its decisions. Then you can use the ChatGPT-style capability to further query, to drill down into it. This is what we’re doing across all of our applications, and we think it’s a very important development in enterprise AI, and, basically, a very important development in enterprise application software, hard stop. It will make these applications much more easily accessible to a broader group of people in government entities, in private enterprises and in the public at large. That, I think, is a beneficial application of generative AI. And you and I can think of scores of examples, from deepfakes on to very creative new sources of fake news, where these capabilities will be used maliciously.
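The interaction model described above, a single search bar where a question in plain language is answered from enterprise data and routed to the other relevant sources, can be sketched in miniature. This is a hypothetical illustration, not C3 AI’s implementation: the document names, sources and keyword-overlap scoring below are invented stand-ins for what a production system would do with a large language model and a retrieval index.

```python
# Minimal sketch of a "search bar over the enterprise": rank internal
# documents against a natural-language question and return the sources
# a user should look at. All data here is invented for illustration.

from dataclasses import dataclass

@dataclass
class Document:
    source: str  # where in the enterprise this lives (HR, ESG, ERP, ...)
    text: str

ENTERPRISE_DOCS = [
    Document("HR/recruiting", "North America recruiting targets: 120 hires, 85 filled."),
    Document("ESG/emissions", "Netherlands division CO2 goals: 12% reduction, 9% achieved."),
    Document("HR/diversity", "Chicago office diversity goals: on track for 2023."),
]

def score(query: str, doc: Document) -> int:
    """Crude relevance: count query words that appear in the document text.
    A real system would use LLM embeddings, not keyword overlap."""
    words = {w.lower().strip("?.,") for w in query.split()}
    return sum(1 for w in words if w in doc.text.lower())

def search(query: str, k: int = 2) -> list[str]:
    """Return the sources of the top-k matching enterprise documents.
    In the architecture described, an LLM would also generate a direct
    answer from these passages; here we only return the sources."""
    ranked = sorted(ENTERPRISE_DOCS, key=lambda d: score(query, d), reverse=True)
    return [d.source for d in ranked[:k] if score(query, d) > 0]

print(search("How are we doing against our recruiting targets in North America?"))
# → ['HR/recruiting']
```

The point of the sketch is the shape of the interface, one text box in and ranked enterprise sources out, rather than the retrieval method, which here is deliberately trivial.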

Mike Vizard: It seems like there are lots of these large language model projects popping up. There are different open source ones; there are ones that require less data and are more narrowly focused. So might we be looking at a world where there are hundreds of these large language models embedded in applications?

Tom Siebel: There will certainly be hundreds of large language models available, maybe thousands, in the open source and proprietary software communities. In any given application, we’ll probably tend to use one or two of them, but there will be hundreds to thousands of these available, and there is a lot of work going on. This is an important development in information technology. This is a big one. This is like the transistor, like the internet, like the smartphone. We’re going to see a lot of innovation, and many of the applications will be beneficial.

Mike Vizard: Do you think the applications themselves need to be vetted? We’ve heard about all kinds of interesting aberrations happening inside some of these large language models, including OpenAI’s, and it’s hard to tell if that’s a temporary situation. Do we need some way to say that this thing is trusted?

Tom Siebel: This issue associated with generative AI and AI writ large, the social implications of these technologies, the implications they have for privacy, the damage they will do to public health, the damage they will do to our ability to run civilized societies: we need to have a very open dialogue about these issues and explore these problems. People do not talk enough about how big and how problematic these problems are, so we need a dialogue. Now, the knee-jerk reaction in the EU or in Washington, DC is to regulate, regulate, regulate. But with the possible exception of thalidomide, I can’t think of an example of regulation that has been very positive, and I don’t think the regulators in the EU or in Washington, DC have any idea what they’re talking about. So it’s an important issue, and I think we need to rely on leaders in the media like you to put the ethical implications of these technologies at the forefront. It needs to be discussed in the academy. When you look at the damage done by social media in, say, the last 15 years, that’s nothing compared to the damage that might be done in the next 15 years with these new generative AI technologies. This is really troubling, and we need an open dialogue. I don’t think we can count on the regulators to take care of it. I don’t think we can count on private enterprise to police itself. I don’t think we can count on governments to behave beneficially, because governments don’t always behave beneficially. So it’s a troubling area, and I don’t think it gets nearly enough attention.

Mike Vizard: Do you think, therefore, that we need some sort of peer-review process around open models, to make these things transparent so everybody has a pretty good understanding of how something works, and therefore we have less fear?

Tom Siebel: Well, Mike, I think this is one of those examples where the cure might be worse than the disease. First of all, the idea that we’re going to have peer review of algorithms before they’re used is really just going to be a boat anchor on innovation. And then it gets politicized, when we get into issues of equity and all these other things, where somebody is trying to promote their political agenda, whatever it might be, whether it’s Black lives, or white lives, or Asian lives, and these things become more important than science. So I haven’t found a cure; I can’t think of one yet, and I don’t have a solution. I talk about it all the time, I write about it, and I’m glad that you’re talking about it, because it’s important. But peer review is just going to be a boat anchor on innovation, and people are going to ignore it, operate in an extralegal fashion and become criminals.

Mike Vizard: Say you’re a C-level executive today running a company; what’s your best advice right now? Because I’m sure they’ve all heard about it, and they’ve all seen something of it work at this point. If you’re one of those folks, what do you tell the troops?

Tom Siebel: Well, if I were a C-level executive, I would familiarize myself with the technology. I would get a couple of books: I would get a copy of The Master Algorithm and read it, and I would get a copy of The Perfect Weapon by David Sanger and read it. I would take the time to become familiar with these issues and what the problems are, so you can carry on an intelligent conversation. And fundamentally, as a C-level executive, it’s important that your organization does the right thing. The idea that you can delegate this to some department that you put in charge of ethics? Get over it. You can’t delegate it to the department of ethics, because you are the department of ethics. And if you didn’t learn the difference between right and wrong from your mother by age 12, there’s something wrong. So you need to do the right thing, and I think the fundamental litmus test, as you face these decisions, should be: what would your mother think about it? That’s what you need to do. Accept responsibility as the CEO; you are responsible for the ethical behavior of the people in your organization. You can’t delegate it, and you can’t say you don’t understand it.

Mike Vizard: All right, you heard it here: like every innovation, it cuts both ways, so handle it with care. Tom, thanks for being on the show.

Tom Siebel: Nice to talk to you, Mike.

Mike Vizard: All right. Thank you all for watching the latest episode. We invite you to watch this episode and others on our website, and until then, we’ll see you next time.