Mike Vizard: Hello and welcome to the latest edition of the videocast. I’m your host, Mike Vizard. Today I’m with Bret Greenstein, who is a partner for data and AI at PwC. And we’re talking about the investment that these guys are about to make in AI, which is a staggering number that I’m gonna let Bret tell you about. They also have a new partnership with Microsoft. Bret, welcome to the show.

Bret Greenstein: Thank you, Mike. It’s great to be here. And it’s really great to talk about a topic that’s so hot and interesting right now.

Mike Vizard: So what is driving all this investment? It almost feels like maybe we have a little irrational AI exuberance going on, given everybody’s driving so much funding into this space. But from a customer perspective, what do you think is happening here?

Bret Greenstein: While AI has been interesting for a very long time, I’m thrilled to be working at a time when it’s actually cool enough to talk about with everyone. It wasn’t cool before, and it’s suddenly become so. And it’s not because the technology suddenly made some magical moment happen; it’s that it became accessible to a much wider audience, and progressively has gotten more capable at the work that so many of us do. And so the reason companies, and PwC, are so interested in this is the huge transformational impact it’s having on the day-to-day work that people do. Helping knowledge workers work faster, helping people be more productive, helping people do their jobs – it’s so profound and wide-reaching that everyone is trying to understand how to apply it safely and smartly.

Mike Vizard: We all seem to be obsessed with all things related to OpenAI. But my understanding is that’s only one of several types of large language models we may see or people may wind up using, and those models may even be more useful because they’re tuned around a specific set of data. So are we really just seeing the tip of this iceberg?

Bret Greenstein: Yeah, by most accounts this is the early stage of a wide-scale adoption of generative AI, of which we’re all familiar with OpenAI and ChatGPT. There are other similar technologies that are emerging. And I think what’s exciting is, as they are trained to be better at different tasks, and they begin to get incorporated into business processes and applications, we’re gonna just see more and more tailoring to specific expertise, and the ability for a lot of companies to turn what they do into something that is powered by AI. I think we look at the models and say, these models are exciting, look, they can do a lot of things. And maybe there’s a new model that’s better in legal or other domains. Take a look at the announcement we also made with a company called Harvey, bringing in legal expertise and OpenAI to make a domain-specific capability. That is where it gets exciting. Every one of us is good at something. What a lot of people are talking about, on the internet generally, is how do I take what I’m good at and incorporate AI to help me scale what I do for more people. That’s when it gets really exciting. That’s what the legal example is; you see it in medical, and I see it in innovators all over the place.

Mike Vizard: There’s always a lot of concern about job functions and roles. But it would seem that maybe jobs aren’t going away, but roles will definitely evolve. Do you think industry leaders need to be more proactive in having that conversation with their teams and saying, hey, exactly how are we going to use this stuff, and what might that look like?

Bret Greenstein: The upskilling required here is certainly not insignificant. And I think the fact is that it touches so many knowledge worker roles – everyone who’s typing or reading something, in some way their job can be changed, improved, made more productive with AI. So as a result of that, training people to work with it is very important. Perhaps some of the tasks I was doing manually in the day I could do with AI now, and some processes will change. The point is, if I know how to use it, I’m going to be even more effective. It’s not just turn it on and people are suddenly magically productive; you have to know how to use it, how to provide context and how to ask the right questions. One of the great examples I often hear is, yes, it can develop software. That doesn’t mean we don’t need developers; we actually need developers who can guide AI to produce something amazing. The elevation of the role is not in the coding, the actual typing. Yes, that’s important, but what’s more important now is the creativity, the critical thinking, the context that says, I need an application to do this, and if it does this, it’s going to do a really great job and have a big impact. So I could spend eight hours coding that, or I can spend one hour thinking about it and seven hours letting AI code, and then I can refine the code and make it better. I could spend more time on that creative, critical-thinking part, which I think is really the essence of the reason everyone gets into development. They want to build stuff, they want to create; they don’t want to type characters, they want to build. And so if this helps them build, they get to really explore that side of their passion much more.

Mike Vizard: One of the things I think people fail to consider is, sure, I can use generative AI to create an article, but that article is kind of going to come across as, shall we say, average, so everybody will be average. If I don’t spend time crafting and shaping it into something that’s excellent, I’m kind of wasting the opportunity. And the same thing with code or whatever other function it is you might be doing. So do you think people realize that generative AI isn’t going to do your work for you? It’s just going to kind of get you closer to home.

Bret Greenstein: Closer and faster. I met with a woman who teaches creative writing – I ran into her in the mall and we were talking about topics. I asked if she’d heard of ChatGPT, but she hadn’t thought much about it. So I explained to her what it did, and then I showed her on my phone. I said, give me a crazy example of a story, and I had it write that story for her. At first she thought, what does this mean for me? My job is teaching creative writing; how am I going to teach it if this is doing the writing? And I said, you came up with the idea, and you decided if you liked what it produced or not. And then you can change that prompt to make it better. The ability to use it as a tool to generate content is really the valuable skill. It was her mind that was valuable, not the mechanics of writing, the grammar or the spelling, but really that shaping of the concept, right? She came away really, really excited and is now using it in her classes.

Mike Vizard: Do we all need to become prompt engineers? Or will tools kind of emerge that will do the prompting for us?

Bret Greenstein: It’s a great question. Prompt engineering is an easy term to say, and everyone started talking about it since ChatGPT, but most people are just prompters, not prompt engineers. They just ask a question, create something, maybe tune it a little bit. Prompt engineering is more of a discipline of refining a prompt, adding a lot of specificity and context to produce a great output. Engineers measure the output, look at the input and adjust the prompting to generate that great output. A lot of that prompting is going to be buried in applications, and you’ll never see it. So if you were to integrate with a search site to look for real estate, you could create an amazing prompt to generate a real estate search listing. But instead, what you’re really going to say is, I’m Bob, I’m moving to this place, I have two kids and I really care about schools and travel to airports. The application is going to convert that, with some very advanced prompting, into a sophisticated prompt that generates a great list for you, one that puts it more in the context of you. Usually you embed a lot of the things you know about a domain into the prompt; the end user just knows what they want. They don’t know how to make the large language model produce the output that they need. So another company I was talking to produces online educational content, courseware for people. But what they really want to do is ask you, what kind of job do you want? Send me the job listing. What kind of background do you have? Send me your resume. What are your personal goals? And from that, create on-demand, live, custom content to train you for that job. It’s completely doable, but you could never prompt that yourself.
But they know enough about education, curriculum, resumes and job postings to form a very elaborate prompt they can build into their application. They just ask you a couple of questions, generate the prompt and then produce a really great output.
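The pattern Bret describes here, a sophisticated prompt buried inside an application while the user answers only a couple of plain questions, can be sketched in a few lines. This is a minimal illustration, not any real product’s code; the template wording, field names and the `build_prompt` helper are all hypothetical.

```python
# A domain-aware prompt template the application owns; the end user
# never sees it. The real-estate wording here is purely illustrative.
REAL_ESTATE_TEMPLATE = """You are an experienced real-estate search assistant.
The buyer is {name}, relocating to {city} with {num_kids} children.
Their stated priorities: {priorities}.
Rank candidate listings by school quality first, then commute time to the
nearest airport, and explain each recommendation in two sentences."""

def build_prompt(name: str, city: str, num_kids: int, priorities: list[str]) -> str:
    """Convert simple user answers into the elaborate prompt the
    application actually sends to the large language model."""
    return REAL_ESTATE_TEMPLATE.format(
        name=name,
        city=city,
        num_kids=num_kids,
        priorities=", ".join(priorities),
    )

# The user only says who they are and what they care about.
prompt = build_prompt("Bob", "Austin", 2, ["good schools", "close to an airport"])
print(prompt)
```

The user supplies a few facts; the domain expertise lives in the template, which is exactly why the end user never needs to learn prompt engineering.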

Mike Vizard: So how will AI evolve? I mean, prior to this, we all have seen machine learning algorithms to a certain degree, and maybe less so deep learning algorithms, previously known as neural nets. And now we have generative AI. Are there more types of AI coming? And do we have to figure out which use case goes with which type of AI?

Bret Greenstein: There’ll certainly be more types. I think the most interesting evolution of this is in two dimensions. One is that the training base is going to continue to grow, but there is an endpoint. It’s already been trained on a high volume of data available on the internet, and from private sources where they could acquire a very large volume of data. But there’s a finite amount of human-generated content on the internet, in the world, and AI generates content faster. So it’s going to continue to go wider on the types of content that it’s trained on. At the same time, though, you’re gonna see innovations like AutoGPT, which is the ability to create agents that use it. You can prompt a system, but more importantly, you can ask an agent to accomplish a goal, and it will generate multiple prompts, look in multiple places and do work to achieve that goal. Take that same real estate example I gave before. I could ask for exactly what I want, but instead I could say, watch for this every day, and when the price hits a certain point, let me know which ones are best for me. That’s an ongoing activity that an agent can perform using prompting and the large language models, but also taking some action, looking at the results, taking additional actions, adjusting. It’s a little bit advanced, but it’s certainly a part of the use cases and capabilities that are going to be much more prominent.
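The agent loop Bret outlines, pursue a goal over time, check results, act, then prompt a model only when there is something worth summarizing, can be sketched as below. Everything here is a stand-in: `llm` and `fetch_listings` are toy stubs, not a real model or listing service, and the price logic is invented for illustration.

```python
# A minimal agent sketch for the "watch listings until the price is
# right" example. The llm and fetch_listings callables are stubs.
def watch_listings(goal_price: float, llm, fetch_listings, max_days: int = 30):
    """Check listings each 'day'; when one drops to the target price,
    ask the model to summarize why it fits, then stop."""
    for day in range(max_days):
        for listing in fetch_listings(day):
            if listing["price"] <= goal_price:
                summary = llm(f"Explain briefly why this listing fits a budget "
                              f"of ${goal_price:,.0f}: {listing}")
                return {"day": day, "listing": listing, "summary": summary}
    return None  # goal not met within the watch window

# Stub dependencies so the sketch runs end to end: the price drops
# $10,000 per simulated day, and the "model" returns canned text.
result = watch_listings(
    goal_price=400_000,
    llm=lambda prompt: "Under budget with the requested features.",
    fetch_listings=lambda day: [{"price": 450_000 - day * 10_000, "beds": 3}],
)
print(result["day"], result["listing"]["price"])  # → 5 400000
```

The point of the loop is the division of labor: ordinary code handles watching and acting over time, and the language model is called only for the judgment step at the end.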

Mike Vizard: To your example, if I have an AI agent that is optimized for, let’s say, selling my house, and you have one that is optimized for buying my house, will they at some point just cancel each other out, and it’ll be kind of a zero-sum game? How do we handle these kinds of opposing missions for different AI bots, and how might that get resolved?

Bret Greenstein: I’m smiling, because I’ve experienced this already in customer situations, and also with a company I know that does a particular task in the legal profession. I’ll use an example in healthcare. If you’re a doctor who uses AI to generate a request for coverage for medicine, it’s going to generate a really good request. You don’t have to type the whole thing; you just say what you want, give some background, even point it at your patient files, and it’ll generate a great request. Now the insurance company’s gonna receive this perfectly formed request, well articulated, with good logic, hopefully to maximize the chance of that medication being approved. But the insurance company has a bot on their side, an AI on their side, reading the incoming emails, assessing them, making decisions on risk and costs and other medical knowledge to decide whether to grant coverage for that medicine or provide an alternative. So in that case, an AI may talk to an AI. And it does sound surreal, but it’s actually helping both ends of this equation. The doctor is no longer having to do the tedious work of writing a really great request; they can say what they want to have generated. The insurance company isn’t wasting time processing stuff; they’re actually making better policies and decisions on what should be covered or what shouldn’t be. And on both sides, you’re getting high-quality content. Think of everything as a workflow of human activity. At some point, you’re going to say, I want to fix this process and use AI for it, but what you really have to think about is what is upstream and downstream from you. In customer service, for example, there’s a company that’s working to automate customer service by leveraging a generative AI front end. You can ask questions and get your problem solved, and if it can’t be solved, a ticket gets opened.
And if they can automatically open a ticket, the people downstream will get a really well-formed ticket, with all the information they need to solve your problem. Once you fix one piece, you can go downstream or upstream from wherever it is with AI. And I understand your point, isn’t it a zero-sum game, but I think it’s actually raising the game. I didn’t expect that metaphor, but it works: it raises the game for all of it, because the quality of all the documentation is better, it’s more complete. People are spending time on consuming it or setting policy or making decisions, not on what is a really inefficient process of writing emails, reading emails, replying to emails, researching. Those tasks are non-value-add. But thinking, knowing what to ask for, knowing what policies to set, that’s high value-add.

Mike Vizard: They say the sign of intelligence is knowing what questions to ask, not necessarily the answers to everything. You guys partnered with Microsoft. So how do you think the ecosystem around AI is going to evolve? There are going to be lots of different people providing different services, and a lot of them will be invoked through APIs; I don’t think we all need to go build models. So when do I build, and when do I just invoke something through a cloud service?

Bret Greenstein: We are at a point where the majority of the work will be more on adapting models than on building them. There will always be people building, and certainly there’s an advantage to companies that have large volumes of data and huge clouds upon which to build and train large models. For the rest of us, there’s the ability to use those models by extending them, and that’s what most companies do. They’ll embed their documentation, their knowledge into an existing large language model, so that it can do something that is more uniquely what they do. To continue that earlier example about real estate: whatever that company is, they know a lot about real estate. They know about dollars per square foot, they know about views, all the words that matter, all the things that matter. And they can use that documentation. For example, which houses sold at a price different than what was asked, and what characteristics might have led to it. They can put all that knowledge into an existing large language model and create a really compelling experience for an end user. And the end user doesn’t have to know everything about real estate or large language models; they just have to know what they want. So there’s kind of a core of large language model development that will go on for a while, as it continues to get better and better until it’s good enough for a wide range of things. Then there are the people who are applying it with their own knowledge; those are generally companies or applications, people who provide a service. And then there are the people who consume and use it, and their experience will be one where they’re looking for the system that provides the best possible answers. And I think consumer expectations are gonna go up a lot.
In the coming years, you already see it in how frustrated people get when they’re asked to press one for customer service, press two for sales. It sounded fine when that was best in class. But you should be able to say, my car is broken, can you help me, and it should know that’s customer service. It’s obvious to you and I, but it wasn’t to a menu-driven system before, and it is to generative AI. And I think this idea of being able to say what you want in your own terms, in your own way, and having a generative AI system understand the intent of what you want, is kind of game-changing in customer experience. Everyone’s going to expect that. Even in education, there’s a lot of debate and discussion around, do students use ChatGPT to generate their papers? Is it okay that they did? Is it okay that teachers might use it to grade them? These are all really great questions, but it’s gonna look really silly to sit at a typewriter, typing things out and whiting out mistakes. It’s archaic, I know that already, okay, but my point is, the mechanics of writing is less interesting than the thinking behind the writing. And that part is going to continue; tools may just help us do it better and faster.
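The "embed their documentation into an existing model" approach Bret describes can be sketched as a retrieval step plus prompt assembly: the application finds its own relevant domain documents and sends them to the model along with the question. The documents below and the word-overlap scoring are toy stand-ins (a real system would use embedding search), and none of the names come from any actual product.

```python
# The company's own domain knowledge, a stand-in for real documentation.
DOMAIN_DOCS = [
    "Homes near top-rated schools sold 8% above asking in this region.",
    "Dollars per square foot average $210 in the metro area.",
    "Listings with airport commutes under 30 minutes move fastest.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive word overlap with the question (a toy
    substitute for embedding-based search) and keep the top k."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def grounded_prompt(question: str) -> str:
    """Assemble the prompt an application would actually send to the
    large language model: retrieved knowledge first, question last."""
    context = "\n".join(retrieve(question, DOMAIN_DOCS))
    return f"Using only this background:\n{context}\n\nAnswer: {question}"

print(grounded_prompt("How do schools affect the asking price?"))
```

The design choice is the one Bret points at: the model stays generic, and the company’s differentiation lives entirely in which documents it feeds in and how it frames the prompt.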

Mike Vizard: So do you think ultimately, right now, maybe the biggest issue we have is just a simple lack of imagination about what we can do with all this stuff?

Bret Greenstein: Yeah, it’s partly imagination, but it’s also education. I do find that a huge part of my day is spent helping people understand how it works, what it does and doesn’t do, and how it might apply to everyday things, whether it’s at home, in school or at work. And it’s kind of the fun part of my day, because it comes up so often. I think what’s interesting about that is people assume that generative AI understands us, because it appears to speak like we do. And that confuses people, and it limits what they think it can do. But when you recognize it as a large predictive model, trained on the patterns of the way people talk, you see it doesn’t just parrot back what it’s seen; it actually looks for patterns. So a pattern would be, how are you doing this morning, Mike? There are multiple answers you might give, but you’re not going to say purple. It’s obviously not the answer. The answer might be, I’m feeling great, doing well, energized, excited for the day. There’s a pattern. And so when you ask for something, it looks for something close to that in its model, and then, based on everything else said in the context, will find the best possible answer. When you recognize it as a predictive model, you start to think differently: what does it know, what was it trained on, how do I get it to predict an answer that’s close to what I want? That’s where some of the art of prompting comes in. And I think it’s also why people get disappointed when they first try it and don’t understand that it’s not a person. It doesn’t understand the words; it’s doing prediction based on what you gave it, the context of the world and what it was trained on. And it tries to give you the best possible answer based on the math and the way the model works. That’s what it’s doing for us.

Mike Vizard: All right, folks, you heard it here. It’s a brave new world, and there are going to be all kinds of interesting things happening that we never imagined, probably sooner than later. Bret, thanks for being on the show.

Bret Greenstein: Thank you so much, Mike.

Mike Vizard: And thank you all for watching this latest episode of the videocast. You can find this episode on our website along with others, and also the complete transcript. And with that, thanks again, and we’ll see you all next time.