Synopsis: In this AI Leadership Insights video interview, Mike Vizard speaks with David McDaniel, the field CTO and chief architect for 66degrees, about a new alliance with Google and the reasoning behind it.

Mike Vizard: Hello and welcome to the latest edition of the Techstrong.ai video series. I’m your host, Mike Vizard. Today, we’re talking with David McDaniel, who is field CTO and chief architect for 66degrees. They have a new alliance with Google. They’re building some large language models that drive generative AI. We’re going to talk about what’s the reasoning behind this alliance and what we can expect. Welcome to the show, sir.

David McDaniel: Thank you very much. Thanks for having me.

Mike Vizard: What is the primary reason that you guys have signed this alliance? I know you’ve been a longtime partner with Google, but there are a lot of choices when it comes to platforms for large language models. So, why Google, and what are you guys working on?

David McDaniel: Well, firstly, we’re a Google-only shop. So, we really don’t work with any other clouds. We focus entirely on Google’s product set. We believe that they have a great position in the AI marketplace. They are the ones that created the transformer architecture that ended up driving this entire LLM revolution that’s going on now. So, we think they have the best tooling and services to really make a difference in the world.

Mike Vizard: Are you building LLMs on behalf of clients, or are you going to go build LLMs that you are going to use to drive different processes? And what exactly is the strategy?

David McDaniel: Both. Primarily, we are a services company that’s partnered with Google. So, we are building services, tooling and projects around LLMs. We’re not actually building any LLMs ourselves, but we are utilizing Google’s existing LLMs and integrating them into suites of tools that make a solution for our customers. We’re also doing that internally for some of our own efforts.

Mike Vizard: What exactly are your customers telling you they want to do? Because part of it seems to be this notion of grounding: “I’m going to take my data and apply it to an LLM to create some sort of modification that applies to my business use case.” What’s the process for doing that? What do people need to be thinking about?

David McDaniel: Well, generally right now, we see customers falling into two camps. One is, “Hey, we hear about these LLMs. We want to use them. Tell us how we can take advantage of them.” And the other camp is, “We have a strong data foundation already,” or, “We know we need one, and we have data we want to integrate into an LLM and use it to innovate faster, operate more efficiently,” and those types of things. So, a lot of what we’re doing is helping our customers understand the best approach and the best tooling, and then helping them actually implement those services.

Mike Vizard: Are there security implications that go with that? How do I know what will become of my data once I expose it to an LLM?

David McDaniel: That’s a great question, and there are security implications. That’s really the big difference between something like ChatGPT or Bard and the enterprise-class LLMs and platforms that Google is providing. Bard is a Google product, but it, along with ChatGPT, is really a consumer-grade product where you don’t have a lot of the control that enterprises demand. So, you don’t get to restrict data leakage or ingestion into the model. Whereas with Google’s enterprise tooling, you do. You get assurances that your data stays within your environment.

Mike Vizard: As we think things through right now, a lot of organizations have invested in data science projects over the years with mixed results. And some of them are a little bit, shall we say, chagrined over those efforts, because the truth of the matter is that a lot of the AI models that were created never made it into production, for any one of a hundred different reasons. Is generative AI going to be different, or are we going to see something that is more accessible and easier to succeed with?

David McDaniel: I think as the entire data science and AI/ML environment grows up and matures, we’ll see a higher rate of models getting put into production. I’ve read statistics that between 90 and 95% of models never make it into a production environment, like you said, for a lot of reasons. But with guidance from 66degrees and our experts that really know what they’re doing, I think we can guide our customers into successful production deployments.

Mike Vizard: I think one of the issues that people have encountered is that the folks who work on the data science team are not especially close to the business. With generative AI, is it easier to bring the folks who know the business and the data into the project, and might that be something that makes this whole experience different?

David McDaniel: I believe it is, because it’s a little easier to relate to the results of an LLM: You can show them to business users, and they don’t need to be a mathematician or a data science expert to understand what those results are. They can read them for themselves. So, we think that with the proper guidance, we’ll be able to show very direct results.

Mike Vizard: If that’s the case, will it also become easier to fine-tune these models over time? Because I think that’s also been one of the challenges we’ve had: If we built something on a traditional machine learning model, we had to bring it back, retrain it and then redeploy it. Is this going to be a little more adaptable?

David McDaniel: I think there are more options to customize, if you will, large language models. There’s fine-tuning, there’s prompt engineering, and there’s pairing Google’s large language models with things like Matching Engine, which helps with enterprise search and other known-data use cases. So, I think it will get easier to do those.
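
To make that pairing concrete, here is a minimal sketch of the retrieval pattern McDaniel describes: embed the question, pull the most relevant document chunks from a vector index (the role Matching Engine plays on Google Cloud at scale), and hand the model a prompt grounded in that context. The Chunk type and the helper functions below are toy stand-ins for illustration, not Google’s actual APIs.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    doc_id: str  # which source document the text came from
    text: str

def embed(text: str) -> set[str]:
    # Toy "embedding": a bag of lowercase words. A real system would call
    # an embedding model and store dense vectors in the index.
    return set(text.lower().split())

def vector_search(index: list[Chunk], query: str, top_k: int = 3) -> list[Chunk]:
    # Toy similarity: word overlap. In production, something like Matching
    # Engine does approximate nearest-neighbor search over real embeddings.
    q = embed(query)
    ranked = sorted(index, key=lambda c: len(q & embed(c.text)), reverse=True)
    return ranked[:top_k]

def grounded_prompt(index: list[Chunk], question: str) -> str:
    # Retrieve relevant chunks, then constrain the model to that context.
    chunks = vector_search(index, question)
    context = "\n".join(f"[{c.doc_id}] {c.text}" for c in chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```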

Mike Vizard: What do you see people doing in this whole space right now that maybe they should reconsider? What’s your best advice to folks? It’s still early days, but what should people be aligning themselves around for best practices?

David McDaniel: With anything that receives a lot of hype the way LLMs and ChatGPT have, I think you need to really analyze what your true requirements are and match those to the capabilities of the service. We’ve talked to a few of our customers that have already integrated ChatGPT and may not fully understand all the consequences of doing so, because they are basically creating data leakage for themselves. So, I think that’s a big consideration.

Mike Vizard: What do you think might be the way to integrate this within our existing processes for building applications? We seem to have a challenge integrating DevOps and MLOps. And generative AI will be an extension of MLOps. So, how do we bring all this stuff together in a way that’s cohesive? Because right now, it feels like we’re having a Venus-versus-Mars conversation.

David McDaniel: I think, like everything else, MLOps is also maturing. And with our expertise in DevOps, MLOps and AI and data science in general, we see that having the data science foundation is a key piece, because, like anything else, if you put garbage data in, you’re going to get garbage data out. So, once you have your data foundations and all your data pipelines in place, I think it makes it easier to then layer on generative AI capabilities. And to me and to us, it really is a layering, not a replacement, and hopefully not something brand new. If somebody doesn’t have a good, clean data science foundation, then it’s going to be a bigger lift, of course.

Mike Vizard: We also hear about bias as a major issue. And it’s not clear to me that people understand the data science principles needed to avoid bias. So, a lot of these models could wind up being more trouble than they’re worth. What’s your advice to folks in terms of thinking through the whole bias side of the equation?

David McDaniel: Well, it’s a complex one for sure, and this is where we end up doing a lot of education. Having that clean data science foundation and a strong data-vetting process is incredibly important. We’re doing a project for ourselves where we’re looking at internal documents and fine-tuning and augmenting an LLM with our documents to help us generate new iterations of those documents. And understanding the cleanliness of your data and its appropriateness for inclusion is very, very important. So, that’s where you have a human in the process: making sure your data is still correct. A lot of people do worry that these LLMs are really going to replace vast numbers of working people. I think it’s going to change things and make a lot of people’s jobs more efficient, but there is always going to be a need for humans in the process.
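
As one way to picture the vetting he describes, here is a small sketch of a review gate a document might pass before joining a fine-tuning or retrieval corpus. The specific checks and the reviewed_by field are assumptions made for illustration, not a description of 66degrees’ actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    reviewed_by: str | None = None  # set once a human signs off

def automated_checks(doc: Document) -> list[str]:
    # Illustrative screens only; a real pipeline might add PII detection,
    # deduplication, freshness checks and so on.
    problems = []
    if len(doc.text.split()) < 50:
        problems.append("too short to be useful")
    if "do not distribute" in doc.text.lower():
        problems.append("flagged as restricted")
    return problems

def admit_to_corpus(doc: Document, corpus: list[Document]) -> bool:
    # Two gates: automation catches the obvious issues, and a named human
    # reviewer stays in the loop to confirm the data is still correct.
    if automated_checks(doc) or doc.reviewed_by is None:
        return False
    corpus.append(doc)
    return True
```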

Mike Vizard: What’s your sense of where we are with regulations at the moment? Some folks are kind of sitting on their hands, going, “Well, maybe we should just wait for all these regulations to sort themselves out before we invest, because we don’t know for sure if what we’re building is going to be allowed.”

David McDaniel: Well, I think there should be some kind of guidelines or regulations around the use of AI, for sure. Unfortunately, I think the regulating bodies simply move far too slowly, and they’re always going to be playing catch-up. I think we’re going to see a bit of a Wild West, and we’re going to see some negative results from people using these LLMs for malicious or nefarious purposes. And that will heighten the need for regulation. But I just think the pace at which this is moving is far too fast for any regulations to really rein it in anytime soon.

Mike Vizard: As we sort that through, do you think there’s any application going forward that’s not going to have some sort of generative AI component to it? It seems to me it’s just going to be pervasive.

David McDaniel: It is going to be pervasive, but we have been working with a few of our customers in healthcare and legal environments, highly regulated environments where I don’t necessarily see LLMs and generative AI being applied across the board. You don’t want an AI to start recommending what drugs are used for a patient, at least not without being checked, because you don’t want it hallucinating, guessing or mixing and matching the wrong documents to come up with a recommendation. That’s one reason we are also strong believers in verification of your data. So, as we build these services around LLMs, we always make it so that you can refer back to the originating documents.
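
One way to preserve that link back to the originating documents (a sketch under our own assumptions, not a description of 66degrees’ system) is to make every generated answer carry its citations, reusing the Chunk objects from the earlier retrieval sketch so a clinician or a lawyer can check the source text.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    doc_id: str
    snippet: str  # enough of the source text to locate the original

@dataclass
class VerifiableAnswer:
    text: str
    citations: list[Citation]  # the sources travel with the answer

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM endpoint call; swap in your model here.
    return f"(model answer to: {prompt[:60]}...)"

def answer_with_sources(question: str, chunks) -> VerifiableAnswer:
    # chunks are retrieved passages (see the earlier sketch). The point is
    # that the answer always references its originating documents, so a
    # human can verify it against the source before acting on it.
    context = "\n".join(c.text for c in chunks)
    answer = call_llm(f"Context:\n{context}\n\nQuestion: {question}")
    cites = [Citation(c.doc_id, c.text[:120]) for c in chunks]
    return VerifiableAnswer(answer, cites)
```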

Mike Vizard: One of the things about ChatGPT is that it’s a general-purpose LLM. They hoovered up the world’s data and created this massive LLM that contains a lot of information that may or may not be true. Do you think, as we go forward, that we can build LLMs using less data that might wind up being more accurate?

David McDaniel: Absolutely. I think you’re going to start to see almost verticalized LLMs that are applied to very specific use cases within an industry: document generation like RFPs or RFIs, or contract generation. I think you’ll see things that are much more purpose-built. And that will help with some of the issues that LLMs have, like hallucination, because they will be trained on a much finer and smaller dataset, making them more efficient and less costly to use than something as generalized as ChatGPT.

Mike Vizard: All right. I love that word, hallucination, because when you and I say something that’s mistaken, we’re just flat out wrong. Machines are hallucinating, though.

David McDaniel: It is kind of crazy.

Mike Vizard: Okay. What is that one piece of advice … when you see other people working on these projects today, what do you look at and shake your head and go, “That’s maybe going to take us down the wrong path”? What’s that thing that you see folks doing that maybe they should think twice about?

David McDaniel: Trying to use LLMs as the hammer for all nails. They’re going to be a great solution for a wide set of problems, but not every problem. People right now are still in the very early days of learning where generative AI really applies. A lot of customers come to us and say, “Hey, we want to index all of our documents and be able to have an LLM do enterprise search for us.” And while that is a use case, it’s also been a use case for the last 20 years, and we already have solutions for it that don’t use LLMs. So, make sure the use case really does demand an LLM.

Mike Vizard: All right, folks. Well, you heard it here. We may be entering a period of irrational AI exuberance, as they say. Sir, thank you for being on the show.

David McDaniel: Thank you very much for having me.

Mike Vizard: All right, folks. You can find this episode and others on the Techstrong.ai site. We invite you to check them all out. And once again, thanks for spending time with us, and we’ll see you next time.