Synopsis: In this Techstrong AI Leadership video, Michael Maxey, vice president of business development for ZEDEDA, explains how artificial intelligence (AI) will be brought to the network edge after the company picked up $72 million in additional funding.

Mike Vizard: Hello and welcome to the latest edition of the Techstrong AI video series. I’m your host, Mike Vizard. Today we’re with Michael Maxey, who’s VP of business development for ZEDEDA, and we’re talking about AI at the Edge because, well, it’s only a matter of time before it gets there. It’s just a question of whether models are going to be trained at the Edge, or whether maybe just inference engines run at the Edge. We’ll see as we find out. Mike, welcome to the show.

Michael Maxey: Thank you. Nice to be here. Nice to see you again.

Mike Vizard: So you guys just raised a significant amount of funding to drive the next wave of the company. What exactly do you think is going to be happening at the Edge and what should we expect? Because I think a lot of people think that we can’t train and inference at the same time at the Edge, and that there has to be this loop, or is that changing?

Michael Maxey: Well, it’s a good question, and we did just raise a bit of capital and we’re super excited about that. The company’s been around about seven years. We have a number of AI models running at the Edge today, and today they look very single purpose. They’re very curated by data scientists and they’re sort of treated like a newborn baby, or like a pet if we’re using the pets-versus-cattle mentality. But where we think it’s headed is towards cattle, where you stand them up and bring them down very quickly. You provide a level of services underneath that these AI models can take advantage of, and you make it easy to build those use cases.
So we see a lot of investment around tooling, around workflows, and around being able to run these different workloads on GPUs or commodity hardware at the Edge. And today, to answer your direct question, it’s mostly inferencing. There is some data collection that gets pushed to the cloud for training. We’re starting to see some training on the Edge with TinyML and some of these more machine learning-style models, as opposed to the large language models. And we’re starting to see patterns for particular industries where you’ll have curated data sets and models that do more learning at the Edge instead of inference. But historically, the business has been inference.

Mike Vizard: So what types of applications are we seeing at the Edge with AI? Because I mean, I understand the latency issue, but I have data that I’m both creating and collecting at the Edge, and I want to analyze it and process it at the Edge, but how do I apply AI to that if I’m training the model somewhere else? So how does that whole workflow get managed?

Michael Maxey: Yeah, well, it’s an emerging part of the industry, and what we’re seeing is that it gets managed similar to how it gets managed in the cloud. You have containers, you have virtual machines, you have Git-style workflows that are managing the model flow over time. And that part of the industry is starting to mature to where you’ll start to see complex workflows like we see in the cloud today. So if you look at the state of the art, that’s what’s happening. What’s happening on the ground is stuff that’s been implemented over the last couple of years, when some of that state of the art wasn’t there, and it is more of a custom or bespoke configuration. I think bespoke is a great word for what’s happening on the Edge today. But we see a lot of standardization coming, and we think the LAMP stack of the AI Edge will emerge this year. We’re going to do a lot of investment in that space to help drive that, and we think it will unlock a lot of opportunity at the Edge once it becomes more programmatic and simpler to do.

Mike Vizard: To your point, we have all these folks using MLOps to create the models, and then we’ve historically had things like DevOps to deploy applications. Are these things going to converge eventually, and what does that look like?

Michael Maxey: That’s a great question. I think there’ll be some convergence, following patterns we saw in the data center 20 years ago and in the cloud over the last 5 to 10 years. As the CI/CD pipelines of the Edge come together, it becomes less bespoke. Today that work falls on the DevOps engineers. You have data scientists creating and fine-tuning and using observability to improve the models, but they often say, “Okay, the model’s ready, figure out how to get it to the Edge.” And that, in our world, often lands on the operations folks, the DevOps folks or even OT. We do quite a bit in the industrial space, and OT is a leader in Edge AI, although they may not get credit for it because they’re viewed as legacy industries. Who implements that today is a bit bespoke, but we are seeing it come together into more manageable, predictable job titles and workflows to push this out of the cloud and into the Edge.

Mike Vizard: So will we see the training of the AI models at the Edge someday? And what will it take to achieve that? Because from my perception, GPUs are expensive and, well, they’re hot and they consume a lot of energy. So how do I do that at the Edge?

Michael Maxey: I think the answer is yes, you will see training there, but it’s not how you think about training in the cloud. I don’t think you’re going to train a large language model on the Edge. You’re not going to scrape the internet on an Edge device. But say you are in a particular industry, you’re an oil and gas provider, and you have the ability to bring that data set down to: okay, I’m seeing data coming off the sensors on my pumps, I’m seeing data coming off my methane sensors. How do I bring that together and predict what’s going to happen? How do I increase pressure to pull more oil out of the ground? These smaller problem sets become training at the Edge, because you don’t need the internet. What you need is your sensors, and maybe historical sensor data, or even manufactured data based on sensor data, to train that model. That can be done on a couple of GPUs at the Edge. And that’s different from a large language model like OpenAI’s, where a lot of power and water is going into those.

Mike Vizard: Personally, the excitement these days is about generative AI, but at the Edge we’re going to have lots of different kinds of AI models I would assume. Some of them are predictive, some might be generative, some might be others. What are you seeing people actually doing?

Michael Maxey: Today? A lot of vision. It started with questions like, is somebody wearing a hard hat, or looking at a manufacturing line and figuring out anomalies in your product. But they’re getting more advanced. They’re starting to use vision to ask questions, so they’re combining not only a vision model but a language model or an intent model. We see these multimodal models starting to pop up. They’re pretty popular in the cloud, but we’re starting to see them more on the Edge, where you’re combining maybe a vision model with a text model so you can interact with the data and figure out: Are customers happy? How long are they standing in line? Things like this. So that’s a convergence we see, sort of a multimodal approach, and we’re starting to think about services around that and how you make it easy for the enterprise.

Mike Vizard: So what’s the coolest thing you’ve seen at the Edge when it comes to AI? Whether you can name a customer or not, but when you look out there, I think a lot of people are scratching their heads about, well, what use cases make the most sense?

Michael Maxey: Yeah, those are maybe two different questions. The use cases that make the most sense, I think, bring instant value to customers. Where they’re investing is in things like predictive maintenance, where they can bring together a system and solve for that first outcome, and it’s valuable to the company. But what follows is where they get creative, right? Once they have this Edge infrastructure in place, and they can update it through orchestration and start to pull data out of models or out of the flows and do unique things, it’s these second projects where creativity starts and where I think it gets interesting.
So where the market is today is kind of these single-use models and use cases. There are some advanced examples, but in general, we think that’s where the market is. What’s interesting is when you go to the trade shows and look at what people are building. There’s a company called StoreGenius that’s built, in effect, an interactive kiosk for grocery stores that can detect your mood and the tone of your voice. It’s a combination of 30 or 40 different models coming together to help you find a beer that goes with the cheese you’ve selected.
And I think this is where, looking at that demonstration, you’re like, “Oh, okay, she’s recommending cheese. How hard could that be?” But when you look behind the scenes at all the effort and coordination happening across the models, this is where it gets super interesting. And I think the combination of being able to do a lot on that Edge piece and then making it really transparent and easy for the end user, so it’s consumable, is sort of that magic sweet spot. You shouldn’t know it’s AI when you’re talking to it. And today we do.

Mike Vizard: Who’s in charge of all this stuff? Because there’s all these OT folks and IT folks and data scientists and cloud architects. So who’s leading the charge?

Michael Maxey: The charge varies by company. Where we’ve found success, and where we think it comes together in a lot of cases, is the enterprise architect. Typically, there’s a very senior technical person who’s defining not only the AI part of that stack, but what does security need to look like? What does networking need to look like? All the components that come together to make that Edge application run. That’s a space where we’ve had a lot of luck and where we think a lot of the decision making happens. Now, they don’t spend the dollars, but they certainly decide, here’s the right answer, or the two answers we should go experiment with.
And then the decision making, to your point, depends on the company. We see it coming out of OT, where they’re trying to go more standards-based, or use open source, or enable a new use case that a particular business unit has thought up, and investment will be really heavy around that. That can happen, and it can also happen out of IT. You start to see Edge titles in IT as these become more formal in companies and IT starts to become a central platform authority. So it’s still a bit all over the place, but we see a lot of it coming together around that architect community, and that’s an area where we like to focus.

Mike Vizard: How much of this might evolve into something that feels more like a managed service? Because ultimately a lot of these Edge deployments are in places I don’t really get to easily. So maybe somebody else can do that.

Michael Maxey: I think a fair amount can. Sitting where I sit, and doing what we do at ZEDEDA with Edge orchestration, the answer is that we try to make it so you don’t need managed services, right? You don’t need to go to these devices to update them. Rolling out a model shouldn’t require a truck roll. It shouldn’t require a managed service person to come with a USB key and update these models. So we’ve really tried to build a platform where you don’t need managed services and you don’t need IT on site.
Now, that said, I think there’ll always be a business for someone who takes the raw infrastructure and wraps business logic around it. Every company is going to have nuances in its product, the way it does things and the technology it uses that require that last 15 to 20% of “this is how my business does it,” versus, “Hey, here’s a generic model for visual inspection,” for example. So I think that’s an area where there’ll always be services, but our goal is to really remove the need for humans in the infrastructure layer and in the AI layer.

Mike Vizard: All right, what’s that one thing you see people underestimating when they start thinking about AI at the Edge?

Michael Maxey: The quality of their data. I often ask: Okay, so how’s your data collection? Are you tagging it? Do you know where it’s coming from? Have you kept it historically? There’s good technology coming into this space, and there are great companies in this space, but if you’re considering going down the AI journey, start with your data, because you’re not going to get value without good, clean data. So start collecting and tagging and labeling, and start today, so that when AI comes, you have a good base to work off of.

Mike Vizard: All right, folks, you heard it here. AI is awesome and new, but a funny thing happened along the way: It’s the same old garbage in, garbage out. Hey Mike, thanks for being on the show.

Michael Maxey: My pleasure. Thank you.

Mike Vizard: Thank you all for watching the latest episode of the Techstrong AI series. You’ll find this episode and others on our website. We invite you to check them all out. Until then, we’ll see you next time.