Synopsis: In this Techstrong AI Leadership interview, Gabriela Koren, chief revenue officer of Dataloop AI, explains how marketplaces for artificial intelligence (AI) tools and platforms will bring together data scientists, data engineers, application developers, DevOps teams and cybersecurity professionals.

Mike Vizard: Hello and welcome to the latest edition of the Techstrong AI Leadership series. I’m your host, Mike Vizard. Today, we’re with Gabby Koren, Chief Revenue Officer for Dataloop, and we’re talking about a marketplace that they’ve created where developers and data scientists and anybody else who’s interested can come and find all the tools and platforms they might need to build AI models. Gabby, welcome to the show.

Gabriela Koren: Thank you. Thanks, Mike. Thank you so much for having me.

Mike Vizard: There’s no shortage of marketplaces from everybody from AWS to Google and Apple and whatever you can think of, but what prompted you folks to create this marketplace, and what’s unique about the way data science and developer teams work these days?

Gabriela Koren: That’s a great question. So what prompted us is that we understand that organizations, enterprises of all sizes, really want to capitalize on and leverage AI. They want AI to bring efficiency to their organizations, different use cases, different industries, everybody wants to take part in this. What we found out is that the roadblocks are, A, it’s really hard to cooperate because you have so many stakeholders involved in the process, and this is a true relay race. So, between the data and the models and the production and the verification and the QA, everybody works in a silo.
So what we wanted to do is take our existing platform that is so powerful and so robust and make it available to the different stakeholders. This is how the idea of making this a marketplace was born. So this is a platform to which you can bring any type of data, any type of model, any type of use case, and have everybody working the cycle from thought to production under one roof. What makes us so unique is that this is really geared towards AI.

Mike Vizard: Do you care whether the tools are open source or commercial or not, or is it just more about fostering the collaboration?

Gabriela Koren: That’s a great question. So, what we see more and more in terms of open source is that many models are open source. You can bring any model, it can be an open source model, it can be a self-developed model, which is sometimes very unique because it’s developed by the organization itself. So any model can be brought into the platform, as well as any application, or what we call an element.
Also, what is important to mention is that this is meant for organizations that want to keep their data safe. You want your data to be uploaded, managed and touched only by people who are allowed to do so. The moment we take this marketplace but keep it within the environment, it becomes very appealing for enterprises that want to keep their data safe and secure.

Mike Vizard: You mentioned the relay race that exists, and members of that relay race include developers and data scientists and data engineers and traditional database people and all kinds of IT folks. Are you starting to see anything that feels like a set of best practices for bringing these various types of folks together, each of which has its own rather unique culture?

Gabriela Koren: Yes. So what we’re seeing, and I think this is the biggest change that happened, and sometimes change takes a lot of time, but now we’re seeing changes that shorten the cycle itself, is that in the traditional way, engineers were just coders, data scientists were building models, and each one had its own job. Now you’re actually required to be much more flexible.
So we see that data scientists need to understand data and models, and then MLOps teams need to understand the data and models. So the jobs are changing and adapting themselves to the very fast pace at which everything is moving. This world in which you do your thing and then pass it along to someone else, those days are over. The overlap between the different stakeholders is becoming much more noticeable in this race.

Mike Vizard: As we start building out these AI models, I feel there’s this notion that we’re going to spend six months building the AI model, deploy it and then move on to the next one. It feels, though, like there’s a workflow around this thing where the model itself starts to decline, shall we say, over a period of months as new data starts to emerge, and I can’t just patch the model, I’ve got to kind of go back and train another one and replace it. So, are we appreciating this extended workflow between when the model gets trained and the inference engine gets deployed?

Gabriela Koren: That’s an awesome observation, which is so true. The days when you had one model and you worked with that model and kept training it, those days are also over. Today, the engineers and the data scientists want to say, “Hey, I have four models that I want to try,” and different versions of the model that you want to try, and you want to compare which one is performing the best. Now, the data, of course, changes, and the data is also a variable today, different types of data. So I’m sure that one of the questions or comments that’s going to come up today is about multimodality. So different sources of data are coming along.
This is another variable. So the flexibility to use different models, change them, change the versions, test them again and see their behavior based on the data that you have is a requirement. So is having a platform that allows you to compare versions, bring models in easily, and then see if they fit your use case. If not, you just move on to the next one, but there might be another use case in your organization that can capitalize on that particular model. This is another benefit of using a platform like Dataloop.

Mike Vizard: How easy is it to rip and replace a model? And I’m asking the question because the pace of innovation here is pretty fast and I feel like I wake up every morning and there’s yet another new LLM that does something better and faster than the one before. Do I ignore that one because I’m working on this other one or can I switch over to a new one, and how painful is that?

Gabriela Koren: That’s a great question. So changing a model, depending on the use case, might be as simple as, “Hey, let’s try this new model. Let’s see how this version of the model performs,” or it might mean completely ripping and replacing a model. What we are seeing from our experience, especially with enterprises, and we work with agriculture, automotive, retail and defense as well, is that they usually don’t change models in an extreme way. They stay with the same type of model. They’re testing different versions of the same model. So there are some use cases in which ripping and replacing a model is a big deal.
When it comes to LLMs, you see it today, every day there is a new model, and then it might be easier for some of these use cases that we see becoming more and more popular to just test a new model and replace it within a couple of days. For others, when it comes to computer vision, heavy data, videos, drone data and so on, it might be much more difficult to change your model, but you still need this model to perform in order to release the use case to production. So you cannot compromise on the quality of the data and the results.

Mike Vizard: These days it all feels like fun and games, but eventually, a bill shows up. Do people need to be smarter about what type of model they’re using in terms of the parameters it invokes as it relates to cost? And are we getting better at figuring out what all these tokens cost and how we’ve got to keep an eye on that bill? Because the cost of an individual token isn’t much, but the cost of the thousands of tokens that all those prompts represent is quite expensive.

Gabriela Koren: You’re absolutely right. So let me pivot. I’m not ignoring your question, I’m just going to pivot a little bit from it. I think that the biggest realization of the last 6, 9, 10 months is that data is king, and data-centric organizations are the ones that are moving much faster. So, the cost of bringing a model or building a model or changing a model, yes, as you said, the tokens have fluctuations that are important to manage very, very carefully.
But when you realize that the key is to have the right data, the right amount of data, you don’t necessarily need more data, you need the right data, with accuracy and quality, this is what really differentiates organizations in the pace at which they adopt these types of solutions and use cases, focusing less on the model itself and more on the data, because that’s the key.
The model, in a way, becomes more of a constant compared to the data. The data is the expensive piece to bring, to store, to manage, to qualify, to label. All these processes are much more focused on the data, and I think this is one of the reasons we see more open source and open models as a trend, and much more focus on the data side.

Mike Vizard: You, of course, are in charge of selling some of these services, so you’ve seen a lot of activities from customers. But let me ask you this, what’s kind of your insider pro tip for people who are looking at all of this and trying to figure out how to get to where they need to be and what are you seeing that other folks do well?

Gabriela Koren: So the folks that we see doing well are the ones that are disciplined. And I can say this about any area of new technology, or technologies that become buzzwords that everybody wants a piece of: organizations that are very disciplined, meaning they understand what the use cases are, how to measure success, who is in charge of ensuring that the cycle is being followed and measured properly, and that there is value at the end of the road. It’s not AI just for the sake of AI. AI needs to have a purpose, and the purpose is value to the organization. And the way that I present this within my organization and with my customers is that value can come in three ways: either we do a better job managing data, we do a better job shortening the time, or we make the organization more efficient in terms of utilization of resources.
If we cannot measure back to any of these three, we’re on the wrong path. And when we speak that language with our customers and with the different prospects, you suddenly see the light go on and they say, “Oh, okay, now we’re talking the same language.” In organizations that are not working that way, it’s a long process. You see people coming in and out and consultants coming in and out, and they don’t have a very clear path on what it means to be successful when implementing an AI project. So the ones that do it right, the ones that are disciplined, do it really, really well, and then they can keep growing and adding more use cases and bringing more AI into the organization. That means they do things faster, better, and with better utilization of data.

Mike Vizard: I would say last year there was a fair amount of irrational exuberance around all things AI, and this year feels more like we’re trying to operationalize it a little bit and bring some adult supervision to this. Do you think that people are narrowing their use cases because they are looking at the budget, the costs, and all these traditional issues, so they’re trying to pick the two or three things that they’re going to deliver an ROI on?

Gabriela Koren: So, from what we have seen, yes, there was a race; everybody wanted to be first and have it, and they sometimes poured millions into these projects, many of them just for the sake of it. Today, if you read Gartner and Forrester and others, they’re talking about anywhere between 83% and 97% of AI projects completely failing. What does it mean that they fail? It means that either the organization decides to drop them or they haven’t shown value. So we saw a stop for a second: “Okay, let’s think this over. Let’s think this through. Let’s see which areas make sense to have use cases. Let’s test them.” So what we are seeing in terms of trends is organizations that come to us and say, “Okay, we have one case that we want to test. We have one project that we want to test.”
Instead of that trend of, “Okay, let’s pour tens of millions of dollars into AI and then we’ll find a use case to build around it,” we’re seeing the other way around. And I think this is good news, because organizations like ours that are here for the long haul and want to be partners with their customers for a long time are the ones that are more successful than the ones that just come up with a shiny solution and then disappear because customers don’t continue with them. So I see that the market is being more conservative when it comes to AI. They’re thinking, and they’re measuring the results better. And this is good news for a platform like Dataloop.

Mike Vizard: I’m not sure you’ve heard the joke about the AI team that discovered that there was a sharp drop in revenue every seven days, and they came to the business and reported that to which point the CEO looked at them and said, “Yes, we’re closed on Sundays.” Do the business people need to work with the data science teams and make them a little more business savvy?

Gabriela Koren: Absolutely. Organizations that don’t do that are going to find themselves closed on Sundays. Look, if you go back in time, they would knock on the data scientist’s door and say, “I need a model and you have six months to build it for me.” Now it’s, “I need a model and you have two days.” And the data scientist will say, “Okay, in order to make this faster, I need to look at the data.” So now you see how they’re being forced to cooperate.
And this is good news, because we need data scientists and data engineers and developers and the executives to be speaking the same language and looking at the same data, the same use cases, the same models, so they can really make it work and build, what we call, a pipeline that has a beginning and, at the end, production. So they need to adapt, and if they don’t adapt, it will be very hard for them to continue bringing value to their own organization.

Mike Vizard: One of the issues we hear a lot about these days is there just aren’t enough GPUs to go around, so people are struggling. Do you think that’s making people a little more open to how they build and construct AI models with different types of infrastructure because, “I don’t necessarily have to use a GPU for everything, and I certainly don’t need GPUs on the inference side”? So are people getting wiser?

Gabriela Koren: I think that a big part of our conversation with customers is about computing and that fear that someone will make a mistake and suddenly my GPU or CPU power will need to triple and grow exponentially, and I don’t have the budget for that. We are having these conversations every day. So our suggestion to these customers, and the way that we work in partnership with them, is, again, let’s go back to the data. How much do you really need? Let’s use automation to review the data, pick the data and verify that you have the right quality, instead of just bringing more horsepower. Let’s use this wisely. And this is a message that truly resonates with customers.
On the flip side, when they do need more computing power, they want to be alerted. They want to know the usage on a daily basis, and they want to monitor the processes to say, “Hey, no, sorry, Mike. You cannot use as many machines as you want just because you want to try new things or to test.” There needs to be a reason, because it’s not an endless resource, and organizations have been hit big time by that. So the way that we built our pricing also makes it an incentive for us to be much more efficient with the way our customers use our platform.

Mike Vizard: All right, folks, you heard it here. If you wanted to be smarter about how to go shopping, you always went to the marketplace. It’s where the other buyers were hanging out. There were plenty of sellers. You exchanged information, you chit-chatted back and forth. It’s no different with AI. Go hang out in a marketplace. Hey, Gabby, thanks for being on the show.

Gabriela Koren: Thank you so much. Thanks, Mike. Thank you for having me.

Mike Vizard: And thank you all for watching the latest episode of the Techstrong AI Leadership series. You can find this episode and others on our website. Until then, we’ll see you next time.