Mike Vizard: Hello and welcome to the latest edition of the Techstrong.ai video series. I’m your host, Mike Vizard. Today we’re with Devavrat Shah, who’s the CEO of Ikigai Labs. I won’t spell that for you. You’re going to have to look that one up. But we’re going to be talking about Large Graphical Models, which are different from Large Language Models, and we’re going to have a lot of different models out there, so we might as well get used to the idea now. Devavrat, welcome to the show.
Devavrat Shah: Thank you for having me.
Mike Vizard: If you don’t mind, explain exactly what a Large Graphical Model is, because not everybody seems to know, and put some context around this if you would, please?
Devavrat Shah: Absolutely. First of all, again, thank you for having me. So, Large Graphical Models, or graphical models, are a way to look at data through a probabilistic lens. They have been around for a very long time. We have developed Large Graphical Models as a particularly computationally efficient way to learn structure from any tabular data in a domain-agnostic manner. What do I mean by that in simple terms? In simple terms, if you give me a spreadsheet, you don’t tell me anything about the rows, anything about the columns, the spreadsheet has lots of missing values, some of them are errors. No worries. We’ll learn the structure within it on its own. We don’t need the world’s data out there at internet scale. We just need your spreadsheet, and from that, we’ll learn everything about it.
What would learning everything about it mean? Well, if you have missing values, I’ll fill them in. If you have values that are anomalous, I’ll tell you how anomalous they are. If you want to know which rows are similar, which columns are similar, I can tell you that. If you want to generate new rows that look like existing rows of your spreadsheet, we can do that. Or I can generate new columns that look like that. More generally, if you want to generate a new spreadsheet with different dimensions that looks like your spreadsheet, I can do that.
Now, if you have given me multiple spreadsheets and you want to learn the relationship between them so that you can stitch information across them, you can do that. And there are a ton of other things you can do on top of this as a core model. So, putting it another way, in modern parlance, you can think of a Large Graphical Model as generative AI for tabular data.
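To make the imputation idea concrete, here is a minimal, hypothetical sketch, not Ikigai's actual model, of learning structure from a single small table and filling in its missing cells. Scikit-learn's IterativeImputer is used here only as a stand-in for the probabilistic model the interview describes:

```python
# A toy illustration (not Ikigai's implementation): fit a simple model to a
# single spreadsheet and use it to fill in the missing cells.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# A small "spreadsheet" with missing values and no metadata about rows/columns.
df = pd.DataFrame({
    "units_sold": [120, 95, np.nan, 210, 180],
    "price":      [9.99, 12.50, 11.00, np.nan, 8.75],
    "returns":    [3, np.nan, 2, 7, 5],
})

# Learn the joint structure across columns from this table alone,
# then impute each missing value from the others.
imputer = IterativeImputer(random_state=0)
filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(filled)
```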
Mike Vizard: How do I know that the outputs being generated are accurate? I think people are more willing to accept the outputs regarding a letter that they’re writing using an LLM. But spreadsheets are things I tend to run my business on, so how do I know I can trust and validate the data?
Devavrat Shah: Absolutely, great question. So, I think if you are a data scientist, you would say, “Okay, here is what I will do. I will take your existing spreadsheet, remove some of the data from it, ask the system to fill in the removed data, and compare the result with the part that was removed,” which is what one would call cross-validation. There are a ton of benchmarks that we have created around it.
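As an illustration of that cross-validation idea, again a toy sketch rather than Ikigai's benchmark code, you can hide a random subset of known cells, impute them, and score the result against the hidden truth:

```python
# Illustrative sketch of the validation idea described above: hide a random
# subset of known cells, impute them, and compare against the held-out truth.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
full = pd.DataFrame(rng.normal(size=(200, 4)), columns=list("ABCD"))
full["D"] = 0.5 * full["A"] - 0.3 * full["B"] + rng.normal(scale=0.1, size=200)

# Mask 10% of the cells to create a held-out set.
mask = rng.random(full.shape) < 0.10
masked = full.mask(mask)

# Impute the masked table and score only the hidden cells.
imputed = IterativeImputer(random_state=0).fit_transform(masked)
mae = np.abs(imputed[mask] - full.values[mask]).mean()
print(f"held-out imputation MAE: {mae:.3f}")
```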
For example, one of our novel innovations is taking traditional timestamped or time series data, converting it into tabular format, and then applying our generative AI for tabular data. It allows the graphical model to learn structure from it: forecast, impute, detect change points. And what we’ve found through a large number of scientific benchmarks is that we are the best time series analytics company out there. So that’s one way I would try to convince you, as a thoughtful skeptic, that we are doing a good job at making predictions.
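The time-series-to-table conversion he mentions can be sketched roughly as follows. This is an assumed illustration, not Ikigai's method: each row holds lagged values, so a one-step forecast becomes the imputation of a missing cell in the newest row.

```python
# Recast a time series as a table: each row holds lagged values, so the
# forecast is just the imputation of the missing "t" cell in the newest row.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

series = pd.Series(np.sin(np.linspace(0, 20, 200)))  # toy time series

# Lag table with columns t-3, t-2, t-1, t.
table = pd.DataFrame({f"t-{k}": series.shift(k) for k in (3, 2, 1)})
table["t"] = series
table = table.dropna()

# Newest row: the lags are known, the next value "t" is missing.
new_row = {"t-3": series.iloc[-3], "t-2": series.iloc[-2],
           "t-1": series.iloc[-1], "t": np.nan}
table = pd.concat([table, pd.DataFrame([new_row])], ignore_index=True)

# Imputing the table fills in the forecast.
forecast = IterativeImputer(random_state=0).fit_transform(table)[-1, -1]
print(f"one-step forecast: {forecast:.3f}")
```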
Mike Vizard: Will this solve a thorny issue that’s been plaguing us forever? And it goes something like this. Business leaders are skeptical of the analytics presented by the IT organization because, well, they know where the data that was used to create the analytics comes from, and it’s a little shaky because they’re the ones that entered it or collected it, and they have reasons to be suspicious, shall we say. If I have this capability, can I assume, therefore, that the data will be more reliable and that the outputs and the analytics are things I can trust more?
Devavrat Shah: It’s a fantastic question. So I think there are two reasons, as you pointed out, why data or analytics is untrustworthy, right? One is that data might have lots of missing information. It’s not complete. And when data is not complete, analytics typically looks at only the partial data, and then maybe throws out some of that, and then does things. Second is that, at times, data is, let’s call it, biased in the way it’s collected. That is what I would call a problem of causal inference, and we’ll talk about that in a second with an example.
The short answer to your question is that because we try to build a holistic probabilistic model and then use that to fill in data, clean data, and do the right causal correction of the data, we believe that you should be able to trust the outcomes based on it more than others. So even if you’re going to do the same traditional analytics… Let’s suppose you take the given data, apply our technology on top of it to, let’s call it, prepare your data better, and then do traditional analytics, I would say it would be better than just doing it on the existing data.
Mike Vizard: In my experience, the bigger the spreadsheet, the more biased it is. So how do I know that the output here is going to be something that has not been unduly influenced?
Devavrat Shah: So let’s take a simple example of bias. Okay, so there was an interesting data set collected at the dawn of e-commerce. Because e-commerce came around, unlike physical commerce, you could now change the price of a product at will. What people started doing is changing the prices of products over time. And then, for a given product, when you changed the price and looked at the demand, effectively the data points started looking as if, when you increase the price, demand increases. That’s bizarre, right? Because what you would expect is that if you increase the price, demand should decrease. What was happening behind the scenes? Something interesting was happening through a bias that was created in the data.
At the dawn of e-commerce, people were typically shopping online more on the weekend than on weekdays, understandably so. Because of that, since e-commerce companies knew there was a lot more demand over the weekend compared to weekdays, they kept prices artificially low on weekdays and artificially high on weekends. And so there was this correlation that gave this false causation narrative saying that if you increase the price, your demand increases.
So how do you correct for this? The way you correct for this is that when you learn a joint probability distribution over this data, you’ll find this hidden phenomenon, a confounding variable, that explains the difference. And that’s exactly what a probabilistic model does, and that’s how it tries to correct for it in this specific context.
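A small simulation makes the weekend confounder concrete. These are synthetic numbers, not the original e-commerce data set: a naive regression of demand on price shows a positive coefficient, while adjusting for the weekend variable recovers the true negative effect.

```python
# Illustrative simulation of the weekend confounder: naively regressing demand
# on price suggests a positive effect; conditioning on the confounder does not.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
weekend = rng.random(n) < 2 / 7                                 # hidden confounder
price = 10 + 4 * weekend + rng.normal(0, 0.5, n)                # higher weekend prices
demand = 100 - 2 * price + 30 * weekend + rng.normal(0, 3, n)   # true price effect: -2

def ols(features, y):
    # Ordinary least squares with an intercept column.
    X = np.column_stack([np.ones(len(y))] + list(features))
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols([price], demand)
adjusted = ols([price, weekend.astype(float)], demand)
print(f"naive price coefficient:    {naive[1]:+.2f}")    # misleadingly positive
print(f"adjusted price coefficient: {adjusted[1]:+.2f}")  # close to -2
```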
Mike Vizard: Does that get us to value faster, in terms of mean time-to-value? And I bring up the question with all due respect to data scientists, but we’ve seen many a project where they show up, they run the data, they do all the analysis, they lock themselves in a room for six months, and they come out and tell everybody something they already know. So how do we drive this into something where people will go, “Wow! That’s an amazing insight”?
Devavrat Shah: I think… first of all, data science, AI, machine learning, statistics… they’re not magic. They’re just trying to bring out facts that are present in the data that you typically may not know, because you’re not paying attention to them, or can’t pay attention to them, simply as a human individual. So with that as a baseline, now if you want to get to the value in a specific context…
You correctly pointed out that the traditional way of bringing in data science or AI or machine learning is just a broken ecosystem, because it takes six to twelve months, you need a team of data scientists who are not the domain experts, and all that. And that’s why these things happen: “Oh, you came back to me after spending a lot of my money to tell me what I already know.”
And this is where what you want is constant engagement with the stakeholders. At Ikigai, that’s why we built an end-to-end AI app platform where individuals, business individuals, not just data scientists, can actually participate directly in the process of building the solution. And to give them the ability to do everything with complicated tabular data, we built the right building blocks that enable them to put things together in a low-code, no-code manner. So either you can be the person doing it yourself, or you can work very closely with, let’s call it, a savvy analyst, not a data scientist, and work together to bring it to value. And you can bring it to value in two to three weeks rather than six months.
Mike Vizard: How hard is it to set all this up? Today, people look at LLMs, which at least are something they’re familiar with, and it’s a significant investment. So is this easier or simpler? Is it something I’m going to invoke through a cloud service? How do I get from where I am today to where you are?
Devavrat Shah: Great question. So we have tried to make it as easy as possible for anybody to get access. For example, we have set up an academy with the goal of, first, educating people broadly to understand what AI can and cannot do, and second, very quickly helping them understand that they can build the solution, to give them enough confidence. So there’s academy.ikigailabs.io. If anybody goes there, they will have access to a simple step-by-step process where, within two hours, they will go from nothing to building a fully functional embedded AI software app. The purpose of that app would be, “Hey, I’m going to buy this house in Boston. What do you think is the price for this?” and it’ll tell you the answer based on real-time data. And then, if you say, “Well, no, no, this is wrong,” you can actually correct it, and the system will learn from it. So really, you can build that not as an expert, but just as somebody who’s willing to spend time. That has been our north star, and that’s what we have done.
Mike Vizard: How accessible will all this become? Today, if I want to ask any kind of complex question, I typically form my questions, I send them over to a business analyst who shapes them into something that looks like a SQL query, and maybe in a day, maybe in an hour, maybe in a week, somebody comes back with an answer, and half the time either I’ve forgotten what the question was, or the question I asked is no longer relevant. So can this be much more interactive?
Devavrat Shah: Absolutely. That has been one of the driving forces for us. That is, if you, as a business analyst, want to answer these questions yourself, you know where the data is, and you have the right, let’s call it, authentication permissions, then you don’t need to go anywhere else. You can do it yourself with Ikigai by just dragging and dropping pieces. You don’t need to know SQL. You just need to know what question you’re asking.
Mike Vizard: So where are we in terms of the maturity of this technology? You said it’s been around for a while, but I don’t see it widely employed. So where do we go next?
Devavrat Shah: Primarily, the technology has been developed by me and my group at MIT over the past two decades, and we are making it commercially available through Ikigai Labs. We’ve been building the platform commercially for the past four-plus years. Now it’s being utilized by a variety of enterprises, and 20,000-plus individual learners are actually on our platform using it every day. So really, it is ready for prime time right now, and we are going to bring it to the world.
Mike Vizard: So ultimately, are we going to wind up in a world where there are a lot of these Large Graphical Models deployed alongside LLMs and who knows what else we might be using out there… a diverse range of things. Is that the world we’re heading toward?
Devavrat Shah: I think the way I would think of it, for example, let’s start with the question of how Large Graphical Models compare to and fit with Large Language Models, right? In my mind, Large Language Models are beautiful ways to design a modern interface for any software app. For example, if you use any video recording product right now, it’s got a ton of buttons, right? Mute me, start my video, stop my video, et cetera, et cetera. What if there was one simple interface where you could just say, “Mute me,” or do this or do that? That’s the type of interface a Large Language Model can enable. But then Large Language Models are like these amazingly shiny robots that can talk to you and can understand you. Yet if you ask them, “Hey, give me water,” and they don’t have access to a tap to fill the glass, they can’t give it to you.
Large Graphical Models are effectively the way to deal with the tabular data that’s there for every enterprise. That’s where most of the enterprise data is sitting, and to make sense of it, you need them. So in a sense, the way we advocate building the next generation of embedded AI software apps is to say, “Well, use Large Language Models as beautiful interfaces, use Large Graphical Models as the way to extract information from your tabular data, and let them work together.”
Mike Vizard: So if that is the case, then let me ask you the one question that everybody’s probably got on their mind. Does that mean that LGMs will see fewer hallucinations, or otherwise being flat-out wrong, than we might see otherwise?
Devavrat Shah: We believe that is going to be the case. And the reason is very simple. Hallucination in Large Language Models and things like that happens because they’re trained on the entire world’s data. Okay? So in a given context, they may be telling you something that is out of context. With graphical models, we focus on learning from your data only. And so it’s going to tell you what your data has, not something that’s out of context.
Mike Vizard: All right, folks. Well, you heard it here. We’re going to have a lot of large models, they’re going to be fit for purpose, and we’re going to figure out how to wield them accordingly, and that may take a little bit of skill. But the world is definitely not one model for all things. Hey, Devavrat, thanks for being on the show.
Devavrat Shah: Thank you for having me.
Mike Vizard: Thank you all for watching the latest episode of Techstrong.ai. You can find this episode and all our others on our website. We invite you to check them all out. Until then, we’ll see you next time.