Amanda Razani: Hello, I’m Amanda Razani with Techstrong.ai, and I’m excited to be here today with Molham Aref. He is the chief executive officer of RelationalAI. How are you doing today?
Molham Aref: Hi, Amanda. Thanks for having me. I’m doing great.
Amanda Razani: Glad to have you on our show. So, we met a little while back at the Snowflake event. First, can you start off by sharing what services RelationalAI provides?
Molham Aref: Yeah, so we are the AI coprocessor for the Snowflake Data Cloud, using relational knowledge graphs. To unpack that a little bit: when we talk about AI, we're talking about a set of capabilities that includes, of course, language models and deep learning, but also things like simulation and prescriptive analytics, which involve solvers and optimizers that do linear programming and integer programming, as well as decision rule engines and graph analytics.
In all the years of helping folks build intelligent applications, you learn that you need to combine a variety of these techniques to produce something that can help drive important decisions at scale. That's what we mean by AI coprocessor, and of course we're doing it using Snowpark Container Services inside Snowflake, so we inherit the governance that Snowflake has. We provide all that capability in a cloud-native way: we separate storage from compute, and all the nice things that made Snowflake so effective and made it the choice as the data cloud accrue to us as well. And we're also relational, so at the paradigm level we're compatible with Snowflake. You don't have to change paradigms to support these types of AI workloads.
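[Editor's note: the idea of a "relational knowledge graph" mentioned above can be sketched in a few lines. This is an illustrative toy, not RelationalAI's actual engine or API: a graph stored relationally as (subject, predicate, object) triples and traversed with ordinary relational operations. All names here are hypothetical.]

```python
# A knowledge graph stored relationally as (subject, predicate, object) triples.
triples = [
    ("order_17", "placed_by", "customer_3"),
    ("customer_3", "lives_in", "Atlanta"),
    ("order_17", "contains", "sku_9"),
    ("sku_9", "supplied_by", "vendor_2"),
]

def query(pred, subj=None, obj=None):
    """Select triples matching a predicate, with optional subject/object filters."""
    return [
        (s, p, o) for (s, p, o) in triples
        if p == pred
        and (subj is None or s == subj)
        and (obj is None or o == obj)
    ]

# Two-hop traversal, expressed as relational selection and projection:
# which vendors supply the items contained in order_17?
items = [o for (_, _, o) in query("contains", subj="order_17")]
vendors = [o for item in items for (_, _, o) in query("supplied_by", subj=item)]
print(vendors)  # → ['vendor_2']
```

The point of the sketch is that graph traversal here is just joins over a relation, which is why this style of knowledge graph stays compatible with a relational platform like Snowflake.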
Amanda Razani: Excellent. I know there was some big news about your AI coprocessor. So what impact will this have on the enterprise?
Molham Aref: Well, the alternative for people who want to build intelligence into applications is to acquire and implement point solutions that live outside of Snowflake. If you think about a typical Snowflake user, they've just spent a lot of time and energy moving all their data onto Snowflake to have it in one place, to govern it, to get all the nice properties of Snowflake's cloud-native architecture. It would be a shame to then have to pull the data back out and put it into a point solution for graph analytics, for example, or a point solution for rules, or a point solution for prescriptive analytics. So we make all of this native to Snowflake, simplifying the footprint, simplifying the cost, and putting folks in a position to drive more intelligence, make better decisions, and be more successful that way. We've been very happy with the reception so far. We're told we're one of the top three most requested capabilities in Snowpark Container Services, and we're just delighted with all the customer interest we're getting.
Amanda Razani: Wonderful. Congratulations. So let's delve a little deeper into one part of this: the growing role of language models. How do they pertain to AI as that technology continues to advance?
Molham Aref: Yeah. Well, obviously these language models are a major breakthrough. I've been doing AI and machine learning under various labels for over 30 years now. Up until the creation of language models, we had to develop models that were problem-specific. If you had a fraud problem, you developed a fraud detection model. If you had a supply chain forecasting problem, you developed a model or set of models to help you forecast demand, and so on and so forth. Language models, for the first time, are trained on all the text in the world, and in some cases images and videos, and they give us a very general-purpose tool for solving problems that we didn't have before. So this is a major breakthrough in my view. And the great thing about language models is that they can help us create the semantic layers and knowledge graphs that sit above Snowflake and help people understand how all their data silos connect with each other.
So it's very positive in the sense of accelerating that transformation, but language models also benefit from the creation of the knowledge graphs and semantic layers, because, as we've been reading, language models don't always give you the most correct answers. If they don't know something, sometimes they'll make an answer up. So being able to ground the language models in the data that lives inside Snowflake, through these knowledge graphs we can create, is a very nice symbiotic benefit. Language models make knowledge graphs easier to build, and once you have the knowledge graphs, they make the answers the language models provide more accurate and more grounded in truth. So it's a big breakthrough.
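[Editor's note: the "grounding" pattern described above can be sketched roughly as follows. This is a hedged, minimal illustration with hypothetical names and an in-memory fact store; a real system would retrieve facts from a governed knowledge graph and pass the prompt to an actual model API.]

```python
# Hypothetical fact store standing in for facts derived from a knowledge graph.
facts = {
    "q3_revenue": "Q3 revenue was $4.2M per the finance mart.",
    "top_region": "The top region in Q3 was EMEA.",
}

def retrieve(question):
    """Pull any stored facts whose key appears in the question text."""
    return [text for key, text in facts.items() if key in question]

def grounded_prompt(question):
    """Prepend retrieved facts so the model answers from governed data,
    rather than making something up when it doesn't know the answer."""
    context = "\n".join(retrieve(question))
    return f"Facts:\n{context}\n\nQuestion: {question}\nAnswer using only the facts above."

print(grounded_prompt("Summarize q3_revenue and top_region"))
```

The design choice being illustrated is the symbiosis the interview describes: the graph supplies verifiable facts to the model at question time, so the model's fluency is constrained by data that actually lives in the platform.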
Amanda Razani: Yeah. So what do organizations need to know when implementing AI when it comes to data management?
Molham Aref: Well, I mean, data drives AI, so if you don't have the right data assets, or you don't have them organized in a way that lets you use them effectively, then your AI is less capable and less accurate. But as we talked about, it goes in the other direction as well. So it's really important to think through how data can help AI, and then how AI can help us make the most of these data assets and start solving problems we couldn't have afforded to solve before, because building problem-specific models would have been too difficult, too expensive or too time-consuming.
Amanda Razani: Absolutely. So for organizations just starting on initiatives to bring more AI into their company, what advice do you have as far as how to get started?
Molham Aref: Well, I think moving to Snowflake and consolidating the data in one place is a great step. It's liberating for the human users of that data to have one place to go to get all of it. But then also organizing it using these semantic layers and knowledge graphs is great for the human users of that data, and also a great way to present the data to the AI. So making foundational investments like that is very important.
Amanda Razani: And I have another question. Why is there a need for infrastructure to efficiently build an intelligent application? And how will AI help speed up the process?
Molham Aref: So, if you look at how we've had to do this in the past, with data distributed across hundreds, thousands, sometimes tens of thousands of databases, that alone complicated the process of building these models. And then look at the number and variety of technologies we've had to use to assemble an intelligent data application. You needed OLTP technology to collect the data. You needed OLAP technology to analyze the data historically. You needed planning technology to plan forward, based on predictions of what might be happening in your business. You needed a variety of programming languages and a variety of AI technologies that each came in separate silos. So you needed a real hairball of technologies to build one of these apps. Historically, these apps have been so difficult to build that you often had whole companies dedicated to building, for example, credit card fraud detection solutions, or supply chain solutions, or revenue management solutions. Now that we have these capabilities around the cloud and data clouds, you don't need as much complexity, and you don't need as much variety in technologies. So yes, it's very important to make those foundational infrastructure investments first, so that everything thereafter becomes smoother, easier, faster and cheaper.
Amanda Razani: Absolutely. So AI is advancing really swiftly. Where do you see the future of AI, say, a year from now?
Molham Aref: Look, a year from now, I think we will be harvesting a lot of the benefits that we are starting to see come out of language models. People are still trying to figure out how to make them work at scale, how to make them work reliably enough, and how to interpret the results. So I still think we're going to be in harvesting mode. Who knows? Some new breakthrough could come next month and make my answer instantly obsolete. But I do think we're going to be digesting these language models and learning how to use them at scale; that's most likely what we'll be doing a year from now. But if you think ahead to five, 10 and 20 years from now, I think these language models are just the beginning. There is so much more coming down the pike that will combine language models with other AI techniques to produce something that starts to approximate our capabilities as humans.
I also think that industries that have typically been high-touch, where you needed a lot of human-driven services, are potentially going to change dramatically; their economics are going to change dramatically. Whereas before you might have had to add human labor every time you grew your customer base, now we can take a lot of what human laborers do and automate it with language models, driving opportunities for faster growth and higher profitability in businesses that have needed a lot of high-touch human support.
Amanda Razani: Definitely. So, if there is one key takeaway you'd like our audience to have from this conversation, what would you share with them today?
Molham Aref: Yeah, so again: take this language model revolution very seriously, but also make sure you have the right infrastructure around a data cloud platform, and a semantic technology grounded in knowledge graphs that makes it easier for people, and for the AI, to navigate the data assets. That would be the key takeaway.
Amanda Razani: All right. Thank you for coming on our show and sharing your insights.
Molham Aref: Thank you for having me, Amanda.