Synopsis: In this AI Leadership Insights video interview, Mike Vizard speaks with Avthar Sewrathan, AI Lead for Timescale, about vector databases and more.

Mike Vizard: Hello and welcome to the latest edition of the Techstrong.ai video series. I’m your host, Mike Vizard. Today we’re with Avthar Sewrathan, who is AI lead for Timescale, and we’re talking about vector databases and search and adding all this stuff into our existing databases. Avthar, welcome to the show.

Avthar Sewrathan: Thank you so much for having me, Michael. Pleasure to be here.

Mike Vizard: You guys have expanded your ability to support vector capabilities inside of the existing database. There are other folks out there that claim to have dedicated databases for this capability. What’s the benefit of one approach versus the other and what’s driving this conversation these days?

Avthar Sewrathan: Yeah, I think that’s a great question to start off with. Roughly speaking, there are two approaches to this that we see in the market today. Much of the motivation around vector data, both for the specialized vector databases and for existing databases adding vector capabilities, is that vectors are the way to make your AI apps useful with your own data, data that the model might not have been trained on. And whenever a new technology comes out, people think they need a specialized database or a specialized set of technologies to deal with it. And you see this flurry of startups out there, some of which have raised massive amounts of funding to build databases especially for vector data.
And on the other hand, folks know and love the existing databases they’ve been using and building with for a while. And much of what they wish for is, “I don’t really want to learn a new piece of technology. I don’t really want to learn how a new system works. I know how a database like Postgres works. I wish I could just use that.” And so developers face what I call this paradox of choice, where on one hand they have specialized technology, which seems like the right thing to use. You should probably use a more specialized tool, or at least that’s how the thinking starts. And on the other hand, they have existing technologies that they know, are familiar with and have been using for years. And so the question is, which one of these tools do you use to adapt to this new use case that AI has introduced?
And we fall squarely on the side of using existing databases. In our case, we introduced a new product called Timescale Vector, which adds vector support to Postgres, building off open source projects like pgvector. We think there are two major benefits. The first is operational simplicity: It minimizes the number of things that developers need to have in their stack. So the whole idea is simplifying your stack, not having to worry about a separate database, another thing to monitor and set up high availability for. The other is about learning. Especially with AI, this space moves so quickly that I, as someone whose full-time job is in this space, have a tough time keeping up. I can’t imagine what it’s like for developers trying to build with this; there are new things coming out every day. We basically give them one less thing to worry about: They can use a familiar piece of technology and not have to learn a new query language or how a new system works.
The takeaway for us, and what we hope, is that it allows developers to focus on delivering value and actually working on these new applications, rather than learning new tech for tech’s sake. And that’s where Postgres particularly shines, because it’s so well-known and well-loved. So those are the two ends of the spectrum. While developer preferences are still being worked out, I think giving people the option to not have to use a specialized vector database is pretty attractive. And that’s why we’ve embarked on building Timescale Vector.
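To make the “use the database you already know” idea concrete, here is a minimal sketch of what vector support in Postgres looks like with the open source pgvector extension that Timescale Vector builds on. The table and column names are hypothetical, and the 1536-dimension column assumes an OpenAI-style embedding model:

```sql
-- Enable the open source pgvector extension.
CREATE EXTENSION IF NOT EXISTS vector;

-- Store documents alongside their embeddings in an ordinary Postgres table.
-- Table and column names are illustrative, not a Timescale Vector API.
CREATE TABLE documents (
    id         BIGSERIAL PRIMARY KEY,
    content    TEXT,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    embedding  VECTOR(1536)  -- dimension must match your embedding model
);
```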

Mike Vizard: Does this just speak to good old-fashioned data gravity? The data’s already in my database; moving it into another database just so I can show it to a large language model may not make a lot of sense.

Avthar Sewrathan: Yeah. I think, if anything, it just adds a lot of complexity. It’s more application-level plumbing that you have to do. It’s more systems that you have to keep up with in terms of data integrity and data syncing, where, for example, if you have your user data or content data in a Postgres database, you need to make sure that it stays up-to-date in the vector database. You run into all these issues, and it’s just another headache for a team that is trying to focus on building products. And I think you’re exactly right: With these existing technologies, the data is already there. So why not give developers the opportunity to access these capabilities in the tools they already use? That’s the thinking behind it.

Mike Vizard: How simple is it to manage all this stuff these days? Because I think a lot of people, when they think about vectors, go, “Oh, that must be rocket science.” Is this accessible for the average DBA who’s running a database these days? And how does this whole thing come together in your mind?

Avthar Sewrathan: Yeah. When I was learning about this space late last year, when ChatGPT got released and I dove headfirst into it, I found there are different levels of abstraction that you can work with. One of the principles we’ve really held to in building Timescale Vector is making it as easy as possible for developers to get started without having to dive into a bunch of machine learning theory or know the history behind embeddings, if they’re not naturally interested in that kind of thing. So, for example, we’ve partnered with some of the most popular frameworks out there, LangChain and LlamaIndex, and we’ve also created our own Python library. That abstracts away some of the theory, so it’s as easy as saying, “Oh, I have this piece of content. Use the embedding function on it,” and then searching against it. You don’t have to worry about fiddling with certain parameters if you don’t want to.
I think that’s the appeal that some of the more specialized vector databases bring: They abstract all that stuff away. But at the other end of the spectrum, when you do want that transparency, you fall back on Postgres and on the fact that you can use all of its ecosystem and tooling, whereas with some of these newer databases, you don’t really get that. So I think it’s getting easier and easier for people to build AI-powered applications. What we hope is that we give folks who have this kind of Postgres expertise the ability to go and experiment and go and build where they previously might have thought, “Hey, the barrier is just too great.” Now they can say, “Well, I already know Postgres. It’ll take me an afternoon to learn this, and then I can be on my way with these new skills under my belt.” So that’s how we approach it.
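Once embeddings are stored, a similarity search is just familiar SQL. Here is a hedged sketch reusing the hypothetical documents table from above; `<->` is pgvector’s distance operator, and the query vector would come from whatever embedding function your framework or library provides:

```sql
-- Find the five documents most similar to a query embedding.
-- :query_embedding is a placeholder for a vector produced by your
-- embedding function (e.g., via LangChain, LlamaIndex or a Python client).
SELECT id, content
FROM documents
ORDER BY embedding <-> :query_embedding
LIMIT 5;
```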

Mike Vizard: How do you think the relationships between the DBAs and the data engineers who drive data into the process are going to evolve with the data science folks and the DevOps folks and eventually the security folks? I mean, it feels like it takes a small village to build a model these days. So how does that workflow look to you?

Avthar Sewrathan: Yeah, the way I see it is that more and more of these roles are getting fused together. There’s a saying about the collapsing of the stack and the collapsing of these different roles: In times past you would have a separate data engineering team, software engineering team and DBA team. Today those roles are getting merged together. And especially with the rise of large language models and OpenAI and some of the other model providers, what it actually enables is for an individual software developer, or a DBA/developer, to do what previously required teams of machine learning and data engineers, accessing it all via an API call, some function calls or even a SQL SELECT statement in a database.
And so that’s my opinion: I see these roles fusing more. And one side effect, if we zoom out to an industry level, is that AI allows an interested individual to get more done by supplementing their knowledge through things like ChatGPT or various copilots. So I think that trend of the talent stack collapsing is going to continue, with increasingly blurred lines between software engineer, data engineer, DBA and machine learning engineer. That’s my opinion on the matter.

Mike Vizard: You’d think the rise of AI would drive us to be a little bit better at data management. I would argue that maybe we haven’t been all that good at it over the years. But the issue with an AI model is that once it learns something, it’s really hard to unlearn it, so you wind up having to replace it if something goes wrong. Do you think we need a better focus on our data management workflows?

Avthar Sewrathan: Yeah. I think two things are true. One is that the data a particular model is trained on is most definitely super important. I also think this is where vector databases, and databases that can handle vector data, play a role, because they enable what’s known in the industry as retrieval-augmented generation. Basically, that means the model can use as context things it wasn’t trained on, things that weren’t in its underlying training set, fetched from a vector database. And one of the unique features we actually add with Timescale Vector is giving developers and companies the ability to do time-based retrieval in their applications. That allows you to supplement the model with data from a particular time period, or to age out irrelevant information, essentially having the model weight much older data less and newer data more, or vice versa.
And these are the kinds of capabilities that, coming from Timescale, where we started off primarily in time series, we see as core: the value of time as a component that people build their applications around. We’re bringing that to the vector database space and to the AI space as folks who have seen these dynamics in other markets, such as time series and Postgres. So, to answer your question, there’s a role for the underlying models to be trained on good, really refined data sets, but there’s also a role for companies to say, “Hey, we’re going to take any model, we’re going to take these open source models, and supplement them using retrieval-augmented generation.” We give them an extra tool in the toolkit with time-based retrieval to help them further tune their results so that they give their users the most relevant responses.
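As a rough illustration of what time-based retrieval for RAG could look like in SQL, here is a hedged sketch that combines vector similarity search with an ordinary time filter, again on the hypothetical documents table from earlier. The 90-day window is an arbitrary example, and Timescale Vector exposes this capability through its own client APIs rather than necessarily this exact query:

```sql
-- Retrieve context for a RAG prompt, restricted to recent documents.
-- Combining a time predicate with vector search lets you age out stale
-- information or focus the model on a particular period.
SELECT id, content
FROM documents
WHERE created_at >= NOW() - INTERVAL '90 days'
ORDER BY embedding <-> :query_embedding
LIMIT 5;
```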

Mike Vizard: What’s your sense of how many LLMs an organization will wind up exposing vector data to? Because if I look at ChatGPT, it’s general purpose, and it takes several years to upgrade that thing. I think there are going to be a lot more domain-specific LLMs that are smaller and trained on a narrow set of data. Are these things going to be federated? Will they have parent-child relationships? How do you think that will all come together?

Avthar Sewrathan: Yeah, that’s one of the million-dollar questions out there, or potentially billion-dollar questions. I do agree that there’s a lot of promise in these domain-specific models. I believe there’s a paper out there, something like “Textbooks Are All You Need,” which found that if you just train a model on textbook-quality data for particular domains, it produces really, really good results compared to a general-purpose model that takes billions of dollars to train. On the other hand, I think it’s an open question, in general, whether the right way to go is sticking with the state-of-the-art general-purpose models, like GPT-4, or trying to fine-tune open source models, like Llama 2, for these domain-specific applications.
And one concern I have is that advocating for too many domain-specific models may be shortsighted. Part of the reason why this whole [inaudible 00:13:10] space really took off last year is the emergent properties in large language models like GPT-4 that you don’t know about ahead of time. Emergent, by definition, means they show up after the fact; you only discover what these models are capable of once you try to use them.
So I worry that’s premature optimization. I do think cost and ease of use have a role to play as well. Right now, for example, GPT-4 is state of the art and can be used for most things, but inference is expensive per token relative to, for example, open source models. So certain businesses have to weigh, what is my cost factor for this? How much do I value having a more powerful model? And then find some middle ground. I do think you may end up having one model for customer service, one model for idea generation, one for analytics, and then a general-purpose model that several people can plug into with their knowledge bases and data sources to help them do more creative, more value-driven work. The simpler, domain-specific models are designed to produce very low-variance, low-error responses; in customer service, for example, you don’t want to get too creative when you’re answering someone’s query. You want to give them the straight, by-the-book answer. So that’s my opinion.
It’s an open question that the world is still working out, but that’s how it could play out. I do think it will be some combination. If I had to choose right now, I’d say we might even end up with general-purpose models that are country-specific; one of the AI founders, Emad Mostaque from Stability AI, talks about large models that are country- and culture-specific. So that’s one road we could definitely go down. It remains to be seen what actually is going to happen.

Mike Vizard: What’s your best advice to folks then as you look at all this? I mean, I think everybody’s experimenting with this stuff, but I don’t know if there’s a reasonable path forward that will get to a better result faster. Or is this all just going to be the proverbial school of hard knocks?

Avthar Sewrathan: Yeah, one of the reasons we embarked on this journey to build Timescale Vector is that we think there’s a lot of power and potential in AI and in the applications of large language models. There are applications like driving operational efficiency within the company, doing more with less, but there are also new capabilities that let you create really magical, really wonderful experiences in your product that add value to people’s lives.
So one piece of advice I have is that everyone who can, who’s interested or maybe on the fence, should be experimenting. At the very least, companies should have R&D or experimental groups and really encourage their individual developers to go play around with this, have hackathons, have 20% time or something like that. That’s one of the reasons we built Timescale Vector: to lower that barrier so that if you already know Postgres, there’s an easy way to get started.
The other thing I would say, in terms of a way forward, is for companies that are a step further along, past the experimental phase and trying to pick a set of foundational technologies: Build on a solid foundation that allows developers and companies to scale as they add more data. One example we often see people want to build is the proverbial chat-with-your-documents, chat-with-your-knowledge-base application. Initially, when you have a small example with a few thousand documents, it works fine and the cost isn’t that much, but as you scale up and add more and more data, that’s where we’ve seen costs really increase.
So, one of the things we’ve done, for example, is add a new index type in Timescale Vector that allows you to scale with disk rather than with memory. It’s based on an algorithm called DiskANN, and it lets you store lots of data and query it very quickly without having to pay as much as if you were to store that whole index in memory. So keep scalability in mind and ask, okay, what would the system look like if I added 10x or 100x more data? Those kinds of fundamental systems engineering questions are another thing I encourage folks to keep in mind.
So those are the two things: one, a spirit of experimentation; and two, thinking about scaling and costs and what the system would look like with 10x or 100x more data. Those are the two pieces of advice I would give to any organization.
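For comparison, here is what creating an approximate nearest neighbor index looks like in stock pgvector, again using the hypothetical documents table. The DiskANN-based index Avthar describes is specific to Timescale Vector and its exact syntax isn’t shown here, but the idea is the same: a purpose-built index so that similarity queries don’t have to scan every row.

```sql
-- An HNSW index from stock pgvector, shown for illustration only; the
-- DiskANN-based index described above is created analogously but keeps
-- most of its structure on disk instead of in memory.
CREATE INDEX ON documents
USING hnsw (embedding vector_l2_ops);
```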

Mike Vizard: Well, folks, you heard it here. The AI gold rush is definitely on. But if you look back in history, the folks that made all the money back in the original Gold Rush, they sold blue jeans and tools, so maybe we’ll see the replay of that here. Avthar, thanks for being on the show.

Avthar Sewrathan: Thank you so much for having me, Mike. Really appreciate it.

Mike Vizard: All right. And thank you all for watching the latest episode of Techstrong.ai. We invite you to watch this episode and all the other ones we have on our website. And until then, we’ll see you next time.