
One of the issues faced by organizations at the forefront of operationalizing large language models (LLMs) is that the number of vector embeddings with thousands of dimensions an application needs to traverse keeps increasing. Data science teams typically use approximate nearest neighbor (ANN) techniques to search these vector embeddings, but as the number of dimensions increases, application performance starts to suffer.
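To see why scale matters, consider the exact brute-force baseline that ANN methods approximate. This is a minimal pure-Python sketch, not Rockset's implementation: every query visits every vector and every dimension, which is exactly the cost that grows untenable as dimensions and corpus size increase.

```python
import math

def cosine_sim(a, b):
    # Standard cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def exact_nearest(query, vectors):
    # O(n * d): touches every vector and every dimension.
    # ANN indexes exist to avoid this exhaustive scan at scale.
    return max(range(len(vectors)), key=lambda i: cosine_sim(query, vectors[i]))

vectors = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(exact_nearest([0.9, 0.1], vectors))  # -> 0 (first vector is closest)
```

An ANN index trades a small amount of recall for avoiding this full scan; at a billion embeddings, the exhaustive approach above is simply not an option.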

Rockset this week announced it has addressed that issue by implementing approximate nearest neighbor (ANN) search in a way that scales to more than a billion vector embeddings.

The overall goal is to ensure AI applications can provide immediate responses to any query asked across thousands of dimensions, says Rockset CEO Venkat Venkataramani.

Rockset leverages metadata for both structured and unstructured data formats, along with NVMe-based systems using flash solid-state drives (SSDs), to instantly surface all the similar data required to, for example, drive an AI recommendation engine, notes Venkataramani. The platform stores and indexes vectors alongside hundreds of terabytes of metadata, including text, JSON, geospatial and time-series data, all of which can be accessed through a SQL-based search engine.
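The pattern described above, filtering on structured metadata and ranking the survivors by vector similarity, can be sketched in a few lines. The documents, field names and function below are hypothetical illustrations of the idea, not Rockset's API; in SQL terms this corresponds to a WHERE clause on metadata plus an ORDER BY on a similarity expression.

```python
import math

# Hypothetical records: an embedding stored alongside metadata fields.
docs = [
    {"id": 1, "vec": [0.9, 0.1], "category": "shoes", "price": 40},
    {"id": 2, "vec": [0.8, 0.2], "category": "shoes", "price": 120},
    {"id": 3, "vec": [0.1, 0.9], "category": "hats",  "price": 15},
]

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hybrid_search(query_vec, category, max_price, k=2):
    # 1) Filter on structured metadata (the WHERE clause).
    candidates = [d for d in docs
                  if d["category"] == category and d["price"] <= max_price]
    # 2) Rank survivors by vector similarity (the ORDER BY).
    candidates.sort(key=lambda d: cosine_sim(query_vec, d["vec"]), reverse=True)
    return [d["id"] for d in candidates[:k]]

print(hybrid_search([1.0, 0.0], "shoes", max_price=100))  # -> [1]
```

Keeping vectors and metadata in one indexed store is what lets this kind of hybrid query run as a single operation rather than as two round trips between a vector database and a separate metadata store.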

IT teams can modify AI applications by inserting, updating and deleting vectors and metadata in milliseconds with no reindexing required.


The Rockset platform is based on RocksDB, an open source log-structured database engine built on a key/value store, which Rockset has extended to support SQL queries. Originally developed by Facebook, that approach enables organizations to offload queries to an indexing engine that can process sub-second queries while transactions continue to be processed by a relational database.
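The core idea of serving queries from an index layered over a key/value store can be sketched as follows. This toy class is a hypothetical illustration of the pattern, not RocksDB's actual API: writes land in the key/value engine, and a secondary index answers queries without scanning every record.

```python
class KVIndexedStore:
    """Toy key/value store with a secondary index (illustrative only)."""

    def __init__(self):
        self.kv = {}      # primary key -> record
        self.index = {}   # indexed field value -> set of primary keys

    def put(self, key, record, index_field):
        # Writes go to the key/value store; the index is kept in sync
        # so updates need no full reindex.
        old = self.kv.get(key)
        if old is not None:
            self.index[old[index_field]].discard(key)
        self.kv[key] = record
        self.index.setdefault(record[index_field], set()).add(key)

    def query(self, value):
        # Served from the index: no scan over self.kv.
        return [self.kv[k] for k in self.index.get(value, set())]

store = KVIndexedStore()
store.put("u1", {"name": "Ada", "team": "data"},   "team")
store.put("u2", {"name": "Lin", "team": "devops"}, "team")
print([r["name"] for r in store.query("data")])  # -> ['Ada']
```

RocksDB itself adds log-structured merge trees, compaction and durability on top of this basic shape; the sketch only shows why query traffic can be offloaded to the index while writes keep flowing into the store.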

Earlier this year, Rockset added the ability to search vector data, a capability that is now being extended to support highly interactive AI applications at scale. In addition, the platform provides support for LangChain, an open source framework for building AI applications.

There are many ways to build AI applications, but it's still early days when it comes to providing IT teams with the tools required to build and deploy them. In addition to all the infrastructure required, IT teams will need to define workflows spanning data science, DevOps, data engineering and cybersecurity teams to make it possible to build and deploy these types of applications consistently and safely.

Regardless of the approach favored, it's already clear that the silos that exist within many IT organizations are going to be a hindrance. The more capabilities that can be invoked via a single platform, the simpler it becomes to overcome that issue in a way that is less costly, notes Venkataramani. "Building AI applications is more expensive," he says.

It’s not clear how many organizations have the skills and resources required to build these applications, but over time it will become easier to build them as further advances are made. In the meantime, organizations should assume that every application to one degree or another is going to be infused with AI. The only thing left to be determined is who will actually build them at a level of cost that drives a meaningful return on investment (ROI).
