MongoDB, Inc. today extended its alliance with Amazon Web Services (AWS) to preview an integration between the vector search capabilities it provides via the MongoDB Atlas database service and Amazon Bedrock, a managed service for accessing multiple large language models (LLMs) that drive generative artificial intelligence (AI) applications.

Announced at the AWS re:Invent 2023 conference, this integration will, in the next few months, make it possible to apply MongoDB Atlas Vector Search to LLMs from AI21 Labs, Anthropic and Stability AI, along with the Titan LLMs developed by AWS.

That approach makes it possible to safely extend those LLMs using data already stored in a document database rather than requiring organizations to configure a separate vector database. Instead, MongoDB has added a vector database to its service through which data residing in its document database can be indexed.
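
For illustration, the sketch below shows how a collection in MongoDB Atlas might be indexed for vector search. It is a minimal, hypothetical example, assuming a recent version of the pymongo driver, a collection that already stores a precomputed "embedding" field and 1,536-dimension embeddings (the output size of Amazon's Titan text-embedding model); the connection string, namespace and index name are placeholders.

```python
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

# Placeholder connection string and namespace; adjust for your deployment.
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
collection = client["rag_demo"]["articles"]

# Define a vector search index over the precomputed "embedding" field.
# numDimensions must match the embedding model's output size
# (1536 for Amazon Titan text embeddings).
index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",
                "numDimensions": 1536,
                "similarity": "cosine",
            }
        ]
    },
    name="vector_index",
    type="vectorSearch",
)
collection.create_search_index(model=index_model)
```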

LLMs running on the Amazon Bedrock service are then invoked via an application programming interface (API) that can be accessed using agent software to drive a retrieval-augmented generation (RAG) process. That approach provides the performance benefits of a dedicated database for converting data into the vector embeddings used to augment an LLM, says Ben Flast, a lead product manager for MongoDB.
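
In practice, the retrieval half of that RAG loop might look something like the following sketch: the user's question is converted into an embedding by a Titan model on Bedrock, and the resulting vector drives an Atlas $vectorSearch aggregation. This is an illustrative assumption rather than MongoDB's or AWS's reference implementation; the model ID, index name and field names carry over from the example above.

```python
import json

import boto3

# Bedrock runtime client; the region is a placeholder.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> list[float]:
    """Convert text into a vector embedding with Amazon Titan on Bedrock."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

def retrieve(question: str, k: int = 4) -> list[str]:
    """Return the text of the k documents closest to the question."""
    pipeline = [
        {
            "$vectorSearch": {
                "index": "vector_index",
                "path": "embedding",
                "queryVector": embed(question),
                "numCandidates": 100,  # coarse candidate pool before ranking
                "limit": k,
            }
        },
        {"$project": {"_id": 0, "text": 1}},
    ]
    return [doc["text"] for doc in collection.aggregate(pipeline)]
```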

There are multiple ways to extend an LLM, but one of the simplest is to use a vector database that enables an LLM to analyze proprietary data alongside the data originally used to train it. The issue that organizations encounter when attempting to operationalize generative AI is that LLMs are trained on data made available only up to a certain date. A vector database makes it possible for an LLM to draw on additional data without having to be retrained or updated.
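
Closing the loop, the retrieved passages are stitched into the prompt rather than into the model's weights, which is what lets proprietary data stay current without retraining. A minimal sketch, again hypothetical, that feeds the retrieved context to Anthropic's Claude on Bedrock (the model ID and prompt format follow Bedrock's public text-completion interface for Claude):

```python
def answer(question: str) -> str:
    """Augment the prompt with retrieved context, then ask the LLM."""
    context = "\n\n".join(retrieve(question))
    prompt = (
        "\n\nHuman: Use only the context below to answer the question.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\n\nAssistant:"
    )
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 512}),
    )
    return json.loads(response["body"].read())["completion"]
```

Because the context is assembled at query time, updating what the model can draw on is a matter of inserting new documents and embeddings into the collection, not retraining.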

Just as importantly, organizations also retain control of their proprietary data rather than having to worry about whether it is being incorporated into a publicly available LLM in ways they can no longer control.

It’s still early days so far as embedding generative AI capabilities into business processes is concerned, but in time they will be pervasively applied. “We’ll see a new class of application that are processing generative AI in the background,” says Flast.

In addition to building new applications, many organizations will modernize legacy applications by adding generative AI capabilities, he adds.

The challenge, of course, is that organizations will soon find themselves needing to embrace best practices for managing the terabytes of data that will be used to train LLMs. The quality of the output of any LLM is directly tied to the quality of the data used to train it. Unfortunately, many organizations have data that is often conflicting and strewn across multiple repositories. Generative AI will require organizations to apply more discipline to managing the data that might be exposed to an LLM.

It’s not clear how long it might take organizations to manage data more rigorously, but given the high priority that generative AI has been assigned, the pressure to bring order to what is often a chaotic process is already starting to mount.
