
Teradata has made it possible for enterprise IT organizations to integrate any large language model (LLM) they build directly into its core data management and analytics platforms hosted in the cloud.

At the same time, the company is now collaborating with NVIDIA to add support for NVIDIA NeMo and NVIDIA NIM microservices to its Vantage platform, which already spans multiple cloud computing environments.

Additionally, Teradata is committing to making IT infrastructure provided by NVIDIA available across its cloud services. Teradata VantageCloud Lake with NVIDIA AI accelerated compute will be generally available first on the Amazon Web Services (AWS) cloud next month, with inference capabilities being added in the fourth quarter and fine-tuning capabilities scheduled for the first half of 2025. NVIDIA AI Enterprise integrations will also arrive in 2025.

Collectively, those capabilities will make it simpler for organizations to both bring LLMs to where their data resides and expose data to an LLM using retrieval-augmented generation (RAG) techniques.
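At its core, RAG means retrieving relevant records at query time and injecting them into the model's prompt so answers stay grounded in the organization's own data. The sketch below illustrates that flow in miniature; it is not Teradata's or NVIDIA's API. The `call_llm` function is a hypothetical stand-in for whatever inference endpoint an organization runs, and in a real deployment the retrieval step would query the data platform rather than an in-memory list.

```python
# Minimal RAG sketch. All names here (DOCUMENTS, call_llm, answer) are
# illustrative assumptions, not part of any vendor API. In practice the
# retrieval step would run against the data warehouse itself.

from collections import Counter

DOCUMENTS = [
    "Q3 revenue in the EMEA region grew 12% year over year.",
    "The loyalty program added 40,000 members in August.",
    "Warehouse downtime in July was caused by a failed cooling unit.",
]

def score(query: str, doc: str) -> int:
    """Count overlapping words between query and document (toy retrieval)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(DOCUMENTS, key=lambda doc: score(query, doc), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM inference call (e.g., a hosted endpoint)."""
    return f"[model answer grounded in: {prompt[:60]}...]"

def answer(query: str) -> str:
    """Augment the prompt with retrieved context before invoking the model."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("How did EMEA revenue change in Q3?"))
```

The design point is that the data never has to leave the platform to be useful to the model: only the handful of retrieved records relevant to each query is exposed, which is what lets organizations keep governance controls on the warehouse side.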

For example, enterprise IT organizations can bring AI models to where data already resides in either the VantageCloud Lake or ClearScape Analytics platform provided by Teradata. That specific capability will be made available first on AWS, followed by cloud services from Microsoft and Google in the first half of 2025.

Additionally, organizations can opt to expose data residing in Teradata platforms to customize either an external generative AI model or an AI model that they have deployed directly on the Teradata platforms.

In either scenario, organizations can continue to use Teradata platforms to process and analyze data while retaining control over how and when data is exposed to an AI model, says Hillary Ashton, chief product officer at Teradata.

As generative AI continues to evolve, organizations are starting to get a better handle on governance, performance and cost challenges, she adds. They realize they need to be able to flexibly invoke AI models of multiple types and sizes, notes Ashton.

Some of those AI models may be massive, but increasingly there will be much smaller domain-specific models running directly on top of a database that might be deployed in the cloud or in an on-premises IT environment, she adds. Regardless of approach, enterprise IT organizations do not want to be locked into a specific AI platform, says Ashton.

From a Teradata perspective, AI models are just another type of compute engine that needs to be brought to the data, notes Ashton. “We’ve already been doing that for years,” she says.

It’s not clear to what degree AI will force organizations to revisit how they manage data, but Teradata is betting more organizations will prefer to leverage their existing investments rather than trying to move massive amounts of data to a different platform. Data, after all, has gravity and can be expensive to migrate.

One way or another, however, organizations will need to ensure that the right data is exposed to the right AI model at the right time. More than a few of the AI applications being built will require instant responses, which means data will need to be processed in near real time.

The one certainty is that the IT and data engineers who have mastered the skills and processes needed to achieve that goal are becoming as valuable as the data science teams that built the AI models in the first place.
