
Teradata today extended the reach of its platform for building and deploying artificial intelligence (AI) applications to on-premises IT environments.
Sumeet Arora, chief product officer for Teradata, said Teradata AI Factory, already available as a component of the Teradata Vantage cloud services the company provides, can now be deployed in data center environments directly managed by internal IT teams.
That capability is especially critical in highly regulated industries where compliance mandates make it difficult to rely on cloud services to build and deploy AI applications, he added.
Additionally, on-premises IT environments can often run AI applications more cost-effectively than a cloud service that attaches a cost to every input and output token processed.
It’s not clear to what degree AI might be driving a renaissance in on-premises IT environments, but given the sensitivity of the data typically used to build AI applications, there is a natural reluctance to rely on shared IT infrastructure provided by a cloud service provider.
Teradata is now providing organizations with an alternative option based on the same IntelliFlex platform it created for analytics applications, which can be deployed either in a data center or in some type of managed hosting environment where an internal IT team has more control.
The overall goal is to make it simpler for enterprise IT teams to streamline the building and management of the data pipelines required to build and deploy AI applications, using a core platform that has been extended to include a built-in vector store; a microservices framework developed in collaboration with NVIDIA; tools for managing the lifecycle of AI models; and AI tools such as JupyterHub, ModelOps, Airflow, Gitea, and Devpi.
Additionally, Teradata provides access to data ingestion tools, a QueryGrid framework to move and connect data across systems, support for Open Table Formats (OTFs), and object-based storage systems.
In effect, Teradata is making a case for a more turnkey platform that doesn’t require an internal IT team to build, deploy and maintain all these individual components themselves, said Arora.
There is, of course, no shortage of platforms for building and deploying AI applications. Historically, many of these AI platforms have been custom-built by IT infrastructure specialists attached to a specific data science team. However, as the number of AI applications being built continues to increase, there is a growing need to manage the IT infrastructure they depend on in a way that scales cost-effectively, noted Arora.
It’s not clear how many AI applications will ultimately be deployed in some type of on-premises IT environment. In most cases, organizations are likely to train AI models in the cloud, while deploying the resulting inference engines in on-premises IT environments where much of their data already resides.
Regardless of approach, the one certain thing is that AI applications will span everything from the cloud to the network edge in ways that will require a lot of IT infrastructure expertise to optimize.