
Organizations developing artificial intelligence (AI) models increasingly face significant challenges managing the data required to train them. In response, providers of data management platforms are racing to make it simpler to manage data at scale. The latest example is Alluxio Enterprise AI, a platform launched today that enables AI models to consume data more efficiently from the cloud to the network edge.

Very few organizations manage data optimally, but with the rise of AI the issue is becoming much more acute. Alluxio is extending its existing data management portfolio to specifically address issues such as the need to optimize caching for AI models accessing graphics processing units (GPUs), says Adit Madan, director of product management for Alluxio.

At its core, the Alluxio platform is designed specifically to take advantage of NVMe, a storage access and transport protocol for flash memory and solid-state drives (SSDs) that enables much higher throughput and faster response times.

The Alluxio Enterprise AI platform provides access to up to 100 billion objects stored on a commodity storage service such as Amazon S3. The more efficiently access to data is provided at scale, the less costly it becomes to invoke GPU resources, explains Madan. “GPU resources are expensive,” he says.
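The core idea is that objects fetched from a remote store such as Amazon S3 are kept on fast local flash so that repeated reads, such as successive training epochs over the same shards, never go back over the network. The sketch below illustrates that caching pattern in general terms; it is not Alluxio's API, and `fetch_fn` is a hypothetical stand-in for a GET against the backing object store.

```python
import hashlib
import tempfile
from pathlib import Path


class NvmeCache:
    """Caches remote objects on fast local storage (e.g. an NVMe SSD).

    `fetch_fn` stands in for a GET against a backing object store such
    as Amazon S3; it is an illustrative placeholder, not Alluxio's API.
    """

    def __init__(self, cache_dir, fetch_fn):
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)
        self.fetch_fn = fetch_fn
        self.hits = 0
        self.misses = 0

    def _local_path(self, key):
        # Hash the object key so any key maps to a safe local file name.
        return self.cache_dir / hashlib.sha256(key.encode()).hexdigest()

    def get(self, key):
        path = self._local_path(key)
        if path.exists():
            # Cache hit: served from local flash, no network round trip.
            self.hits += 1
            return path.read_bytes()
        # Cache miss: fetch once from the object store, keep a local copy.
        self.misses += 1
        data = self.fetch_fn(key)
        path.write_bytes(data)
        return data


# A dict stands in for the remote bucket. The first read is a miss that
# goes to the "store"; the second read is served entirely from the cache.
store = {"train/shard-0001": b"example training bytes"}
cache = NvmeCache(tempfile.mkdtemp(), store.__getitem__)
cache.get("train/shard-0001")
cache.get("train/shard-0001")
print(cache.hits, cache.misses)  # → 1 1
```

The point of the pattern is that GPU-bound training jobs re-read the same data many times, so paying the object-store latency once and serving every subsequent read from local NVMe is what keeps expensive accelerators busy.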

In fact, demand for GPU resources is so high that IT teams need to make sure AI models are optimally using a scarce resource. Alluxio claims IT organizations should expect up to 20x faster training speed compared to commodity storage, up to 10x accelerated model serving, GPU utilization rates higher than 90% and up to 90% lower AI infrastructure costs.

Unfortunately, far too many organizations underestimate the need to invest in modern IT infrastructure, including not only GPUs but also faster storage systems and vector databases, to drive the building and deployment of AI models at scale. Experimenting with one or two AI models is not nearly the same thing as trying to deploy what will rapidly become hundreds running in production environments.

Business leaders may be enthusiastic supporters of AI, but when it comes to customizing an AI model or building one from scratch, many of them are likely to experience sticker shock. When the cost of all the infrastructure is added up alongside the salaries of the data scientists, data engineers and application developers required, the funding needed far exceeds that of any previous generation of IT technologies. Given those costs, there is going to be little tolerance for making inefficient use of such highly constrained resources.

It’s still early days as far as adoption of AI in enterprise computing environments is concerned, but the need for a more disciplined approach to managing IT is becoming apparent. The challenge is that in the cloud computing era, many organizations became accustomed to the idea that infrastructure resources would always be abundantly available. Now many of them are rediscovering an appreciation for what were once considered IT management fundamentals.
