
MinIO has created a version of its object storage system, AIStor, that is optimized for artificial intelligence (AI) applications accessing exabytes of data.

AIStor provides access to an S3 application programming interface (API), dubbed promptObject, that makes it simple to query unstructured objects directly in much the same way data science teams typically interact with a large language model. For example, when launching a query, an end user can ask a question that includes external data that wasn’t used to train the AI model.
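MinIO’s published interface may differ, but conceptually the request shape could resemble the following Python sketch, in which the endpoint URL, the query parameter and the JSON payload are all illustrative assumptions rather than documented API details:

```python
import requests

# Hypothetical sketch of a promptObject call: POST a natural-language
# question directly against an object stored in AIStor. The endpoint,
# the "prompt" action selector and the payload shape are assumptions
# for illustration, not MinIO's documented API.
ENDPOINT = "https://aistor.example.com"  # assumed deployment URL
OBJECT_PATH = "/mybucket/reports/q3-earnings.pdf"

response = requests.post(
    f"{ENDPOINT}{OBJECT_PATH}",
    params={"prompt": ""},  # assumed action selector, in the style of S3's ?select
    json={"question": "Summarize the revenue trends in this document."},
    # A production request would also carry AWS Signature V4 credentials.
    timeout=30,
)
print(response.json())
```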

That approach makes it possible to expand the capabilities of an AI model without retrieval-augmented generation (RAG) techniques or a vector database, says MinIO CEO AB Periasamy.

In effect, AIStor uses a PUT and PROMPT paradigm that replaces the traditional PUT and GET method used for accessing data, he adds. Additionally, multiple objects can be used to chain functions together.
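As a sketch of that paradigm, again treating the endpoint, the credentials and the multi-object payload as assumptions rather than MinIO’s documented interface, the flow might look like this:

```python
import boto3
import requests

ENDPOINT = "https://aistor.example.com"  # assumed AIStor deployment URL

# PUT: store unstructured objects the traditional way via the S3 API.
s3 = boto3.client(
    "s3",
    endpoint_url=ENDPOINT,
    aws_access_key_id="minioadmin",      # placeholder credentials
    aws_secret_access_key="minioadmin",
)
s3.put_object(Bucket="mybucket", Key="notes/meeting.txt",
              Body=b"Q3 roadmap discussion: ship the integration by March.")
s3.put_object(Bucket="mybucket", Key="notes/followup.txt",
              Body=b"Follow-up: the integration slipped to April.")

# PROMPT: rather than GET-ing raw bytes back, ask a question of the data.
# Passing several keys in one request to chain objects together is an
# assumption made for illustration.
response = requests.post(
    f"{ENDPOINT}/mybucket",
    params={"prompt": ""},
    json={
        "objects": ["notes/meeting.txt", "notes/followup.txt"],
        "question": "What delivery dates were agreed across these notes?",
    },
    timeout=30,
)
print(response.json())
```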

Other capabilities include AIHub, a private repository compatible with the open Hugging Face API that can be used to store AI models and datasets directly in AIStor.
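Because the API is Hugging Face-compatible, the standard huggingface_hub client should be able to target AIHub by overriding its endpoint; the URL and the repository name below are hypothetical:

```python
from huggingface_hub import HfApi

# Point the standard Hugging Face client at a private AIHub endpoint
# instead of huggingface.co. The endpoint path is an assumption.
api = HfApi(endpoint="https://aistor.example.com/aihub")

# Upload a fine-tuned model's weights into AIStor through the HF-style API.
api.upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id="acme/internal-llm",  # hypothetical private repository
)
```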

MinIO has also added support for S3 over Remote Direct Memory Access (RDMA) to enable AI applications to access data over 400Gb and 800Gb Ethernet networks in a way that reduces CPU utilization on servers.

Finally, MinIO has revamped the user interface for its console to include identity and access management (IAM), information lifecycle management (ILM), load balancing, firewall, security, caching and orchestration capabilities. It has also added an Operator that makes it easier to deploy AIStor across a fleet of Kubernetes clusters.

In general, AI applications running at scale require a different approach to storage, one that legacy network-attached storage (NAS) platforms were not designed to address, says Periasamy. As a result, the rise of AI is accelerating the pace at which organizations are adopting object storage systems, he adds.

Thanks to a global namespace capability, MinIO makes it possible to use the same type of object storage found in cloud computing environments in the on-premises IT environments where most organizations still store the bulk of their data. That approach makes it simpler to bring AI models to where data already resides, rather than requiring organizations to move data, says Periasamy.

A recent MinIO survey finds that 21% of respondents are building generative AI workloads in the cloud, while 20% are deploying them in on-premises IT environments. It’s not clear how many AI models built in the cloud might be deployed in an on-premises IT environment or, for that matter, vice versa. What is clear is that it’s generally a lot easier to bring AI models to data than to move data into another environment simply because that is where an AI model happens to be hosted.

The challenge, of course, is that managing the exabytes of data used to drive multiple AI models requires a significant amount of data engineering expertise that is often hard to find and retain. The simpler the underlying storage system is to manage, the more likely it becomes that traditional storage administrators will be able to help manage all that data.
