NetApp, the storage company, right? Well, it was, and in many respects it still is, but the organization has evolved over the post-millennial years into a cloud storage specialist, a unified storage advocate (the unification being an intelligent amalgamation of core data stores on traditional servers with both hybrid and public cloud), a cloud services organization and a hyperscaler storage partner.
This broadening and deepening of the central NetApp technology proposition arguably validates the company’s current positioning as an intelligent data infrastructure services organization.
AI In Infrastructure
Now working to inject a more diverse range of services into the data infrastructure layer, NetApp used its annual user conference this month to explain how it is architecting a range of AI functions into the storage tier.
The core argument from NetApp is that if an organization wants to use AI, then its base substrate data storage layer should run with AI-driven services. Smart services in storage look after cybersecurity risks and backup/recovery tasks, but they also extend to helping organizations use cost-effective storage to separate and manage so-called “hot” and “cold” data, cold being the less frequently accessed.
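At heart, hot/cold tiering is a policy over access recency. Here is a minimal sketch of that idea in Python; the threshold, function names and data shapes are illustrative assumptions of ours, not any NetApp or ONTAP API:

```python
from datetime import datetime, timedelta

# Hypothetical tiering policy: objects untouched for more than
# `cold_after_days` are candidates for the cheaper cold tier.
def classify_tier(last_access: datetime, now: datetime,
                  cold_after_days: int = 30) -> str:
    """Label a data object 'hot' or 'cold' by recency of access."""
    if now - last_access > timedelta(days=cold_after_days):
        return "cold"
    return "hot"

def plan_moves(objects: dict, now: datetime) -> list:
    """Return the names of objects that should move to the cold tier."""
    return [name for name, last_access in objects.items()
            if classify_tier(last_access, now) == "cold"]
```

A real tiering engine would of course weigh access frequency, object size and tier pricing rather than a single recency cutoff, but the separation step looks broadly like this.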
Analyst house Gartner predicts that spending on AI software will grow to almost $300 billion by 2027, and that generative AI will account for over one-third of that. NetApp says that, given this growth reality, the key to success in the age of AI is a broad mastery over governable, trusted and traceable data.
NetApp CEO George Kurian has detailed extensions to the company’s intelligent data infrastructure, including a vision for AI running on NetApp ONTAP, the organization’s own operating system for unified storage. As part of that vision, NetApp has begun the Nvidia certification process of NetApp ONTAP storage on the AFF A90 platform with Nvidia DGX SuperPOD AI infrastructure, which will enable organizations to use leading data management capabilities for their largest AI projects.
Integrated AI Data Pipeline
NetApp has also detailed developments that see it offer a directly integrated AI data pipeline, allowing ONTAP to make unstructured data AI-ready automatically and iteratively. The pipeline captures incremental changes to a customer’s data set, performs policy-driven data classification and anonymization, generates highly compressible vector embeddings and stores them in a vector database integrated with the ONTAP data model, ready for high-scale, low-latency semantic search and retrieval-augmented generation (RAG) inferencing.
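The stages described above (change capture, anonymization, embedding, vector indexing, semantic search) can be sketched end to end in miniature. Everything below is a toy stand-in of our own: the hashing “embedding” substitutes for a real model, the regex for a real classification policy, and the class names are invented, not ONTAP interfaces:

```python
import math
import re

# Toy anonymization policy: redact anything that looks like an email.
EMAIL = re.compile(r"\S+@\S+")

def anonymize(text: str) -> str:
    """Policy-driven anonymization stage (illustrative only)."""
    return EMAIL.sub("[REDACTED]", text)

def embed(text: str, dims: int = 64) -> list:
    """Toy hashed bag-of-words embedding, L2-normalized."""
    vec = [0.0] * dims
    for token in text.lower().split():
        vec[hash(token) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class VectorStore:
    """Minimal in-memory vector index supporting cosine search."""
    def __init__(self):
        self.items = []  # list of (doc_id, vector, text)

    def upsert(self, doc_id: str, text: str) -> None:
        """Anonymize, embed and (re)index one document incrementally."""
        clean = anonymize(text)
        self.items = [i for i in self.items if i[0] != doc_id]
        self.items.append((doc_id, embed(clean), clean))

    def search(self, query: str, k: int = 3) -> list:
        """Return the top-k documents by cosine similarity to the query."""
        q = embed(query)
        scored = sorted(self.items,
                        key=lambda i: -sum(a * b for a, b in zip(q, i[1])))
        return [(doc_id, text) for doc_id, vec, text in scored[:k]]
```

In a production RAG pipeline the embedding model, the classification rules and the vector database would each be serious components in their own right; the sketch only shows how the stages hand off to one another.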
Another key element of the mix here is a disaggregated storage architecture. NetApp says that this enables full sharing of the “storage backend” to maximize utilization of network and flash speeds and achieve lower infrastructure cost. For storage teams who want to improve performance while economizing on rack space and power for very high-scale, compute-intensive AI workloads like LLM training, this is exciting stuff. The company has said that, going forward, this architecture will be an integral part of NetApp ONTAP, so customers will get the benefit of a disaggregated storage architecture while still maintaining ONTAP’s resiliency, data management, security and governance features.
“Implementing AI requires a collection of finely tuned pieces of technology infrastructure to work together perfectly,” said Mike Leone, practice director, Data Analytics & AI at Enterprise Strategy Group, part of TechTarget. “NetApp delivers robust storage and data management capabilities to help customers run and support their AI data pipelines. But storage is one piece of the puzzle. By collaborating with other industry-leading vendors in the AI infrastructure space, NetApp customers can be confident that their compute, networking, storage, and AI software solutions will integrate seamlessly to drive AI innovation.”
In additional news related to its wider platform and operating system evolution, NetApp has also developed what it says is an “advanced” generative AI data vision for integrated solutions that combine Nvidia AI software with its data infrastructure. This is all about the company providing accelerated computing alongside its data infrastructure for enterprise-level RAG to power agentic AI applications.
What Is Agentic AI?
For additional clarity here, agentic AI applications can be defined as a still-emerging class of software intelligence that combines AI processes, models, algorithmic approaches and techniques into a system designed to analyze data set targets or goals and then execute actions to deliver on those objectives. Capable of operating with minimal human intervention, agentic AI applications may approach human-like (or at least near-human-like) levels of cognition and understanding. Being agentic, these defined parts of the new AI stack continue to learn and improve their performance with every action and interaction they are tasked with.
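That goal-analyze-act-improve loop can be made concrete with a deliberately tiny toy: an agent that pursues a numeric target, acts, observes the result and adjusts its own step size from feedback. This is entirely our own illustration of the loop, not any vendor’s architecture:

```python
class GoalSeekingAgent:
    """Toy agent: given a goal, it acts, observes and self-adjusts."""

    def __init__(self, target: float, step: float = 1.0):
        self.target = target  # the goal the agent is tasked with
        self.step = step      # how boldly it acts; adapted over time
        self.state = 0.0      # the agent's view of the world

    def act(self) -> float:
        """Execute one action: move toward the target by the step size."""
        direction = 1.0 if self.target > self.state else -1.0
        self.state += direction * self.step
        return self.state

    def learn(self) -> None:
        """Crude self-improvement: halve the step once it overshoots."""
        if abs(self.target - self.state) < self.step:
            self.step /= 2.0

    def run(self, max_steps: int = 100, tol: float = 0.01) -> float:
        """The agentic loop: act, then learn, until the goal is met."""
        for _ in range(max_steps):
            if abs(self.target - self.state) <= tol:
                break
            self.act()
            self.learn()
        return self.state
```

Real agentic systems swap the numeric state for language-model reasoning over tools and data, but the loop structure (goal in, actions out, feedback folded back into behavior) is the same shape.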
Weaving all these moving parts together will bring new capabilities to the NetApp ONTAP unified storage operating system, which can make use of a new NetApp global metadata namespace to unify data stores for the tens of thousands of enterprises that trust NetApp with their data infrastructure. Harv Bhela, chief product officer at NetApp, says that this opens up exabytes of enterprise data stored across clouds and on-premises infrastructure to drive RAG capabilities that can put enterprises’ entire data estate to work, accelerating next-generation agentic AI applications.
NetApp has clearly moved a quantum leap or two on from the days when we thought of it as a pure storage (competitor company name pun not intended) player. It now has a well-substantiated set of cloud services and intelligence capabilities that go beyond plain old block and file storage (which the company still does, as a bread-and-butter staple, but in bigger slices these days) for the modern cloud era. What comes next from NetApp is hard to guess… it may emerge from its next chrysalis as the low-code quantum-compliant intelligent data infrastructure company, but let’s not get ahead of ourselves just yet.