
Cisco this week, in collaboration with NVIDIA and VAST Data, launched the Cisco Nexus HyperFabric AI platform. It provides a single console to design, deploy, monitor and manage artificial intelligence (AI) workloads running in on-premises IT environments.

Launched at the Cisco Live! 2024 event, Cisco Nexus HyperFabric AI combines Cisco’s 800G Ethernet fabric software with NVIDIA AI Enterprise software and NIM inference microservices, along with a storage platform from VAST Data optimized for AI models. The goal is to make it simpler to network clusters running AI models across CPUs, graphics processing units (GPUs) and data processing units (DPUs) provided by NVIDIA.

Cisco and NVIDIA also jointly provide an enterprise reference design for AI based on NVIDIA MGX, a modular server architecture defined by NVIDIA.

Jonathan Davidson, executive vice president and general manager for Cisco Networking, told conference attendees that Cisco Nexus HyperFabric AI simplifies the deployment of multiple AI models running across multiple pods and clusters. “It takes everything we’ve learned about AI and simplifies it,” he said.

Cisco has been steadily adding AI capabilities to its portfolio for several years. Additional updates made this week include more AI models for the Cisco ThousandEyes network monitoring service, an AI Assistant for the Webex Suite and updates to Webex Contact Center that enable organizations to design and manage conversational self-service experiences, including an AI Assistant for contact center agents along with support for third-party virtual agents.

In addition, the company is extending Cisco Hypershield, a framework that leverages AI to improve cybersecurity, to include support for AMD Pensando data processing units (DPUs) and Intel infrastructure processing units (IPUs).

Cisco has also added AI certifications for end users and AI training for service partners to help expand the pool of available AI expertise.

Finally, Cisco revealed it has extended the reach of Motific, a software-as-a-service (SaaS) platform for governing AI models, to add support for large language models (LLMs) from Mistral. The company also disclosed it has set up a $1 billion investment fund that gives it stakes in Mistral, Cohere and Scale AI, positioning it to better tap into a market that IDC forecasts will expand to $500 billion over the next three years.

“At Cisco Live 2024, the emphasis on accelerating business transformation through infrastructure modernization was unmistakable,” says Paul Nashawaty, practice lead for application development and modernization at The Futurum Group. “These innovations and strategic partnerships in AI and hardware acceleration underscore Cisco’s dedication to equipping businesses with the tools they need to navigate and thrive in an increasingly complex digital landscape.”

It’s not clear to what degree AI models will be deployed in on-premises IT environments versus in the cloud. However, given the amount of data that still resides in local data centers, the need to optimize infrastructure in ways that make it feasible to process massive amounts of data will eventually become a pressing issue.

Of course, Cisco isn’t the only provider of infrastructure eyeing that opportunity. However, given its networking, storage and server portfolio, Cisco is making a case for streamlining the total cost of AI by relying on a single vendor capable of meeting all those requirements.