Nutanix today at its .Next 2024 conference extended its platform for running large language models (LLMs), adding the ability to manage foundation artificial intelligence (AI) models via a revamped user interface, along with integrations with Hugging Face’s distribution platform for open source LLMs and with NVIDIA’s NIM inference microservices framework and GPUDirect Storage platform.
Scheduled to be available in the second half of 2024, Nutanix GPT-in-a-Box 2.0 also makes it possible to create application programming interface (API) endpoints, manage end-user keys, implement role-based access controls and invoke auditing capabilities.
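For context, NIM-style inference services follow the OpenAI-compatible chat completions convention, so invoking a model deployed behind such an endpoint could look something like the minimal sketch below. The base URL, model name and API key here are hypothetical placeholders, not values from Nutanix or NVIDIA documentation.

```python
# Minimal sketch of calling an OpenAI-compatible inference endpoint,
# the convention NVIDIA NIM exposes. The base URL, model name and key
# below are hypothetical placeholders, not documented Nutanix values.
import requests

BASE_URL = "https://gpt-in-a-box.example.internal/v1"  # hypothetical endpoint
API_KEY = "YOUR_END_USER_KEY"  # per-user key, the basis for access control and auditing

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "llama-3-8b-instruct",  # hypothetical model name
        "messages": [{"role": "user", "content": "Summarize our open support tickets."}],
        "max_tokens": 256,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```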
In addition, Nutanix is adding support for NVIDIA L40S, H100, L40 and L4 GPUs, along with GPU platforms from Dell, Hewlett Packard Enterprise (HPE) and Lenovo. Nutanix also plans to support NX-9151 platforms based on the NVIDIA MGX reference architecture.
Finally, Nutanix launched a formal AI Partner Program to increase the number of technology service providers that have expertise deploying generative AI applications on the Nutanix Cloud Platform (NCP). Initial partners include Codeium, DataRobot, DKube, Instabase, Lamini, Neural Magic, Robust Intelligence, RunAI and UbiOps.
It’s not clear at what rate AI applications are being built and deployed, but it’s apparent that IT operations teams are assuming more responsibility for deploying them. Nutanix is making a case for a set of infrastructure services designed to enable IT teams to deploy AI models in on-premises and cloud computing environments as they see fit, depending on, for example, where graphics processing units (GPUs) might be most readily available. That’s crucial in an era when GPU supplies are expected to be constrained well into 2025, and possibly beyond.
A recent Nutanix survey found 85% of respondents work for organizations planning to increase their investments specifically to support AI, with more than a third (37%) expecting that running AI applications on their current IT infrastructure will be a significant challenge. A full 80% expect hybrid IT environments to be the most beneficial to their ability to manage applications and data.
Nutanix CEO Rajiv Ramaswami told conference attendees that AI models need to run where data resides. The challenge is making it simpler to deploy AI applications in a turnkey fashion using one-click automation enabled by Nutanix platforms, he added.
Justin Boitano, vice president of enterprise AI at NVIDIA, added that, regardless of which AI model is used, enterprise organizations will need frameworks such as NIM to access the data needed to drive AI applications. The immediate issue is making it simpler for enterprise IT organizations to get started building and deploying AI applications, he noted.
It’s still early days as far as building and deploying AI applications is concerned, but as organizations look to operationalize AI, determining which platforms to deploy these applications on is critical. After all, once successfully deployed, they will be running in production environments for many years to come on IT infrastructure that will need to scale horizontally as compute and storage requirements continue to evolve.