Hewlett Packard Enterprise (HPE), in collaboration with NVIDIA, today launched a private cloud platform for building and deploying artificial intelligence (AI) applications in on-premises IT environments.
Announced at the HPE Discover 2024 conference, the HPE Private Cloud AI platform is an extension of the suite of managed HPE GreenLake services that increasingly take advantage of an OpsRamp AI copilot to automate the management of infrastructure. The OpsRamp AI copilot is based on an AI for IT operations platform that HPE gained last year via its acquisition of OpsRamp; it will soon be extended with support for application programming interfaces (APIs) from CrowdStrike to add a unified service map view of security.
Available in the fall, the HPE Private Cloud AI platform is based on the NVIDIA AI Enterprise software platform, which includes NVIDIA NIM inference microservices. On top of that base, HPE AI Essentials software provides a set of curated AI and data foundation tools, including a data lake and observability capabilities, that can be deployed and managed via a unified HPE GreenLake control plane.
HPE Private Cloud AI will be the first turnkey platform for building and deploying AI applications, says Fidelma Russo, HPE CTO and executive vice president and general manager of the HPE Hybrid Cloud business unit. “Three clicks and your teams are up and running,” she says.
HPE Private Cloud AI also includes NVIDIA Spectrum-X Ethernet networking, HPE GreenLake for File Storage, and HPE ProLiant servers with support for NVIDIA L40S, NVIDIA H100 NVL Tensor Core GPUs and the NVIDIA GH200 NVL2 platform.
The HPE Private Cloud AI platform is at the core of the NVIDIA AI Computing by HPE portfolio of offerings, which will be supported by a global network of system integrators, including Deloitte, HCLTech, Infosys, TCS and Wipro, to help organizations manage complex AI workloads.
Through this initiative, HPE is also providing AI acceleration workshops to help service partners build practices and train certified IT professionals.
It’s not clear how quickly enterprise IT organizations are building and deploying their own AI applications, but most at the very least are working on proofs-of-concept. HPE is betting that in time most organizations will prefer to devote much of their resources to building and deploying AI applications versus managing the underlying infrastructure upon which they depend. In fact, one of the major challenges organizations encounter today when building AI applications is they lack the data management platforms and expertise needed to customize large language models using, for example, retrieval-augmented generation (RAG) techniques.
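The core idea behind RAG is straightforward even when the surrounding data management is not: retrieve the enterprise documents most relevant to a query, then ground the language model's prompt in that context. The minimal sketch below illustrates the pattern; the corpus, query, and word-overlap scoring are hypothetical stand-ins (a production system would use vector-similarity search over an indexed data lake and a real model call), not part of any HPE or NVIDIA product.

```python
# Minimal RAG sketch: retrieve the most relevant document, then build
# a grounded prompt for a language model. All names and data here are
# illustrative assumptions, not a real product API.

def tokenize(text: str) -> set[str]:
    """Split text into a set of lowercase words."""
    return set(text.lower().split())

def retrieve(query: str, corpus: list[str]) -> str:
    # Score each document by word overlap with the query -- a toy
    # stand-in for the vector-similarity search a real system uses.
    return max(corpus, key=lambda doc: len(tokenize(doc) & tokenize(query)))

def build_prompt(query: str, context: str) -> str:
    # Grounding the prompt in retrieved data is what lets the model
    # answer from enterprise content it was never trained on.
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

corpus = [
    "Order 1138 shipped on June 3 from the Austin warehouse.",
    "The VPN requires multi-factor authentication for all users.",
]
query = "When did order 1138 ship?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

The retrieval step is exactly where the data management gap bites: without a curated, indexed corpus of enterprise data, there is nothing useful for the model to be grounded in.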
HPE is also betting that given the sensitivity of the data being accessed by AI applications, many enterprise IT organizations will prefer to deploy these applications in an on-premises IT environment rather than in public cloud.
Regardless of approach, one thing is certain: as AI becomes more pervasive, the number of applications infused with some type of language model is set to increase at a rate likely to overwhelm many internal IT operations teams.