
Hewlett Packard Enterprise (HPE) will make available a managed instance of the HPE Machine Learning Development Environment for building and deploying artificial intelligence (AI) models on public clouds in the first quarter of 2024.
Announced at the HPE Discover Barcelona 2023 event, the first managed instance of the HPE Machine Learning Development Environment will run on the Amazon Web Services (AWS) cloud. The platform is based on the open source distributed training platform Determined AI, and HPE already makes a managed instance of it available via the HPE GreenLake service for on-premises IT environments.
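For readers unfamiliar with Determined, the sketch below shows roughly what a training script instrumented with the platform's open source Core API looks like. This is a minimal, hypothetical example: the placeholder loss computation stands in for a real training step, and it is not HPE's implementation.

```python
# Minimal sketch of a training script instrumented with the open source
# Determined Core API (https://github.com/determined-ai/determined).
# The "training step" below is a placeholder; a real script would run an
# actual framework such as PyTorch or TensorFlow inside the loop.
import determined as det


def main() -> None:
    # det.core.init() connects the script to the Determined master that
    # scheduled it, whether that cluster runs on premises or in a cloud
    # such as AWS.
    with det.core.init() as core_context:
        for steps_completed in range(1, 101):
            loss = 1.0 / steps_completed  # placeholder for a real training step

            # Report metrics so they appear in the Determined web UI.
            core_context.train.report_training_metrics(
                steps_completed=steps_completed,
                metrics={"loss": loss},
            )

            # Cooperate with the scheduler: exit cleanly if the cluster
            # preempts this job, e.g., to free GPUs for a higher-priority run.
            if core_context.preempt.should_preempt():
                break


if __name__ == "__main__":
    main()
```

In practice, such a script is packaged with an experiment configuration file and submitted to a Determined cluster with the `det experiment create` command; the managed offerings described here handle provisioning and operating that cluster.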
In addition, HPE has integrated its HPE Ezmeral Unified Analytics Software platform, based on containers, with the HPE Machine Learning Development Environment.
HPE has also extended its alliance with NVIDIA to provide HPE ProLiant servers and HPE Cray Supercomputers pre-configured with HPE Machine Learning Development Environment Software, HPE Ezmeral Software and the NVIDIA AI Enterprise software suite, including the NVIDIA NeMo framework for building and deploying AI models.
Finally, HPE is making available AI consulting services to help organizations build and deploy AI applications.
HPE is making a case for an AI-native architecture that is designed to both scale up and run workloads across hybrid IT environments, says Evan Sparks, chief product officer for AI at HPE. “AI requires a fundamentally different architecture,” he says. “These applications are extremely computationally intensive.”
In effect, HPE is contending that the existing stack of infrastructure and software used to run legacy applications does not lend itself to training and deploying AI models. IT teams will instead need to add platforms specifically optimized to run AI workloads.
Naturally, many of the decisions organizations make about building and deploying AI models will be driven by where data is currently located and the degree to which privacy and sovereignty requirements dictate that models be trained and deployed in specific locations. HPE is betting that, as a result, many organizations will take advantage of compute resources in the cloud to train AI models in a way that ensures sensitive data remains stored in an on-premises or private cloud computing environment.
It’s still early days as far as operationalizing AI models in the enterprise is concerned, but it takes a team of data scientists, developers, software engineers and cybersecurity professionals to successfully build and deploy AI models. Each organization will need to determine to what degree it wants to manage the building and deployment of AI models itself versus relying on providers of managed services such as HPE. In some instances, organizations might, for example, rely on HPE to manage the underlying infrastructure while focusing their own efforts on extending, customizing or building large language models (LLMs) that will be embedded into new or existing applications.
One way or another, AI models will eventually be pervasively applied across the enterprise. The challenge now, given the costs involved, is determining how best to get started at a time when the number of potential use cases far exceeds available resources.