
Dell Technologies today unveiled a suite of infrastructure offerings optimized for deploying artificial intelligence (AI) workloads in on-premises IT environments managed by Dell or an internal IT team.

Launched at the Dell Technologies World conference, the Dell AI Factory includes PowerEdge servers equipped with graphics processing units (GPUs) and all-Flash PowerScale storage systems, along with updated networking infrastructure and software that can be managed using the AI for IT operations (AIOps) tools Dell has added to its portfolio to automate tasks and improve observability.

At the same time, Dell is expanding an alliance with Hugging Face through which it will make generative AI models available via the distribution platform Hugging Face manages.

Dell is also extending its alliances with Meta, to make it simpler to deploy Meta Llama 3 large language models (LLMs) in on-premises IT environments, and with NVIDIA, to provide additional servers, storage and services, including configuring AI assistants, based on the latest generation of Blackwell GPUs being developed by NVIDIA.

Finally, Dell revealed it will make available an extension of its managed APEX service that leverages generative AI capabilities to simplify the management of workloads deployed on the Microsoft Azure cloud. As part of that extension, Dell is also now offering consulting services to organizations looking to deploy instances of Microsoft Copilot tools across the enterprise.

Given the concerns organizations have about exposing sensitive data to AI models hosted in the cloud, Dell views AI as an opportunity to drive refreshes of infrastructure across on-premises IT environments. For example, Dell is previewing Project Lightning, a parallel file system optimized for AI workloads that typically process massive amounts of data.

A new Dell PowerSwitch Z9864F-ON network switch, meanwhile, makes use of the Tomahawk 5 chipset from Broadcom to reduce latency as more data moves across a distributed computing environment.

IT teams will be able to create their own templates for deploying AI applications across a multi-cloud computing environment, including bare metal servers that can be invoked via an application programming interface (API). “They can develop their own blueprints,” says Shannon Champion, vice president of product marketing for Dell.

It’s not clear to what degree AI will drive more organizations to deploy workloads in on-premises IT environments, but for as long as GPUs remain costly and hard to find, the number of AI applications that can be built and deployed will remain constrained. Even in the cloud era, organizations are asked to make six-month or longer commitments to consuming GPU resources. Given those constraints, some organizations will opt to deploy GPUs in a way that gives them more control over how those resources are used within a modern on-premises IT environment.

There are, of course, other options for building and deploying AI models. The issue is that the parallel processing capabilities of GPUs still run AI models more efficiently. Until GPUs become a commodity, the pace at which AI applications are created and deployed will remain generally slow.