
Broadcom today pledged to extend its VMware Cloud Foundation (VCF) platform to support AMD Instinct graphics processing units (GPUs), and made available a technology preview of an artificial intelligence (AI) agent trained to automate specific IT management tasks.
At the VMware Explore 2025 conference, Broadcom also announced that VCF Private AI Services, a previously separate add-on framework for running AI workloads, will become a standard element of the core VCF 9.0 license later this year.
Broadcom also revealed it will add support for the Model Context Protocol (MCP), developed by Anthropic, to make data found in VCF environments available to third-party AI agents. In addition, it is adding support for the S3 interface to VMware vSAN to provide access to object storage alongside existing support for file and block storage.
Those capabilities will make it possible for IT teams to flexibly deploy AI models across a range of AMD and NVIDIA GPUs, including the latest series of Blackwell processors and networking platforms from NVIDIA, says Prashanth Shenoy, vice president of product marketing for the VCF division at Broadcom.
IT teams will also be able to securely share access to AI models across a multi-tenant IT environment that uses namespaces to isolate workloads and ensure data privacy, he adds. VCF Private AI Services will then make it possible to monitor GPUs in addition to providing access to a Model Store, a Model Runtime, an Agent Builder, a Vector Database and Data Indexing/Retrieval services.
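The namespace-based isolation described here maps to constructs that Kubernetes, which VCF already bundles, exposes today. As a minimal sketch only (the tenant and quota names below are hypothetical illustrations, not part of any Broadcom announcement), each tenant's AI workloads could be confined to a dedicated namespace with a resource quota capping how many GPUs that tenant may request:

```yaml
# Hypothetical example: one Kubernetes namespace per tenant,
# with a ResourceQuota limiting that tenant's GPU requests.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a-ai            # hypothetical tenant namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: tenant-a-ai
spec:
  hard:
    requests.nvidia.com/gpu: "4"   # cap total GPUs requested in this namespace
```

Workloads scheduled into `tenant-a-ai` cannot collectively request more than four GPUs, and they cannot see resources in other tenants' namespaces without explicit grants, which is the isolation property the multi-tenant model relies on.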
It’s not clear when Broadcom will deliver an instance of VCF that runs on AMD GPUs, but it will support the open source ROCm framework for building AI models that AMD created as an alternative to the CUDA framework provided by NVIDIA. It will also support a range of programming models, tools, compilers, libraries and runtimes for AI applications that AMD provides.
It’s still relatively early days when it comes to deploying AI workloads in production environments based on VCF, but Broadcom is pressing its case for a platform that integrates compute, storage and networking, including Kubernetes clusters, into a single environment for running both existing and AI workloads. That approach promises to reduce the total cost of IT by eliminating the need for separate infrastructure to run AI workloads. “It enables the unification of workloads,” says Shenoy.
As more organizations start to operationalize AI, it’s only a matter of time before internal IT teams assume more responsibility for managing AI workloads. Initially, many AI applications were built by dedicated teams that included professionals with IT operations expertise. As the number of deployed AI workloads steadily increases, the need to optimize shared infrastructure resources across multiple AI workloads will inevitably result in more AI application environments being managed by traditional IT teams.
The challenge, as always, is making sure those AI workloads at any given time have access to all the infrastructure resources needed to meet performance requirements as the workloads dynamically scale both up and down.