Dell Technologies today became the latest major IT vendor to extend its professional services capabilities to make it simpler for organizations to embrace artificial intelligence (AI).

In addition, in collaboration with NVIDIA, Dell has defined a set of validated designs for systems spanning everything from the PCs that developers and data scientists use to build AI models to the servers that run the inference engines used to apply those models within a production environment.

That approach will also make it simpler for organizations to deploy the right storage systems needed to aggregate all the data required to train AI models, said Varun Chhabra, senior vice president of infrastructure and telecom marketing at Dell. “We’re bringing the potential of generative AI to high-value data,” he said.

Dell’s professional services will further that goal by providing everything from help with defining an AI strategy to help with implementing an AI model.

Dell previously revealed it was working with NVIDIA on a Project Helix initiative intended to simplify the construction of the large language models (LLMs) that drive generative AI within an on-premises IT environment.

It’s not yet clear how many AI models are being built in on-premises IT environments, which require high-performance servers and storage systems to process the data those models consume. However, as organizations investigate generative AI, many are discovering that building a domain-specific LLM is not nearly as complex an endeavor as building a general-purpose generative AI platform such as ChatGPT. In fact, the less data used to build an LLM, the more accurate it tends to become: With less conflicting data in the mix, there is a lower probability that a generative AI platform will experience “hallucinations.”
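In practice, building a domain-specific LLM usually means fine-tuning an existing open base model on a curated in-house corpus rather than training from scratch. The minimal sketch below shows what that can look like using the open source Hugging Face Transformers library; the base model, corpus path and output directory are illustrative assumptions, not anything Dell or NVIDIA provide as part of Project Helix.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Stand-in for any open base model; a real deployment would pick a larger one.
base_model = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Proprietary, domain-specific text, one document per line (hypothetical path).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="domain-llm",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=tokenized,
    # mlm=False makes the collator build causal language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Save the fine-tuned model and tokenizer together for local serving.
trainer.save_model("domain-llm")
tokenizer.save_pretrained("domain-llm")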

There’s little doubt that organizations of all sizes are racing to take advantage of generative AI. The issue that most are trying to resolve is whether they will need to build or extend LLMs themselves or invoke applications that have these capabilities embedded. Given the limited amount of data science expertise available today, most organizations will initially rely on applications that have these capabilities built in.

However, there will undoubtedly be use cases where organizations use their proprietary data to train AI models that are then deployed, via inference engines, in a production environment. Given the sensitivity of that data, Dell is making a case for doing more of that development work in an on-premises environment rather than in the cloud.
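To illustrate the inference side, the following hypothetical sketch loads the fine-tuned model from the local directory produced in the training sketch above and generates text entirely on-premises, so neither the model nor the proprietary data ever leaves the environment.

from transformers import pipeline

# Load the fine-tuned model and tokenizer from local storage (the
# "domain-llm" directory saved in the training sketch above).
generator = pipeline("text-generation", model="domain-llm")

# A hypothetical domain-specific prompt; in production this call would sit
# behind whatever serving layer the organization runs on-premises.
result = generator("Summarize the warranty terms for product X:", max_new_tokens=100)
print(result[0]["generated_text"])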

Where AI models are trained will, of course, have a lot to do with where the data used to train them resides. Moving large amounts of data out of an on-premises IT environment both adds cost and introduces additional security issues.

Regardless of the preferred approach, however, most legacy systems were not designed to process the amount of data AI applications require. One way or another, a lot more modern IT infrastructure will be required.
