Dell Technologies is betting heavily that artificial intelligence (AI) applications will soon be distributed across hybrid cloud computing platforms.

Dell today added a Dell Validated Design for Generative AI with NVIDIA for Model Customization, an offering that packages Dell servers configured with graphics processing units (GPUs) from NVIDIA alongside pre-trained large language models (LLMs). In addition, Dell is partnering with Starburst to make available, in the first half of next year, a data lakehouse capable of spanning a hybrid cloud computing environment.

At the same time, Dell is expanding the professional and managed services it provides to include a generative AI practice focused on strategy development, data preparation, platforms and operational efficiencies.

A survey of 500 IT decision makers that Dell published earlier this week finds more than three-quarters (76%) are increasing their budgets because of AI projects that have been launched, with 44% of respondents in the early to middle stages of their generative AI journey. More than a third (38%) are pursuing a hybrid cloud computing strategy, compared to 16% that prefer to purchase cloud services and 14% that run models on inference engines in on-premises IT environments. Another 21% would prefer to train their models using their own data.

However, the survey also notes that 37% are somewhat to very hesitant to implement generative AI. That hesitancy primarily comes down to concerns about security, complexity and governance, says Carol Wilder, vice president for cross portfolio software and solutions at Dell Technologies.

The best way to overcome that hesitancy is to bring AI to where the data already resides, she adds. “It requires a broad data strategy,” says Wilder.

Most organizations will eventually employ a mix of LLMs that are accessed as services, customized, or built from scratch for domains unique to their business.

Dell previously revealed it is working with NVIDIA on a Project Helix initiative that aims to simplify the construction of the LLMs that drive generative AI models within an on-premises IT environment. The biggest issue organizations face today is acquiring the level of expertise required to successfully implement a generative AI strategy.

In fact, as organizations investigate generative AI, many are discovering that building a domain-specific LLM is not nearly as complex an endeavor as building a general-purpose generative AI platform such as ChatGPT, in part because far less data is required. Given the sensitivity of that data, Dell is making a case for doing more of that development work in an on-premises environment rather than in the cloud. The less data used to build an LLM, the more accurate it tends to be, because there is not nearly as much conflicting data and, therefore, a lower probability that the generative AI platform will experience “hallucinations.”

Regardless of approach, however, more organizations will soon realize it’s a lot easier to bring AI models to where the data resides than it is to move terabytes of data that, as they continue to expand, will soon be measured in petabytes.
