
Most of the artificial intelligence (AI) models being created today are trained in the cloud using massive numbers of graphics processing units (GPUs). However, as more enterprise organizations begin to build their own large language models (LLMs) to embed generative AI capabilities into business processes, more AI models will be built in on-premises IT environments.

In fact, Lenovo has committed to investing more than $1 billion to advance the development of AI. The company claims it already generated more than $2 billion in the last year from customers using its servers and storage platforms to build and deploy AI applications.

Robert Daigle, senior manager of AI for Lenovo, said many of the enterprise IT organizations building AI applications are leveraging investments in high-performance computing (HPC) platforms to build custom models.

At the Sharjah Research Technology and Innovation Park (SRTIP) in the United Arab Emirates (UAE), Lenovo is also leveraging its HPC investment to make HPC resources available to data science teams as a service.

The company has also created a Lenovo AI Discover Center of Excellence, through which it provides access to Lenovo data scientists, AI architects and engineers, along with guidance from a Lenovo Responsible AI Committee.


In total, Lenovo is involved with more than 150 AI solutions created in partnership with more than 45 independent software vendors (ISVs), noted Daigle.

Of course, the level of infrastructure investment required to build AI models is not trivial. One of the reasons so many AI models are built in the cloud is that the efforts are speculative: No one knows with any great certainty whether an AI project will pan out as hoped. However, as more organizations gain confidence in their ability to build AI models, an on-premises approach becomes more economical. Massive amounts of data are required to train most AI models, and much of that data in the enterprise is sensitive. There is already a host of compliance issues that often make training AI models in the cloud problematic for many organizations.
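To make that economic argument concrete, a simple break-even calculation compares on-demand cloud GPU rates against the amortized cost of owned hardware. Every figure in the sketch below is a hypothetical placeholder chosen for illustration, not a number from Lenovo or this article.

```python
# A purely illustrative break-even sketch; all figures are hypothetical
# placeholders, not numbers from Lenovo or this article.
CLOUD_GPU_HOURLY = 4.00       # $/GPU-hour, assumed on-demand cloud rate
ONPREM_GPU_CAPEX = 35_000.00  # $ per on-prem GPU slot, assumed purchase cost
ONPREM_HOURLY_OPEX = 0.60     # $/GPU-hour for power/cooling/ops, assumed

def breakeven_hours() -> float:
    """GPU-hours at which owning hardware becomes cheaper than renting it."""
    return ONPREM_GPU_CAPEX / (CLOUD_GPU_HOURLY - ONPREM_HOURLY_OPEX)

hours = breakeven_hours()
print(f"Break-even at roughly {hours:,.0f} GPU-hours "
      f"({hours / 24:,.0f} days of continuous training)")
```

Under these assumed numbers, sustained training workloads cross the break-even point after roughly a year of continuous use, which is why speculative projects favor the cloud while committed, ongoing AI programs tilt toward on-premises infrastructure.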

Beyond that are all the challenges associated with deploying the AI inference engines that are needed for runtime environments. Those inference engines will increasingly be deployed at the network edge, where data is being created and consumed in real time. Lenovo is betting that a significant amount of that infrastructure will be built by it rather than by competitors. “We have a comprehensive AI portfolio,” said Daigle.
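What deploying an inference engine at the edge looks like in practice can be sketched in a few lines. The example below uses ONNX Runtime purely as an illustration of the pattern; the model file, input shape and sample data are all hypothetical, and nothing here describes Lenovo's actual stack.

```python
# A minimal sketch of edge inference, assuming a model already exported
# to ONNX. The model path and input shape are hypothetical.
import numpy as np
import onnxruntime as ort

# Load the model once at startup; on edge hardware with no accelerator,
# the CPU execution provider is the safe default.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def infer(sample: np.ndarray) -> np.ndarray:
    """Run a single inference pass on locally produced data."""
    outputs = session.run(None, {input_name: sample.astype(np.float32)})
    return outputs[0]

# Placeholder for a batch of sensor readings created at the edge.
batch = np.random.rand(1, 16).astype(np.float32)
print(infer(batch))
```

The design point the sketch captures is that the data never leaves the device: the model is scored where the readings are produced, which is exactly why inference infrastructure is migrating to the edge.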

Regardless of which vendor ultimately provides that infrastructure, the amount of it required is going to be substantial. In fact, one of the major concerns when it comes to AI is how much carbon might ultimately be emitted processing all the data required to build and run AI models. Lenovo is trying to address those concerns via the Lenovo Neptune direct water-cooling technology it has developed to reduce power consumption while increasing performance.

It’s still early days as far as AI in the enterprise is concerned, but one thing should be clear: AI will soon be running, in one form or another, everywhere.
