
Hewlett Packard Enterprise (HPE) today extended its services portfolio at its HPE Discover 2023 conference to include large language models (LLMs) that it will make available via supercomputers it provisions and manages on behalf of customers.

Justin Hotard, executive vice president and general manager for high-performance computing (HPC), artificial intelligence (AI) and HPE Labs at HPE, said the HPE GreenLake for Large Language Models service will provide organizations with access to supercomputers configured with thousands of graphics processing units (GPUs) alongside traditional CPUs, housed in co-location facilities made available by HPE partners such as QScale. Those supercomputer systems will become available in North America later this year, with additional facilities opening in Europe next year.

The first LLM provider to partner with HPE is Aleph Alpha, a startup that has created Luminous, an LLM that organizations can fine-tune using their own data.
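For readers curious what that fine-tuning step typically involves, the sketch below shows a generic causal-language-model fine-tuning loop built with the open source Hugging Face Transformers library. It is illustrative only: the article does not describe Aleph Alpha's actual tooling, and the model name and training file used here are placeholders.

```python
# Illustrative sketch only: a generic fine-tuning loop using Hugging Face
# Transformers. This is NOT Aleph Alpha's or HPE's actual tooling; the model
# name and data file below are placeholder assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "gpt2"  # placeholder base model; swap in one you are licensed to tune

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Load proprietary text: here, a local file with one training example per line.
dataset = load_dataset("text", data_files={"train": "company_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False produces standard next-token (causal) language-modeling labels.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)

trainer.train()
trainer.save_model("finetuned-model")
```

In a managed supercomputing setting such as HPE GreenLake, the same kind of job would be scheduled across many GPU nodes rather than run on a single machine.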

AI in general is at an inflection point and will have as profound an impact on how IT is consumed and managed as Web 1.0 did, Hotard noted. The HPE GreenLake service will enable more organizations to take advantage of multi-tenant supercomputer instances to train AI models without having to acquire the underlying systems, he added.

In effect, HPE is aiming to democratize access to supercomputers at a time when many organizations are still determining how comfortable they are training LLMs in a public cloud versus through a managed HPE service. Both approaches require organizations to share infrastructure, but HPE is making the case that its approach, at least in theory, gives customers greater control over where the data used to train AI models is stored. That control will be especially critical as scrutiny of how AI models are trained, for example to prevent bias, becomes a major compliance requirement.

At this point, just about every organization is going to be applying AI across a wide range of processes. The bulk of those AI capabilities will likely be consumed via applications provided as a service, but there will be custom applications that organizations need to infuse with AI to differentiate themselves. In fact, one reason so many organizations have invested in data science teams is to achieve that goal. The issue, of course, is that most of those efforts have met with mixed success. Generative AI promises a more consistent return on investment than previous machine learning projects have delivered.

Of course, that doesn't mean machine learning algorithms will fade away, but organizations will need to determine when it makes the most sense to apply them alongside LLMs and other classes of AI technologies.

AI is the future; the only issue, as William Gibson once observed of the future itself, is that it is unevenly distributed. The pace at which AI is being infused into every business process has accelerated greatly, however, so organizations that fail to make the right investments today will almost certainly be left behind.
