Dell Technologies extended its effort to make it simpler to build and deploy artificial intelligence (AI) models in on-premises IT environments via an alliance with Hugging Face, a provider of a hub for accessing open source AI tools and models.
Dell has previously allied with other providers of large language models (LLMs), such as Meta, to advance the same goal. Dell is now expanding the number of models its customers can access by adding a Dell portal to the online Hugging Face community platform.
Organizations that pursue an on-premises approach to generative AI don’t need to be as concerned about data sovereignty, data privacy and intellectual property issues that might arise when relying on a general-purpose generative AI cloud service such as ChatGPT.
The challenge they face is streamlining the process of infusing AI into applications, says Matt Baker, head of AI strategy for Dell. “Time to value and implementation complexity is something that we need to address,” he says.
Hugging Face, meanwhile, is partnering with Dell to provide enterprise IT organizations with more control over how they customize AI models, says Jeff Boudier, head of product and growth for Hugging Face. “We want to enable enterprises to take control of their AI destiny,” he says.
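As a rough illustration of what that access looks like in practice, and not a depiction of Dell’s portal specifically, pulling an open model from the Hugging Face Hub with the transformers library takes only a few lines. The model ID below is just an example, and gated models assume any required access approvals are already in place.

```python
# Minimal sketch: loading an open model from the Hugging Face Hub with the
# transformers library. The model ID is illustrative; any causal language
# model hosted on the Hub can be substituted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # example; gated models require Hub access approval

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Run a simple prompt through the model locally.
inputs = tokenizer("What are the benefits of on-premises AI?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model weights are downloaded and run locally, the same code works in an on-premises environment with no data leaving the corporate network.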
It’s not clear to what degree organizations will build and customize LLMs themselves versus using retrieval-augmented generation (RAG) techniques to extend them. However, as LLMs continue to evolve, it is becoming easier to build or customize them using smaller sets of vetted data. That approach generally leads to better outcomes: unlike a general-purpose LLM trained on massive amounts of data, an LLM an enterprise develops itself is less likely to be subject to the hallucinations that occur when conflicting data is referenced.
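As a loose sketch of the RAG idea, the example below (hypothetical documents, and assuming the sentence-transformers library is installed) retrieves the most relevant vetted document for a query and prepends it to the prompt, so the model answers from supplied context rather than from whatever its training data happened to contain.

```python
# Toy RAG sketch: embed a small set of vetted documents, retrieve the
# closest match for a query, and prepend it to the prompt. The documents
# and query are made up for illustration.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Our support policy covers hardware replacements within 48 hours.",
    "On-premises deployments keep customer data inside the corporate network.",
    "The finance team closes the books on the fifth business day of each month.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents)  # one embedding per document

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    q = encoder.encode([query])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "Where does customer data live in an on-prem deployment?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# `prompt` would then be sent to the LLM in place of the bare question.
print(prompt)
```

The appeal of this pattern is that the underlying model never has to be retrained: only the document store changes as the vetted data evolves.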
Most of those LLMs will also eventually have some type of derivative child relationship with a parent foundation LLM, and as each one is updated, the need to meld machine learning operations (MLOps), IT service management (ITSM) and DevOps workflows will become more pronounced. In the meantime, most organizations are at the very least experimenting with generative AI in a way that might one day result in hundreds of LLMs finding their way into production environments.
Less clear is how the teams of data scientists, data engineers, application developers and cybersecurity experts required to operationalize LLMs will be organized.
Each organization will need to determine how best to invoke various types of AI models. No one can be quite sure yet how best to apply generative AI, but there is a general concern that rivals will gain a competitive edge that might prove difficult to surmount. Organizations are also struggling to determine where it makes more sense to apply LLMs to their own data versus relying on application providers, all of which are moving to embed LLMs within their software.
One way or another, AI models are soon going to be pervasively applied across the enterprise. The challenge now is determining how best to govern them all once they are deployed.