One of the issues that organizations of all sizes are struggling with today is keeping control over the data used to train artificial intelligence (AI) models. Every organization wants to leverage AI infrastructure resources available in the cloud, but it doesn’t necessarily follow that they want their data to leave IT environments they control.

Nutanix today launched Nutanix GPT-in-a-Box, an offering that seeks to provide a middle ground: organizations use the IT infrastructure Nutanix provides to build AI models that leverage, for example, a large language model (LLM) running in the public cloud. It’s a hybrid cloud computing approach that enables IT teams to retain control of the data used to customize an LLM for use in a generative AI application, says Nutanix CTO Manosiz Bhattacharyya.

The offering also provides the tools IT teams need to deploy the inference engines required to run AI models in, for example, edge computing environments, adds Bhattacharyya.

Customizing an existing LLM is, in many cases, becoming preferable to building one from scratch, because the latter approach requires a depth of data science expertise that is difficult to find. In contrast, a small team can download an LLM and integrate it with a vector database, enabling the model to be applied to a specific use case using fresh data rather than relying solely on the outdated data that was used to train the original LLM.
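
That pattern is commonly known as retrieval-augmented generation (RAG): fresh, organization-controlled documents are embedded into a vector store, the most relevant ones are retrieved at query time and the result is folded into the LLM’s prompt. The sketch below is a minimal illustration of the flow, not Nutanix’s implementation; the embed() function is a hypothetical stand-in for a real embedding model, the documents are invented and a production system would send the assembled prompt to an LLM rather than printing it.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in: a real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(384)
    return vec / np.linalg.norm(vec)

# 1. Index fresh, organization-controlled documents in a vector store.
documents = [
    "Q3 support tickets rose 12% after the firmware update.",  # invented data
    "The on-prem cluster serves the chat model behind an internal gateway.",
]
index = np.stack([embed(doc) for doc in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    # 2. Cosine-similarity search (vectors are unit-normalized above).
    scores = index @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    # 3. Fold the retrieved context into the prompt handed to the LLM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What happened to support tickets?"))
```

The appeal of the pattern is that the model itself never has to be retrained: only the vector store changes as new data arrives.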

The challenge organizations encounter is that setting up the required infrastructure is a complex task. Nutanix is addressing that issue by providing all the software required, including open source deep learning and MLOps frameworks, an inference server and a curated set of large language models such as Llama2, Falcon GPT and MosaicML, within a pre-configured platform, says Bhattacharyya.
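
As a rough illustration of what running a curated open model looks like in practice, the sketch below loads an open LLM with the Hugging Face transformers library. This is an assumption made for illustration only: the article does not say which frameworks Nutanix bundles, and the model ID shown is a real but gated Hugging Face repository that requires accepting Meta’s license.

```python
# A minimal sketch, assuming the Hugging Face transformers library and a GPU;
# this is not Nutanix's stack, just one common way to run an open LLM locally.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # gated repo; license acceptance required
    device_map="auto",  # spread model layers across whatever GPUs are available
)

result = generator(
    "Explain in one sentence why data locality matters for AI training:",
    max_new_tokens=64,
    do_sample=False,  # greedy decoding, so the demo output is repeatable
)
print(result[0]["generated_text"])
```

Once a model like this runs on infrastructure the organization controls, the data used to customize it never has to leave that environment.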

In addition, Bhattacharyya contends Nutanix GPT-in-a-Box provides a more cost-effective way to consume the graphics processing units (GPUs) needed to run vector databases. Accessing GPUs via a cloud service will, over time, become prohibitively expensive, he notes.

Nutanix has developed AI expertise by participating in the MLCommons advisory board, co-founding and providing technical leadership in defining the ML Storage Benchmarks and Medicine Benchmarks, and serving as a co-chair of the Kubeflow (MLOps) Training and AutoML working groups at the Cloud Native Computing Foundation (CNCF).

As a result, the company has been able to create an opinionated AI software stack that can be extended to add support for additional LLMs, says Bhattacharyya.

It’s not clear to what degree AI models will be built using public clouds versus some type of private cloud, running either within a public cloud or in an on-premises IT environment. But given the sensitivity of the data being used, many organizations will be looking to maintain as much control as possible.

Regardless of the approach, it’s becoming apparent that MLOps and DevSecOps best practices will need to become much more aligned as the number of AI models being built and deployed steadily increases. The challenge now, however, is defining a set of standards that the respective teams can agree to maintain.
