
As usage of artificial intelligence (AI) continues to rapidly evolve, it’s becoming apparent that most organizations will soon find themselves regularly invoking multiple types of AI models. In some cases, those models might be based on a general-purpose AI model such as the ones created by OpenAI. In a growing number of instances, AI models will be based on multiple large language models (LLMs), each trained using data from a specific domain.

The challenge is that as organizations come to rely on multiple AI models, orchestrating them becomes more difficult. Looking to address that issue, Domo has created an AI framework that includes an AI Service Layer, which provides a higher level of abstraction for managing models built using both LLMs and previous generations of machine learning algorithms.
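Domo has not published the internals of its AI Service Layer, but the general pattern such a layer implements can be sketched: every model, whether an LLM or a classical machine learning model, is registered behind a common interface so calling code never binds to a specific vendor or runtime. The Python sketch below is illustrative only; all class names, model names and behaviors are hypothetical assumptions, not Domo’s implementation.

```python
# Hypothetical sketch of a model-agnostic service layer: every model,
# whether an LLM or a classical ML estimator, sits behind the same
# interface so calling code never binds to a specific vendor.
from abc import ABC, abstractmethod
from typing import Any, Dict


class ModelAdapter(ABC):
    """Common contract for any model the service layer manages."""

    @abstractmethod
    def invoke(self, payload: Dict[str, Any]) -> Any:
        ...


class LLMAdapter(ModelAdapter):
    """Wraps a text-generation model (stubbed here; a real adapter
    would call the provider's SDK)."""

    def invoke(self, payload: Dict[str, Any]) -> Any:
        prompt = payload["prompt"]
        return f"[LLM completion for: {prompt[:40]}]"


class ForecastAdapter(ModelAdapter):
    """Wraps a classical ML model, e.g. a regression built via AutoML."""

    def invoke(self, payload: Dict[str, Any]) -> Any:
        # Stand-in for model.predict(features).
        return sum(payload["features"]) * 1.07


class AIServiceLayer:
    """Single entry point: callers name a model, not a vendor or runtime."""

    def __init__(self) -> None:
        self._registry: Dict[str, ModelAdapter] = {}

    def register(self, name: str, adapter: ModelAdapter) -> None:
        self._registry[name] = adapter

    def invoke(self, name: str, payload: Dict[str, Any]) -> Any:
        return self._registry[name].invoke(payload)


layer = AIServiceLayer()
layer.register("summarizer", LLMAdapter())
layer.register("revenue_forecast", ForecastAdapter())

print(layer.invoke("summarizer", {"prompt": "Summarize last quarter's sales."}))
print(layer.invoke("revenue_forecast", {"features": [1200.0, 950.0, 1100.0]}))
```

Because callers address models by name through one interface, swapping one LLM for another, or for a simpler model, does not require changing application code, which is the lock-in concern Schein describes.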

The goal is to make it simpler for organizations to employ any AI model as they best see fit without finding themselves overly committed to one versus another, says Ben Schein, senior vice president of data curiosity at Domo. “No one wants to get locked in.”

The Domo.AI framework is an extension of the AutoML and AI model management capabilities that Domo already provides to make it simpler for business analysts to create their own AI models without necessarily requiring any direct help from a data science team. While those AI models might not be as complex as an LLM, they do make AI more accessible to a wider range of end users who don’t necessarily have deep data science expertise.

Domo is also working toward making it simpler to invoke AI models as code via its Code Engine Model Endpoints tool, which enables teams to write, test and distribute functions across multiple AI models. Domo also makes available a set of governance and observability tools that make it easier to maintain data integrity, adhere to security protocols and certify compliance with regulatory requirements.
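Domo’s actual Code Engine interface is not reproduced here, but the underlying idea of invoking a model endpoint as an ordinary, testable function can be sketched generically. In the hypothetical example below, the base URL, authorization header and payload shape are illustrative assumptions rather than Domo’s API.

```python
# Hypothetical sketch of invoking a model endpoint as a plain function so it
# can be unit-tested and reused across models. The URL, auth header and
# payload shape are illustrative assumptions, not Domo's actual API.
import json
import urllib.request
from typing import Any, Dict


def invoke_model_endpoint(
    base_url: str, model_name: str, payload: Dict[str, Any], api_token: str
) -> Dict[str, Any]:
    """POST a JSON payload to a named model endpoint and return its JSON reply."""
    request = urllib.request.Request(
        url=f"{base_url}/models/{model_name}/invoke",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        return json.loads(response.read().decode("utf-8"))


# The same signature works for an LLM or a classical model, so swapping
# models is a change of name, not of calling code. Example (hypothetical URL):
# result = invoke_model_endpoint(
#     "https://api.example.com", "churn_classifier",
#     {"features": {"tenure_months": 14, "monthly_spend": 62.5}},
#     api_token="...",
# )
```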


It’s not entirely clear which members of an IT team will be responsible for managing AI models as they become increasingly operationalized, but data science teams are the most likely to assume that responsibility, says Schein. Some organizations will, however, also need specialists with deeper prompt engineering expertise as they operationalize AI at deeper process levels, he adds.

Regardless of who manages AI models or how, there will undoubtedly be a need for closer collaboration between data science teams applying machine learning operations (MLOps) best practices and the DevOps teams that typically assume responsibility for building and deploying applications.

Over time, AI models will need to be retrained and updated to address everything from configuration drift to hallucinations that might occur as they are exposed to additional data sources. The issue now is finding a way to bridge a cultural divide between DevOps teams and data scientists that is already conspiring to slow down the pace at which applications infused with AI models might otherwise be deployed.
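One common way teams operationalize that retraining loop, independent of any particular vendor’s tooling, is to monitor how far recent prediction scores have drifted from the distribution seen at training time and flag the model for retraining or human review once the shift crosses a threshold. The sketch below uses the population stability index (PSI) as the drift metric; the data and the 0.2 threshold are illustrative.

```python
# Hypothetical sketch of a drift check that could gate automated retraining:
# compare the score distribution of recent predictions against a training-time
# baseline and flag the model when the shift exceeds a threshold.
import math
from typing import List


def population_stability_index(
    baseline: List[float], recent: List[float], bins: int = 10
) -> float:
    """PSI over equal-width bins of the combined score range."""
    lo = min(min(baseline), min(recent))
    hi = max(max(baseline), max(recent))
    width = (hi - lo) / bins or 1.0

    def distribution(values: List[float]) -> List[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth empty bins so the log term stays defined.
        return [max(c / len(values), 1e-4) for c in counts]

    b, r = distribution(baseline), distribution(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))


baseline_scores = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8]
recent_scores = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.2:  # a commonly cited "significant shift" threshold
    print(f"PSI={psi:.2f}: schedule retraining / human review")
```

Wiring a check like this into the same pipelines DevOps teams already run is one concrete place where the MLOps and DevOps practices mentioned above have to meet.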
