One of the major challenges organizations encounter as they look to embrace artificial intelligence (AI) is simply determining where to start. To help organizations navigate the plethora of AI options available today, Domino Data Lab has added an AI Project Hub that provides access to customizable reference projects.
The goal is to reduce the time and effort required to start building AI applications at a time when the number of tools that would otherwise need to be mastered is overwhelming, says Ramanan Balakrishnan, vice president of product marketing for Domino Data Lab. More broadly, the aim is to shorten the time needed to derive a return on AI investments that represent a significant expense for organizations, he adds.
Templates found in the AI Project Hub pre-package models with environments, source code, data, infrastructure and best practices for natural language processing, computer vision, generative AI and other use cases. They draw on foundation models from Amazon Web Services (AWS), Hugging Face, OpenAI, Meta and others, and include templates based on the NVIDIA NeMo framework and other NVIDIA AI software for building, customizing and deploying generative AI applications anywhere.
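As a concrete illustration, a generative AI template of the kind described might wrap a Hugging Face starting point like the minimal sketch below; the model and prompt here are assumptions for illustration, not Domino's actual template code.

```python
# Illustrative only: a minimal Hugging Face text-generation starting
# point of the kind a generative AI template might wrap. The model
# (gpt2) and prompt are placeholder assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("AI adoption in the enterprise", max_new_tokens=30)
print(result[0]["generated_text"])
```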
Additionally, there are templates to enable organizations to build machine learning models for classic regression and classification tasks.
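By way of illustration, the sketch below shows the sort of classic classification task such a template packages, using scikit-learn and a built-in dataset as stand-ins for an organization's own code and data; it is not Domino's template code.

```python
# Illustrative only: a minimal scikit-learn classification workflow of
# the kind these templates are described as packaging.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A built-in dataset stands in for an organization's own data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a classic classification model.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate on held-out data.
print(f"Accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```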
Domino Data Lab is also inviting customers and partners to contribute templates, with AWS, Fiddler AI, Deci.ai, Artefact and KSM Technology Partners among the first contributors.
Finally, the company is providing governed access to data sources such as Databricks clusters, IBM Db2 and Netezza databases, and SAP HANA databases to streamline the movement of data in a way that is consistent with a defined set of policies, in addition to providing a Data Audit Logging tool.
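Domino's own governed-access and audit-logging APIs are not shown here, but the general pattern is straightforward: record who ran which query, and when, before touching the data. The sketch below illustrates that pattern using only Python's standard library, with sqlite3 standing in for a Databricks, Db2, Netezza or SAP HANA connection.

```python
# Illustrative pattern only, not Domino's Data Audit Logging tool:
# every query against a data source is logged with user and timestamp.
import logging
import sqlite3
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("data_audit")

def audited_query(conn, user, sql, params=()):
    """Run a query, recording who ran what and when."""
    audit_log.info(
        "user=%s time=%s query=%s",
        user, datetime.now(timezone.utc).isoformat(), sql,
    )
    return conn.execute(sql, params).fetchall()

# sqlite3 stands in for an enterprise data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.execute("INSERT INTO sales VALUES ('EMEA', 1250.0)")

rows = audited_query(conn, "analyst@example.com",
                     "SELECT region, amount FROM sales")
print(rows)
```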
The challenge organizations face today is that they need to mobilize a small army of data scientists, data engineers, application developers and cybersecurity professionals to safely build and deploy AI models. The AI Project Hub is designed to eliminate much of that toil by making the starting point for building an AI application a series of reference implementations rather than, for example, a Jupyter notebook in which data scientists write code and integrate disparate components by hand.
It’s too early to say what might become the preferred level of abstraction for building AI applications, but given the chronic shortage of skills, most organizations are naturally looking for the path of least resistance. Most business and IT leaders are keenly aware there is now a race to embed AI into processes to not only gain a competitive advantage but also simply keep pace with rivals making similar investments.
Ultimately, however, it’s not going to be a question of whether organizations will find themselves orchestrating the management of AI models but rather how many. In fact, AI will in time evolve into the management of multiple models as organizations increasingly realize that each use case requires a different approach, says Balakrishnan. “It will be a system of systems,” he adds.
In the meantime, most organizations will continue to experiment with AI. There are, of course, plenty of instances where a general-purpose LLM, such as the one embedded in ChatGPT, can be extended to automate a task, but there will come a time when most organizations want to customize or build an LLM using their own data. The issue is finding a way to jumpstart that process using data and code that have been vetted to reduce the hallucinations that otherwise occur when relying on models trained on data of varying quality.
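One common way to ground a model in an organization's own vetted data is retrieval-augmented generation: retrieving curated internal documents and building the model's prompt around them. The minimal sketch below illustrates the retrieval half; the documents are placeholders, and the final call to an LLM (whether OpenAI, NeMo or an in-house model) is left to the reader's stack.

```python
# A minimal retrieval-augmented sketch: grounding an LLM prompt in
# vetted internal documents to reduce hallucinations. The documents
# are placeholders, not real company data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Vetted internal documents standing in for curated company data.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available weekdays from 9am to 5pm Eastern.",
    "Enterprise plans include a dedicated account manager.",
]

question = "How long do customers have to return a product?"

# Rank the documents by similarity to the question.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)
best_doc = documents[scores.argmax()]

# Assemble a grounded prompt; the actual LLM call is left to the
# reader's chosen model and API.
prompt = (
    f"Answer using only this vetted context:\n{best_doc}\n\n"
    f"Question: {question}"
)
print(prompt)
```

Retrieval is only one way to tie a model to vetted data; fine-tuning a foundation model on curated internal data is another, and both benefit from starting with reference code rather than a blank notebook.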