ServiceNow today made good on a promise to add support for multiple large language models (LLMs) to provide generative artificial intelligence (AI) capabilities across its Now software-as-a-service (SaaS) platform.

Previously, ServiceNow incorporated generative AI capabilities using LLMs provided by OpenAI and Microsoft. Additional generative AI capabilities are now being added using LLMs provided by Hugging Face and NVIDIA that ServiceNow has trained to address workflows ranging from IT service management (ITSM) and human resources to customer service.

In addition, ServiceNow has added Now Assist for Creator, a tool whose underlying LLM was trained on code provided by ServiceNow, enabling it to convert natural language text into code suggestions that are more reliable and of higher quality than what a general-purpose LLM is likely to surface.

These domain-specific LLMs are plugged into a controller capability that has been added to the Now platform, which ServiceNow provides to host a range of SaaS applications. That controller makes it possible for organizations to swap LLMs, including ones they may develop themselves, in and out of the platform, says Jon Sigler, senior vice president for the Now platform at ServiceNow. “When it comes to domain-specific knowledge that we need for ServiceNow, we build out our own LLMs,” says Sigler.
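ServiceNow has not published the internals of that controller, but the swap-in/swap-out pattern Sigler describes can be sketched generically. Everything in the snippet below — the class names, methods and workflow keys — is a hypothetical illustration of the routing pattern, not ServiceNow's actual API:

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Hypothetical interface: anything that turns a prompt into text."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class DomainSpecificLLM(LLMProvider):
    """Stand-in for a model fine-tuned on one domain (ITSM, HR, etc.)."""

    def __init__(self, domain: str):
        self.domain = domain

    def complete(self, prompt: str) -> str:
        # A real model call would go here; we return a tagged stub.
        return f"[{self.domain}] response to: {prompt}"


class LLMController:
    """Routes each workflow to whichever registered model serves it."""

    def __init__(self) -> None:
        self._models: dict[str, LLMProvider] = {}

    def register(self, workflow: str, model: LLMProvider) -> None:
        # Re-registering a workflow swaps its model in place,
        # including models an organization develops itself.
        self._models[workflow] = model

    def complete(self, workflow: str, prompt: str) -> str:
        return self._models[workflow].complete(prompt)


controller = LLMController()
controller.register("itsm", DomainSpecificLLM("ITSM"))
print(controller.complete("itsm", "Summarize incident INC0012345"))
```

The point of the pattern is that workflows are coded against the interface, not a specific vendor's model, so swapping Hugging Face, NVIDIA or an in-house LLM behind a given workflow requires only a new registration.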

Most organizations are going to wind up employing a mix of domain-specific and general-purpose LLMs for different use cases. A domain-specific LLM, however, is likely to prove more trustworthy because the corpus of data used to train it has been more closely vetted. General-purpose LLMs are typically trained on massive volumes of data of comparatively uneven quality. As a result, an AI model based on a general-purpose LLM is more likely to surface incorrect suggestions and recommendations, otherwise known as hallucinations.

Every provider of SaaS applications is now naturally racing to embed generative AI capabilities into their respective platforms. In fact, most end users are likely to first experience the productivity gains enabled by generative AI via a SaaS application that has embedded these capabilities.

Less clear is the degree to which organizations might decide to swap one SaaS platform out for another based on the generative AI capabilities provided. There is little doubt that generative AI will provide a major productivity boon, but at this point every SaaS application provider is well down the path to providing these types of capabilities. The only thing that remains to be seen is the extent and pace at which those capabilities will be provided.

In the meantime, business and IT leaders should make sure employees understand the capabilities and limitations of the various types of LLMs that might be embedded within these applications. A hallucination that inadvertently winds up corrupting a workflow could have a devastating impact. In the age of generative AI, there’s no substitute for a little training, which could go a very long way toward ensuring the best possible outcomes.