
ServiceNow announced today at its Knowledge 2024 conference that it is adding integrations with the IBM watsonx artificial intelligence (AI) platform, along with support for IBM’s Granite large language models (LLMs).

Integration with the IBM watsonx portfolio of tools will be accomplished via the ServiceNow Integration Hub, providing access to capabilities such as multilingual support, expanded summarization and virtual agents.

Support for Granite LLMs will be added to the company’s Now platform alongside existing support for LLMs from OpenAI and Microsoft, a range of open source LLMs, and LLMs that ServiceNow has developed itself.

That approach enables IT teams to mix and match LLMs as needed without becoming locked into a specific LLM at a time when advances continue to be made rapidly, says Jeremy Barnes, vice president of product for platform AI at ServiceNow.

ServiceNow does not allow customer data to be used to train any of those LLMs. Instead, the company uses multiple techniques to customize them by exposing data that enables generative AI to be applied to a wide range of workflows, says Barnes. “It’s much more about grounding the data,” he says. “We know how to prompt them.”
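ServiceNow hasn’t published the specifics of those techniques, but the “grounding” Barnes describes is commonly implemented as retrieval-augmented prompting: relevant workflow data is retrieved and injected into the prompt at inference time, so the underlying model never has to be retrained on customer data. Below is a minimal sketch of that general pattern; the names used (record_index.search, llm.complete) are hypothetical, not ServiceNow APIs.

```python
# Sketch of retrieval-augmented grounding: customer data is injected
# into the prompt at inference time rather than used to train the
# model. All names here are illustrative, not ServiceNow APIs.

def answer_with_grounding(question: str, llm, record_index) -> str:
    # Pull the workflow records most relevant to the question.
    records = record_index.search(question, top_k=3)
    context = "\n".join(f"- {r.summary}" for r in records)

    # Constrain the model to the retrieved context rather than its
    # training data -- this is what "grounding" refers to.
    prompt = (
        "Answer using only the workflow records below. "
        "If the records do not contain the answer, say so.\n\n"
        f"Records:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm.complete(prompt)
```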

Each LLM is plugged into a controller capability added to the Now platform, which ServiceNow provides to host a range of software-as-a-service (SaaS) applications. That controller makes it possible for organizations to swap LLMs, including ones they may develop themselves, in and out of the platform.
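ServiceNow hasn’t described the controller’s interface, but that swap-in, swap-out behavior implies a provider abstraction: every model sits behind a common interface, and workflows call the controller rather than any specific LLM. A minimal sketch of that pattern, with hypothetical class and method names:

```python
# Sketch of a pluggable LLM controller: workflows call the controller,
# never a specific model, so providers can be swapped without touching
# workflow code. All names here are illustrative, not ServiceNow's.

from typing import Protocol

class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class LLMController:
    def __init__(self) -> None:
        self._providers: dict[str, LLMProvider] = {}
        self._default: str | None = None

    def register(self, name: str, provider: LLMProvider,
                 default: bool = False) -> None:
        self._providers[name] = provider
        if default or self._default is None:
            self._default = name

    def complete(self, prompt: str, model: str | None = None) -> str:
        # Route to the requested model, falling back to the default.
        provider = self._providers[model or self._default]
        return provider.complete(prompt)

# Swapping models then becomes a one-line change in the caller:
# controller.complete(prompt, model="granite")
```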

It’s still early days for the operationalization of AI, but as advances in generative AI continue to be made, the reasoning engines at the core of these platforms will be able to automate more complex tasks. The challenge is integrating those capabilities with the ServiceNow automation framework that organizations already rely on to automate workflows, notes Barnes.

Most organizations are going to wind up employing a mix of domain-specific and general-purpose LLMs for different use cases. A domain-specific LLM, however, is likely to prove more trustworthy because the corpus of data used to train it has been more closely vetted. General-purpose LLMs are typically trained on massive volumes of data of comparatively uneven quality. As a result, an AI model based on a general-purpose LLM is more likely to surface incorrect suggestions and recommendations, otherwise known as hallucinations.

It’s not clear what impact AI will have on existing workflows, but much of the toil required to perform routine tasks will be eliminated. The challenge will be determining which tasks can be consistently automated using generative AI technologies that are probabilistic rather than deterministic. As such, the way a task is automated may vary widely unless there are governance policies in place to minimize errors.
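One common way to impose that kind of governance is to wrap the probabilistic model in deterministic checks: the LLM proposes an action, and policy code decides whether it may run. A minimal sketch of that guardrail pattern, with hypothetical actions and rules:

```python
# Sketch of a deterministic guardrail around a probabilistic LLM:
# the model only ever proposes an action; governance rules decide
# whether it executes. Actions and rules here are hypothetical.

ALLOWED_ACTIONS = {"reset_password", "restart_service", "open_ticket"}
REQUIRES_APPROVAL = {"restart_service"}

def govern(proposed_action: str, target: str) -> str:
    # Reject anything outside the approved action catalog outright.
    if proposed_action not in ALLOWED_ACTIONS:
        return f"rejected: '{proposed_action}' is not an approved action"

    # Route riskier actions to a human instead of executing them.
    if proposed_action in REQUIRES_APPROVAL:
        return f"queued for approval: {proposed_action} on {target}"

    return f"executed: {proposed_action} on {target}"

# However the model phrases its suggestion, the same task is handled
# the same way, which keeps probabilistic output from compounding.
print(govern("restart_service", "payroll-app"))
```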

After all, the only thing worse than performing a task in a suboptimal fashion is compounding that error at the level of scale that AI enables.
