
ServiceNow today at its Knowledge 2023 conference announced it has allied with NVIDIA to help add generative artificial intelligence (AI) to its software-as-a-service (SaaS) platform.

CJ Desai, president and chief operating officer of ServiceNow, told conference attendees that generative AI capabilities will begin to appear in the Now platform with the Utah release and continue through the Vancouver release later this year.

ServiceNow this week unveiled the ServiceNow Generative AI Controller, which is based on connectors to the general-purpose platforms developed by OpenAI and Microsoft. A Now Assist for Search tool takes advantage of the controller to surface search results from ServiceNow data through a natural language interface (NLI).

The generative AI capabilities that will be added to the Vancouver release of the Now platform will include NavAssist, a tool for using natural language to navigate the ServiceNow platform, and GlideAssist, a tool that will enable developers to use natural language to invoke no-code/low-code tools to build custom applications on the Now platform.

ServiceNow will then follow that up with a Washington, D.C. release of its platform that will add support for more domain-specific large language models to automate a much wider range of IT service management, application development and delivery tasks.

At the same time, ServiceNow intends to make it possible for organizations to build and deploy their own large language models (LLMs) on the Now platform, said Desai. “You can bring your own LLMs,” he said.

Those LLMs will prove to be a catalyst for driving further adoption of the Now platform by making it more accessible to a wide range of end users and developers. ServiceNow has previously committed to training one million individuals to manage everything from IT processes and workflows to building and deploying applications on the Now platform by the end of 2024.

NVIDIA CEO Jensen Huang told conference attendees that generative AI has become the killer application for IT infrastructure that enables parallel processing, also known as accelerated computing. In effect, generative AI is reinventing what a computer is, said Huang. “This is the biggest platform transition of our generation,” he said.

The development of LLMs is accelerating as data scientists begin to train them using smaller amounts of data. Data science teams are now leveraging the largest LLMs to build medium-sized models, which are then used to create small and even tiny models that can be employed in use cases with limited access to compute resources, he noted.
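Huang did not describe the mechanics, but the standard technique for deriving a smaller model from a larger one is knowledge distillation, in which a "student" model is trained to match the softened output distribution of a "teacher" model. The toy sketch below illustrates the core objective only; the function names and example logits are illustrative assumptions, not anything ServiceNow or NVIDIA has published.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge".
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's: the basic objective minimized during distillation.
    teacher_probs = softmax(teacher_logits, temperature)
    student_log_probs = np.log(softmax(student_logits, temperature))
    return -(teacher_probs * student_log_probs).sum(axis=-1).mean()

# Toy logits over three classes: a student that mimics the teacher's
# preferences incurs a lower loss than one that contradicts them.
teacher = np.array([[4.0, 1.0, 0.5]])
good_student = np.array([[3.8, 1.1, 0.4]])
bad_student = np.array([[0.5, 4.0, 1.0]])

print(distillation_loss(good_student, teacher)
      < distillation_loss(bad_student, teacher))  # prints True
```

In practice the student is trained on a weighted mix of this distillation loss and the ordinary hard-label loss, which is how medium-sized models can inherit much of a larger model's behavior at a fraction of the compute cost.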

The ServiceNow Research Large Language Models Lab has also partnered with Hugging Face and more than 600 other members of the open source community on the BigCode project, an open and transparent scientific AI research collaboration that champions the responsible development of LLMs for code. ServiceNow and Hugging Face recently made the 15-billion-parameter StarCoder LLM available on behalf of the BigCode project.

No one can predict with great precision what impact these capabilities will have on organizations, but the one thing that is certain is that they are about to become pervasively embedded within every application.
