IBM at its THINK 2024 conference today launched a platform dubbed IBM Concert that leverages generative artificial intelligence (AI) to identify, predict and fix problems using data collected from a range of platforms, repositories and DevOps workflows.

Based on watsonx, IBM Concert, when made available next month, will provide a 360-degree view of application environments, says Kareem Yusuf, senior vice president for product management and growth at IBM.

In addition, IBM is extending its portfolio of AI assistants. Starting next month, watsonx Assistant for Z will enable IT teams to manage mainframes using natural language, and watsonx Code Assistant for Z will gain a code explanation capability. IBM has also committed to making watsonx Code Assistant for Enterprise Java Applications available in October.

IBM also today made the Granite large language models it uses to build these tools generally available under an open source license. IBM is expanding its NVIDIA GPU offerings to include L4 and L40S models that can be used to run, for example, the Red Hat Enterprise Linux AI (RHEL AI) and OpenShift AI platforms that Red Hat launched last month. IBM and Red Hat also recently launched InstructLab, a framework that makes it simpler for multiple individuals to contribute to open source LLMs.

Finally, IBM also launched IBM Data Product Hub, a platform for aggregating reusable data sets, and Data Gate for watsonx, a tool for securely exposing data residing on a mainframe to an LLM. Those latter capabilities are crucial as organizations look to operationalize AI using their own data, notes Yusuf. “The fuel for AI is data,” he says.

In general, IBM is making a case for an AI portfolio based on the Granite LLMs it has been developing for several years. IBM Granite code models range from 3B to 34B parameters and come in both base and instruction-following variants. Testing by IBM on benchmarks including HumanEvalPack, HumanEvalPlus and the reasoning benchmark GSM8K showed that Granite code models perform well on code synthesis, fixing, explanation, editing and translation across most major programming languages, including Python, JavaScript, Java, Go, C++ and Rust.

The 20 billion parameter Granite base code model was used to train IBM watsonx Code Assistant (WCA) and also drives watsonx Code Assistant for Z. That model was also tuned to generate SQL code using a natural language interface.

It’s not yet clear whether any one LLM exceeds the capabilities of all others, but it is clear that some LLMs are better suited to specific use cases than others. In general, the more parameters an LLM has, the more expensive it is to support in terms of the IT infrastructure resources required. In time, most enterprise IT organizations will find themselves invoking a mix of LLMs that will be ripped and replaced as additional advances are made.

In the meantime, IT leaders will need to determine which LLM providers make it simplest to invoke multiple models without getting locked into a specific platform.
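In practice, avoiding that lock-in usually comes down to a thin abstraction layer that hides each provider's API behind a common interface, so one model can be swapped for another without touching application code. A minimal sketch in Python follows; the provider classes and their responses here are hypothetical placeholders, not real vendor SDK calls:

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface that every model backend must implement."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


# Hypothetical backends for illustration; real implementations
# would wrap a vendor SDK or REST API behind the same interface.
class GraniteProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[granite] response to: {prompt}"


class AltModelProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[alt-model] response to: {prompt}"


class LLMRouter:
    """Routes each request to a registered provider by name, so
    models can be ripped and replaced without changing callers."""

    def __init__(self) -> None:
        self._providers: dict[str, LLMProvider] = {}

    def register(self, name: str, provider: LLMProvider) -> None:
        self._providers[name] = provider

    def complete(self, name: str, prompt: str) -> str:
        return self._providers[name].complete(prompt)


# Usage: register two backends and pick one per request.
router = LLMRouter()
router.register("granite", GraniteProvider())
router.register("alt", AltModelProvider())
print(router.complete("granite", "Explain this COBOL routine"))
```

The design choice worth noting is that callers depend only on the `LLMRouter` interface; replacing a model later means registering a new provider under the same name, not rewriting application code.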