SAP today added generative artificial intelligence (AI) capabilities to the low-code tools it makes available to build Java and JavaScript applications designed to extend the core SAP application portfolio across multiple cloud computing environments.

Announced at the SAP TechEd conference, these SAP Build Code tools extend the generative AI capabilities that SAP has been making available via Joule, a co-pilot the company developed that is specifically trained using the SAP data model to automatically create code. Previously, SAP made available low-code tools that were optimized for the cloud platforms it provides.

SAP also unfurled AI Foundation, a central repository of tools that developers can use to build AI extensions for applications on the SAP Business Technology Platform (BTP), which is used to customize SAP application environments. In addition, SAP is providing access to multiple large language models (LLMs) via AI Hub, including an LLM that SAP is building based on data it is curating.

Finally, SAP is adding a vector engine to its multi-model SAP HANA database that can be used to extend an LLM without having to load data into it. That approach enables IT organizations to retain control of the data being used to update an LLM with more current information.
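The approach the vector engine enables is commonly known as retrieval-augmented generation (RAG): relevant records are fetched from a store the organization controls and injected into the prompt, so the LLM can answer from current data without being retrained. A minimal sketch of that pattern follows, using a toy bag-of-words embedding and an in-memory store as stand-ins for a real embedding model and the SAP HANA vector engine; none of the names below are SAP APIs.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a bag-of-words term-count vector. A production
    # vector engine would use a learned embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """In-memory stand-in for a vector engine: documents stay under the
    organization's control and are only retrieved at query time, never
    used to retrain the model."""
    def __init__(self):
        self.docs = []

    def add(self, text: str):
        self.docs.append((embed(text), text))

    def search(self, query: str, k: int = 1):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def build_prompt(query: str, store: VectorStore) -> str:
    # Retrieval-augmented generation: prepend the most relevant records
    # as context so the LLM answers from current data without fine-tuning.
    context = "\n".join(store.search(query, k=2))
    return f"Context:\n{context}\n\nQuestion: {query}"

store = VectorStore()
store.add("Invoice 4711 was paid on 2024-01-15.")
store.add("Purchase order 0815 is awaiting approval.")
prompt = build_prompt("What is the status of purchase order 0815?", store)
```

The resulting prompt would then be sent to the LLM; because the business records live only in the store, updating the model's knowledge is as simple as adding or replacing rows, with no retraining involved.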

In general, AI is now core to the SAP strategy, says Juergen Mueller, CTO and member of the Executive Board of SAP. “It’s built into our solutions, not bolted on,” he says.

SAP has been adding AI capabilities to its application portfolio for several years, with more than 24,000 customers currently taking advantage of some type of AI capability provided by SAP. The company claims to have infused AI into more than 130 applications in its portfolio, with partners providing an additional 360 AI applications.

SAP, along with every other provider of software, is racing to enable organizations to leverage AI in all its forms. Each organization will need to decide for itself to what degree it wants to extend an existing LLM using a vector database, customize an LLM using its own data or build one from scratch. Most organizations will initially rely on vector databases to enable LLMs to draw on data residing in an external database to surface more relevant recommendations and explanations.

Organizations that make extensive use of SAP applications will naturally be eager to leverage AI models trained on data residing in platforms they already widely employ, rather than having to move that data into an external IT environment. Of course, organizations have data residing in multiple platforms, so how best to centralize the management of the data that will be used to train multiple AI models is an issue many have yet to resolve.

One way or another, however, it's now only a matter of time before AI models are embedded across application portfolios. The challenge, and the opportunity, now is to determine where best to apply predictive and generative AI capabilities within mission-critical applications, where there is little to no tolerance for the hallucinations typically generated by general-purpose LLMs such as ChatGPT.