
IBM this week pledged to make watsonx.governance and the watsonx.ai next-generation studio, a framework for building and governing artificial intelligence (AI) applications, available on the Amazon Web Services (AWS) cloud in early 2024.

At the same time, IBM is making its Db2 database available as an engine option for the managed Amazon Relational Database Service (RDS).
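For teams that provision infrastructure as code, standing up a Db2 instance on RDS should look much like any other RDS engine. Below is a minimal sketch using boto3; the engine identifier, instance class and other values are assumptions for illustration, so verify them against the AWS documentation.

```python
# Minimal sketch: provisioning an RDS for Db2 instance with boto3.
# The engine name ("db2-se"), instance class and names below are
# assumptions for illustration; consult AWS docs for exact values.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

response = rds.create_db_instance(
    DBInstanceIdentifier="my-db2-instance",  # hypothetical name
    Engine="db2-se",                         # assumed Db2 engine id
    DBInstanceClass="db.m6i.large",          # assumed supported class
    AllocatedStorage=100,                    # storage in GiB
    MasterUsername="db2admin",
    MasterUserPassword="change-me-please",   # use Secrets Manager in practice
)

print(response["DBInstance"]["DBInstanceStatus"])
```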

Announced at the AWS re:Invent 2023 conference, both efforts are part of a larger IBM push to make its software portfolio available on multiple cloud computing platforms, a portfolio that now includes 25 software-as-a-service (SaaS) offerings running on the AWS cloud.

The overall goal is to make it simpler for IT organizations to embrace hybrid cloud computing, whether they are accessing IBM software or building their own applications, says Nick Otto, head of strategic partners at IBM. Once the watsonx.governance toolkit and watsonx.ai are made available, the entire IBM watsonx portfolio will be available on the AWS cloud.

Given the volumes of data involved and the security issues that organizations building and deploying AI applications at scale must contend with, it’s often not feasible to move all that data onto a single platform, notes Otto. As AI applications become more pervasively deployed, data gravity issues at the edge, in the data center and in the cloud will make the IT environments running AI applications inherently more distributed, he adds. “It’s going to be about getting the right compute to the right part of the application,” says Otto.

The watsonx.ai next-generation studio makes it simpler to build applications that can be deployed anywhere, while watsonx.governance provides the framework for applying policies to ensure they are secure, notes Otto. As part of that strategy, IBM this week demonstrated how Llama 2 large language models (LLMs) running on the managed Amazon SageMaker service could be used to meet regulatory guidelines requiring organizations to address security and transparency.
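IBM has not published the code behind that demonstration, but deploying a Llama 2 model on SageMaker typically starts from the SageMaker JumpStart SDK. A minimal sketch follows; the model ID and EULA handling reflect JumpStart’s public Llama 2 listings and should be treated as assumptions to verify.

```python
# Minimal sketch: deploying a Llama 2 model via SageMaker JumpStart.
# The model ID and EULA handling are assumptions based on JumpStart's
# public Llama 2 listings; verify against current AWS documentation.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-textgeneration-llama-2-7b")

# Deploying creates a real, billable SageMaker endpoint.
predictor = model.deploy(accept_eula=True)

response = predictor.predict(
    {"inputs": "Summarize our model governance policy in one sentence.",
     "parameters": {"max_new_tokens": 64}}
)
print(response)

# Tear the endpoint down when done to stop incurring charges.
predictor.delete_endpoint()
```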

The most critical issue for organizations is to identify the use cases where AI is likely to have the biggest immediate impact on the business. There is no end of opportunities to build proofs of concept, but organizations have limited resources, so they will need to prioritize their efforts.

It’s not clear to what degree organizations will opt to simply consume applications infused with AI models versus extend LLMs using vector databases, customize a foundational LLM or build their own. Regardless of approach, just about every application will one day be augmented by AI models. They just can’t all be augmented at the same time, so organizations will need to determine which use cases might have the most business value.
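For the vector-database approach, the core pattern is retrieval-augmented generation: embed documents, find the ones nearest to a query and feed them to the LLM as context. The self-contained sketch below illustrates only the retrieval step, with plain numpy and a stand-in embedding function; in practice the embeddings would come from a model and the index would live in a vector database.

```python
# Minimal sketch of the retrieval step in retrieval-augmented generation.
# embed() is a stand-in; a real system would call an embedding model and
# store vectors in a vector database rather than a numpy array.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy embedding derived from a hash, deterministic within a process.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

documents = [
    "Db2 is now available as a managed engine on Amazon RDS.",
    "watsonx.governance applies policies to deployed AI models.",
    "Llama 2 can be hosted on Amazon SageMaker.",
]
index = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)        # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# The retrieved passages would be prepended to the LLM prompt as context.
print(retrieve("Where can I run Db2 as a managed service?"))
```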

In the meantime, organizations will also need to ensure they are AI-ready. The quality of an AI model directly correlates with the quality of the data used to train it, and organizations that have not implemented best practices for managing data are not prepared for AI. As such, rather than building AI models, many organizations are going to spend much of the coming year getting their data organized to the point where it can be used to train one.
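What “getting data organized” means will vary, but a first pass usually involves profiling datasets for the gaps that undermine model training. Below is a minimal sketch with pandas; the file name, columns and threshold are hypothetical.

```python
# Minimal sketch: profiling a dataset for basic AI-readiness issues.
# The file name, columns and the 50% threshold are hypothetical.
import pandas as pd

df = pd.read_csv("customer_records.csv")  # hypothetical dataset

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "null_fraction_by_column": df.isna().mean().round(3).to_dict(),
}

# Flag columns too sparse to be useful training features.
sparse = [c for c, frac in report["null_fraction_by_column"].items() if frac > 0.5]
report["columns_over_50pct_null"] = sparse

print(report)
```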
