Nearly six decades after the launch of the original IBM mainframe, artificial intelligence (AI) is coming to a platform that organizations still rely on to process and analyze trillions of transactions. IBM is now making available an enhanced version of Machine Learning for z/OS, a tool it developed for building AI models and embedding them within the mainframe applications that many organizations use for online transaction processing (OLTP) and associated analytics.
With this new version of Machine Learning for z/OS, IBM is making it possible to apply a wide variety of AI use cases to applications written in COBOL, Java, Python or PL/I that process data in real time, says Elpida Tzortzatos, CTO for AI for IBM Z & LinuxONE and CTO for IBM z/OS.
That capability is crucial for organizations that need to examine transactions as they occur, for example to identify fraud in flight rather than applying AI after an event has already occurred, she adds. The goal, Tzortzatos notes, is to surface actionable insights as data is being processed.
IBM is also taking advantage of a dedicated processor it developed to run AI inference engines on the mainframe to achieve the level of performance AI models need to keep pace with OLTP transactions. The IBM z16 platform, for example, can process up to 300 billion deep learning inference requests per day with 1ms of latency.
In addition, IBM has made available AI System Services for IBM z/OS to bring AI for IT operations (AIOps) to the mainframe.
IBM is also providing organizations with the ability to invoke AI models from within a mainframe application at runtime, either via a REST application programming interface (API), which is similar to how most AI models are invoked, or via a watsonx.ai extension for z/OS that IBM has developed.
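To make the REST invocation path concrete, the sketch below shows how an application might call an online scoring endpoint for a fraud model from Python. It is a minimal illustration, not IBM's actual interface: the endpoint URL, payload schema (`fields`/`values`), and authorization scheme are all assumptions that will differ per deployment; consult the Machine Learning for z/OS documentation for the real contract.

```python
import json
from urllib import request

# Hypothetical scoring endpoint; the real Machine Learning for z/OS
# deployment URL and model path will differ per installation.
SCORING_URL = "https://mlz.example.com/scoring/online/fraud-model"

def build_payload(txn: dict) -> bytes:
    """Wrap one transaction's features in an assumed fields/values
    scoring request body (a common shape for scoring APIs)."""
    return json.dumps({
        "fields": list(txn.keys()),
        "values": [list(txn.values())],
    }).encode("utf-8")

def score(txn: dict, token: str, url: str = SCORING_URL) -> dict:
    """POST the transaction to the scoring endpoint and return the
    parsed JSON result (e.g. a fraud probability)."""
    req = request.Request(
        url,
        data=build_payload(txn),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    # A tight timeout keeps the call within an OLTP latency budget.
    with request.urlopen(req, timeout=0.05) as resp:
        return json.loads(resp.read())
```

A caller would pass the transaction features and an access token, then branch on the returned score before committing the transaction, which is the in-flight fraud check described above.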
In general, IBM is committed to supporting the entire lifecycle for building AI-infused applications including providing tools to build and secure them and DevOps frameworks to automate deployments and refreshes, says Tzortzatos. That’s critical, because the only way to ensure AI models are trusted is to make sure the appropriate governance and security tools are part of the application development lifecycle, she adds.
Ultimately, building and deploying AI models within applications is going to require increased collaboration between data scientists, who use machine learning operations (MLOps) platforms to build AI models, and DevOps teams, who deploy them alongside other software artifacts. Most data scientists would prefer to focus their time and effort on building AI models rather than managing the deployment process. “It’s going to be a team sport,” says Tzortzatos.
Each organization running IBM mainframes will need to determine to what degree to embed AI models within applications running on either z/OS or any of the three distributions of Linux that IBM supports. The one thing that is certain is that, for applications running on a venerable IBM mainframe in the years ahead, embedded AI models are a question of when rather than if.