A global survey of 503 professionals with responsibility for artificial intelligence (AI) adoption in enterprise organizations finds nearly all (98%) are deploying additional AI applications in the next 12 months, with 86% planning to adopt multiple agentic AI technologies.

Conducted by Broadcom, the survey also finds that organizations are encountering multiple challenges, including lack of expertise (61%), security (59%), and data quality issues (55%). Other top challenges include governance (45%), accessing data from legacy systems (42%), and data pipelines (40%).

A total of 96% also noted that data pipeline performance issues are having an impact on their AI objectives, the survey finds. In fact, 83% report that AI and data experts spend 10% or more of their time managing data pipeline tools, and a third (34%) say it takes more than a quarter of their day.

The survey makes it clear that operationalizing agentic AI at scale is not an event but rather an ongoing journey that will require more investment in automation platforms, says Aline Gerew, head of automation for the Agile Operations Division at Broadcom.

In fact, many organizations will need to revisit previous investments in automation to ensure they now incorporate what will soon be thousands of AI agents trying to access data, she adds. A full 98% of respondents note that centralizing data pipeline orchestration would improve data reliability. Additionally, 93% plan to use orchestration solutions to manage agentic AI applications.

It’s not clear yet to what degree organizations are going to revisit their data management strategies because of AI, but nearly a third said building AI capabilities is their single most important objective. Achieving those goals, however, may prove challenging. A Futurum Group survey identifies data quality, trust and governance as the biggest challenges facing data professionals, with 20% of respondents citing them as a key concern. Top areas of investment are blending data platforms (41%) and data quality/observability (40%).

Additionally, most organizations today are already struggling to navigate multiple islands of automation rather than putting in place a small number of automation platforms that would make it possible to manage agentic AI applications at scale, notes Gerew.

Ultimately, the success of any AI initiative is dependent on the quality of the data exposed to the AI models. The fundamental issue that organizations will need to address is ensuring the right data gets to the right AI model at the right time. Otherwise, the chances the AI model will generate suboptimal outputs start to rise, especially if that AI model has been programmed to generate a response no matter how sparse, or conflicting, the limited pool of data exposed to it might be.

Even before the rise of AI there was already a chronic shortage of data management and engineering expertise, an issue that might only be alleviated by relying more on AI agents to build and manage the data pipelines that drive AI applications. In effect, a classic chicken-and-egg debate is emerging over the relationship between AI and data management platforms. The one thing that is certain is that AI will run no matter what type of data is consumed. It’s just that the output generated isn’t likely to be meaningful if the data presented isn’t especially useful or even accurate.
