
A global survey of 700 IT decision-makers finds that while three-quarters (75%) work for organizations that have implemented AI in some form, a full 72% are encountering data quality issues and an inability to scale data management.
The survey, conducted by F5, finds that more than half of respondents said their organization has a defined data strategy in place, but more than three-quarters (77%) report they lack a single source of truth for their data.
Not surprisingly, only 24% of respondents said their organizations have been able to implement generative AI at scale, with the most common use cases involving employee productivity tools (40%) and customer service tools and workflow automation (36% each). More than a quarter of organizations specifically report using some type of AI Copilot capability, and organizations that have adopted generative AI access, on average, 2.9 large language models (LLMs). Half of organizations (50%) use Llama 2 for office productivity, followed by Microsoft at 43% and OpenAI at 22%, but for customer chatbots Microsoft’s Bing Chat model (42%), OpenAI GPT models (32%) and Google Gemini (16%) are preferred.
More than two-thirds (69%), however, are still researching AI use cases, with nearly as many (63%) currently working to complete proofs of concept. Workflow automation (39%) ranks as the highest-priority AI use case for 2024, but more than half (54%) said a lack of AI skill sets is a challenge. Security (64%), artificial intelligence for IT operations (AIOps) (61%) and line-of-business applications (51%) are expected to be at the forefront of those use cases.
Given those issues, it might be 2026 before enterprise IT organizations overcome these challenges to the point where generative AI is pervasively deployed across the enterprise, says Lori MacVittie, a distinguished engineer in the Office of the CTO at F5. Many organizations can’t yet gain sufficient access to infrastructure because of a general shortage of graphics processing units (GPUs), she adds.
Nearly two-thirds of respondents (62%), however, said they are worried about compute costs, followed by security (57%) and performance (55%). On average, AI expenditures represent nearly one-fifth of overall IT budgets in 2024, and nearly all respondents (94%) expect that share to rise to more than one-quarter (26%), on average, by 2026. A total of 44% expect to spend more on security over the next few years as they scale deployments.
Today, AI models are most widely deployed in public clouds (65%), compared to software-as-a-service (SaaS) applications (49%) and on-premises IT environments (40%), but those numbers are expected to shift within two years to 56% for public clouds, 64% for SaaS and 36% for on-premises environments.
In the meantime, the biggest barrier to AI adoption cited is budget (46%), followed by a lack of interoperability among tools and too many vendors/application programming interfaces (APIs), tied at 39% each. More than a quarter are relying on APIs to integrate disparate data sources, another quarter (25%) are adopting cloud data platforms and 17% are focused on implementing new data schemas.
Overall, it’s clear there is a lot of hype surrounding all things AI that needs to be worked through, notes MacVittie. “It’s hyped to the sun,” she says.
However, as IT teams increasingly master the infrastructure skills needed to deploy AI at scale, it’s now more a question of when AI will be pervasively employed across the enterprise than if.