AI Survey

A survey of 500 software developers, engineers, managers and directors finds less than half (47%) have a designated machine learning operations (MLOps) team in place, with only 6% of those teams operating in a dedicated silo.

Conducted by Civo, a provider of cloud computing services, the survey also finds only 28% of respondents reporting that their organization's ML projects are run by dedicated ML engineers using ML tools. Instead, those tasks are more commonly handled by the wider IT team (33%), improvised ensemble teams of domain experts and IT staff (25%), or non-technical domain experts using ML tools (7%).

Among those relying on ensemble teams, more than a third (37%) admitted these cross-functional teams lack specific ML expertise.
Nearly two-thirds (65%) also conceded that ML adoption would be easier if more employees had the right expertise, but just over a third (36%) noted their organizations do not provide any skills training or ML education courses.

There is also a need for better tools, with more than half (53%) reporting ML adoption would be easier if they had better access to the tools and resources they require. Right now, two-thirds (66%) said their organization relies to varying degrees on open source machine learning tools. Among the application developers surveyed (28% of respondents), nearly half (48%) said they feel ML projects require too much time.

Not surprisingly, more than half of respondents (53%) confessed to abandoning between 1% and 25% of their projects, while almost a quarter (24%) reported abandoning between 26% and 50%, and 5% admitted they have never completed an ML project. Only 11% claimed they have never abandoned a project.

Either because of the strategic importance of artificial intelligence (AI) or fears of being left behind, organizations are pressing everyone they can into service to build AI models, notes Josh Mesout, chief innovation officer at Civo.

The challenge many of them will encounter is making sure AI is being applied to an appropriate use case. Many AI projects are launched as a technology initiative rather than to, for example, solve an issue a sales team faces, adds Mesout. “AI needs to be applied on top of existing domain expertise,” he says.

Less clear is how broadly organizations are applying ML. Some organizations are allowing every business unit to experiment with AI in the hopes that a thousand flowers will bloom. However, given the cost of deploying AI models in production environments, other organizations are opting to narrow their focus on use cases that present the best potential for generating a return on investment (ROI).

The one benefit of open source tools in the age of the cloud is that they reduce the cost of experimenting with AI, so more organizations should be able to pursue the former strategy rather than the latter, said Mesout.

Regardless of the approach to ML, it’s now only a question of when most applications will be infused with some type of AI capability. The issue, as always, is determining how best to prioritize which use cases have the most merit, given the level of ML expertise available, much of which is clearly now receiving a lot more on-the-job training than many would care to admit.