
Shadow artificial intelligence (AI) has IT managers anxious.
In fact, nearly half of them said they are “extremely worried” about the security and compliance impact of unauthorized, unsanctioned use of AI tools, according to a recent survey of 200 IT directors and executives at U.S. enterprise organizations with at least 1,000 employees.
Nearly 80% of IT leaders say their organization has experienced negative outcomes from employee use of generative AI, including false or inaccurate query results (46%) and leakage of sensitive data into AI tools (44%), concluded the Komprise IT Survey: AI, Data & Enterprise Risk.
“As enterprises are starting to get real about AI as part of their business strategy, the cracks are starting to show,” Krishna Subramanian, chief operating officer and co-founder of Komprise, said in a statement announcing the survey results. “With most reporting that they have experienced negative and even damaging consequences from using corporate data with AI, it’s time to create the right AI data governance strategy. Unstructured data management will play a central role by giving users automated tools for data classification, sensitive data management, data workflows and AI data ingestion.”
A vast majority (90%) of those surveyed said they are concerned about shadow AI from a privacy and security standpoint, with 46% reporting that they are “extremely worried.” Most IT leaders (79%) report that their organization experienced negative outcomes from sending corporate data to AI, including leakage of personally identifiable information and inaccurate or false results.
Three-fourths plan to use data management technologies to address risks from shadow AI, and 74% intend to use AI discovery and monitoring tools.
The greatest challenge in preparing unstructured data for AI, the IT directors shared, is finding and moving the right data to locations for AI ingestion (54%), followed by a lack of visibility into data across storage environments to identify risks (40%).
The top tactic for preparing data for AI is classifying sensitive data and using workflow automation to prevent its improper use with AI (73%), they said. Nearly all of them (96.5%) are classifying and tagging unstructured data for AI, with a mix of manual and automated methods.
More than half (56%) say that IT is moving data to AI processes for users manually, or with free tools, with 40% saying that users are manually copying data to AI on their own. Supporting AI initiatives is the top priority for IT infrastructure (68%).
A plurality of IT leaders (45%) follow a multi-faceted strategy for investing in storage for AI, giving equal priority to acquiring AI-ready storage, increasing the capacity of existing storage and acquiring data management capabilities for AI.
Despite the potential dangers of AI, research continues to show a willingness to take security risks: An astounding 96% of IT pros view AI agents as a security risk, yet 98% report that their employers plan to expand the use of agents anyway, according to a new SailPoint Inc. report on AI agents.