In the race to work faster and smarter, employees across America are quietly adopting artificial intelligence (AI) tools their employers never approved, and managers are looking the other way.

A new survey reveals a troubling trend that could cost businesses far more than the time it saves. According to research conducted by Cybernews in August 2025, 59% of employees use unapproved AI tools at work. Among those workers, 75% admit to sharing potentially sensitive information such as employee data, customer details, and internal documents.

The practice, known as shadow AI, has created what security experts call a dangerous gray zone. Of employees using unapproved tools, 57% claim their direct managers actually support the practice, while another 16% say their managers simply don’t care.

“If employees use unapproved AI tools for work, there’s no way to know what kind of information is shared with them,” said Mantas Sabeckis, a security researcher at Cybernews. “Since tools like ChatGPT feel like you’re chatting with a friend, people forget that this data is actually shared with the company behind the chatbot.”

The financial risk is staggering: IBM Corp. recently found that shadow AI can increase the cost of a data breach by an average of $670,000.

What makes this crisis particularly puzzling is that employees aren’t ignorant of the dangers: 89% said they understand the risks associated with AI tools, with data security breaches being their top concern. Yet they use unapproved tools anyway.

Executives and senior managers are the worst offenders, with 93% admitting to using unauthorized AI at work.

What is more, institutional failures run deep. Nearly a quarter of companies don’t have any official policy regarding AI tool use at work. This absence of guidance leaves employees to make judgment calls on their own, with “faster” often winning out over “safer.”

“Shadow AI thrives in silence,” said Žilvinas Girėnas, head of product at nexos.ai. “When managers turn a blind eye and there’s no clear policy, employees assume it’s fine to use whatever tool gets the job done. That’s how sensitive data ends up in places it should never be.”

Survey participants were only partially receptive to change: just 57% said they would stop using unapproved tools even after a data breach. That suggests clear policies and approved alternatives, rather than breaches alone, are what might shift behavior before catastrophe strikes.

The path forward requires action at multiple levels: companies must establish clear AI policies and provide secure alternatives employees actually want to use, and leaders must model responsible behavior. Without intervention, the convenience of shadow AI will continue to quietly transform company data into a public liability, the survey concluded.
