Workers Believe AI-Powered Automation Improves Job Fulfillment

Workers across the U.S. and U.K. are ready to entrust artificial intelligence (AI) agents with nearly half their workload within three years, according to new research from Asana Inc.’s Work Innovation Lab.

But a trust deficit threatens to derail these ambitious plans, with most employees questioning whether AI can deliver on its promises.

The 2025 Global State of AI at Work Report, based on a survey of 2,025 workers, paints a picture of a workforce caught between optimism and skepticism. While employees currently delegate about 27% of their work to AI agents, they expect this figure to jump to 34% next year and 43% within three years.

Yet adoption is racing ahead of confidence. More than three in five employees (62%) describe AI agents as unreliable, and more than half report that these digital assistants ignore feedback or confidently share incorrect information. Despite these concerns, 77% of workers are already using AI agents in some capacity.

“The enthusiasm is there, but the infrastructure isn’t,” said Mark Hoffman, work innovation lead at Asana’s Work Innovation Lab. “When no one is responsible for mistakes, employees hesitate to hand over meaningful tasks, even though they’re eager to delegate.”

The reliability concerns aren’t unfounded. Workers report that AI agents frequently create more work than they save — 54% say they’re forced to redo or correct AI outputs instead of benefiting from time savings. Nearly half worry that agents don’t understand their team’s priorities or workplace context.

This trust deficit is compounded by a glaring lack of accountability. When AI agents make mistakes, there’s no consensus on who should be held responsible. Workers split the blame among end users (22%), IT teams (20%), and the agent’s creator (9%), while a third either say no one is responsible or admit they don’t know who to blame.

The organizational response has been equally scattered. Only 14% of companies have established clear ethical frameworks for AI agents, just 15% have deployment processes, and a mere 12% review employee-created agents. Nearly a third of organizations allow employees to create agents without any management approval.

Perhaps most telling is what organizations aren’t tracking. Despite 64% of workers saying accuracy should be the top metric for evaluating AI agents, only 19% of organizations measure AI agent errors. Without consistent oversight, mistakes repeat unchecked, further eroding employee confidence.

This oversight gap is creating what researchers call “AI debt,” the compounding costs of unreliable systems, poor data quality, and weak governance. Seventy-nine percent of organizations expect to accumulate this debt, suggesting widespread recognition that current approaches are unsustainable.

Despite the challenges, workers have identified specific areas where AI agents outperform both generative AI tools and human colleagues. The top preferences include taking meeting notes (43% favor AI agents), organizing documents (31%), and scheduling meetings (27%). Notably, 70% of workers would prefer to delegate some of these administrative tasks to AI rather than to humans.

The research suggests three-fourths of workers view AI agents as a fundamental shift in how work gets done, not merely another productivity tool. This perspective points to the potential for deeper integration once trust and accountability issues are resolved.

One of the starkest gaps revealed in the research involves training. While 82% of employees say proper training is essential for effective AI agent use, only 38% of organizations have provided it. This mismatch leaves workers ill-equipped to provide oversight or course correction, perpetuating the cycle of errors and mistrust.

More than half of employees are calling for clearer boundaries between human and AI responsibilities (52%) and formal usage guidelines (56%). The message from the workforce is clear: they’re ready to collaborate with AI, but they need structure to do so effectively.

The research suggests that organizations treating AI agents as teammates rather than tools are more likely to see genuine productivity gains. This approach requires providing agents with proper context about work processes, defining clear responsibilities, establishing feedback loops, prioritizing accurate metrics, and training employees to work effectively with AI.

“AI agents are already reshaping the way teams approach work, but our research shows trust and accountability haven’t kept pace with adoption,” Hoffman said. “To succeed, organizations must treat agents like teammates by informing them with the right context and structure of work.”
