
The U.S. Department of Labor has published a guidance document recommending that organizations solicit “genuine input” from their employees on the adoption of artificial intelligence (AI) and carry out that implementation with transparency.

The principles outlined by the Department for AI systems in the workplace are intended to guide their development and deployment throughout the entire lifecycle, from design to oversight and auditing.

These principles apply across all sectors and should be tailored to the specific context of each industry, serving as a framework rather than an exhaustive list. AI developers and employers are encouraged to review and adapt these best practices with input from workers.

The DoL’s AI Principles for Developers and Employers emphasize centering worker empowerment, ensuring that workers, particularly those from underserved communities, have genuine input in the AI lifecycle stages.

It recommended AI systems be designed and trained ethically to protect workers’ interests, with organizations implementing clear governance, oversight and evaluation processes for workplace AI systems.

Transparency is crucial, and employers should be open with workers and job seekers about the AI systems in use.

Furthermore, AI systems must respect labor and employment rights, supporting workers’ right to organize, ensuring health and safety, and protecting against discrimination and retaliation.

The document said AI should assist and enhance job quality, enabling workers rather than replacing them, and employers should also support or upskill workers during job transitions related to AI.

It noted responsible use of worker data is essential, with data collected, used or created by AI systems being limited in scope, used only for legitimate business aims, and handled responsibly to ensure privacy and security.
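The data-minimization principle above can be sketched in a few lines: before worker records reach an AI system, strip any fields outside an approved scope tied to a legitimate business aim. This is a minimal illustration only; the field names and the allow-list approach are assumptions, not taken from the DoL document.

```python
# Hedged sketch of data minimization for worker records: keep only the
# fields needed for a stated business purpose (here, shift scheduling).
# Field names and the approved set are illustrative assumptions.

ALLOWED_FIELDS = {"employee_id", "role", "shift_hours"}

def minimize(record):
    """Drop any fields outside the approved scope before the AI system sees them."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "employee_id": "E123",
    "role": "agent",
    "shift_hours": 38,
    "home_address": "...",   # sensitive, not needed for scheduling
    "health_notes": "...",   # sensitive, not needed for scheduling
}

print(minimize(raw))  # only the approved fields remain
```

An allow-list (rather than a block-list) is the safer default here: new sensitive fields added upstream stay excluded until someone deliberately approves them.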

Narayana Pappu, CEO at Zendata, explained that beyond equity, worker empowerment in the design of AI systems helps maximize business value.

“Take a customer service agent for a fintech payments company with buyers and sellers worldwide, operating in multiple currencies, channels, flows, and legal requirements,” he said. “Building an AI system to increase their efficiency without involving them would result in missing out on use cases and reduces utility.”

Stephen Kowski, Field CTO at SlashNext, said he agreed it’s important to empower workers when using AI so they can understand the technology and use it to enhance their skills.

“When workers are involved in designing AI systems, it helps build trust and ensures the AI is used in a way that benefits employees,” he said.

He said ethical AI development should prioritize fairness, non-discrimination, transparency, accountability and respect for workers’ rights.

“Key considerations include using unbiased training data, enabling human oversight, and clearly communicating the role of AI to workers,” Kowski said.

Pappu added that avoiding bias and providing transparency should be two major pillars of AI systems, protecting workers and companies alike from legal liabilities.

“Transparency builds trust and comfort with AI systems and it starts from design, validation and continues to ongoing monitoring,” he said. “Increased trust, comfort, utilization and efficiencies associated with this would be the key benefits of adhering to the AI principles laid out by the DoL.”

Kowski pointed out human oversight is critical for ensuring AI systems operate as intended, identifying potential issues, and maintaining accountability.

“Organizations should establish clear governance structures, regular auditing and testing procedures, and escalation protocols for AI anomalies,” he said.

Pappu also highlighted the role human oversight plays in AI governance, noting it can help guard against current weaknesses of AI systems such as hallucinations, bias and transparency issues.

“Addressing supply chain risks from third parties before they turn into major business problems is a major benefit of human oversight,” he added.

Kowski cautioned AI systems can perpetuate biases in hiring, promotion and compensation decisions, as well as enable invasive worker surveillance and data collection.

“Risks can be mitigated through rigorous testing for bias, establishing usage limitations, and involving workers in AI governance,” he said.

He noted the DoL’s principles emphasize using unbiased data, testing for disparate impact, and ensuring human oversight of high-stakes decisions.

“Adhering to these guidelines throughout the AI lifecycle can help identify and correct bias before systems are deployed,” Kowski said.
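One concrete form the disparate-impact testing Kowski mentions can take is the “four-fifths rule” often cited in U.S. hiring guidance: compare selection rates across applicant groups and flag the system for human review when the ratio falls below 0.8. The sketch below is a hedged illustration with invented data; the group labels, outcomes, and 0.8 threshold are assumptions, not drawn from the DoL principles themselves.

```python
# Hedged sketch: a simple disparate-impact check on hypothetical
# screening outcomes (1 = selected, 0 = not selected) for two groups.

def selection_rate(outcomes):
    """Fraction of candidates selected within one group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 70% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # 40% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.70 ≈ 0.57
if ratio < 0.8:  # common four-fifths threshold
    print("Potential disparate impact - escalate for human review")
```

Running a check like this before deployment, and again during ongoing monitoring, is one way to operationalize the lifecycle-long auditing the principles call for.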