
While a growing number of employees use or plan to use generative AI at work, concerns are mounting that a lack of skills in data security could put organizations at risk.

This is among the findings of a Salesforce survey of more than 4,000 full-time employees, which found nearly three-quarters (73%) of employees believe generative AI introduces new security risks.

Nearly 60% of those who plan to use this technology said they don’t know how to do so using trusted data sources or while ensuring sensitive data is kept secure, and 54% worry generative AI outputs are inaccurate. The study also indicated there is a broad disparity between executives and employees when it comes to understanding generative AI, with 83% of C-suite leaders claiming they know how to use generative AI while keeping data secure, compared to just 29% of individual contributors.

Kathy Baxter, principal architect of ethical AI practice at Salesforce, says generative AI is set to impact workers’ day-to-day activities more than executives’.

However, the research shows leadership is overconfident in its ability to leverage generative AI responsibly — leaving individual contributors exposed to risk.


“Companies need to ensure they are embedding ethical guardrails and guidance, so employees are using and innovating the technology responsibly, safely and ethically,” she says.

These guardrails include protections for people’s privacy, along with checklists, resources, and features that help customers manage their data ethically.
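As a rough illustration of what one such privacy guardrail could look like in practice, here is a minimal Python sketch that masks obvious personal data before a prompt is sent to a model. The `redact_pii` helper and its regex patterns are hypothetical stand-ins for illustration, not a Salesforce feature:

```python
import re

# Hypothetical guardrail: mask obvious PII before a prompt leaves the
# organization. A real deployment would use a vetted detection service;
# these regexes are illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace detected PII with typed placeholders before sending."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_pii("Contact Jane at jane.doe@example.com or 555-123-4567."))
# -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

The design point is that redaction happens on the employee's side of the boundary, so sensitive values never reach the model in the first place.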

She adds that senior executives must create a culture where building and using AI responsibly is incentivized and there are consequences when employees do not.

“As generative AI becomes more widely adopted and businesses race to bring it to market, it’s critical that leadership does so inclusively and intentionally,” Baxter says.

She notes executives often set the standard for how to design and apply ethical technology — and they in turn must prioritize responsible innovation to guide how generative AI can and should be used.

“The underlying principle here is trust,” she says. “People (employees and end users) won’t use technologies they don’t trust.”

Alex Babin, co-founder and CEO of Zero Systems, a generative AI company for enterprises, says because of the potential for AI to be misused or to unintentionally cause harm, strong security measures should be put in place.

“These may include protocols for data protection and secure use of the technology,” he says. “It’s essential to monitor the efficiency of the AI system and evaluate its outcomes. This can help prevent hallucinations.”

He adds that for any AI system, human oversight is crucial, which means people should be involved in the decision-making processes, especially in situations where the AI’s output has significant consequences.
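One way that oversight might be wired in, sketched here under assumed names (the `risk_score` heuristic and its threshold are placeholders, not any vendor's API), is to hold high-stakes model outputs for explicit human review rather than acting on them automatically:

```python
# Hypothetical human-in-the-loop gate: outputs judged high-stakes are
# held for review instead of being acted on automatically. The keyword
# heuristic and threshold are placeholders, not a real risk classifier.
HIGH_STAKES_KEYWORDS = ("terminate", "refund", "legal", "diagnosis")

def risk_score(output: str) -> float:
    """Crude stand-in for a real risk classifier."""
    hits = sum(kw in output.lower() for kw in HIGH_STAKES_KEYWORDS)
    return hits / len(HIGH_STAKES_KEYWORDS)

def route(output: str, threshold: float = 0.25) -> str:
    """Decide whether a model output needs a human in the loop."""
    if risk_score(output) >= threshold:
        return "HOLD_FOR_HUMAN_REVIEW"
    return "AUTO_APPROVE"

print(route("Customer is eligible for a full refund."))  # HOLD_FOR_HUMAN_REVIEW
print(route("Here is a summary of the meeting notes."))  # AUTO_APPROVE
```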

Babin agrees the C-suite role is critical for the effective use of GenAI: it should provide strategic alignment, manage risk, establish policy, and oversee employee training on how to use AI.

“AI is just another tool, very powerful, but without strategy and training it can be dangerous,” he says. “Leaders play a key role in ensuring the right use of AI and regulatory compliance, to maximize benefits and to minimize risks.”

Caroline Carruthers, CEO of data consultancy Carruthers and Jackson, points out that organizations are being outpaced by their employees: without a proper framework, generative AI use in the workplace could create a number of governance and privacy problems.

“Responsibility will depend on who creates the majority of regulatory policies within the organization – it will often be either the data department or the IT department,” she says. “In businesses where the two currently work in silos, it will be critical that they bang their heads together and work out who owns this – AI is not slowing down, so there’s no time for departments to quarrel over who is responsible for it.”

From her perspective, the biggest concern is that employees will inadvertently share data that shouldn’t be shared with an AI model.

“Aside from data governance concerns, businesses should also be concerned about over-confidence in GenAI answers,” she cautions. “It’s a great tool, but these models can often come out with false or misleading answers to prompts, so there must always be a level of human oversight and skepticism when using these platforms.”
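Part of that skepticism can be automated. As a deliberately naive sketch, assuming a retrieval-style workflow in which the model is handed trusted documents, one could flag answers that are poorly supported by those documents; the `is_grounded` function and its token-overlap heuristic below are illustrative only, not a production hallucination detector:

```python
def is_grounded(answer: str, sources: list[str], min_overlap: float = 0.5) -> bool:
    """Naive grounding check: flag answers whose content words barely
    appear in the supplied trusted documents."""
    answer_words = {w for w in answer.lower().split() if len(w) > 3}
    if not answer_words:
        return True
    source_text = " ".join(sources).lower()
    supported = sum(w in source_text for w in answer_words)
    return supported / len(answer_words) >= min_overlap

docs = ["Q3 revenue grew 12% year over year, driven by subscriptions."]
print(is_grounded("Revenue grew 12% on subscription strength.", docs))  # True
print(is_grounded("The company acquired three startups in Q3.", docs))  # False
```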
