Legal, compliance and privacy leaders are increasingly concerned about the rapid adoption of Generative AI (GenAI), according to the results of a Gartner survey.
Seventy percent of the 179 leaders surveyed identified rapid GenAI adoption as a significant concern, while the results also indicated several key challenges surrounding GenAI implementation.
The primary concern was that GenAI's ease of adoption and broad applicability limit assurance teams' visibility into potential risks.
Second, the report indicated that employees remain unclear about acceptable GenAI use because they are unfamiliar with the rules governing it.
The report urged legal leaders to establish consensus on prohibited outcomes, institute controls to minimize these occurrences, and endorse acceptable use cases within organizational policies and guidance.
In addition, the growing ubiquity of GenAI tools calls for robust AI governance, but integrating this governance into existing organizational structures is difficult because expertise, and accountability for negative outcomes, are dispersed across the organization.
The report recommended legal leaders define clear roles and responsibilities for approvals, policy management, risk assessment, and GenAI training.
Finally, while GenAI holds promise in streamlining repetitive legal tasks, such as research, contract drafting, and summarizing legislation, its output may contain errors.
By aligning AI governance discussions with the identified risks, legal leaders can effectively advocate for comprehensive, cross-functional committees that address security concerns alongside legal and ethical considerations.
The report also recommended legal leaders emphasize instituting mandatory human review of GenAI output, banning the inclusion of enterprise IP or personal information in public tools like ChatGPT, and implementing policies that mandate clear identification of GenAI origin in public outputs.
Elad Schulman, CEO of Lasso Security, says legal leaders can stress the urgency of overcoming risk, security and trust challenges, highlighting that the risks require a unified effort.
“They can argue for the inclusion of security experts in steering committees to address security and safety controls,” he explains.
Schulman says to bridge the interim period until new risk management processes are in place, legal leaders can institute temporary controls.
“These may include enhanced scrutiny of GenAI-related activities, awareness campaigns, specific guidelines for employees engaging with GenAI tools, and implementation of new security tools,” he says.
Nicole Carignan, vice president of strategic cyber AI at Darktrace, agrees that cross-functional collaboration is crucial for organizations to harness the benefits of generative AI while ensuring safety and responsibility.
“One of the critical challenges in establishing AI governance is that there is no one-size-fits-all approach; every business will approach this differently,” she says. “Generative AI ultimately requires nuanced usage policies to help manage the risk.”
She notes a variety of stakeholders across the business including risk and compliance teams, chief people/HR officers, CIOs, CISOs and chief AI officers, should be working together to create and implement AI policies.
“Each role will bring a unique view to the issue and collaboration will ensure the benefits of AI can be safely and securely realized while managing and mitigating risks,” Carignan says.
Patrick Harr, CEO at SlashNext, points out that, given the absence of government AI regulations, organizations are developing corporate governance and management processes to create guardrails for GenAI adoption and use, ensuring privacy and compliance are maintained.
“There is a lot of uncertainty around the impact of GenAI, and legal, compliance, and privacy leaders are concerned with how the usage of GenAI tools is exposing organizations to increased risk of data breaches and regulatory violations,” he says.
He adds AI governance committees are a great way to manage AI usage and establish policies and standards.
“We see these policies vary from usage guidelines to complete restrictions of any AI tools,” Harr explains. “To navigate the uncertainty, legal and privacy leaders can access industry and trade association groups forming AI committees to help industries with similar concerns address them and establish policies.”
Schulman adds that organizations should oversee data flows and records of processing activities related to GenAI, covering both inputs and outputs, including unsanctioned usage now known as “Shadow AI.”
“This involves meticulous tracking of every user interaction with GenAI tools to maintain a comprehensive audit trail, as well as proactive, continuous monitoring of the data flows associated with GenAI tools at every touchpoint,” he explains.
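As a minimal sketch of what such an audit trail might look like in practice (all names here are illustrative assumptions, not part of any product mentioned above), each GenAI interaction could be recorded as a timestamped log entry; hashing the prompt and response, rather than storing them verbatim, keeps enterprise IP and personal data out of the trail itself:

```python
import json
import hashlib
from datetime import datetime, timezone

# Hypothetical audit-trail record for a single GenAI interaction.
# Prompt and response text are hashed, not stored verbatim, so the
# audit log does not itself retain enterprise IP or personal data.
def audit_record(user_id: str, tool: str, prompt: str, response: str) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }

# Append one JSON line per interaction to a log file (JSONL format).
def log_interaction(path: str, record: dict) -> None:
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

record = audit_record("u123", "chatgpt", "Summarize this contract...", "Summary: ...")
log_interaction("genai_audit.jsonl", record)
```

In a real deployment this logging would sit in a gateway or proxy between users and the GenAI tool, so that every touchpoint is captured automatically rather than relying on each application to log its own calls.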