AI has been powering a variety of enterprise applications for years, helping companies drive business strategy and decision making with sophisticated data-driven insights. But this year generative artificial intelligence (GenAI) took the world by storm, with applications such as ChatGPT, GPT-4, Bard and Claude putting the technology in the hands of the general public for the first time. Many organizations have accelerated their exploration of how to safely integrate AI into their tech stacks to increase worker productivity and future-proof their competitive edge. That said, GenAI has some “kinks” that must be ironed out, including well-publicized hallucinations and ethical concerns.

Indeed, growing concerns about the safety of GenAI, and the potential for bad actors to use AI to create and launch sophisticated cybersecurity attacks, including nearly undetectable spear-phishing emails, must be addressed before the benefits of GenAI can be realized at scale. The security threats it poses to public sector organizations and businesses across the globe have led both the E.U. and U.S. governments to aggressively look for ways to police and regulate it.

GenAI Red Flags

Many organizations are well aware of the cybersecurity threats AI poses, as well as its potential benefits to their security posture. According to a recent Gartner, Inc. study, 34% of organizations are already using or in the process of implementing AI application security tools, while 58% are exploring various options.

CISOs and CIOs have plenty of reasons to be concerned about GenAI and should establish policies that dictate how it will be used across every line of business before problems arise. If guardrails aren’t established, and GenAI is given a degree of autonomy to act on behalf of the enterprise by interacting with external parties such as customers, partners or vendors, these tools could expose the enterprise to serious risks. Meanwhile, GenAI in the hands of malicious actors creates risks of its own: employees may unknowingly open infected files, access malicious websites or inadvertently share sensitive data in response to AI-powered phishing schemes.

GenAI may also end up eroding an organization’s existing security posture by modifying security settings on enterprise applications, providing access to data storage, or otherwise disrupting security operations.

In a theoretical worst-case scenario, and with a nod to Stanley Kubrick’s 2001: A Space Odyssey, GenAI could even “choose” to turn on its human supervisors, taking deliberate actions to harm the very enterprise that deployed it.

AI’s Impact on the Data Lifecycle

When it comes to protecting data from GenAI-enabled or -supported attacks, it is imperative to reduce the data attack surface on an ongoing basis. It’s worth noting that GenAI may become a major source of exposed sensitive data as well as redundant, obsolete and trivial (ROT) data. For example, GenAI may piece together clues to generate factually accurate personally identifiable information (PII), which is already protected by many regulations, and make it available without appropriate security controls, exposing the enterprise and its customers to new cyber risks. AI could also make data management more difficult and increase the data attack surface through the sheer volume of new data it creates.
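To make the attack-surface idea concrete, here is a minimal sketch of a pattern-based PII scanner. The `PII_PATTERNS` table and `scan_for_pii` helper are hypothetical names chosen for illustration; production tools rely on far more robust detection (checksums, context analysis, ML classifiers) than these toy regexes.

```python
import re

# Illustrative regex patterns for a few common PII types. Real scanners
# use many more signals than simple pattern matching.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(text: str) -> dict:
    """Return substrings that match known PII patterns, keyed by type."""
    findings = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[label] = matches
    return findings

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(scan_for_pii(sample))
```

Running such a scan continuously over newly created data stores is one way to flag exposed sensitive data before it accumulates into ROT.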

Beyond reducing the attack surface of existing stored data, and all the ROT data within these stores, companies should implement data classification at the time that data is created, with policy enforcement commensurate with the sensitivity of that data.
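One way to picture classification at creation time is the sketch below. The `Sensitivity` levels, the keyword rule and the `allowed_to_share` policy are all hypothetical stand-ins: a real classifier would use contextual and ML-based detection, and real policy enforcement lives in the data platform, not application code.

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

@dataclass
class Record:
    content: str
    sensitivity: Sensitivity

def classify_at_creation(content: str) -> Record:
    # Toy rule: escalate sensitivity when the content contains markers
    # a real classifier would detect contextually.
    if "ssn" in content.lower() or "password" in content.lower():
        level = Sensitivity.CONFIDENTIAL
    else:
        level = Sensitivity.INTERNAL
    return Record(content, level)

def allowed_to_share(record: Record, external: bool) -> bool:
    # Enforcement commensurate with sensitivity: only PUBLIC data
    # may leave the organization.
    return not external or record.sensitivity == Sensitivity.PUBLIC
```

The key point is that the label travels with the record from the moment it is created, so downstream controls never have to guess its sensitivity.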

Maximizing AI’s ROI

To the extent that GenAI can accelerate the reach and effectiveness of data discovery, classification, loss prevention and sanitization, companies should carefully consider investments in tools that specialize in these controls based on the tools’ level of GenAI sophistication. For instance, data loss prevention (DLP) solutions can limit the spread of data beyond authorized users, accounts and network segments, providing another layer of data protection.
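As a rough illustration of the kind of control a DLP solution enforces, the sketch below models an allow-list mapping data classifications to authorized network segments. The segment names and the `dlp_permits` helper are hypothetical; real DLP products inspect traffic inline rather than consulting a lookup table.

```python
# Hypothetical allow-list: which network segments may receive data of
# each classification. More sensitive data reaches fewer segments.
AUTHORIZED_SEGMENTS = {
    "confidential": {"finance-vlan"},
    "internal": {"finance-vlan", "corp-vlan"},
    "public": {"finance-vlan", "corp-vlan", "dmz"},
}

def dlp_permits(classification: str, destination_segment: str) -> bool:
    """Block transfers to segments not authorized for this classification."""
    return destination_segment in AUTHORIZED_SEGMENTS.get(classification, set())
```

A default of "deny" for unknown classifications (the empty set above) mirrors the fail-closed posture DLP tools typically take.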

GenAI’s output depends on the size, scope and quality of the data fed into it, so companies using it in daily work processes must consider how their data sources will shape the course and output of their GenAI tools.

At the same time, organizations should remember that GenAI’s effectiveness is heavily dependent on the strength of its generative engine, the size and quality of the data sets fed into it, and the number of iterations through which the engine has learned and improved. In essence, much of the hype around GenAI in data lifecycle management should be examined with caution and tempered by reality.

The Future of GenAI: Both Promising and Uncertain

If we assume that GenAI’s overall impact on cybersecurity is to elevate the sophistication and speed of attacks, while also (hopefully) elevating the sophistication and speed of cyber defenses, enterprises will need to embrace GenAI as a potentially powerful security tool.

Organizations should not wait for government regulations to protect them from AI cybersecurity threats; instead, they should implement company policies that govern how GenAI is used, providing guardrails for a safer, more secure environment. Finally, companies should not shy away from leveraging AI for a competitive edge, but they must be realistic about how it can help them achieve business goals while mitigating security risks.