
Technology leaders are no strangers to hype, booms, and busts. These waves of industry noise and rapid growth followed by abrupt correction aren’t limited to market trends like the dot-com crash; they also play out in cybersecurity, where each technological leap introduces noise to sift through and potential security gaps to address.
Consider the shadow IT boom a decade ago, when many employees accessed third-party, consumer-grade tools to improve their efficiency and experience at work. They had no malicious intent, yet they exposed organizations to new risks as corporate data began flowing into third-party services without oversight or formal agreements. Employees moved faster than security teams could respond, forcing IT to play catch-up.
While the “old school” shadow IT challenge has largely been tamed, an even greater rush for speed and productivity is upon us. AI’s surge has unleashed incredible innovation and, along with it, novel risks: an even more complex dynamic of “shadows within shadows,” as applications are increasingly driven by AI models. Once again, employees are adopting new tools faster than IT can respond, creating risks of data exposure, compliance violations, compromised enterprise IP, and more.
Yes, the AI gold rush presents unprecedented opportunities and challenges for enterprises. However, as with shadow IT, behind the “shadow AI” challenge are well-meaning employees simply trying to do great work faster, using whatever tools they feel will help them the most. With the right strategy, organizations can embrace AI fever securely and effectively.
AI Innovation Is Outpacing Oversight
Lines of business, executives, and employees across industries are experimenting with a growing array of generative AI (GenAI) platforms that promise to make work faster and easier.
However, most of these tools arrive faster than security teams can get their arms around them, leaving practitioners struggling to keep pace with how sensitive data is used, stored, or shared. Without clear visibility or policies, organizations lose control over their data, which can end up fueling external models beyond their reach.
What can enterprise leaders do? User education is important, but people still make mistakes. Phishing remains a successful tactic despite more than a decade’s worth of employee training and tests.
Denying access via blunt-force tools – blocking ChatGPT outright, for example – usually backfires, as users will find workarounds or comparable tools that carry the same risks.
Further, existing applications will continue to evolve with AI at their core, blurring the line between the application and the AI. This demands new thinking about application adoption and governance.
Empower Users, Secure the Enterprise
Employees will find ways to use GenAI. Amid the rush for productivity, enterprise leaders should want them to use it, or else risk falling behind the competition. The most successful professionals of the future will be those who master AI use within the scope of their roles and professions.
Leaders should guide employees to sanctioned, enterprise-grade AI tools that meet the organization’s standards for compliance and risk. Ideally, this redirection feels seamless and avoids assigning blame.
They can also create controlled environments that protect corporate data, even when employees access personal AI tools. For example, new solutions like enterprise browsers allow organizations to embed governance, visibility, and security directly into the browser itself – controlling data interactions and governing what employees can copy into these tools based on corporate policies, identity, device posture, location, and more. Enterprises can embrace AI experimentation without compromising security or cluttering the IT stack.
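As a rough illustration of the kind of check such a browser could run before a paste lands in a GenAI web app, consider the sketch below. Every type, name, and rule here is hypothetical, illustrating the pattern rather than any vendor’s actual API:

```typescript
// Hypothetical policy check an enterprise browser might run before
// allowing a paste into a GenAI web app. Names and logic are illustrative.

type Decision = "allow" | "redact" | "block";

interface PasteContext {
  destinationHost: string;        // e.g. the GenAI site the user is on
  deviceManaged: boolean;         // device posture signal
  containsSensitiveData: boolean; // e.g. from DLP pattern matching
}

// Sanctioned, enterprise-grade AI tools approved by the organization.
const SANCTIONED_AI_HOSTS = new Set(["ai.internal.example.com"]);

function evaluatePaste(ctx: PasteContext): Decision {
  // Sanctioned tools: allow freely.
  if (SANCTIONED_AI_HOSTS.has(ctx.destinationHost)) return "allow";

  // Unmanaged devices pasting into unsanctioned AI tools: block.
  if (!ctx.deviceManaged) return "block";

  // Managed device, personal AI tool: strip sensitive content but let
  // harmless text through, keeping the experience seamless.
  return ctx.containsSensitiveData ? "redact" : "allow";
}
```

The specific rules matter less than where they live: enforcement sits in the browser, at the moment of the user’s action, rather than in a separate gateway the user can route around.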
Consider Context
When organizations allow users to access GenAI, context is critical. The same search or interaction can carry vastly different implications depending on the user’s role and intent.
For instance, a CEO using GenAI to research a potential acquisition could unintentionally reveal strategic intent, letting outsiders infer moves and eroding competitive advantage. Yet an IT practitioner performing research on that same organization may be trying to learn about new technologies.
Or in a medical context, is GenAI being used to correct a document, to prescribe drugs, or to make a diagnosis based on patient data? Again, context matters deeply, as each of those interactions could have wildly different implications.
Without proper controls, context – combined with the data being exposed – could inadvertently lead to the loss of sensitive IP or even impact patient care. Organizations must implement robust governance, tailoring access and monitoring to contextual cues about the sensitivity of each task. By understanding that the “who” and “why” behind AI interactions are as revealing as the data itself, organizations can better safeguard their interests.
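To make that concrete, here is a minimal, hypothetical sketch of context-aware risk scoring. It assumes signals a real deployment would have to supply, such as a prompt classifier and role information from the identity provider; the roles, topics, and weights are invented for illustration:

```typescript
// Illustrative only: scoring an AI interaction by combining who is
// asking with what the prompt touches. All values are assumptions.

interface Interaction {
  role: "executive" | "it" | "clinician" | "general";
  topics: string[]; // e.g. from a prompt classifier: "m&a", "phi", "diagnosis"
}

// The same topic carries different risk depending on the role behind it.
const RISK_WEIGHTS: Record<string, Partial<Record<Interaction["role"], number>>> = {
  "m&a":       { executive: 0.9, it: 0.2 }, // executive research can reveal strategic intent
  "phi":       { clinician: 0.8 },          // patient data in a medical context
  "diagnosis": { clinician: 0.9 },          // clinical decisions warrant tighter review
};

function riskScore(i: Interaction): number {
  // Take the riskiest topic touched; unknown topics get a low baseline.
  return i.topics.reduce(
    (max, topic) => Math.max(max, RISK_WEIGHTS[topic]?.[i.role] ?? 0.1),
    0,
  );
}

// The same "research Acme Corp" prompt scores differently by role:
riskScore({ role: "executive", topics: ["m&a"] }); // 0.9 -> review or block
riskScore({ role: "it", topics: ["m&a"] });        // 0.2 -> allow
```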
Leadership in the Age of AI
Of course, embracing AI goes far beyond protective tools.
First, leadership and practitioners need a clear view of their organization’s supply chain – how both vendors and partners incorporate AI in their own platforms – and respond with appropriate oversight and alignment. For example, does the vendor application use a shared AI model with other organizations, or does your organization have its own segregated instances?
Second, leaders must keep pace with regulatory shifts. As AI governance and industry-specific rules evolve, enterprises will need to stay agile and compliant. Adaptive governance frameworks allow for dynamic controls that evolve alongside shifting regulations, tools, and user behaviors. Regulated organizations should pay especially close attention to the regulatory climate and to the liability that AI usage can carry.
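One practical reading of “adaptive governance” is keeping the rules themselves in versioned data rather than hardcoded logic, so controls can be updated as regulations shift. A loose sketch, with the rule shape and every field assumed purely for illustration:

```typescript
// Hypothetical sketch: AI-usage rules as data, not code, so they can be
// revised as regulations evolve. Rule shape and fields are assumptions.

interface GovernanceRule {
  id: string;
  regions: string[];        // "*" = everywhere; e.g. ["EU"] for EU obligations
  dataCategories: string[]; // e.g. ["pii", "phi"]
  action: "allow" | "review" | "block";
}

// In practice these would live in a reviewed, versioned policy store;
// they are hardcoded here only for brevity.
const rules: GovernanceRule[] = [
  { id: "eu-pii-review", regions: ["EU"], dataCategories: ["pii"], action: "review" },
  { id: "phi-block",     regions: ["*"],  dataCategories: ["phi"], action: "block" },
];

function resolveAction(region: string, category: string): GovernanceRule["action"] {
  const rule = rules.find(
    (r) =>
      (r.regions.includes(region) || r.regions.includes("*")) &&
      r.dataCategories.includes(category),
  );
  // Default shown as "allow" only to keep the sketch short; a real
  // framework would make the default an explicit policy decision.
  return rule?.action ?? "allow";
}
```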
Finally, executives should do what they do best: lead from the front. Enterprise leaders face the same pressures as their employees – to perform, innovate, and gain a competitive advantage. They should model secure, sanctioned use of AI, setting the tone for the entire organization.
Harnessing AI Securely
The organizations that thrive in this new era will be those that place lightweight boundaries around their applications, not around their people. This doesn’t have to be an all-or-nothing debate; in a well-functioning organization, users can safely use even personal applications and AI resources without risk of organizational data spillover.
With the right mix of user experience, smart tooling, and proactive governance, organizations can unlock all the possibilities of the AI rush without compromising their values or their security.