Generative AI will produce explosive growth in new SaaS apps in 2024. CISOs need to make sure their risk exposure doesn’t go “boom” with it.

That growth prediction, made by Valence Threat Labs in its 2023 State of SaaS Security report, will make an already challenging SaaS security situation much worse. A torrent of new products is already emerging, thanks in part to AI, with more on the horizon from widespread integrations and low/no-code providers. This growth, along with rising business interest in AI, threatens to overrun security teams and make SaaS security analysis and threat mapping even harder.

Other experts agree. McKinsey’s 2023 State of AI report found that one-third of all respondents are already regularly using generative AI in at least one function, with 13% using it for product or service development. Yet recognition of the associated business risks is trailing behind: only 21% of organizations have established policies governing employees’ use of GenAI technologies in their work. We expect adoption to accelerate as enterprise-focused products such as OpenAI’s ChatGPT Enterprise are released.

The security issues stem from the fact that SaaS platforms are extensive ecosystems that connect applications, identities and data in ways that can put an organization at risk. Unfortunately, IT and security leaders often have a false sense of security, trusting in CASB solutions and native SaaS security controls to protect them.

Current SaaS Challenges

Attackers are taking advantage of OAuth token abuse, multi-factor authentication (MFA) fatigue and misconfigurations to gain unauthorized access to business-critical SaaS applications such as GitHub, Microsoft 365, Google Workspace, Slack, Okta and more.

Valence Threat Labs’ own research shows how organizations expose themselves to attacks or data loss with “the great garbage patch” of SaaS apps. Shockingly, 30% of the time SaaS data is shared to personal accounts. Other red flags: over half of SaaS third-party integrations are inactive, 1 in 8 employee accounts is dormant, and 10% of shared integrations and data can be traced back to ex-employees long after they’ve left the organization.

Perhaps the most concerning threat is attackers stealing SaaS credentials in ways that completely bypass strong security measures. Organizations have moved to MFA to increase security, but the move to SaaS has created a trade-off: while MFA is difficult for attackers to bypass, it also adds friction and hurts user productivity. In response, SaaS vendors generate access tokens that allow users to remain logged in for weeks, months or indefinitely.

These tokens exist for one simple reason: to make SaaS easier to use. Asking users to log in every time they want to use a service presents an unacceptable level of friction. Imagine having to log into nine different Slack instances once an hour, all day long. Initially, only consumer SaaS services issued tokens that kept users logged in for months or years. Those consumer design principles, including this ‘log in once’ behavior, were carried forward to enterprise SaaS.

The problem is that these tokens are trivial for attackers to steal. A stolen token lets an attacker log in without needing to know the username, password or any second-factor authentication. As one example, the Slack credentials that led to the June 2021 Electronic Arts breach cost the attackers only $10.

Compounding the problem is SaaS-to-SaaS token abuse. Tokens aren’t created only when users log into a SaaS application; they work just as well for non-human identities, such as automated services that need access to a SaaS platform. These tokens are both very powerful and very easily abused: once again, no username, password or multi-factor authentication is necessary to use them. Once stolen, they just work. And because they’re typically used to access the service via an API, there’s usually no indication to the user or the vendor that a token has been stolen and is being actively abused.
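To see why a stolen token “just works,” it helps to look at what a typical SaaS API call actually contains. The sketch below builds (but does not send) a bearer-token request; the endpoint URL and token value are hypothetical placeholders, not any real vendor’s API. The point is what’s absent: the token is the entire proof of identity.

```python
import urllib.request

# Hypothetical stolen token -- placeholder, not a real credential format guarantee.
STOLEN_TOKEN = "xoxb-EXAMPLE-TOKEN"

# Build (but do not send) a typical SaaS API call. Note what is NOT here:
# no username, no password, no MFA challenge -- just the bearer token.
req = urllib.request.Request(
    "https://api.example-saas.com/v1/messages",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {STOLEN_TOKEN}"},
)

# The only credential the server will ever see:
print(req.get_header("Authorization"))
```

Because the server authenticates the request solely by that `Authorization` header, it has no way to distinguish the legitimate integration from an attacker replaying a stolen token.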

The Looming GenAI Impact

Now fast forward to 2024 and the GenAI-driven explosion of new SaaS app integrations and low/no-code extensions. Experience tells us that these new SaaS apps, integrations and extensions will be adopted by lines of business or individual users with no IT oversight.

Just as businesses have commonly exposed private data through cloud misconfigurations and poor security posture management, we expect the growth in SaaS apps using GenAI and LLMs to bring with it a rise in compromised SaaS app privileges and accidental exposures.

Compounding this problem is the dangerous practice, under consideration at many technology companies, of using customer information to train AI models. Zoom, for example, recently reversed course after negative customer feedback about its plans to use customer data for AI training. But the problem doesn’t end there: many other technology companies are considering feeding customer data to AI models, or are already doing so. This is another serious risk emerging from SaaS apps.

As the 2024 planning cycle approaches, now is the time for IT leaders to recognize the risks associated with SaaS and the expected spike in new risks driven by the widespread use of GenAI. Prioritizing more effective SaaS security is essential for every organization in the coming year. At least for those that don’t want to go “boom.”