The most effective cybersecurity programs center on adaptability, and recent innovations in artificial intelligence (AI) are certainly keeping today’s security teams on their toes. With new AI-based tools and services hitting the market at breakneck speed, their potential to both help and harm cybersecurity efforts is becoming increasingly apparent. While security teams and cybercriminals have long been engaged in a push-and-pull relationship, AI is being used on both sides of the fence.

AI is being leveraged to defend organizations against new and existing threats, but it's also being employed by hackers — often using the exact same capabilities — to breach these defenses faster and with less effort. For this reason, it's helpful for security leaders to be aware of the common ways AI is being used to propel these conflicting agendas. Security leaders then have the challenging task of weighing the opportunities presented by AI against its risks in order to protect their organizations and their data.

How AI is Enhancing Cybersecurity

The use cases for AI to accelerate a company's cyber defense strategy are nearly endless. It's particularly beneficial in helping security teams answer questions like, "If I were a hacker on the outside looking in, where would I find gaps in our security program?" The potential for AI to proactively identify critical vulnerabilities and support remediation is one of its most compelling benefits. For example, AI can be used to crawl a company's network perimeter to discover which systems or applications are internet-facing and what risks they may carry. With their ability to analyze massive quantities of data quickly, well-trained large language models (LLMs) can augment manual security processes to find and fix issues at a speed that was previously impossible.
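To make the perimeter-review idea concrete, here is a minimal sketch of the triage step that might follow such a crawl: given scan results mapping each internet-facing host to its open ports, flag services commonly considered risky when exposed. The function name, the port list, and the risk labels are all illustrative assumptions, not a real product's API.

```python
# Hypothetical triage step for an AI-assisted perimeter review.
# The port-to-risk mapping below is illustrative, not exhaustive.
RISKY_PORTS = {
    21: "FTP (cleartext credentials)",
    23: "Telnet (cleartext sessions)",
    3389: "RDP (frequent brute-force target)",
    5900: "VNC (often weakly authenticated)",
}

def flag_risky_services(scan_results):
    """Return (host, port, reason) tuples for risky internet-facing ports.

    scan_results: dict mapping hostname -> list of open port numbers.
    """
    findings = []
    for host, ports in scan_results.items():
        for port in ports:
            if port in RISKY_PORTS:
                findings.append((host, port, RISKY_PORTS[port]))
    return findings

if __name__ == "__main__":
    scan = {
        "web01.example.com": [80, 443],
        "legacy.example.com": [23, 443, 3389],
    }
    for host, port, reason in flag_risky_services(scan):
        print(f"{host}:{port} -> {reason}")
```

In practice, an LLM-assisted workflow would layer context on top of a rule table like this — correlating banners, software versions, and known exploits — but the flag-and-explain loop is the same.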

How a company takes advantage of AI for cybersecurity will depend on its priorities. Security teams that are concerned about potential issues lurking undetected or being missed during manual reviews may deploy red teams to conduct penetration testing. These teams go on the offensive to test the company’s security measures and can use AI to identify weaknesses and write custom exploits. For other teams, it may be a greater priority to leverage AI’s behavioral monitoring capabilities to identify and address insider threats, such as employees’ attempts to exfiltrate sensitive data. Or they may seek AI’s help in maintaining compliance with constantly shifting industry standards and regulations. Finally, they may even use AI to fight AI-powered attacks. For instance, they can use it to identify automated attack tools or expose malicious code.
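The behavioral-monitoring idea above can be illustrated with a toy baseline check: compare a user's outbound data volume today against their own history and flag large deviations. Real insider-threat tooling uses far richer models; this z-score sketch, with hypothetical function and parameter names, only shows the shape of the technique.

```python
import statistics

def is_anomalous(history_mb, today_mb, threshold=3.0):
    """Flag today's outbound data volume if it sits more than `threshold`
    standard deviations above the user's historical daily mean.

    history_mb: list of past daily egress volumes (MB) for one user.
    """
    mean = statistics.mean(history_mb)
    stdev = statistics.stdev(history_mb)
    if stdev == 0:
        # No historical variation: any increase is suspicious.
        return today_mb > mean
    return (today_mb - mean) / stdev > threshold

if __name__ == "__main__":
    history = [10, 12, 11, 9, 10, 11, 12, 10, 9, 11]  # typical days, in MB
    print(is_anomalous(history, 500))  # a sudden 500 MB day stands out
    print(is_anomalous(history, 12))   # an ordinary day does not
```

A per-user baseline like this is the simplest form of the anomaly detection that AI-driven monitoring generalizes across many signals at once.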

While there are traditional methods of tackling all these security issues, AI is taking defensive efforts to the next level. Unfortunately, it’s also elevating what’s possible on the attack front, so despite all its benefits, threats from AI should not be underestimated or ignored.

How Attackers are Leveraging AI

First, there are risks stemming from the fact that many new AI tools are being pushed to market before developers fully understand how to secure them. With this in mind, companies should thoroughly vet all AI tools before approving their use across the enterprise. In reality, however, employees often use AI-based platforms or services that haven't been authorized by the IT department, including content creation aids like ChatGPT or Google Bard. Security leaders need to plan for this: to configure their security controls effectively, they need greater visibility into how these solutions are actually being used within their environments — whether they are approved or not.
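One simple way to start building that visibility is to scan outbound proxy or DNS logs for traffic to known AI services and compare it against the sanctioned list. The sketch below assumes a log of (user, domain) pairs; the domain list and function name are illustrative placeholders, not a complete inventory of AI services.

```python
# Illustrative list only -- real deployments track a much larger,
# regularly updated inventory of AI service domains.
KNOWN_AI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com"}

def unsanctioned_ai_usage(proxy_log, sanctioned):
    """Map each unsanctioned AI domain to the set of users who reached it.

    proxy_log: iterable of (user, domain) pairs from outbound traffic logs.
    sanctioned: set of AI domains the organization has approved.
    """
    hits = {}
    for user, domain in proxy_log:
        if domain in KNOWN_AI_DOMAINS and domain not in sanctioned:
            hits.setdefault(domain, set()).add(user)
    return hits

if __name__ == "__main__":
    log = [
        ("alice", "chat.openai.com"),
        ("bob", "example.com"),
        ("alice", "api.openai.com"),
    ]
    print(unsanctioned_ai_usage(log, sanctioned={"api.openai.com"}))
```

A report like this doesn't block anything by itself, but it tells security leaders which unapproved tools are actually in use — the prerequisite for setting sensible controls.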

There are also more intentional threats posed by bad actors. AI is being used to power many different types of attacks — and, even more worryingly, it is lowering the technical barrier to entry for hackers. Today's hackers can use AI to write highly believable phishing emails or to collect and analyze customer data for credential-stuffing attempts. AI can also be employed in the initial stages of an attack, when hackers are researching the organization they're targeting and gathering intel. LLMs can supercharge open-source intelligence (OSINT) gathering, producing comprehensive dossiers on a company's weak spots, such as public-facing assets with known exploits. These capabilities are reducing the upfront lift for hackers, allowing them to carry out well-researched attacks with ease.

Other Best Practices for Responsible AI Use 

While it’s critical for security leaders to understand the pros and cons of AI in cybersecurity, these are only the tip of the iceberg. They’ll also need to stay closely attuned to new government frameworks and strategies around this technology, such as the Biden Administration’s recent Executive Order on AI and the first global AI Safety Summit recently held in the United Kingdom. These regulatory efforts aim to address big-picture issues around AI, including pressing privacy concerns, ethical considerations, potential biases in training models, and more.

Organizations need to better understand and secure their AI-based tools through acceptable use policies, strong security controls, and training for employees. Education raises awareness of how to take advantage of AI’s productivity and time-saving benefits without putting the company at risk or exposing its data. In some cases, product and engineering teams will need to consider how to securely integrate AI into their offerings. Organizations must ensure they’re not blindly trusting the outputs of AI and that they strike the right balance of automation, human judgment, and integrity.

As today’s companies continue to digitize, they’re exposed to a growing number of cyber threats. Thankfully, this same technology can be used to bolster their defensive mechanisms. As AI adds a new dimension to the ongoing battle between cyber defenders and attackers, security leaders who understand its capabilities on both sides of the coin and take a holistic view of AI in the enterprise can make AI work for them, rather than against them.