
There’s no better exemplar of the cybersecurity arms race than AI. On one side, network defenders and security vendors are using the technology to overcome skills shortages, supercharge phishing detection and automate remediation. On the other, threat actors are taking advantage of AI to create malware and craft more convincing impersonation scams.

Up until now, the good guys have generally benefited from this emerging technology. But as regulations are implemented across the globe, cybercriminals—with no such compliance mandates to hold them back—may pull ahead. The best way the cybersecurity sector can minimize the potential impact of this scenario is by creating voluntary codes of conduct to pre-empt any heavy-handed legislation.

The Advantage is Ours

Most experts working in AI and cybersecurity agree that the benefits of this technology currently outweigh the risks. Security vendors like Trend Micro have been using AI for over a decade for spam filtering and other purposes. Today the technology powers more advanced email security, such as computer vision that spots phishing pages and models that learn an executive’s writing style to flag business email compromise (BEC) attempts.
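For illustration, here is a minimal sketch of the writing-style idea: build a baseline of an executive’s known emails from character n-gram features and flag messages whose style diverges sharply. The sample emails, threshold and feature choices are assumptions for demonstration, not a description of any vendor’s detection pipeline.

```python
# Minimal stylometry sketch: flag emails whose writing style deviates
# from an executive's known baseline (illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus of emails known to come from the executive.
known_emails = [
    "Thanks for the update. Let's sync on the Q3 numbers tomorrow.",
    "Please loop in legal before we sign anything with the new vendor.",
    "Good work on the rollout. Keep me posted on customer feedback.",
]

# Character n-grams capture punctuation, casing and phrasing habits.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
baseline = vectorizer.fit_transform(known_emails)
profile = np.asarray(baseline.mean(axis=0))  # average style vector

def style_score(message: str) -> float:
    """Cosine similarity between a new message and the baseline profile."""
    vec = vectorizer.transform([message]).toarray()
    return float(cosine_similarity(vec, profile)[0, 0])

suspect = "URGENT!! wire $48,000 to the attached account now, do not call me"
THRESHOLD = 0.3  # assumed cut-off; would be tuned on real data
if style_score(suspect) < THRESHOLD:
    print("Possible BEC attempt: style does not match the sender's baseline.")
```

Real detection combines stylometric signals like these with header analysis, sender reputation and many other features.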

AI is being used elsewhere, too. In 2024, generative AI and other technologies will be combined to produce digital assistants capable of upskilling security operations analysts. They could carry out a range of helpful tasks: explaining and contextualizing alerts, triaging and recommending actions, decoding complex scripts and assisting with threat-hunting search queries.
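As a rough sketch of what such an assistant might look like under the hood, the snippet below sends an alert to a chat model and asks for an explanation, a severity estimate and next steps. It assumes an OpenAI-compatible API; the model name, prompt and alert fields are illustrative assumptions, not any specific product’s design.

```python
# Minimal sketch of an LLM-backed triage helper for SOC analysts.
# Assumes an OpenAI-compatible chat API; model name, prompt and
# alert fields here are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_alert(alert: dict) -> str:
    """Ask the model to explain an alert and suggest next steps."""
    prompt = (
        "You are assisting a security operations analyst.\n"
        "Explain the following alert in plain language, rate its likely "
        "severity, and recommend the next two investigation steps.\n\n"
        f"Alert: {alert}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

example_alert = {
    "rule": "Possible credential dumping",
    "process": "lsass.exe accessed by procdump64.exe",
    "host": "FIN-WS-042",
}
print(triage_alert(example_alert))
```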

More AI-powered automation is sorely needed in cybersecurity at a time when threat volumes are at record levels and humans simply can’t manage the sheer complexity and dynamism of IT environments. Intelligent systems are needed to continuously monitor cloud and other digital systems for vulnerabilities and configuration errors, and then remediate them automatically, without human input.
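As a simple illustration of that kind of closed-loop remediation, the sketch below scans an AWS account for S3 buckets missing a public access block and enables it automatically. It is a minimal example under assumed permissions; production systems would wrap such actions in approval workflows, exception handling and audit logging.

```python
# Minimal sketch of automated cloud misconfiguration remediation:
# find S3 buckets without a public access block and enable it.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_public_access_block(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchPublicAccessBlockConfiguration":
            raise
        # Misconfiguration found: remediate without waiting for a human.
        s3.put_public_access_block(
            Bucket=name,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        )
        print(f"Enabled public access block on {name}")
```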


Why Rushed AI Regulation Could Stifle Security

Carefully considered AI regulation is critical. Yet many of the rules now being drafted are being rushed to keep pace with AI’s rapid growth, and in many countries lawmakers and regulatory agencies consult little with the developers of the technology. Rushed regulation could unintentionally hinder defensive uses of AI at the very moment attackers are using it freely.

The good news is that there is significant movement among groups of AI companies toward developing and adopting general codes of conduct for AI. The bad news is that most of these agreements either don’t address cybersecurity meaningfully or do so only in vague terms. For example, many specify “robust security controls across product development lifecycle” without explaining what that means.

In Europe, AI regulation is already at an advanced stage. The EU AI Act is now set to receive formal approval after lawmakers recently made revisions. These proposals put a significant burden on general-purpose AI systems deemed “high risk”. General-purpose AI includes large language model-based tools, which are already proving incredibly useful to the cybersecurity community.

According to one legal analysis: “High-impact general purpose AI models with systemic risk will be subject to additional obligations, including model evaluations, the obligation to assess and mitigate systemic risks, conduct adversarial testing, report to the European Commission on serious incidents, ensure cybersecurity, and report on their energy efficiency.” Systems involving “biometric identification and categorization of natural persons,” which may include identity and access management (IAM) tools, could also be subject to the same obligations and restrictions. This appears to be written with the excellent intention of preventing bias; however, the potential impact on biometric systems used in support of cybersecurity was likely not considered.

A Better Way Forward

The risk is that quickly developed and/or vague regulatory requirements for the general use of AI may stifle cybersecurity innovation at exactly the time we need intelligent automation more than ever. The industry is still suffering a skills shortage estimated at four million professionals globally, including 522,000 in North America, and AI is a potential game-changer in assisting security operations.

Over the coming year, the industry is likely to lead with self-regulation, given how little emphasis cybersecurity specifics have received in the draft AI regulations to date. Aside from an October 2023 Executive Order, most discussion of AI outside the EU has centered on ethics and responsible use rather than cybersecurity. A voluntary code of conduct for cybersecurity in AI could set the stage for clearer regulations and illustrate what acceptable rules should look like. Ideally, it would also help standardize and harmonize AI regulations on cybersecurity across jurisdictions, as the current trend toward very different rules will only help attackers.

We have to remember that the bad guys don’t need to follow the rules when designing their AI models. By thinking proactively and strategically, cyber defenders can retain a critical advantage.
