The arms race in cybersecurity is ramping up quickly as bad actors use generative AI tools to create more sophisticated, harder-to-detect phishing, fraud and other campaigns, while defenders use those same technologies to protect against such attacks.
Credit card giant Visa, in its recent Biannual Threats Report, laid out a scenario that others in cybersecurity also are seeing. Rapidly mainstreaming large language models – what Visa calls advanced language models, or ALMs – and generative AI technologies like OpenAI’s ChatGPT and Google’s Bard are being exploited by hackers to run phishing campaigns, modify or create malicious code and make malware harder to spot.
The technology is also allowing less-skilled cybercriminals to launch more complex cyberattacks, increasing both the number and sophistication of threats.
“Threat actors can use advanced language models to create global phishing campaigns and deploy AI-modified malware capable of avoiding detection to obtain sensitive information in financial and retail systems, including payment account information,” Michael Jabbara, vice president and global head of fraud services at Visa, told Techstrong.ai. “Bad actors can use ALMs to generate snippets of code which can be combined and modified to create malware that can retrieve stored sensitive information, such as administrator credentials or payment account data from a victim’s device.”
There are other ways generative AI is making its mark in the cybercrime world as well. The technology can help bad actors generate realistic speech that sounds human, mimicking human emotions and logic, Jabbara said. This can be used to impersonate financial institutions to get a victim’s one-time password – as happened recently to software development platform provider Retool – or run vishing campaigns to steal payment account credentials or other sensitive data.
Technologies like AutoGPT and other ALMs – including the hacker-created WormGPT and FraudGPT – can be used with bots to automatically create and run phishing campaigns, generate file-stealer malware, and create crypters and polymorphic malware that obfuscate the malicious code, according to Visa’s report.
“We may see business email compromise (BEC) attacks become more commonplace as threat actors use AI to manipulate employees through voice and email messages by impersonating a company’s management or leadership executives,” Jabbara said. “Threat actors can also request ALMs generate messages using specific language to convince the victim the phishing messages were sent from the spoofed entity or person.”
Lowering the Barrier to Entry
By lowering the bar for hackers to run sophisticated campaigns, AI increases the level of threat to victims: it hides the imperfections that once gave scams away, such as poor grammar, spelling errors or “wonky formatting,” he said.
That makes fake emails harder to detect, leaving targets more susceptible to clicking on malicious links or attachments and sharing personal information.
“Fraud doesn’t require much technical skill and threat actors continue to utilize AI technology to create more sophisticated tactics and forms of fraud,” he said. “It also enables them to scale their operations at a faster pace. As a result, for consumers and businesses, AI is making it more difficult to spot a scam than ever before.”
The uptake of AI by bad actors and defenders alike comes against the backdrop of an already rapidly evolving cyberthreat landscape. The Visa report found that the number of ransomware attacks continues to jump, with March seeing almost 460 such attacks, a 91% increase over February and a 62% rise year-over-year.
In addition, the number of enumeration attacks – where hackers use brute-force methods to gain unauthorized access to a system using data like usernames and passwords – rose 40% in the first half of this year over the last half of 2022. The report also pointed to spoofed merchants, growing numbers of malvertising campaigns, and flash-fraud and free-gift scams.
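One common defense against that kind of brute-force enumeration is rate limiting failed attempts per source. The sketch below shows a minimal sliding-window version; the window size, threshold and `record_failure` helper are illustrative assumptions, not anything described in Visa's report.

```python
# Minimal sliding-window rate limiter for failed attempts.
# WINDOW_SECONDS and MAX_FAILURES are assumed, illustrative values.
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # look-back window for counting failures
MAX_FAILURES = 10     # failures allowed per source within the window

_failures = defaultdict(deque)

def record_failure(source_id: str, now: float) -> bool:
    """Record a failed attempt; return True if the source should be blocked."""
    attempts = _failures[source_id]
    attempts.append(now)
    # Discard attempts that have aged out of the sliding window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) > MAX_FAILURES

if __name__ == "__main__":
    blocked = False
    # A bot retrying once per second trips the limit on its 11th failure.
    for second in range(12):
        blocked = record_failure("203.0.113.7", now=float(second))
    print("blocked:", blocked)  # True
```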
More AI-Based Threats on the Way
Organizations can expect the onslaught of AI-driven threats to continue, according to Visa.
“Threat actors could use ALM technology in phishing campaigns to target victims around the world using easily created and effective phishing lures, regardless of the threat actors’ or victims’ locations or native languages, and deploy malware modified by ALMs capable of avoiding detection to obtain sensitive information, including payment account information,” the Visa report authors wrote.
Social engineering techniques will become more effective and BEC attacks more commonplace. Given the growing popularity of ALMs, the security protocols used to detect phishing campaigns will need to go beyond checking spelling and grammar for errors to identify fraudulent or malicious emails.
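As an illustration, checks that don't depend on surface errors might score structural signals instead: mismatched Reply-To domains, links whose visible text doesn't match their target, and pressure language. The heuristics, term list and function below are illustrative assumptions, not a description of any vendor's filter.

```python
# Toy phishing signals that don't depend on spelling or grammar.
import re
from urllib.parse import urlparse

URGENCY_TERMS = {"verify", "suspended", "immediately", "urgent"}

def suspicious_signals(sender: str, reply_to: str, body: str) -> list[str]:
    """Return structural red flags found in an email."""
    signals = []
    # 1. Reply-To domain differs from the sender domain (common in BEC).
    if sender.split("@")[-1].lower() != reply_to.split("@")[-1].lower():
        signals.append("reply-to domain mismatch")
    # 2. Link text shows one domain but the href points somewhere else.
    for href, text in re.findall(r'<a href="([^"]+)">([^<]+)</a>', body):
        if text.startswith("http") and urlparse(text).netloc != urlparse(href).netloc:
            signals.append(f"link text {text!r} hides {urlparse(href).netloc}")
    # 3. Pressure language urging the victim to act immediately.
    lowered = body.lower()
    signals += [f"urgency term: {term}" for term in URGENCY_TERMS if term in lowered]
    return signals

if __name__ == "__main__":
    body = ('Your account is suspended. '
            '<a href="http://203.0.113.7/login">https://bank.example.com</a>')
    print(suspicious_signals("alerts@bank.example.com",
                             "support@bank-example.net", body))
```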
“Employees and consumers should be continuously educated on how ALMs can be exploited by fraudsters to socially engineer victims through phishing or vishing campaigns to gain access to their accounts or obtain payment account credentials,” they wrote.
Defenders also will have to continue expanding the use of AI technologies in their portfolios. Jabbara said Visa uses massive datasets for its AI-enabled defenses, noting that Visa Advanced Authorization analyzes up to 500 risk factors to help detect fraud in real time.
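The report doesn't detail how those risk factors are combined, but a common shape for real-time scoring is a weighted model over per-transaction features. The sketch below is a toy logistic scorer with a handful of hypothetical features and made-up weights; it is not Visa Advanced Authorization's actual feature set or model.

```python
# Toy real-time transaction risk scorer; all weights and features are
# hypothetical, chosen only to show the shape of the approach.
import math

WEIGHTS = {
    "amount_vs_avg": 1.8,   # amount relative to the cardholder's average
    "new_merchant": 0.9,    # first use of this card at the merchant
    "geo_mismatch": 1.4,    # purchase far from recent card activity
    "odd_hour": 0.5,        # outside the cardholder's usual hours
}
BIAS = -3.0  # keeps the baseline score low for ordinary transactions

def risk_score(features: dict) -> float:
    """Logistic score in [0, 1]; higher means more likely fraudulent."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    txn = {"amount_vs_avg": 2.5, "new_merchant": 1.0,
           "geo_mismatch": 1.0, "odd_hour": 1.0}
    print(f"risk score: {risk_score(txn):.2f}")  # ~0.99, flag for review
```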
“We’re still in the early stages of understanding the impact generative AI will have on fraud detection and prevention, but we’re excited about the opportunity it presents,” he said.