Financial services institutions may be at a higher risk of fraud in these relatively early days of generative AI as threat actors leverage the emerging technology in their attacks, a disadvantage that the industry needs to address, according to the U.S. Treasury Department.

In a 52-page report this week, the agency noted that while the industry has for years adopted AI capabilities in its operations, including risk management platforms and other tools for defending against fraud, the rapid innovation fueled by the rise of generative AI and large language models (LLMs) is likely changing the playing field.

Meanwhile, cybercriminals are quickly incorporating generative AI into their attacks.

“As access to advanced AI tools becomes more widespread, it is likely that, at least initially, cyberthreat actors utilizing emerging AI tools will have the advantage by outpacing and outnumbering their targets,” the report’s authors wrote.

The bulk of the report’s findings come from 42 in-depth interviews the agency conducted late last year. The report comes in response to President Biden’s October 2023 executive order outlining the need for the safe and secure development of AI technologies.

The report arrived a day before Vice President Kamala Harris outlined new policies for the federal government’s use of AI.

“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden Administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability,” Nellie Liang, under secretary for domestic finance, said in a statement.

Institutions Rethinking AI’s Role

Experts suggest that most cyber-risks from AI can be managed like those of other IT systems, the agency wrote. To address the advantage bad actors currently hold, financial institutions can expand and strengthen risk management and cybersecurity practices, integrate AI more deeply into their own defenses, and collaborate with other organizations, particularly by sharing threat information.

Like organizations in other industries, financial services institutions have for more than a decade used predictive AI and machine learning technologies for everything from increasing employee productivity to making operations more efficient to arming their businesses with more timely and accurate information.

For financial companies, that also has included protecting the business against cyberattacks. However, the Treasury report said that many of those early adopters likely are reassessing the strategic potential of AI, given the accelerated development and adoption of generative AI, which exploded on the scene less than two years ago when OpenAI launched its ChatGPT chatbot in late November 2022.

“Some of the financial institutions that Treasury met with reported that existing risk management frameworks may not be adequate to cover emerging AI technologies, such as Generative AI, which emulates input data to generate synthetic content,” the authors wrote. “Hence, financial institutions appear to be moving slowly in adopting expansive use of emerging AI technologies.”

Generative AI Comes with Risks

Bad actors can use generative AI in a growing number of ways, each carrying its own risks. The technology can support more sophisticated social engineering efforts – helping craft more convincing messages, for example – as well as generate malicious code, find vulnerabilities in software and spread disinformation.

Threat groups can also attack AI systems directly: corrupting training data or model weights to sabotage the training process or change a model’s output, leaking data during inference, or stealing a model by building another with the same functionality. Risks also come through third parties.

AI’s dependence on data also raises the risk, fueling the need for better protections, according to the report.

“Applying appropriate risk management principles to AI development is critical from a cybersecurity perspective, as data poisoning, data leakage and data integrity attacks can take place at any stage of the AI development and supply chain,” the report says. “AI systems are more vulnerable to these concerns than traditional software systems because of the dependency of an AI system on the data used to train and test it.”

The Gap Between Large and Small

Another issue that needs to be addressed is the widening gap between what larger institutions can do and what smaller ones can. Many larger institutions are developing their own AI systems, and those that have moved to the cloud can leverage AI systems the major cloud providers already have in place. In addition, larger firms have more data and expertise for training models and building anti-fraud AI systems.

Many of the steps needed to help financial institutions adapt to a cybersecurity landscape that includes generative AI involve collaboration across the public and private sectors, the agency wrote. These include sharing more anonymized cybersecurity information with vendors to enhance their anomaly-detection AI.

Financial institutions also are collaborating with regulators to address oversight concerns, Treasury wrote.

That said, collaboration within the industry needs better coordination; little fraud information is shared today, a gap that tends to hurt smaller companies more than larger ones.

“A clearinghouse for fraud data that allows rapid sharing of data and can support financial institutions of all sizes is currently not available,” the report’s authors wrote. “With their broader set of client relationships, large firms have a wider base of historical fraudulent activity data they can use to develop fraud-detection AI models.”

The Bank Policy Institute and the American Bankers Association both are trying to close the fraud information-sharing gap in the banking sector, with the latter’s efforts focused on closing the gap for smaller institutions.