
The U.S. Consumer Financial Protection Bureau earlier this year was one of four federal agencies to affirm their role in ensuring that the emerging AI-based automated systems used by businesses do not discriminate against consumers.

In a three-page joint statement released in April, the CFPB, along with the Department of Justice’s Civil Rights Division, the Equal Employment Opportunity Commission and the Federal Trade Commission, noted that while AI systems can be useful in churning through massive amounts of data to find information, perform tasks or make recommendations, they also can produce outcomes that are discriminatory and illegal.

“These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices,” the agencies wrote. “Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”

They pledged to monitor the development and use of automated systems to ensure “responsible innovation,” adding that they will “protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”

Reasons for Credit Decisions

The CFPB earlier this month took a step in that direction, telling lenders using AI systems and algorithms in their underwriting models that they must offer specific and accurate reasons for denying consumers credit or changing the conditions of their credit. They can’t simply use sample adverse action forms and checklists from the agency, particularly if those forms don’t reflect the actual reasons for the decision.


They also can’t “rely on overly broad or vague reasons to the extent that they obscure the specific and accurate reasons relied upon,” the CFPB wrote in a circular released September 19.

The bureau’s requirements around adverse action notifications “apply equally to all credit decisions, regardless of whether the technology used to make them involves complex or ‘black-box’ algorithmic models, or other technology that creditors may not understand sufficiently to meet their legal obligations,” the agency wrote. “As data use and credit models continue to evolve, creditors have an obligation to ensure that these models comply with existing consumer protection laws.”
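The circular doesn’t dictate how lenders should pull specific reasons out of complex models, but in practice a common approach is to rank the features that pushed an individual applicant toward denial and map the top contributors to concrete reason statements. The sketch below illustrates that idea with a simple logistic-regression score; the feature names, reason wording and cutoff are hypothetical, not anything the CFPB prescribes.

```python
# Minimal sketch: turning per-feature score contributions into specific
# adverse action reasons. Feature names, reason text and synthetic data
# are hypothetical; real underwriting models are far more complex.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
FEATURES = ["credit_utilization", "months_since_delinquency", "income", "loan_amount"]
REASONS = {
    "credit_utilization": "Proportion of revolving balances to credit limits is too high",
    "months_since_delinquency": "Time since most recent delinquency is too short",
    "income": "Income is insufficient for the amount of credit requested",
    "loan_amount": "Amount requested is too high relative to ability to repay",
}

# Synthetic training data standing in for historical underwriting outcomes
# (class 1 = approve, class 0 = deny).
X = rng.normal(size=(1000, len(FEATURES)))
y = (X @ np.array([-1.5, 1.0, 1.2, -0.8]) + rng.normal(size=1000) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def adverse_action_reasons(applicant, top_n=2):
    """Return the specific reasons that most pushed this applicant toward denial."""
    z = scaler.transform(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * z          # per-feature push on the approval log-odds
    order = np.argsort(contributions)           # most negative (most harmful) first
    return [REASONS[FEATURES[i]] for i in order[:top_n] if contributions[i] < 0]

applicant = np.array([2.5, -1.0, -1.5, 2.0])    # a hypothetical denied applicant
print(adverse_action_reasons(applicant))
```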

Bias Can Creep In

The problem, according to the CFPB, is that the algorithms in these AI-based underwriting systems can rely on data that is not usually found in a consumer’s credit file or application and may have little bearing on the likelihood that a consumer will repay a loan. Such data “may create additional consumer risk,” the agency wrote, adding that it’s up to financial institutions to ensure the data and advanced technologies they use comply with legal requirements, including not illegally discriminating against consumers.

In their April joint statement, the CFPB and other agencies listed three ways AI-based systems may generate illegally discriminatory responses and decisions, including the use of datasets that may include historical bias, unrepresentative data, or other kinds of errors.

In addition, the agencies noted that the inner workings of automated systems often can be unclear, creating “black boxes” that make it difficult to know whether the systems operate fairly. Developers also “do not always understand or account for the contexts in which private or public entities will use their automated systems,” they wrote.
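The agencies don’t prescribe a particular test, but a routine first check on model outcomes is a disparate-impact ratio: comparing approval rates across groups and flagging large gaps for review. A minimal sketch, using made-up numbers and the conventional four-fifths rule purely as an illustrative threshold:

```python
# Minimal sketch of a disparate-impact check on automated credit decisions.
# Group labels, counts and the four-fifths threshold are illustrative only;
# they are not a legal standard endorsed by the agencies cited above.
from collections import Counter

decisions = [  # (group, approved) pairs from a hypothetical audit sample
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = Counter(g for g, ok in decisions if ok)
total = Counter(g for g, _ in decisions)
rates = {g: approved[g] / total[g] for g in total}

reference = max(rates, key=rates.get)            # group with the highest approval rate
for group, rate in rates.items():
    ratio = rate / rates[reference]
    flag = "REVIEW" if ratio < 0.8 else "ok"     # illustrative four-fifths rule
    print(f"{group}: approval {rate:.0%}, ratio vs {reference} {ratio:.2f} -> {flag}")
```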

AI in Banking Is Booming

This comes as AI in the banking industry is expected to expand rapidly in the coming years. According to Allied Research, the market will grow more than 32% a year, increasing from $3.88 billion in 2020 to more than $64 billion by 2030.
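Those figures are roughly self-consistent: taking the report’s 2020 and 2030 endpoints, the implied compound annual growth rate works out to a little over 32%. A quick check:

```python
# Sanity check on the cited market figures ($ billions, 2020 base, 2030 projection).
start, end, years = 3.88, 64.0, 10
cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")   # ~32%, matching the article
```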

Global consultancy McKinsey and Co. earlier this year laid out in a 12-page report the different ways AI can affect banking and financial services, from creating better customer experiences and lowering costs through greater efficiency to improving decisions regarding credit. That last one touches everything from assessing whether a customer qualifies for a loan and how much to lend, to loan pricing and managing fraud.

In a column in Forbes in May, Alex Kreger, a user experience strategist and founder of UXDA, a financial user experience design agency, included other areas, such as answering account inquiries, customer service, managing accounts and preventing fraud. He also wrote that banks can use chatbots – think ChatGPT or Bard – to help users apply for loans and guide them through the application process.

“To secure a primary competitive advantage, the customer experience should be contextual, personalized and tailored,” Kreger wrote. “And this is where I think AI will become the breakthrough technology that supports this goal.”

Challenges with AI

But while there are advantages that AI and large language models can bring to banking, there also are challenges financial institutions will have to address. A key one is ensuring that the chat interface is secure and that customer data cannot be accessed by bad actors or disclosed. Banks also need to help customers adopt the AI systems – such as chatbots – by making sure they’re comfortable using them.
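The column doesn’t spell out how to lock down a chat interface, but one common guardrail is to mask obvious account identifiers before a customer’s message ever reaches an external model. A minimal, hypothetical sketch (the patterns are illustrative and far from exhaustive):

```python
# Minimal sketch: masking obvious account identifiers in chat messages
# before they are sent to an external language model. Patterns are
# illustrative only and would need to be much broader in practice.
import re

PATTERNS = [
    (re.compile(r"\b\d{13,16}\b"), "[CARD_NUMBER]"),       # likely card numbers
    (re.compile(r"\b\d{9,12}\b"), "[ACCOUNT_NUMBER]"),     # likely account numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),        # US SSN format
]

def redact(message: str) -> str:
    """Replace obvious identifiers so they never leave the bank's systems."""
    for pattern, token in PATTERNS:
        message = pattern.sub(token, message)
    return message

print(redact("My account 123456789 was charged twice on card 4111111111111111"))
```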

Kreger also stressed the need to properly train the AI models, in this case making sure they are attuned to the financial services sector. Banks need to train “an AI model to understand the language and terminology specific to the banking industry. Banks should provide relevant training data and integrate the model with their existing systems to ensure that it can provide accurate and appropriate responses to user queries.”
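Kreger doesn’t describe what that training data looks like, but for chat-style models it typically takes the form of prompt-and-response pairs written in the institution’s own terminology. A hypothetical sketch of assembling such examples (the field names and file format are illustrative, not any particular vendor’s schema):

```python
# Hypothetical sketch: preparing banking-specific prompt/response pairs for
# adapting a chat model to the institution's terminology. Field names and
# file format are illustrative, not any specific provider's schema.
import json

examples = [
    {
        "prompt": "What does APR mean on my statement?",
        "response": "APR is the annual percentage rate: the yearly cost of "
                    "borrowing, including interest and certain fees.",
    },
    {
        "prompt": "Can I get pre-qualified without a hard credit pull?",
        "response": "Yes. Pre-qualification uses a soft inquiry, which does "
                    "not affect your credit score.",
    },
]

with open("bank_chat_training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")   # one training example per line

print(f"Wrote {len(examples)} examples")
```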

Banks themselves also are stressing the need to guard against bias. Greenfield Savings Bank, which has branches throughout Western Massachusetts, warned of possible problems in a July blog post outlining how AI can improve banking for both the organization and its customers.

“Increasing reliance on AI necessitates thorough data protection measures to safeguard customer information from potential breaches and misuse,” the bank wrote. “Additionally, AI algorithms are trained on historical data that may contain inherent biases, leading to potential discriminatory outcomes unless businesses stay vigilant.”

Trying to Stamp Out Bias

In an interview with AI software maker Jasper AI in June, Rosie Campbell, program manager at ChatGPT maker OpenAI and a member of the company’s Trust and Safety team, talked about several ways biases are being mitigated in generative AI, but added that technical steps won’t guarantee the elimination of all bias.

“Having an accountable ‘human in the loop’ to verify the outputs and decisions of AI systems is a useful mitigation,” Campbell said. “At OpenAI, our use case policies restrict the use of models for certain applications where we know the model may not perform adequately, and we collect user feedback on the quality of our model outputs in order to continuously improve them.”
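Campbell doesn’t describe a specific mechanism, but in practice a “human in the loop” usually means routing low-confidence or high-impact model decisions to a reviewer rather than acting on them automatically. A minimal, hypothetical sketch of that routing logic:

```python
# Minimal sketch of a human-in-the-loop gate: automated decisions are only
# executed when the model is confident AND the stakes are low; everything
# else is queued for a human reviewer. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class ModelDecision:
    action: str          # e.g. "deny_credit"
    confidence: float    # model's own score in [0, 1]
    high_impact: bool    # does the action materially affect the consumer?

def route(decision: ModelDecision) -> str:
    if decision.high_impact or decision.confidence < 0.9:
        return "queue_for_human_review"
    return "auto_execute"

print(route(ModelDecision("deny_credit", confidence=0.97, high_impact=True)))
print(route(ModelDecision("send_balance_reminder", confidence=0.95, high_impact=False)))
```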
