As we move deeper into the 2023 holiday shopping season, will the investments online retailers have made in AI-powered anti-fraud technologies pay off?
Over the years, combating online retail fraud, like fighting any crime, has proven to be an endless struggle to identify and stop it, or at least reduce its prevalence. While machine learning has long been used to fight online fraud, recent improvements in data management, pattern identification and the ability to flag suspicious activity promise to significantly enhance these technologies.
“Machine learning can be invaluable in addressing the full range of ways criminals and abusive customers attack retailers,” Katherine Wood, staff data scientist at anti-fraud vendor Signifyd, said. As Wood explains, the best of these AI/ML technologies use vast amounts of data to build an understanding of the identity and the intent behind each transaction. Wood contends that this creates a streamlined experience for good customers, enabling instant approvals and no delays in order processing while protecting the rest of the shopping experience.
That experience is protected by not only blocking bots from buying up all of the hot holiday items for sale but also identifying abusive returns and false claims of damage or delivery failure. “[This] helps merchants sift the legitimate claims from the false ones and continue to offer returns and refunds to good customers. Machine learning solutions can do this at an enormous scale, ensuring a smooth shopping experience no matter how busy it gets during the holiday rush,” Wood says.
For anyone who doubts the effectiveness of AI in this fight, a PYMNTS Intelligence-AWS study found that 66% of financial institutions that turned to AI/ML saw a decrease in their fraud rates.
“AI/ML algorithms can identify intricate patterns in data that may be indicative of fraud, even when these patterns are not easily discernible by humans or traditional rule-based systems,” Asim Zaheer, former CMO at Glassbox, explains. “Additionally, AI/ML models can learn the normal behavior of users, accounts, or transactions and flag anomalies that deviate from the established patterns, helping identify potentially fraudulent activities.
“In conjunction with ML capabilities, AI-powered systems can also analyze data in real time, allowing for quick detection of fraudulent activities as they occur and minimizing potential damage,” Zaheer adds.
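As a rough illustration of the anomaly-detection approach Zaheer describes, the sketch below trains an unsupervised model on historical transactions and flags new ones that deviate from the learned patterns. The feature names, values and contamination rate are illustrative assumptions, not any vendor’s production model.

```python
# A minimal sketch: learn "normal" transaction behavior from historical data,
# then flag new transactions that deviate from it. Features and thresholds
# here are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [order_value, items_in_cart, hours_since_last_order, shipping_billing_mismatch]
historical = np.array([
    [42.0, 2, 30.0, 0],
    [19.5, 1, 72.0, 0],
    [88.0, 3, 18.0, 0],
    [55.0, 2, 45.0, 0],
])

model = IsolationForest(contamination=0.05, random_state=0).fit(historical)

# A new transaction that deviates sharply from the learned patterns
incoming = np.array([[950.0, 12, 0.2, 1]])
flag = model.predict(incoming)  # -1 = anomalous, 1 = looks normal
print("review manually" if flag[0] == -1 else "approve")
```

In production such a model would be trained on far more transactions and features, but the flag-and-review flow is the same idea Zaheer describes.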
Prevalent Online Retail Fraud
There are currently several prevalent types of online retail fraud. One is account takeover fraud, in which criminals gain access to a customer’s e-commerce account and make fraudulent purchases. Another is card testing fraud, which occurs when stolen credit card information is used to make small purchases to test a card’s viability before buying more expensive items.
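Card testing tends to leave a recognizable footprint: bursts of low-value authorization attempts against the same card within a short window. The sketch below is a minimal, hypothetical velocity rule for flagging that footprint; the thresholds, field names and card tokens are assumptions for illustration, not any retailer’s or vendor’s actual logic.

```python
# Hypothetical velocity rule: flag possible card testing when one card
# produces many small authorization attempts within a short time window.
from collections import defaultdict
from datetime import datetime, timedelta

SMALL_AMOUNT = 5.00            # dollars; assumed cutoff for a "test" purchase
WINDOW = timedelta(minutes=10) # assumed observation window
MAX_SMALL_ATTEMPTS = 3         # assumed tolerance before flagging

def flag_card_testing(attempts):
    """attempts: list of (card_token, amount, timestamp) tuples."""
    by_card = defaultdict(list)
    for card, amount, ts in attempts:
        if amount <= SMALL_AMOUNT:
            by_card[card].append(ts)

    flagged = set()
    for card, times in by_card.items():
        times.sort()
        for start in times:
            in_window = [t for t in times if start <= t <= start + WINDOW]
            if len(in_window) > MAX_SMALL_ATTEMPTS:
                flagged.add(card)
                break
    return flagged

now = datetime(2023, 11, 24, 12, 0)
attempts = [("tok_123", 1.00, now + timedelta(minutes=m)) for m in range(5)]
print(flag_card_testing(attempts))  # {'tok_123'}
```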
There is also so-called “friendly fraud.” This occurs when a customer makes a legitimate purchase online, receives the product or service, and then falsely disputes the charge with their credit card company, creating a chargeback. The transaction is cancelled, the customer gets their money back, and the merchant incurs a chargeback fee.
AI/ML is being used to combat each of these fraud categories. For instance, it is used to streamline the chargeback dispute workflow caused by friendly fraud. This has historically been a long and manual process for credit card companies. AI is also aiding merchants in identifying the best evidence they have to prove a product was delivered — thereby gaining more wins against fraudulent chargebacks.
AI is Becoming Woven Into the Fabric of E-Commerce
According to Dustin White, vice president of US Risk at Visa, AI is woven into the fabric of securing Visa’s payments ecosystem in the service of making the movement of money smarter and safer. “Security teams at Visa leverage AI to protect eCommerce websites and transactions from fraud attempts. For example, behind every transaction, Visa Advanced Authorization technology analyzes up to 500 unique risk factors to detect fraud in real-time, making fraud detection faster, more efficient, and far more accurate,” White says.
White says that through AI and advanced data analytics, Visa has helped to prevent about $32 billion in attempted client fraud.
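Visa’s Advanced Authorization model is proprietary, but the general shape of real-time risk scoring can be sketched generically: a supervised model trained on labeled transactions assigns each incoming order a fraud probability, and a decision threshold is applied before authorization completes. Everything below (the feature names, training data and threshold) is an illustrative assumption, not Visa’s system.

```python
# Generic sketch of real-time risk scoring: train on labeled transactions,
# score an incoming order, and apply an assumed decision threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [order_value, avs_mismatch, new_device, country_risk_score, velocity_1h]
X_train = np.array([
    [35.0, 0, 0, 0.1, 1],
    [60.0, 0, 0, 0.2, 2],
    [480.0, 1, 1, 0.9, 8],
    [520.0, 1, 1, 0.8, 7],
])
y_train = np.array([0, 0, 1, 1])  # 1 = confirmed fraud

clf = LogisticRegression().fit(X_train, y_train)

incoming = np.array([[499.0, 1, 1, 0.85, 6]])
risk = clf.predict_proba(incoming)[0, 1]
DECLINE_THRESHOLD = 0.8  # assumed policy; tuned per issuer or merchant in practice
print(f"risk={risk:.2f}", "decline" if risk >= DECLINE_THRESHOLD else "approve")
```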
Karl Cama, senior chief architect for the office of the CTO at Red Hat, says retailers increasingly use AI to analyze shopper patterns, buying habits, return frequency, return timeframes and failed home delivery patterns. “The intention is to reduce the amount of theft or fraud through consumers falsely reporting issues such as failed deliveries, lost deliveries, defective or unsuitable goods, lost returns, and returns of used goods which are non-defective,” Cama says.
In these situations, there is often a pattern of behavior from a selection of consumers, specific geographic locations, or certain high-value product items. “AI can help to detect these patterns and introduce additional security measures or procedures for select products or consumers. AI can also spot and flag actions considered ‘out of character.’ An example may be purchasing large or expensive household items from remote locations when local merchants are available,” Cama explains.
“Moreover, with access to systems of record that hold personal information about the consumer, AI can be used to mitigate identity fraud on eCommerce systems. Is the consumer really who they claim to be, just because they had the right credentials?” Cama adds.
AI as a Force Multiplier for Fraudsters
Rob McDougall, CEO at contact center software provider Upstream Works, like most experts, expects AI to also be a boon for fraudsters. “AI is a true double-edged sword,” McDougall says. “It is still just complex pattern analysis. AI can be used to determine the patterns of fraud that are used and can help detect those patterns to help combat fraudulent activity. Conversely, AI can also use those patterns to propagate fraud more easily. Bad actors can also use AI to determine the trigger points for AI-based fraud detection and avoid them. It will be an ongoing game of cat and mouse, but generally, I suspect the available AI models will probably be used more for harm than good.”
Time will tell whether AI/ML brings more benefit to the fraudsters or to the defenders. Visa’s White is likewise concerned about what he terms weaponized AI. “Everyone is talking about how they can use AI in everyday life, including fraudsters. New AI use cases have the potential to significantly lower the barrier to entry for less technical threat actors for things like malware development and phishing campaigns,” White says.
White adds that fraudsters are using GenAI to more easily create customized phishing emails or business email compromise messages that can be used to conduct malicious activities. “The ease with which these tools generate written results in any language means phishing campaigns will likely become more complex and difficult to detect as the AI technology evolves. Visa’s Payment Fraud Disruption team continues to monitor the use of emerging AI by fraudsters and cybercriminals,” he adds.