
We are at an interesting inflection point with AI as new opportunities take shape, from life-saving advances in healthcare and transportation to digital replicas of public figures, including the late Suzanne Somers. At the same time, AI security concerns have never been more prevalent. There is a tremendous need for greater awareness of, and protection against, cybercriminals breaking into AI systems and tools. While AI can itself be used to combat these fraudsters, a blended approach that couples traditional security tools with AI is more effective.
According to a recent survey, consumer trust in organizations is declining amid growing concern over identity fraud. In fact, 97% of consumers have concerns about their personal data being online, and only 8% fully trust the organizations that manage their identity data, down from 10% last year.
Digital experience is at the heart of customer trust, and as expectations continue to evolve, brands must prioritize creating a more secure and intuitive online environment. Organizations have an immense opportunity to leverage AI and decentralized identity to achieve this goal. These technologies are the future of identity, and early adopters will stand apart by delivering best-in-class consumer experiences. Doing so, however, requires addressing consumer concerns head-on while keeping adoption gradual and approachable.
We need stronger protection against the new wave of generative AI-enabled threats such as deepfakes, impersonation scams and digital media manipulation. A recent AARP report found that identity fraud and scams cost Americans $47 billion in 2024, and those figures are expected to rise each year. At the enterprise level, the impact could be catastrophic. We are already seeing a growing volume of these threats in the workplace, with spikes in:
- Executive Impersonation: Fraudsters use deepfakes of CEOs and CFOs to authorize fraudulent wire transfers, exploiting trust in hierarchical communication. The use of realistic video or audio adds credibility to fraudulent requests, bypassing traditional verification methods.
- Onboarding Scams: Fake identities are used to gain employment, often to access sensitive systems or data. Deepfake-enhanced resumes and interviews make it easier for fraudsters to infiltrate organizations undetected.
- Privileged Access Breaches: Impersonating employees with high-level access allows fraudsters to infiltrate critical infrastructure, often leading to widespread data breaches or operational disruptions.
What our eyes see and our ears hear can no longer be relied upon, because adversaries are exploiting generative AI. The technology enables video and audio deepfakes to be created and deployed into mainstream digital interactions at rapid speed. Fraud departments already struggle to keep up with the volume of cases that need their attention, and AI is likely to make this problem much worse.
We are at a crucial moment to address this new frontier of enterprise risk management: bad actors are already using the synthetic content generation capabilities of generative AI for fraud, insider threats, supply chain compromise and brand damage. If synthetic digital content remains largely unchecked, trust in businesses, governments and media is at stake. We must blend new AI with traditional security, staying vigilant and flexible to deter this new era of AI-powered fraud.