AI is fueling an explosive rise in fraud and digital identity crime


AI-powered fraud is evolving faster than most organizations can detect it. That’s the message from Proof, the identity authorization company, in its new report, The Trust Ledger: Transaction & Identity Fraud Bulletin.
The research reveals how synthetic identities, stolen credentials, and generative AI are fueling a surge in digital impersonation and fraud across industries.
The Trust Ledger draws on proprietary platform data, threat research, and surveys of fraud leaders.
The overall picture it paints is one of a changing threat landscape, where identity is easier to fake than ever and defenses are struggling to keep up.
“Fraud today doesn’t look like it did five years ago. It’s synthetic, it’s autonomous, and it’s scaling,” said Pat Kinsel, CEO of Proof. “We’re seeing high-risk interactions involving billions in assets -- across industries that never considered themselves fraud targets before. Therefore, trust must now be engineered. In a world where identity can be convincingly faked and monetized at scale, businesses, consumers, and policymakers must urgently adapt.”
According to the FBI, internet crime losses reached $16 billion in 2024, a 33 percent increase on the year before. Proof suggests the real figure could be even higher, since newer tactics are harder to detect and often go unreported.
Generative AI is helping attackers fake documents, clone voices, and bypass outdated security tools.
Proof’s data also shows a mismatch between risk and readiness. In a recent survey of enterprise customers and fraud leaders, almost 30 percent said they do not have a reliable way to measure fraud across their systems. Most reported a rise in attack attempts, with AI-driven forgeries and impersonation listed as top concerns.
“The threat landscape has changed,” said John Heasman, Chief Information Security Officer at Proof. “We’re not just seeing more fraud -- we’re seeing a different kind of fraud. AI tools are making it easier to fake documents, mimic voices, and defeat legacy systems. We need to modernize our defenses around real-time detection, high-assurance identity, and smarter fraud signals.”
Fraud targets a broad range of industries
Fraud is also spreading into new areas. Property managers, HR departments, and utility providers are now reporting an uptick in identity-based attacks.
What was once seen as a problem for banks and retail platforms is now affecting a much broader range of industries.
Proof’s findings suggest older adults are taking fraud prevention more seriously than younger users. According to the report, there are nearly twice as many identity verification users aged 60 to 64 as there are aged 20 to 24.
Although older users remain a high-risk group, they are also more likely to engage with verification and protective tools.
The report also says fraudsters are increasingly using synthetic identities that blend real and fake data. These false profiles often pass Know Your Customer (KYC) checks by submitting AI-generated documents and matched selfies.
On top of that, legitimate data tools designed for financial institutions or law enforcement, like TLOxp, are being misused by attackers to enrich stolen identity profiles.
Criminal groups are now offering stolen identity “fullz” for as little as $3 on the dark web.
Infostealer malware harvests credentials from infected devices, creating a steady stream of data for commercial identity markets. Some of these underground platforms now function as fraud-as-a-service networks, complete with customer support and monthly plans.
Tools such as FraudGPT and WormGPT are being marketed for phishing, social engineering, and malware creation.
These subscription-based generative AI tools start at around $200 per month and offer attackers a faster path to success by lowering the barrier to entry.
The Trust Ledger also calls for updated policies to address the growing speed and scale of AI-driven fraud.
Recommendations include increasing access to critical technologies, improving collaboration between private and public sectors, and redesigning fraud prevention models in sensitive industries such as finance, healthcare, and energy.
You can download The Trust Ledger: Transaction & Identity Fraud Bulletin here.
What do you think about the rise in AI-driven identity crime? Let us know in the comments.
Image credit: Rawpixelimages/Dreamstime.com