AI boosts growth in 'synthetic' identity fraud
A new survey of 500 financial executives in the US shows a 17 percent increase in 'synthetic' identity fraud cases over the past two years, with more than a third of professionals reporting a significant surge of 20 to 50 percent.
Synthetic identities combine legitimate personally identifiable information, such as social security numbers, with fabricated details designed to pass as a real person. This type of identity fraud has existed for decades but has steadily grown in sophistication. With the introduction of generative AI technologies, fraudsters can now refine synthetic identities to the point where they circumvent existing fraud prevention measures and escalation policies, including document verification and even manual reviews.
"Synthetic identity fraud has long been a significant challenge for the financial industry, but the advent of AI technology has accelerated the problem," says Ari Jacoby, Deduce CEO. "Fraudsters are now able to create identities at an unprecedented pace, allowing them to play the long game with these personas. They can open accounts, make deposits, and engage in seemingly human-like interactions that pose an immense challenge for technology-based detection methods. Without better fraud prevention solutions, we can anticipate a spike in the financial impact associated with these identities."
The research also reveals that the technique clearly works: 53 percent of respondents say they have proactively offered credit to synthetic customers, and 87 percent of experts anticipate the problem will worsen before an effective solution is found.
The full report is available from the Deduce website.