How AI can supercharge fraud in the enterprise [Q&A]

The FBI reports that complaints around deepfake AI videos have more than doubled, and financial losses have nearly tripled this year.
Agentic AI is set to accelerate this process, making it even easier to commit fraud with deepfakes. We spoke to Alix Melchy, VP of AI at Jumio, to discuss the threat and how to combat it.
BN: What’s fueling this dramatic rise in deepfakes, and how is agentic AI accelerating the trend?
AM: We’re witnessing a significant shift in the cyber threat landscape, as deepfakes are no longer a niche tool for isolated bad actors. With agentic AI, the cost and complexity of creating and deploying deepfakes have dropped dramatically. These AI systems now provide the contextual intelligence needed to personalize attacks at scale. It’s not just about visual deception; it’s about using AI agents to simulate convincing human interactions, carry out real-time social engineering, and even automate identity theft.
Agentic AI is making fraud scalable and hyper-targeted. It’s fueling what cybersecurity experts call the industrialization of fraud. Organized fraud rings are operating in packs, targeting industries from gaming to the sharing economy, and even high-value financial institutions. This is problematic for enterprise security teams, as each one is effectively a lone company defending itself against major fraud organizations.
BN: We’ve heard a lot about the 'industrialization of fraud.' Can you explain how agentic AI is lowering the cost and complexity of executing large-scale identity fraud campaigns?
AM: Agentic AI changes the economics of fraud: it commoditizes the low end of the market and increases operating margins at the high end. Given the complexity of sophisticated attacks, fraudsters have adopted the same approach as software companies, relying on third-party APIs and cloud-based tools. With agentic AI, they can automate the workflows integrating such services, in the same manner as legitimate businesses automate theirs. They’re essentially assembling modular fraud pipelines, and if one tool doesn’t work, they can easily switch to another. Rapid iteration and accessibility are transforming cybercrime into a highly adaptable and scalable business. Just as agentic AI has made the one-person startup possible, the one-person fraud enterprise is becoming a reality.
BN: Given the rising prevalence of deepfakes and AI-generated personas, how is Jumio evolving its identity verification strategy to stay ahead of these increasingly convincing threats?
AM: At Jumio, we recognize that point-in-time verification is no longer enough. We’re shifting toward continuous adaptive trust -- an identity intelligence model that continuously evaluates user behavior, device reputation, biometric signals, and contextual risk to determine trustworthiness.
This involves combining advanced AI techniques with proven data structures like knowledge graphs and layered biometric defenses. For example, our systems leverage multimodal signals such as facial recognition, document analysis, and behavioral biometrics. We then analyze those signals across transactions. If something seems out of context or inconsistent, that insight helps trigger an appropriate response, whether it’s escalating friction or denying access.
We’re also deploying networked AI models for enhanced risk analysis. These systems evaluate identity behaviors across transactions, devices, and IP addresses. This cross-network view provides a much stronger layer of defense and helps us secure identities while streamlining onboarding and login processes. Think of it like safer drivers getting lower insurance rates: low-risk users experience fewer barriers, while high-risk behaviors (e.g., multiple logins from unknown devices) trigger additional verification, maintaining security without compromising trust. The same view lets organizations surface risk signals that reveal larger fraud clusters operating in the shadows.
Our goal is to not just detect fraud when it happens, but to anticipate and prevent it with real-time, risk-based intelligence.
BN: With generative AI blurring the line between real and fake identities, what role does continuous adaptive trust play in securing digital ecosystems in 2025 and beyond?
AM: Continuous adaptive trust is foundational to modern identity intelligence. We must verify identities dynamically, based on shifting risk signals and contextual behavior.
This also allows us to apply personalized friction. Trusted users with consistent behavior may experience a frictionless journey. Meanwhile, anomalies -- say, a new device or an unexpected velocity pattern -- can trigger more rigorous checks. AI enables us to tailor the user experience based on the confidence we have in a user's identity at that moment.
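To make the idea of personalized friction concrete, here is a minimal sketch of risk-tiered responses. This is not Jumio's actual model; the signal names, weights, and thresholds are hypothetical, chosen only to show the shape of the decision logic described above.

```python
# Illustrative risk-based "personalized friction" logic.
# All signals, weights, and thresholds below are hypothetical.

RISK_WEIGHTS = {
    "new_device": 0.35,         # login from a device never seen for this user
    "unusual_velocity": 0.40,   # e.g., logins from distant locations in a short window
    "bad_ip_reputation": 0.30,  # IP flagged in shared fraud intelligence
    "biometric_mismatch": 0.50, # biometric signals inconsistent with history
}

def risk_score(signals: set[str]) -> float:
    """Combine observed risk signals into a score in [0, 1]."""
    return min(1.0, sum(RISK_WEIGHTS.get(s, 0.0) for s in signals))

def friction_level(signals: set[str]) -> str:
    """Map the score to an escalating response."""
    score = risk_score(signals)
    if score < 0.3:
        return "frictionless"       # trusted, consistent behavior
    if score < 0.6:
        return "step_up_check"      # e.g., request a fresh biometric check
    return "deny_and_review"        # block and escalate to manual review

print(friction_level(set()))                               # frictionless
print(friction_level({"new_device"}))                      # step_up_check
print(friction_level({"new_device", "unusual_velocity"}))  # deny_and_review
```

A real deployment would replace the static weights with a learned model fed by cross-network signals, but the escalation pattern -- low risk passes through, moderate risk steps up, high risk is blocked -- is the same.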
And importantly, all of this has to happen with strong privacy safeguards in place. Concepts like zero-knowledge proofs are becoming more relevant, empowering users to share only the data that's needed -- for example, proving they meet an age requirement without handing over their date of birth or other sensitive personal data.
In the years ahead, success will belong to organizations that treat trust as a living, adaptive process. They will leverage AI to anticipate fraud before it strikes, rather than taking a reactive approach. These are the organizations that will build brand trust with users, scale securely, and stay ahead in their markets.
Image credit: Rawpixelimages/Dreamstime.com
