Responding to the challenge of deepfakes [Q&A]


Recent advancements in publicly available AI models have made it incredibly easy to generate deepfakes -- so much so that attackers don’t need to be technical experts to pull off convincing deception campaigns.
Attackers can scrape public content to recreate someone's voice or face, and inject that fake identity into virtual meetings, phone calls, or even training videos.
We spoke to Camellia Chan, CEO and co-founder of X-PHY Inc, to learn more about these challenges and how businesses can respond to them.
BN: How have threat actors exploited deepfakes in practice or theory, and to what extent are these methods becoming more feasible?
CC: We’ve already seen AI-generated voice and video clones used to impersonate senior executives and political figures. A well-known example involved a deepfaked CFO’s voice on a video conference, a scam that stole nearly a quarter of a million dollars. The potential damage of deepfakes, though, goes far beyond financial theft. Take, for example, the fabricated audio clip of London Mayor Sadiq Khan, which falsely had him making inflammatory remarks ahead of Armistice Day. That deepfake almost led to serious public unrest.
What makes deepfakes particularly dangerous is their ability to bypass traditional security defenses. They’re designed to exploit human trust -- our natural tendency to believe what we see and hear. With reports showing a 550 percent rise in deepfake content on social media between 2019 and 2023, and an estimated eight million deepfakes expected to circulate in 2025, the threat is no longer hypothetical. It’s here, it’s growing, and it’s a real problem that we must address.
BN: Are there potential legal or compliance implications for organizations deceived by deepfakes, and how should this shape their preparedness plans?
CC: Yes, the legal and compliance risks tied to deepfake incidents are becoming harder for businesses to overlook, particularly as compliance expectations are still evolving. When an organization acts on AI-generated misinformation -- whether it’s transferring funds based on a fake executive announcement or inadvertently sharing manipulated content -- it opens itself up to serious financial, reputational and regulatory consequences.
Regulators expect clear evidence that companies are actively managing the risks posed by synthetic media. Under UK GDPR and the Data Protection Act 2018, there’s a duty to apply appropriate technical and organizational safeguards to protect personal data. If a deepfake results in that data being mishandled, regulators such as the ICO may interpret it as a failure to comply with Article 32.
While the UK and EU have made some progress in regulating deepfakes, particularly in areas like intimate image abuse and online safety, there’s a lack of guidance when it comes to AI-driven deception in corporate contexts. The EU AI Act introduces important transparency rules for synthetic content, but enforcement mechanisms and practical advice for businesses are still vague. Here, the burden falls on organizations to define what ‘reasonable protection’ looks like.
That’s why companies need to act now, building internal processes that treat deepfakes not just as a reputational risk but as a compliance one. It’s crucial to establish a digital chain of custody for verifying the origin of important audio, video, or image files. Risk and compliance teams should review existing frameworks to find any gaps in content authentication. This is the ultimate compliance test of how seriously businesses are taking digital trust.
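To make the ‘digital chain of custody’ idea concrete, here is a minimal sketch using only Python’s standard library. The signing key, log path, and record fields are illustrative assumptions rather than a prescribed format: each important media file gets a hashed, signed, timestamped record that reviewers can later check for tampering.

```python
import hashlib
import hmac
import json
import time
from pathlib import Path

# Hypothetical shared secret; in practice this would live in an HSM or key vault.
SIGNING_KEY = b"replace-with-a-managed-secret"
LEDGER = Path("media_custody_log.jsonl")  # hypothetical append-only custody log

def fingerprint(path: Path) -> str:
    """Return the SHA-256 digest of a media file's bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_custody(path: Path, source: str) -> dict:
    """Append a signed record of the file's hash, origin and timestamp."""
    entry = {
        "file": str(path),
        "sha256": fingerprint(path),
        "source": source,
        "recorded_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hmac"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    with LEDGER.open("a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

def verify_custody(entry: dict) -> bool:
    """Check that a custody record has not been altered since it was written."""
    claimed = entry.get("hmac", "")
    payload = json.dumps({k: v for k, v in entry.items() if k != "hmac"},
                         sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)
```

In a real deployment the key would be managed centrally and the log stored on tamper-evident media, but the principle is the same: authenticity is checked against a record created when the file first entered the organization.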
BN: What role does AI play in defending against deepfakes? Are there promising models or architecture specifically designed for this?
CC: Think of AI like homeopathy, applying the principle of 'like cures like'. In the case of deepfakes, AI is both the poison and the antidote. The same AI-driven technologies that create synthetic media are also being used to detect it. Advanced machine learning models -- multi-modal AI in particular -- are becoming increasingly effective at spotting signs of manipulation: unnatural blinking, facial inconsistencies, or mismatched audio-visual elements.
Just as homeopathy aims to trigger the body’s natural healing processes, the future of deepfake defence lies in empowering the endpoint. The value of AI comes from its ability to deliver real-time protection, enhanced privacy, and quicker response times -- all essential as threats grow more targeted and dynamic. Promising AI models include Convolutional Neural Networks (CNNs), Long Short-Term Memory networks (LSTMs), and Gated Recurrent Units (GRUs). CNNs analyze fine details in visual data, while LSTMs and GRUs are memory-based models designed to track audio-visual synchronization across short- and long-term dependencies.
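For readers who want to see what that pattern looks like in practice, below is a minimal, untrained PyTorch sketch: a small CNN encodes each video frame and an LSTM reads the frame sequence to produce a ‘likely fake’ score. The layer sizes, 64x64 input, and binary head are illustrative assumptions, not a production detector.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Small CNN that turns one video frame into a feature vector."""
    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, feature_dim)

    def forward(self, frames):                      # (batch, time, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.conv(frames.reshape(b * t, c, h, w)).flatten(1)
        return self.proj(feats).reshape(b, t, -1)   # (batch, time, feature_dim)

class DeepfakeDetector(nn.Module):
    """CNN per frame, LSTM across frames, sigmoid score for 'likely fake'."""
    def __init__(self, feature_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.encoder = FrameEncoder(feature_dim)
        self.temporal = nn.LSTM(feature_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frames):
        seq = self.encoder(frames)
        _, (last_hidden, _) = self.temporal(seq)
        return torch.sigmoid(self.head(last_hidden[-1]))  # (batch, 1)

# Smoke test on random data: 2 clips, 8 frames each, 64x64 RGB.
if __name__ == "__main__":
    clips = torch.randn(2, 8, 3, 64, 64)
    print(DeepfakeDetector()(clips))
```

A real detector would be trained on labelled genuine and synthetic footage and would typically fuse an audio branch as well; the sketch only shows the CNN-plus-recurrent-network shape described above.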
Another key development in deepfake defence is the ‘security by design’ approach. It ensures detection systems are embedded into wider security ecosystems. From hardware to data to content, each layer acts as a checkpoint for authenticity, building trust. By merging deepfake detection with strong endpoint data security, organizations are not just reacting to threats but ensuring that each device can swiftly verify the integrity of its digital communications, privately and without the need to send sensitive content to the cloud.
BN: How should organizations update their incident response plans to include deepfake scenarios?
CC: Treat deepfakes as you would any other cybersecurity threat by adopting a zero-trust approach. Don’t assume authenticity based solely on how convincing something appears, whether it’s video or audio.
Revise your response plan to include steps for verifying multimedia content, particularly when it’s being used to request sensitive actions. Develop a risk model that accounts for the potential use of deepfakes to target key business operations, such as executive communications, financial approvals, or customer relations. Ensure your team knows how to identify warning signs, who to report them to, and how to document the event.
Implement detection tools that can scan media in real time and store flagged content for later review. The quicker you can spot and respond to these threats, the more damage you can mitigate. The rule now is to question first and verify before trusting.
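As an illustration of that ‘scan, flag, store for review’ loop, the sketch below wraps whatever detection tool an organization uses behind a hypothetical score_fn callable and quarantines anything above an equally illustrative threshold, alongside a review record for the response team.

```python
import json
import shutil
import time
from pathlib import Path
from typing import Callable

QUARANTINE = Path("flagged_media")   # hypothetical review folder
FLAG_THRESHOLD = 0.8                 # illustrative cut-off; tune per detection tool

def triage(media_path: Path, score_fn: Callable[[Path], float]) -> bool:
    """Score a media file; copy it plus a review record if it looks synthetic.

    `score_fn` stands in for the organization's detection tool and is assumed
    to return a 0-1 'likely fake' score.
    """
    score = score_fn(media_path)
    if score < FLAG_THRESHOLD:
        return False

    QUARANTINE.mkdir(exist_ok=True)
    shutil.copy2(media_path, QUARANTINE / media_path.name)  # preserve evidence
    record = {
        "file": media_path.name,
        "score": round(score, 3),
        "flagged_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "status": "pending_review",
    }
    with (QUARANTINE / f"{media_path.stem}.review.json").open("w") as fh:
        json.dump(record, fh, indent=2)
    return True
```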
BN: How can organizations shape their internal policies to better protect against deepfake attacks?
CC: Organizations need to promote a culture of zero-trust thinking, one that challenges what’s perceived as authentic, especially when it comes to sensitive or high-stakes communication. A practical approach is implementing multi-channel confirmation for any major request, particularly those involving finance or access rights. This means verifying the request through a secondary method, such as a phone call or a secure platform, to confirm its authenticity.
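A minimal sketch of that multi-channel confirmation step is below; confirm_out_of_band and perform_action are hypothetical hooks standing in for a call-back to a known number, a secure approval app, or whatever secondary channel the organization standardizes on.

```python
from typing import Callable

def execute_sensitive_request(
    request: dict,
    primary_channel: str,
    confirm_out_of_band: Callable[[dict], bool],
    perform_action: Callable[[dict], None],
) -> bool:
    """Run a high-stakes request only after an independent channel confirms it.

    The confirmation must never happen on the same channel the request arrived
    on (e.g. the video call itself), since that channel may be the deepfake.
    """
    if not confirm_out_of_band(request):
        print(f"Request {request.get('id')} not confirmed outside "
              f"{primary_channel}; holding for review.")
        return False
    perform_action(request)
    return True
```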
Most importantly, regular employee education is essential. Deepfake threats are psychological in nature. Internal policies must encourage employees to pause, verify, and report suspicious content. This will transform natural skepticism into a valuable organizational asset.
Image credit: Wrightstudio/Dreamstime.com