Deepfakes: the next frontier in digital deception

Machine learning (ML) and AI tools raise concerns over mis- and disinformation. These technologies can 'hallucinate', creating text and images that seem convincing but may be completely detached from reality. This can lead people to unknowingly share misinformation about events that never occurred, fundamentally altering the landscape of online trust. Worse -- these systems can be weaponised by cyber criminals and other bad actors to spread disinformation, using deepfakes to deceive.

Deepfakes -- the ability to mimic someone's voice or appearance and make them appear to say whatever you want -- are a growing threat in cybersecurity. Today, the widespread availability of advanced, accessible AI tools allows virtually anyone to produce highly realistic fake content.

Many businesses care deeply about ransomware defences, but the FBI estimates that Business Email Compromise (BEC) / CEO fraud attacks cost businesses worldwide three times as much as ransomware attacks do. And deepfake technologies only make these attacks more plausible, and therefore more effective. A notable example from earlier this year involves a finance worker in Hong Kong who was tricked into paying out $25 million to fraudsters who used deepfake technology to impersonate the company's CFO during a video conference call.

As deepfake technology becomes more advanced, it becomes harder to spot the fakes. Microsoft's recent announcement of VASA-1 shows how this technology could help boost educational equity, but it also makes it easier for scammers to create deepfakes for dishonest purposes. That is why companies need to prioritise educating employees on the warning signs of deepfakes, in much the same way they are educated about social engineering, so that they do not fall victim to a scam. Unfortunately, many companies are not yet doing this.

The long arm of the law

The positive news is that legislation aimed at helping people deal with deepfakes is starting to arrive. This summer the EU's AI Act, the world's first comprehensive law regulating artificial intelligence, came into force. The Act builds on the regulatory framework the European Commission first proposed in April 2021, under which AI systems are analysed and classified according to the risks they pose to users: the higher the risk, the stricter the regulation.

In the UK, the King's Speech in July also addressed AI, in particular the need for the new government to "establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models." This includes labelling materials as 'Produced by AI', though there are already doubts about the usefulness of such labels -- cyber criminals are unlikely to apply them, and even for legitimate use there is currently little guidance on how much of a piece of content would need to be AI-generated for the label to apply.

The US government is also looking at similar measures, with a presidential Executive Order in November 2023 and a 2024 Act of Congress addressing the labelling of content and other AI issues.

Don't panic; be prepared

It's important to approach the hype surrounding deepfakes with caution. While deepfakes can make scams more convincing, we should refrain from contributing to unnecessary panic. For instance, the influence of nation-state interference in elections, often associated with deepfakes, may be overstated, and quick, firm responses can mitigate its impact. Many aspects of this issue are manageable.

The rise of generative AI and deepfakes has undeniably diversified the threat landscape. But while the volume and sophistication of attacks may increase, the underlying principles of these attacks remain unchanged, which means defensive strategies can also stay largely the same.

The warning signs

Detecting deepfake scams hinges on scrutinising the content, whether audio or visual. Slowing down and taking a moment to notice subtle irregularities is key: it is almost always better to act safely than swiftly, reducing the chances of being rushed into an error.

  • Talk the talk: Be vigilant about awkward pauses, odd pronunciations, or speech patterns that seem unnatural in the audio -- these may hint at AI manipulation.
  • Pressure point: Peculiar or inappropriate phrasing often signals AI involvement, as do calls for immediate action or undue urgency -- hallmark tactics scammers use to pressure the recipient.
  • Movements: Visual cues include unsynchronised mouth movements, repetitive gestures, and incongruous facial expressions.

By familiarising themselves with these indicators, employees can better spot potential deepfake threats.

As an individual, when faced with any social engineering scam, whether through audio, video, email, or AI deepfakes:

  • Pause: Take a moment; don't feel pressured to act quickly, and notice whether your emotions are being manipulated.
  • Assess plausibility: Question the request's normalcy. Is it an unusual task, or a common one presented unusually?
  • Review procedures: Consider whether the request violates your organisation's protocols, and make sure you understand why the action is needed.
  • Verify information: Seek confirmation from others or directly from the source. Avoid responding solely via the initial communication channel; opt for a call or direct message for validation. If you receive a voice message, cross-verify through instant messaging.

If something doesn't feel right -- escalate it to your security or IT team.

As an organisation, the best thing you can do is establish robust procedures and strictly adhere to them. For example, giving an employee sole discretion to authorise a £25 million payment without oversight is a recipe for financial loss. Implementing and following clear payment authorisation protocols, as the sketch below illustrates, will significantly reduce the financial risk.
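As a minimal sketch of what such a protocol might look like in code -- the £10,000 threshold, the names, and the data model below are illustrative assumptions, not a prescribed standard or any particular company's method -- a payment system can simply refuse to release high-value transfers until a second, independent approver signs off:

    from dataclasses import dataclass, field

    # Illustrative threshold (assumed figure, in pounds): payments above
    # this require two independent approvals before release.
    HIGH_VALUE_THRESHOLD = 10_000

    @dataclass
    class PaymentRequest:
        amount: float
        requester: str
        approvals: set = field(default_factory=set)

        def approve(self, approver: str) -> None:
            # Separation of duties: the requester can never approve
            # their own payment.
            if approver == self.requester:
                raise ValueError("requester cannot approve their own payment")
            self.approvals.add(approver)

        def can_release(self) -> bool:
            # High-value payments need two distinct approvers; smaller
            # ones need one.
            required = 2 if self.amount > HIGH_VALUE_THRESHOLD else 1
            return len(self.approvals) >= required

    # A £25m request can never be released on one person's say-so,
    # however convincing the "CFO" on the video call seemed.
    payment = PaymentRequest(amount=25_000_000, requester="finance_worker")
    payment.approve("approver_one")
    print(payment.can_release())  # False -- a second approver is still required
    payment.approve("approver_two")
    print(payment.can_release())  # True -- two independent sign-offs

The design point is separation of duties: no single person, no matter how convincingly someone is impersonated on a call, can release a high-value payment alone.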

Image credit: Skorzewiak/depositphotos.com

John Scott is lead security researcher at CultureAI
