AI-ttack of the Clones: The rise and risks of AI scams
Used for productivity, generative AI offers unprecedented potential to improve the performance and impact of modern software solutions. One of its most significant capabilities is that it lowers the barrier to entry for people without software development knowledge or experience. While this has its advantages, in the wrong hands it can also be dangerous.
GenAI has also raised the stakes for those working to protect users against social engineering: increasingly sophisticated and convincing scams are making it harder than ever to distinguish genuine communication from fake.
The commoditization of digital crime
Take the issues associated with deepfake AI voice clones, for example. According to a 2023 study, 1 in 4 adults have already experienced a voice scam, with over three-quarters of victims losing money as a result. In one case, a CEO was tricked into transferring $243,000 following a fraudulent call in which the scammer used voice cloning. These AI-driven capabilities are also used to stage fake kidnapping calls or make fraudulent requests for sensitive information by impersonating friends or family members, part of a significant rise in advanced phishing attacks.
But how do these scams work? AI voice cloning tools are trained on large datasets of audio recordings and can learn to mimic different voices and accents with startling authenticity. Scammers can pull the audio they need from live calls, social media or a range of other publicly available sources.
The sample is then uploaded into a specialized AI voice cloning tool, which is "trained" on the person's voice so it can be simulated accurately. The technology also allows threat actors to steer live conversations via text input, or even speak directly into the tool, prompting it to generate audio in the cloned voice, complete with plausible inflections and pauses that enhance realism. A process that would have seemed like science fiction just a few years ago can now be completed in minutes, underscoring how quickly the barriers to entry for this kind of sophisticated crime are disappearing. In the hands of experienced threat actors, the risks are greater still.
Although AI tools such as ChatGPT have built-in guardrails to limit misuse, there are already examples of these being bypassed. Flooding an AI tool with examples of wrongdoing, for instance, can ‘convince’ it to produce potentially damaging outputs. Some AI companies have taken a proactive approach to understanding the risks of this technique. According to Anthropic, the developer of Claude, this kind of ‘many-shot jailbreaking’ has a wide range of malicious use cases, from deception and disinformation to discrimination. As its recent study concludes, “The ever-lengthening context window of LLMs is a double-edged sword. It makes the models far more useful in all sorts of ways, but it also makes feasible a new class of jailbreaking vulnerabilities.”
Mitigating the risks
So, where does that leave organizations and teams that want to limit their exposure to these risks? Clearly, there is an urgent need for constant testing and robust security measures, in which user training and awareness initiatives play a major role. On a practical level, preventative techniques to protect against falling victim to a voice cloning scam should focus on:
- Identifying anomalies. Employees should listen for unusual speech patterns and odd or unexpected phraseology, which could indicate that an AI scam is being attempted. However, this should not be over-relied on, as the technology is rapidly advancing to a point where these unusual speech patterns (or artifacts) will no longer be perceptible.
- Scrutinizing the content of calls. If a call comes from an unexpected source, requests confidential information, or asks the recipient to carry out a financial transaction, employees should have a clear process for verifying its legitimacy.
- Confirming the caller's identity. Organizations should implement robust processes, such as a safe word, so employees can verify that a call is legitimate. Establishing a word or phrase as a form of verification is a simple but effective measure to counteract this scam (a minimal tooling sketch follows this list).
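The safe-word check is fundamentally a human process, but some teams may want to back it with lightweight internal tooling. The sketch below is a hypothetical Python example, not part of any particular product: it shows one way a helper could record and check the agreed phrase without ever storing it in plain text. The function names, phrase, and parameters are illustrative assumptions.

```python
# Illustrative sketch only: check an agreed "safe phrase" without storing it in plain text.
import hashlib
import hmac
import os


def enroll_phrase(phrase: str) -> tuple[bytes, bytes]:
    """Keep only a salted hash of the agreed phrase, never the phrase itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", phrase.strip().lower().encode(), salt, 200_000)
    return salt, digest


def verify_phrase(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Hash what the caller said and compare in constant time to avoid leaking timing clues."""
    candidate_digest = hashlib.pbkdf2_hmac(
        "sha256", candidate.strip().lower().encode(), salt, 200_000
    )
    return hmac.compare_digest(candidate_digest, digest)


# Example: enroll the phrase once, then check what a caller says against it.
salt, digest = enroll_phrase("blue heron at noon")
print(verify_phrase("Blue Heron at noon", salt, digest))  # True
print(verify_phrase("wrong phrase", salt, digest))        # False
```

The design point is simply that the phrase is salted, hashed, and compared in constant time, so even if the internal tool is compromised, the attacker does not walk away with the safe word itself.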
As for securing the guardrails of the models themselves, an increasingly popular service that ethical hackers can support is AI Red Teaming. Not only does AI Red Teaming provide a powerful testing mechanism, it also helps organizations build trust in their AI deployments. Additional research shows that leading organizations are recognizing AI red teaming as a way to reduce AI security risk. The approach tackles both AI safety and security, ensuring that any AI in development, at any stage, is built with both front of mind and limiting the likelihood that a cybercriminal will be able to use it maliciously in future.
At a time when cyber threats are evolving at an alarming rate and threat actors are building a huge criminal industry based on extortion and theft, organizations must continually evolve their approach to security to keep pace with the speed of change. Without the human touch, network infrastructure and data assets will remain extremely vulnerable, even with a comprehensive suite of automation technologies in place.
Dane Sherrets is a Solutions Architect at HackerOne.