Deepfake fraud calls target consumer cash


AI-generated voice impersonation scams are an increasing threat, with 31 percent of US consumers, 27 percent of Canadians, and 26 percent of UK consumers reporting encounters with deepfake fraud calls.
A new report from Hiya, based on data from the company's Voice Intelligence Network and a January 2025 survey of 12,000 consumers across the US, Canada, the UK, Germany, France, and Spain, shows that more than 30 percent of those targeted fell victim, suffering significant financial losses. In the US, the average reported loss was $539, while UK victims faced the steepest losses at £1,479 ($1,867).
Deepfakes pose growing fraud risk to contact centers


Deepfake attacks, including sophisticated synthetic voice clones, are rising, posing an estimated $5 billion fraud risk to US contact centers, according to the latest Pindrop Voice Intelligence and Security Report.
Contact center fraud has surged by 60 percent in the last two years, reaching the highest levels since 2019. By the end of this year, one in every 730 calls to a contact center is expected to be fraudulent.
How your voice could be used by phone scammers


We've all become used to scam phone calls of various kinds, but AI is allowing them to become far more sophisticated.
According to NordVPN, scammers are now able to create realistic voice clones from samples collected online and use them to extort money and data from victims' friends, relatives, or colleagues.
End of the line: How UK businesses can prepare for the telecoms Big Switch Off [Q&A]


With the UK's national 'Stop Sell' having commenced in September this year and the Public Switched Telephone Network (PSTN) switch-off due by December 2025, business owners need the facts and critical information to act quickly and confidently in choosing the right digital alternatives.
With the switch-off deadline looming, how can businesses in the UK ensure they are prepared for these significant, but exciting, changes to communications?
Cloning voices: The opportunities, threats and needed safeguards


Microsoft recently made headlines by announcing it is working on a form of artificial intelligence (AI) called VALL-E that can clone voices from a three-second audio clip. Imagine: AI can now have anyone's voice say words that individual never actually spoke! Even more recently, Samsung announced that its software assistant, Bixby, can now clone users' voices to answer calls. Specifically, Bixby lets English speakers answer calls by typing a message, which Bixby converts to audio and relays to the caller on their behalf.
Technologies like VALL-E and Bixby are bringing voice cloning to reality and have the potential to be industry game changers. Voice cloning refers to using AI to build a digital copy of a person's unique voice, including speech patterns, accent and inflection, by training an algorithm on a sample of that person's speech. Once a voice model is created, plain text is all that's needed to synthesize the person's speech, capturing and mimicking how they sound. In fact, many voice-cloning companies are now launching, making this technology far more accessible.
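The pipeline described above, training a model on a short speech sample and then synthesizing arbitrary text in that voice, can be caricatured in a few lines of Python. This is a toy sketch of the data flow only: real systems train neural acoustic models on raw audio, and the `VoiceModel`, `train`, and `synthesize` names here are illustrative, not any real library's API.

```python
from dataclasses import dataclass
from statistics import mean

# Toy stand-in for a recorded speech sample: (phoneme, duration_s, pitch_hz).
# A real cloning system would extract far richer features from raw audio.
Sample = list[tuple[str, float, float]]


@dataclass
class VoiceModel:
    """Captures a speaker's average pitch and speaking rate."""
    avg_pitch_hz: float
    avg_phoneme_dur_s: float


def train(sample: Sample) -> VoiceModel:
    # "Training" here is just averaging the sample's features;
    # real systems fit neural networks to hours (or seconds) of audio.
    return VoiceModel(
        avg_pitch_hz=mean(pitch for _, _, pitch in sample),
        avg_phoneme_dur_s=mean(dur for _, dur, _ in sample),
    )


def synthesize(model: VoiceModel, text: str) -> Sample:
    # Render each letter as a pseudo-phoneme with the speaker's learned
    # pitch and tempo: plain text in, "speech" in that voice out.
    return [
        (ch, model.avg_phoneme_dur_s, model.avg_pitch_hz)
        for ch in text.lower()
        if ch.isalpha()
    ]


# Short sample of the target speaker, then arbitrary new text in their voice.
sample = [("h", 0.08, 120.0), ("i", 0.12, 130.0)]
model = train(sample)
speech = synthesize(model, "hello")
```

The key point the toy preserves is the asymmetry the paragraph describes: the sample is needed only once, to build the model; after that, any text at all can be rendered in the captured voice.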
BetaNews, your source for breaking tech news, reviews, and in-depth reporting since 1998.
© 1998-2025 BetaNews, Inc. All Rights Reserved. About Us - Privacy Policy - Cookie Policy - Sitemap.