Think you can spot a deepfake? Think again


A new study from Jumio reveals that 52 percent of global respondents believe they could successfully detect a deepfake video.
However, the report's authors believe this reflects over-confidence on the part of consumers, given the reality that deepfakes have reached a level of sophistication that prevents detection by the naked eye.
Deepfakes become the leading form of fraud


New research from verification platform Sumsub shows that the proportion of deepfakes in North America more than doubled from 2022 to Q1 2023.
The proportion of deepfakes jumped from 0.2 percent to 2.6 percent in the US and from 0.1 percent to 4.6 percent in Canada. Over the same period, printed forgeries, which accounted for four to five percent of all fraud in 2022, dropped to zero in the last quarter.
How AI is being used to spread misinformation


A new research paper from ShadowDragon examines how AI, such as ChatGPT, is being used to spread hate and misinformation via fake reviews and deepfakes.
Written by Nico Dekens, director of intelligence, collection innovation at ShadowDragon, the paper looks at how to identify AI-generated materials online that are intentionally spreading false information or worse.
Deepfake content rises 900 percent


The number of deepfake videos online is increasing at an annual rate of 900 percent according to the World Economic Forum.
In light of this, Kaspersky researchers have revealed the top three fraud schemes using deepfakes that people should be aware of.
Deepfake fraud hits a third of businesses


Deepfake fraud is on the rise, with 37 percent of organizations experiencing voice fraud and 29 percent falling victim to deepfake videos, according to a survey by identity verification specialist Regula.
Fake biometric artifacts like deepfake voice or video are perceived as a real threat by 80 percent of companies. Businesses in the US are the most concerned, with around 91 percent of organizations considering them a growing threat.
Harnessing the power of AI to combat fake reviews [Q&A]


There's a lot of talk about how artificial intelligence (AI) is changing the world. And it's true -- AI has already started transforming the healthcare, finance, and manufacturing industries. But there's one area where AI is causing some severe problems: fake reviews.
We spoke with Truely CEO JP Bisson about AI technology and how companies can use it to protect their interests.
Deepfakes and crypto scams are on the rise


The first quarter of 2022 has seen a rise in cybercriminals deceiving victims through new deepfakes and crypto scams to gain access to their financial or personal information.
The latest quarterly Consumer Cyber Safety Pulse Report from Norton Labs reveals deepfakes -- computer-generated images and videos -- are on the rise and being utilized by bad actors to spread disinformation.
New threat intelligence system helps fight deepfakes


Concern about deepfakes is on the rise and earlier this week Microsoft announced its own video authentication tool ahead of the US elections.
To help counter the threat from increasingly sophisticated cyber attacks, including the use of deepfakes, biometric authentication company iProov is also launching its own Security Operations Centre (iSOC).
In a world of deepfakes, who can you trust?


Though they seem like something out of a futuristic sci-fi movie, deepfakes are very much a reality. In fact, developers have been experimenting with deepfake technology as far back as the late 1990s. Today, deepfakes have become so advanced and believable that they can cause some serious damage in the wrong hands.
"Deepfake" is a term used for the process of replacing someone in an existing image or video with someone else's likeness. If you've watched former U.S. President Barack Obama giving a lecture about cybercrime, or followed the hilarious thread of videos in which various actors have their faces replaced with that of Nicolas Cage, you've seen a deepfake. As entertaining as they are, deepfake algorithms are getting more sophisticated every day. When we stop being able to differentiate this synthetic media from reality, a real problem arises.
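To make the "replacing someone's likeness" idea concrete, here is a deliberately minimal, purely illustrative sketch of the final compositing step of a face swap: blending a source face patch onto a target frame. Real deepfake pipelines use neural encoder/decoder networks to generate the source face; the function name and toy grayscale frames below are hypothetical, not from any real library.

```python
# Toy illustration of the compositing step in a face swap:
# alpha-blend a "source" face patch into a "target" frame.
# Frames are plain 2D lists of grayscale pixel values (0-255).

def blend_region(target, source, top, left, alpha=0.8):
    """Return a copy of `target` with `source` blended in at (top, left).

    alpha controls how strongly the source face overrides the
    original pixels; real pipelines feather this mask at the edges.
    """
    result = [row[:] for row in target]  # copy so target is untouched
    for i, src_row in enumerate(source):
        for j, src_px in enumerate(src_row):
            tgt_px = result[top + i][left + j]
            result[top + i][left + j] = round(alpha * src_px + (1 - alpha) * tgt_px)
    return result

frame = [[10] * 4 for _ in range(4)]  # 4x4 dark "target" frame
face = [[200, 200], [200, 200]]       # 2x2 bright "source" face patch
out = blend_region(frame, face, top=1, left=1)
# Blended pixels sit at rows 1-2, columns 1-2; the border is unchanged.
```

The point of the sketch is only that a swap is a per-pixel composite of generated content onto real footage, which is also why detection research focuses on blending artifacts around the pasted region.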
30 percent of Americans not confident they could spot a machine voice pretending to be human


With deepfake voice fraud an increasing threat, new research shows that 30 percent of Americans are not confident they would be able to detect the difference between a computer generated voice and a human one.
The study from ID R&D, a provider of AI-based biometrics and voice and face anti-spoofing technologies, shows that just over a third (36 percent) are confident they could spot a fake.
Embracing our deepfake future


"Is it live? Or is it Memorex?" Those of us who’ve been around a while will recall those iconic TV commercials where the announcer challenged the audience to tell the difference between a recorded performance and a live one (or as "live" as a pre-recorded TV spot can be). The idea was that the recording medium -- in this case, Memorex brand audio cassette tapes -- reproduced the full fidelity of the original so faithfully that, in one case, a singer’s high note recorded on one of their tapes literally shattered a nearby wine glass.
I like to think of the above as the first, crude precursor to what today we call "deepfake" technology. But whereas faithfully reproducing audio content has been a net positive for humanity (you wouldn’t be enjoying your MP3s or Spotify streams without those pioneering first steps), deepfake -- or the ability to recreate and/or completely simulate (using AI) both the audio and video representations of a live person -- has been universally panned because of its potential for abuse. After all, in a world full of fake news, it’s not hard to imagine some bad actor deciding to put out bogus recordings (or recreations) of high-profile individuals for personal or political gain.
Privacy concerns mount over Chinese face-swap app Zao


Zao -- a Chinese face-swapping app with the potential to be used to create deepfakes -- went viral over the weekend, shooting to the top of the App Store download charts. But concerns have been raised not only over the potential for the app to be abused, but also over its privacy policies.
Of particular concern are clauses which grant the developers "free, irrevocable, permanent, transferable, and relicense-able" rights over users' photos. Zao responded by tweaking its privacy policy, but complaints are still flooding in.
BetaNews, your source for breaking tech news, reviews, and in-depth reporting since 1998.