The changing face of digital identity verification [Q&A]
As we've grown more and more reliant on the internet to carry out everyday transactions, proving who we are has become a major issue.
We are still heavily dependent on IDs, passwords and supplementary security questions, but all of these are open to abuse leading to the risk of identity theft.
Rick Song, CEO of Persona, believes that there's a better way of accurately verifying users, so we spoke to him to find out more.
BN: Identity theft has been a problem for ages, but it's certainly taken off as we've gone deeper into digital. What do you see as the biggest challenge for companies in this area?
RS: There are several challenges for organizations that need to confirm the identity of every user, consumer, or applicant. Companies today are rightfully worried about long-term trust, financial peril, privacy, compliance, and effectiveness of IDV (identity verification) against attacks. One mistake, and they can lose the trust of the public -- at a heavy cost. If an impostor gets through and hijacks a customer’s account or takes a mortgage out in their name, it can both cost the company financially and devastate customer relationships -- even if the actual identity theft occurred in a data breach outside their organization.
Consumers know and care more about their personal data than ever before. They don't want random strangers reviewing (and potentially stealing) their sensitive data during the verification process. With automated identity verification, companies can minimize access to PII and ease such concerns.
When it comes to liability for mishandling personal information, organizations can shift the handling of private data -- and the responsibility -- to a trusted external third party. In other words, an online store can let a trusted verification provider -- like Persona -- handle the process and the data. This reduces the risk of a store employee stealing and selling customer data, and makes it easy for the organization to comply with regulatory frameworks such as KYC, AML, as well as privacy regulations such as GDPR and CCPA.
The security arms race is also a recurring challenge; that is, staying ahead of fraudsters' ability to deceive ID verification. It's critical to use multiple lenses to verify each individual, such as hunting for discrepancies between a video selfie and a government ID, matching personal data and location, analyzing behavioral signals, and more.
Finally, organizations must keep verification simple and quick to provide the best user experience. Consumers are extremely aware of how they are treated, and whether their time is wasted. We find that many users prefer a fully automated identity verification approach over a manual approach -- not just because it's more secure, but also because it's much faster.
It's also crucial to tailor the flow for each user based on the risk associated with the user and the transaction, which requires automation plus easy reconfigurability. There is no one-size-fits-all approach to identity verification; a single business may need different IDV workflows and approaches for each product line and type of user. Even businesses offering similar products need different verifications and workflows because their customers are different.
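The risk-tailored flow described above can be sketched in a few lines. This is an illustrative toy example, not Persona's actual product logic: the signals, weights, thresholds, and step names are all hypothetical, chosen only to show how a score might map to progressively stricter verification steps.

```python
# Toy sketch of risk-based IDV workflow selection.
# All signals, weights, and step names are hypothetical.

def risk_score(transaction_value, new_device, geo_mismatch):
    """Combine a few example signals into a rough 0-100 risk score."""
    score = 0
    if transaction_value > 10_000:  # high-value transaction
        score += 40
    if new_device:                  # unrecognized device
        score += 25
    if geo_mismatch:                # login location differs from profile
        score += 35
    return score

def choose_workflow(score):
    """Map the score to progressively stricter verification steps."""
    if score < 30:
        return ["database_lookup"]                       # low friction
    if score < 60:
        return ["database_lookup", "government_id"]      # add a doc check
    return ["database_lookup", "government_id", "selfie_match"]

# A large transfer from a new device gets the strictest flow:
print(choose_workflow(risk_score(15_000, new_device=True, geo_mismatch=False)))
# → ['database_lookup', 'government_id', 'selfie_match']
```

In a real system the score would draw on many more signals and the step lists would be reconfigurable without redeploying code, which is the point of the "easy reconfigurability" above.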
BN: Facial recognition comes in for a lot of criticism, and the rise of deepfakes and fraud built on these technologies raises further questions. How can companies providing this technology combat bad actors?
RS: Bad actors keep evolving their tactics, so successfully intercepting fraudsters at login requires a holistic approach -- combining active, passive, and behavioral signals and cross-referencing data.
If the user is opening a new financial account, we might add protection by requiring the selfie to match the portrait on the official ID -- or even have the applicant move their head in a live video. But deepfake videos today might even be good enough to pass head movement tests, which is why it's important to get a holistic view of the user rather than relying on any single test. If John the applicant is based in Kansas, but logs in from Mongolia, that should trigger extra scrutiny.
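The Kansas-vs-Mongolia example amounts to a passive signal check: compare the login location against the user's established profile. A minimal sketch, with an entirely hypothetical in-memory profile store standing in for real profile data:

```python
# Minimal sketch of a location-mismatch signal.
# EXPECTED_LOCATION is a hypothetical stand-in for stored profile data.

EXPECTED_LOCATION = {"john": "Kansas"}

def needs_extra_scrutiny(user, login_location):
    """Flag a login whose location diverges from the user's profile."""
    expected = EXPECTED_LOCATION.get(user)
    # Unknown users are handled by other checks; only flag a known mismatch.
    return expected is not None and login_location != expected

print(needs_extra_scrutiny("john", "Mongolia"))  # → True: trigger review
print(needs_extra_scrutiny("john", "Kansas"))    # → False
```

In practice this would be one input among many -- device fingerprinting, velocity checks, behavioral biometrics -- feeding the holistic view described above, not a standalone block rule.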
At Persona, we continuously extend our infrastructure and methodology to help businesses implement a dynamic, multiple-lens approach to deter impersonators.
BN: Do you feel that biometrics/facial recognition-based identity technology has gained more trust or less trust in the last few years?
RS: We just talked about privacy and trust; the other side of trust, of course, is what level of security biometrics provides. We believe biometrics works best in conjunction with other forms of security, such as ID and selfie matching, because that multiplies the challenges for impostors.
Organizations and their users understand biometrics is one line of defense; it’s not perfect, but it’s very valuable and getting better.
Also, consumers may use biometrics for convenience even when they aren't totally comfortable with it. One study by the University of Texas at Austin found that even though consumers weren't sure whether biometrics are reliable, their second-most common use of biometric access was logging into their financial accounts. Should a glimpse of your face secure your life savings? Probably not for high-risk, high-value transactions, and that's why Persona offers a multi-layered verification approach.
BN: Right now there is no one company or entity that controls this market. What needs to happen in order for this technology to truly become mainstream?
RS: From a security standpoint, it's actually good that no one technology owns ID verification, because diversity makes life harder for attackers. Biometric technologies have achieved mainstream status, but will they become ubiquitous? This morning, half a billion people unlocked their phones or other devices with their face or fingerprint. Biometrics are fast and accurate enough for most consumers, and users trust them enough to unlock their investment accounts.
But since there are so many attack vectors, we must build a holistic picture of the identity of the individual and boost biometrics with AI and machine learning. Automating this combination provides a better customer experience.
There are always questions around what the technology can do. For example, can consumer-device biometrics detect and reject a sleeping face (the answer is yes, by the way) or a latex mask made from photographs? These questions are especially relevant in two common scenarios where financial loss can be enormous: fraud committed by romantic partners and theft by domestic employees.
For ID verification, combining facial recognition with AI and machine learning is increasingly important. Another reason we emphasize automation: the greatest risk for any infrastructure is often human error, not a technology weakness. The higher-value the potential target, the more safeguards -- the classic tradeoff.
The need for ID verification grows as criminals scale up their ability to attack on multiple fronts. We have made it incredibly easy for even a small business to integrate IDV with just 10 lines of code. We also have a no-code version of our product that a business can start using within minutes.
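To give a feel for what a lightweight integration can look like, here is a hedged sketch of the common hosted-flow pattern, where a business simply redirects its user to a verification page run by the provider. The URL, query parameters, and template names are invented for illustration; this is not Persona's actual SDK or API.

```python
# Hypothetical hosted-flow IDV integration sketch.
# The domain, parameters, and template ID are illustrative only.

from urllib.parse import urlencode

def hosted_idv_url(template_id, reference_id,
                   base="https://idv.example.com/verify"):
    """Build the URL of a provider-hosted verification flow.

    template_id  -- which preconfigured IDV workflow to run
    reference_id -- the business's own identifier for this user
    """
    params = urlencode({"template-id": template_id,
                        "reference-id": reference_id})
    return f"{base}?{params}"

# The business redirects the user here; the provider handles the rest
# (document capture, selfie match, etc.) and reports back the result.
print(hosted_idv_url("tmpl_basic_kyc", "user_42"))
# → https://idv.example.com/verify?template-id=tmpl_basic_kyc&reference-id=user_42
```

The appeal of this pattern is that the business never touches the user's documents or selfie: the sensitive data flows straight to the verification provider, which matches the liability-shifting point made earlier in the interview.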
Will consumers embrace IDV and biometrics everywhere? It's hard to say. But they welcome it more when they understand how broadly their online lives are under attack -- ransomware headlines make it clear that somebody is after their assets. It's our job to continually make defenses more sophisticated and capable, using automation and AI to keep verification reliable, fast, and frictionless for the consumer.