A KYC rule for social media: What social networks can learn from banks
Banks around the world abide by know-your-customer (KYC) rules to block crime. When it comes to finances, governments agree that it is appropriate to capture clear identification of a person in order to hold them accountable for their actions. But when it comes to crimes against social infrastructure, no such checks exist, and malicious actors around the globe leverage social media platforms to justify human rights abuses, steal elections, or simply create division.
Ever since the Mueller investigation proved that Russia used its Internet Research Agency to conduct a disinformation campaign in the United States to sow social discord around the 2016 presidential election, more nefarious groups seem to be taking advantage of the opportunity to cause disruption. And with 1 in 5 Americans getting their news only from social media, the impact is huge. The reason social media is such a powerful medium for this illicit activity is that perpetrators can hide behind fake accounts, creating ties to communities and causes in a seemingly legitimate manner.
However, there is a tried-and-tested solution to this problem that researchers have used for ages. Peer-reviewed journals have long been credible sources of information because both the work and its author are vetted by experts in the field. Similarly, journalists work to validate the sources of their information, so organizations that follow journalistic standards are treated as trusted sources of information.
Some social platforms have started to take steps to address misinformation. Twitter, for example, is now labeling posts with misleading information about COVID-19 vaccines. The platform has also added labels such as "spreading misleading and potentially harmful information" to Tweets, the most notable example being when former President Trump suggested COVID-19 was no more harmful than the flu. Unfortunately, these labels are usually added after the damage has already been done.
A possible solution to social misinformation could be a "KYA" -- or Know Your Author -- solution, which can vet author identities by validating their credentials, such as their expertise and experience. This can be operationalized at several levels. The entry level can operate in a semi-anonymized way, with the speaker's identity withheld but their vetted credentials provided to the audience. At the next level, as the content becomes more sensitive, the author's vetted identity can also be provided to the reader. At the highest level, a peer review can be instituted. While this can vet the author, the complete process must also include vetting the facts and claims in the content itself.
One can also envisage a KYA score -- much like a credit score -- built from a number of factors, such as peer review, article credibility, and the number of incorrect statements in past articles, to establish the credentials of the author. If a KYA score is presented with a given post, readers can take it into consideration as they view the post. This could be an effective deterrent to authors who do not have an established KYA score and try to pass opinion off as fact. Readers could then filter to view only credible authors who exceed a minimum KYA score threshold. A visible KYA score would help users identify false or misleading content, such as the deepfake videos of Tom Cruise that recently went viral on TikTok. As the audience of these uncredentialed authors diminishes, so will their motivation, yet this approach steers clear of stifling information in a partisan way.
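To make the idea concrete, here is a minimal sketch of how such a score and threshold filter might be computed. The factor names, the 0-1000 scale, the equal weighting, and the threshold of 700 are all illustrative assumptions, not a proposed standard; a real scheme would need an agreed methodology across platforms.

```python
# Hypothetical KYA score sketch. All factors, weights, and the
# 0-1000 scale are illustrative assumptions for this article.

from dataclasses import dataclass

@dataclass
class AuthorRecord:
    peer_reviews_passed: int       # posts that cleared peer review
    total_reviewed_posts: int      # posts submitted for review
    incorrect_statements: int      # fact-check failures across articles
    total_statements_checked: int  # statements examined by fact-checkers

def kya_score(record: AuthorRecord) -> int:
    """Combine review history and factual accuracy into a 0-1000 score."""
    if record.total_reviewed_posts == 0 or record.total_statements_checked == 0:
        return 0  # no established track record yet
    review_rate = record.peer_reviews_passed / record.total_reviewed_posts
    accuracy = 1 - record.incorrect_statements / record.total_statements_checked
    # Equal weighting of the two factors is an assumption, not a recommendation.
    return round(1000 * (0.5 * review_rate + 0.5 * accuracy))

def filter_credible(posts, scores, threshold=700):
    """Keep only posts whose authors meet a minimum KYA score."""
    return [p for p in posts if scores.get(p["author"], 0) >= threshold]
```

An author with no vetted history scores zero here, which is the deterrent effect the article describes: content from unscored authors can be filtered out or down-ranked by readers who opt in to a threshold.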
But there is no panacea to this problem, as we have repeatedly seen misinformation coming from journalists as well as social and political leaders. However, a KYA-like approach would not allow misinformation disseminators to hide behind anonymity. Financial institutions understand the value of this vetting and use it effectively to prevent money laundering on their platforms. Just as their KYC solution connects a digital identity with a physical identity, KYA would connect a digital identity with both the physical identity and the credentials of the author.
In order for this type of KYA solution to work, all social media companies must come together and agree on a standardized approach that ties a physical person to their digital identity and then agree on a universal scoring methodology. This would give the audience the data they need to determine whether a story or piece of content is authentic.
There are several existing models that can be used to build a KYA solution. This can be viewed as a two-part process: one track for first-time authors, relying on the content of their current post, and a second for frequent authors that can also take into account their earlier posts. Once a set of criteria is established, a rubric for the review process can be instituted, much like the current peer review process for academic journals. There is a strong role for technology to play in this process, from fact-checking to augmenting the human expert in article assessment. The actual implementation of a KYA solution should be flexible, as it can also include local variants to truly serve the end consumer.
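The two-track routing above can be sketched as a simple dispatch. The function names, the history cutoff of five posts, and the review outcomes are placeholders invented for illustration; in practice each branch would involve fact-checking and expert review.

```python
# Hypothetical sketch of the two-track review routing. Function names,
# the five-post cutoff, and outcome labels are illustrative assumptions.

def review_current_post(post: str) -> str:
    # First-time authors: only the current post's content is available,
    # so it receives the full review treatment.
    return "needs_full_review"

def review_with_history(history: list, post: str) -> str:
    # Frequent authors: an established track record can inform,
    # and potentially streamline, the review of the current post.
    return "expedited_review" if len(history) >= 5 else "standard_review"

def review_author(history: list, post: str) -> str:
    """Route first-time vs. frequent authors to the appropriate track."""
    if not history:
        return review_current_post(post)
    return review_with_history(history, post)
```

The point of the dispatch is that the rubric can differ by track while the entry point stays uniform, mirroring how journal editors handle first-time versus established contributors.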
Whatever the solution might be, we will also need stronger regulation to prevent social media companies from hosting unsavory forms of expression. Layering these self-regulated behaviors, driven by credit-like KYA scores, on top of the current solutions of filters and labels will put us on a path to success. Only then will we see the problem of misinformation truly addressed and a better, more informed world emerge.
Mohan Mahadevan is Vice President of Research at Onfido. The former Head of Computer Vision and Machine Learning for Robotic Applications at Amazon lends his extensive expertise to Onfido's 250-strong team of technologists across San Francisco, New York, London, Lisbon, Delhi, Singapore, and Paris.