The surprising state of social media background checks in 2020
Your friends and followers aren't the only ones looking at what you Like. In 2017, CareerBuilder revealed that 70 percent of employers review social media during the hiring process. Today that number may be even higher: one survey by First Advantage found that nearly two-thirds of employers choose to screen all employees and applicants for "red flag" behavior on websites such as Facebook and Twitter.
The primary peril of vetting candidates by examining their online profiles remains the same as when this trend first rose to prominence: employers performing "social media background checks" may inadvertently uncover sensitive or protected information, from sexual orientation to religion to race, that compromises their ability to make an unbiased hiring decision. Rejecting a candidate based on such information could expose a business to accusations of unfair or discriminatory hiring practices.
To avoid these concerns, many businesses now choose to outsource this process to individuals who won't have a say in hiring decisions. Dedicated social media screeners and outside services are alternatives to hiring managers viewing an applicant's social media pages personally. New technologies, including AI-powered platforms, have made such screenings more accessible than ever in 2020.
Despite their significant potential pitfalls, social media background checks will continue to increase in scope, complexity, and adoption over the next decade.
Machine Learning Meets Its Match on Social Media
From AI-controlled stock trading to self-driving cars and online content curation, new applications for machine learning algorithms are developing at a rapid rate. The rise of third-party business services claiming to use AI to power new and more accurate social media background checks should come as no surprise.
How useful are these solutions? Any social vetting algorithm faces an immense challenge based on the huge volume of information that it must parse. Each individual generates a massive amount of personal data by using social media, and with the average person now operating about seven accounts, the scope of a social background check is potentially immense. Turning the problem over to an algorithm promises to speed up the process, but it leaves its users vulnerable to faulty assumptions baked into its programming.
One Twitter user, sharing the results of a social background check conducted on their own account, revealed some of the shocking issues that can occur.
The service provider, Fama, claims to use "natural language processing" and machine learning to identify red flags in a user's social media accounts. After applying for a position with a religious organization, the Twitter user received a copy of their background check: more than 340 pages of content "flags." One look at the results generated by the check makes it clear that the algorithm operates without any care for context.
Among other things, Fama's technology flagged every instance of the applicant liking any Tweet that included a four-letter word or mention of alcohol. The algorithm also flagged words such as "trans," "cis," and "homosexuals" as "bad language." While flagging a broad range of "bad" phrases, the algorithm also labeled certain Tweets "positive" indications of the applicant's character -- many of which clearly reflected irony or sarcasm rather than sincerity.
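The behavior described above resembles simple keyword matching rather than genuine language understanding. The sketch below is purely hypothetical -- Fama's actual implementation is not public -- but it shows how a context-blind flagger produces exactly these kinds of false positives: a supportive or celebratory post is flagged identically to a hostile one.

```python
# Hypothetical sketch of context-blind keyword flagging, NOT Fama's
# actual system. The term list and function name are illustrative only.

FLAGGED_TERMS = {"beer", "wine", "damn", "trans", "cis"}  # illustrative list

def flag_post(text: str) -> list[str]:
    """Return every flagged term found in a post, ignoring all context."""
    # Split on whitespace and strip common punctuation from each token.
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return sorted(words & FLAGGED_TERMS)

# A sincere, positive post is flagged exactly like an offensive one,
# because the matcher sees only isolated words:
print(flag_post("Proud to support my trans friends!"))  # ['trans']
print(flag_post("Celebrating with a glass of wine."))   # ['wine']
```

No amount of list curation fixes the underlying problem: without modeling sentiment, sarcasm, or who authored the content, every match is treated as equally damning.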
While most employers engage in character vetting, social media background checks introduce noise into the dataset. Without careful correction and continual tuning, these algorithms could produce false positives or, as in the reported case, a volume of mixed results too vast and divorced from context to be practically useful to any employer.
Social Checks Expanding in Scope
Services such as Fama have revealed one way that social media background checks have evolved: they're no longer limited to what users create or post on their own. Reactions to content posted by other users are on the table for employers to pick apart.
For job applicants, likes and shares represent an area of their online activity that they may easily overlook -- and many aren't aware of just how much information social websites record. Pew Research found in a 2019 study that an astonishing 74 percent of survey respondents did not know that Facebook tracked their interests.
Such facts might be why, according to MarketWatch, two-thirds of employers have uncovered problematic information about applicants on Facebook. To employers, even the frequency with which a user posts may be seen as troubling: CareerBuilder reports that 12 percent of surveyed employers had disqualified candidates based on a high social media posting frequency.
As employers adopt new technology to vet candidates through social media, more applicants are taking steps to hide their past and future activity from view. In a JDP survey reported by HRDive, 43 percent of people used private settings to stop employers from viewing their content. An equal number reported maintaining a separate, private account under an alias.
The expansion of social background checking shows that despite all their potential prejudices and problems, these tools are here to stay in 2020 and beyond. Compared to more traditional background checks concerned with criminal history, references, and credentials, social media background checks represent a complex and confusing minefield for businesses and prospective employees to navigate.
Michael Klazema is Chief Marketing Technologist at VODW.com and has over two decades of experience in digital consulting, online product management, and technology innovation. He is the lead author and editor for Dallas-based backgroundchecks.com with a focus on human resource and employment screening developments.