Motive doesn't matter: The three types of insider threats


In information security, outside threats tend to get the lion's share of attention. Insider threats to data security, though, can be more dangerous and harder to detect, because they are backed by legitimate knowledge of -- and access to -- the organization's systems.

Not only is it vital, therefore, to recognize and prepare for insider threats, but it is just as vital to distinguish between the different types of insider threats. A lot has been written about the different profiles for insider threats and inside attackers, but most pundits in this area focus on insider motive. Motive, however, doesn't matter. A threat is a threat, a breach is a breach. A vulnerability that can be exploited by one party for profit can be exploited by another for pleasure, by another for country, and so on. Instead of analyzing motives and reasons, it is far more useful to compare insider threats by action and intent.

Insider threats come in three flavors:

  • Compromised users,
  • Malicious users, and
  • Careless users.

Here's an explainer on understanding -- and dealing with -- each type in turn:

Compromised Users

It seems counterintuitive to consider compromised users as true "insider" threats; they have (usually) been compromised by an outsider. Fundamentally, a compromised user is one whose access is being leveraged by a third party. They might contract a virus or other piece of malware on one of their devices. They might visit a website that delivers a malicious payload and/or captures sensitive information. Or a third party might co-opt their password, biometrics, and/or other login credentials.

The compromising attack can come in low-tech form, as well. A clever social engineer may trick an employee into accessing and providing proprietary data. An extortionist may coerce an employee into doing his or her bidding. Even a court order for organizational data can be considered a compromising insider attack -- because it compels an employee to do things with data and network access that he knows he ought not to do and otherwise would not.

The defining distinction here is that, regardless of the attack method, the user himself (or herself) has become compromised -- and, from there, is being used as an attack vector for compromising data and/or access. Whether he knows it or not, he is a living, breathing malware program, acting neither with malice nor with carelessness (although carelessness may have initiated the compromise). He is a puppet with no free will. This is why a compromised user must be considered just as serious an insider threat as other, more "traditional" insider threats. It's as if the bad guys have walked into your network disguised as members of your team.

One of the first considerations of perimeter-security strategy is distinguishing your compromised from your non-compromised users. Fortunately, absent an attacker with patience, sophistication, and/or special insider knowledge, compromised users are typically easy to spot because they (or, rather, their puppetmasters) usually don't know what they're looking for. Compromised users will very suddenly break their work pattern, and it's off to the races as they try to access all kinds of databases and file servers -- setting off all sorts of bells and whistles in any half-decent security solution.
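To make that "broken work pattern" idea concrete, here is a minimal sketch of the kind of check a monitoring tool might run: flag a user who suddenly hits a burst of resources they have never touched before. The log format, window, and threshold are all hypothetical, chosen purely for illustration -- not any particular product's logic.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical access-log record: (timestamp, user, resource).
AccessEvent = tuple[datetime, str, str]

def flag_pattern_breaks(events: list[AccessEvent],
                        history: dict[str, set[str]],
                        window: timedelta = timedelta(hours=1),
                        threshold: int = 10) -> set[str]:
    """Flag users who suddenly touch many resources they have never used before.

    `history` maps each user to the resources they normally access (built from
    prior logs); the one-hour window and threshold of 10 are illustrative only.
    """
    flagged: set[str] = set()
    recent: dict[str, list[tuple[datetime, str]]] = defaultdict(list)
    for ts, user, resource in sorted(events):
        if resource in history.get(user, set()):
            continue                      # a resource this user normally touches
        recent[user].append((ts, resource))
        # keep only first-time accesses that fall inside the sliding window
        recent[user] = [(t, r) for t, r in recent[user] if ts - t <= window]
        if len({r for _, r in recent[user]}) >= threshold:
            flagged.add(user)             # burst of never-before-seen resources
    return flagged
```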

Malicious Users

For most people, malicious users are the classic insider threat -- but the word "malicious" is a tricky label.

Archetypes like the embezzler and the disgruntled employee clearly demonstrate malice. But "malicious users," for our purposes, is a broader term. A malicious user might be a soon-to-be ex-employee who starts pulling valuable sales leads or other intellectual property to help him in his next job. Or a malicious user might be an activist or whistleblower, acting on personal principles of higher good. Or a malicious user might fall into one of a million different shades of gray in between. Again, it doesn't matter what the user's motives or reasons are. The elements that make a malicious user are (1) means, (2) knowledge, and (3) intent.

Typically, nobody has forced or tricked the malicious user into doing something; he acts independently and intends the consequences of his actions (unlike a compromised user). More to the point, he knows what he is doing -- and that what he is doing is contrary to organizational data-security interests (unlike a careless user; see below). These distinctions inform how to detect malicious users. Often, they will try to cover their tracks in the way they access, steal, modify, or destroy data; some are better at it than others.

Nonetheless, if you (and your network-monitoring solutions) pay attention to more than the basics (like login attempts), you can catch them in the act -- even if all of their data access is technically fully authorized. Perhaps they've begun regularly accessing a server or dataset that they have hardly touched in five years. Or perhaps they're accessing their usual data in different ways and/or from different locations or devices. Or perhaps it's a volume issue -- like the salesperson who suddenly begins accessing 1,000 contacts a day. You're looking for behavior that, while itself probably not prima facie evidence of wrongdoing, is pretty weird. From there, you can question the user, investigate further, and determine if they are indeed malicious.
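As a rough sketch of that kind of volume check (again, illustrative rather than any vendor's implementation), a monitor might compare today's access count against the user's own recent baseline. The 3-sigma cutoff and 14-day minimum history below are assumed defaults, not recommendations.

```python
import statistics

def volume_spike(daily_counts: list[int], today: int,
                 min_history: int = 14, sigma: float = 3.0) -> bool:
    """Return True when today's access count is an outlier versus the user's baseline."""
    if len(daily_counts) < min_history:
        return False                      # not enough history to judge
    mean = statistics.mean(daily_counts)
    stdev = statistics.pstdev(daily_counts) or 1.0   # avoid divide-by-zero
    return (today - mean) / stdev > sigma

# e.g. a salesperson who normally pulls ~40 contacts a day suddenly pulls 1,000
baseline = [35, 42, 38, 40, 44, 39, 41, 37, 43, 40, 36, 45, 38, 42]
print(volume_spike(baseline, today=1000))  # True
```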

Careless Users

Many compromised users start out as careless users, but carelessness exposes an organization to other kinds of risk as well.

They might be negligent -- clicking on every link that comes their way, failing to exercise the most basic protection of their credentials and devices.

Or they might simply exhibit a lack of optimal vigilance -- using only single-factor authentication, failing to guard against shoulder surfing when they log in, or speaking a little too loudly on their work calls.

Perhaps they are violating policy, but without intending to compromise organizational data. Examples include removing sensitive data from the premises to work on later, changing network-security settings, implementing undocumented shadow IT solutions, or even using a work device to access a porn site (which may have fallen prey to a cross-site scripting attack).

But, again, motives and reasons don't matter. What defines the careless user is that, through his own intentional actions or omissions, he has inadvertently made himself a threat to the organization and its data.

Because he is acting intentionally but causing unintended results, detecting the careless user can be very easy or very difficult. Like malicious users, careless users can often be discovered with attentive and sophisticated systems monitoring (as with the employee who takes bucketloads of data home with him to work on over a long period, or the employee who visits websites that none of his coworkers do). Most organizations, however, don't look at any of their network or systems data. Moreover, many organizations lack effective security training for non-tech employees. Many more have cultural problems. Any one of these factors can set the stage for a careless-user disaster.
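Here is a minimal sketch of that peer-comparison idea -- flagging domains that one user visits but none of their coworkers do. The proxy-log structure and names are hypothetical, and in practice this would be one signal among many rather than proof of carelessness.

```python
def unusual_domains(user: str, visits: dict[str, set[str]]) -> set[str]:
    """Return the domains a user visits that none of their peers do.

    `visits` maps each user to the set of domains seen in (hypothetical) proxy logs.
    """
    peer_domains = set().union(*(d for u, d in visits.items() if u != user))
    return visits.get(user, set()) - peer_domains

proxy_logs = {
    "alice": {"crm.example.com", "mail.example.com"},
    "bob":   {"crm.example.com", "files.example.com"},
    "carol": {"crm.example.com", "sketchy-downloads.example.net"},
}
print(unusual_domains("carol", proxy_logs))  # {'sketchy-downloads.example.net'}
```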

Beginning in 2003, a U.S. Department of Veterans Affairs employee took home "vast amounts" of data containing veterans' personally identifiable information (PII) to work on a pet project. In 2006, the employee's home was burglarized, and the external hard drive containing the VA data was stolen. The VA's Office of the Inspector General found (1) that the VA's then-existing policies and procedures were inadequate to protect the data or mitigate its loss, and (2) that multiple cultural failures led to a delayed and lackluster incident response.

Organizations that are already doing things right don't have as many careless-user incidents to worry about. In this way, the careless user is perhaps the most dangerous insider threat; he is almost always a symptom -- and product -- of a dysfunctional security organization. And dysfunction can be the hardest-to-detect security threat of all.


Terry Ray is SVP at Imperva. He was Imperva’s first U.S.-based employee and previously served as Imperva’s chief technical officer, chief product strategist, and vice president of security engineering. Terry has worked closely with customers on hundreds of application and data security projects to meet the security requirements and demands of regulators in every industry. Terry holds a B.A. in management information systems from the University of North Texas.

