Human-driven AI can improve threat detection

Hackers and criminal syndicates are attacking enterprises with increasingly stealthy and sophisticated techniques. In response, companies are deploying a new generation of firewalls, IDS appliances, and Security Information and Event Management (SIEM) servers to detect suspicious activity as quickly as possible.

Two problems are undermining these recent investments in IT security.

First, SIEM systems -- even new ones -- are generating too many false positive alerts, making it difficult for analysts to detect and mitigate real threats quickly.

Second, despite the hype, Artificial Intelligence (AI) applied to threat detection without careful implementation generates even more false positives, further eroding the effectiveness of Security Operations Center (SOC) analysts.

To understand the importance of these problems and how they might be solved, let’s take a look at both SIEM systems and new AI-based security solutions.

Challenges with SIEMs

SIEM systems collect event data from software agents, devices, and servers across an organization’s network in order to analyze that data for indications of possible security threats.

To perform this analysis, SIEM systems apply rules and search for conditions in the data being collected. When network or system events match conditions in the rules, SIEM systems raise alerts and notify analysts that something needs to be investigated.
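To make this concrete, here is a minimal sketch of how a rule engine of this kind evaluates events. The field names, thresholds, and rule structure are hypothetical, for illustration only, not any particular SIEM’s syntax:

```python
# Illustrative only: event fields, rule names, and thresholds are invented.

events = [
    {"type": "auth_failure", "user": "jsmith", "source_ip": "203.0.113.7", "count": 6},
    {"type": "login", "user": "jsmith", "source_ip": "10.0.0.12", "count": 1},
]

# Each rule is a named set of conditions; if an event satisfies all of
# them, the engine raises an alert for an analyst to investigate.
rules = [
    {
        "name": "Possible brute force",
        "conditions": lambda e: e["type"] == "auth_failure" and e["count"] >= 5,
    },
]

for event in events:
    for rule in rules:
        if rule["conditions"](event):
            print(f"ALERT [{rule['name']}]: {event}")
```

Note that the rule fires on the first event in isolation; it has no way of knowing whether those failed logins came from a penetration test, a forgotten password, or a genuine attack.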

One of the challenges with SIEM systems is that these rules tend to be simplistic. Even when they include a nested hierarchy of conditions, many rules are unable to account for the context in which events occur.

With a better understanding of context, a SIEM system might be able to recognize that a certain confluence of events does not represent a real threat. Similarly, a better understanding of context would also help a SIEM recognize that multiple (seemingly benign) events can actually constitute a real threat.
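Here is a sketch of what such correlation might look like. Neither event below would trip a simple per-event rule, but together they warrant an alert. The event fields, the one-hour window, and the "unusual login followed by bulk download" pattern are all hypothetical choices for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: each event alone looks benign, but the combination
# within a short window for the same user is suspicious.
events = [
    {"time": datetime(2024, 1, 8, 2, 10), "user": "jsmith",
     "type": "login", "country": "RO"},
    {"time": datetime(2024, 1, 8, 2, 25), "user": "jsmith",
     "type": "bulk_download", "bytes": 2_000_000_000},
]

WINDOW = timedelta(hours=1)
HOME_COUNTRY = "US"  # assumed baseline for this example

def correlate(events):
    """Yield one alert when an out-of-country login and a bulk download
    by the same user fall inside the same time window."""
    logins = [e for e in events
              if e["type"] == "login" and e["country"] != HOME_COUNTRY]
    downloads = [e for e in events if e["type"] == "bulk_download"]
    for login in logins:
        for dl in downloads:
            delta = (dl["time"] - login["time"]).total_seconds()
            if dl["user"] == login["user"] and 0 <= delta <= WINDOW.total_seconds():
                yield ("Unusual login followed by bulk download", login, dl)

for alert in correlate(events):
    print(alert)
```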

Another challenge is that these rules are often based on intelligence sources, such as IT security data feeds, that can become outdated quickly. For example, they might warn about threats that pertain only to systems that IT administrators have already patched.

It’s not unusual for a SOC to receive hundreds of SIEM alerts per day, nor is it unusual for the vast majority of these alerts to be false positives. While analyzing false positives, SOC analysts may overlook the one to two percent of alerts that are true positives or early indications of a genuine threat.

Challenges with pure AI-based security solutions

To help SOC analysts sort through alerts more quickly, some enterprises have begun investing in new security solutions that use AI for threat analysis.

Unfortunately, these systems can bring their own set of problems. Many of the techniques touted by various security vendors are based on academic theories that have yet to be fully tested and proven in real-world conditions. These AI systems, like the SIEM systems discussed above, are also missing critical intelligence about the context in which events are occurring.

Human security analysts are good at intuitively understanding the context of a situation. AI systems, on the other hand, lack that intuitive understanding; instead, they detect "anomalies" as deviations from a baseline.

A key challenge is that an "anomaly" isn’t necessarily a threat; in fact, it often isn’t. Hence, like SIEM systems, these AI systems tend to generate lots of false positive alerts that security analysts have to triage.
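A toy example of baseline-driven anomaly detection shows why. Flagging any value more than three standard deviations from the historical mean (the numbers and the threshold here are invented for illustration) surfaces statistical outliers, but says nothing about intent:

```python
import statistics

# Hypothetical baseline: daily login counts over the past week.
baseline = [102, 98, 110, 95, 105, 99, 104]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

today = 310  # e.g., the Monday after a company-wide password reset
z_score = (today - mean) / stdev

if abs(z_score) > 3:
    # The system sees only a statistical outlier; it cannot tell a
    # credential-stuffing attack from a routine password-reset surge.
    print(f"Anomaly flagged (z = {z_score:.1f}) -- but is it a threat?")
```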

Another challenge arises when these AI systems make decisions: SOC analysts are often left scratching their heads, because many AI systems can’t explain why they took a particular action.

Human analysts are held accountable for their decisions, but it’s difficult to hold an IT system responsible when it offers no explanation for its decisions. Without explanations, it’s also difficult to correct or tune these systems, disabusing them of faulty assumptions.

The problem with these systems is that they rely on "unsupervised AI." This type of analysis trains itself to recognize patterns by processing large volumes of training data without human intervention. Unfortunately, most enterprises lack large volumes of training data for security attacks.

To be useful, that data would have to combine all the indications of an attack with a label clearly identifying that particular series of events as a genuine attack. To be effective, a threat detection system would need to learn from thousands or even millions of these labeled combinations. Without this training data, systems can only do their best to learn on their own.

In contrast to unsupervised AI, a "supervised AI" system can learn from much smaller amounts of data, but it requires human intervention, in the form of labels and corrections, for training. This approach pairs human analysts (in this case, security analysts) with machines.
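As a rough illustration of the idea, here is what supervised learning from a handful of analyst-labeled alerts might look like, using scikit-learn’s logistic regression. The features, labels, and alert data are invented for the example, not drawn from any real deployment:

```python
from sklearn.linear_model import LogisticRegression

# Each row is a tiny feature vector for one alert:
# [failed_logins, off_hours (0/1), new_device (0/1)]
X = [
    [6, 1, 1],  # labeled by an analyst as a real threat
    [7, 1, 0],  # real threat
    [1, 0, 0],  # benign
    [2, 0, 1],  # benign (new laptop enrollment)
    [5, 0, 0],  # benign (user forgot password)
    [8, 1, 1],  # real threat
]
y = [1, 1, 0, 0, 0, 1]  # 1 = threat, 0 = false positive, per the analyst

model = LogisticRegression().fit(X, y)

# Score a new alert; the analyst's past judgments now apply automatically.
new_alert = [[6, 1, 0]]
print(f"Threat probability: {model.predict_proba(new_alert)[0][1]:.2f}")
```

A real system would need far richer features and many more examples, but the principle is the same: the analyst’s verdicts, not an unlabeled baseline, are what the model learns from.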

Picking the wrong type of AI system is often the root cause of disappointing results.

The new IT security frontier: A human-machine partnership

The best way for enterprises to reduce SOC workloads, accelerate threat detection, and defend against data breaches and other security risks is by applying supervised or "human-driven" AI solutions to threat analysis.

In this model, analysts "mentor" the machines, instructing them and correcting their errors. The machines then apply that learning at scale, rapidly scoring threats based on deep correlation across multiple data sources, ultimately becoming a force-multiplier for daily SOC work.
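One plausible shape for this loop is sketched below: the model scores each alert, routes only the uncertain ones to an analyst, and banks every human verdict as labeled training data for the next retraining pass. The scoring function and confidence thresholds are placeholders, not a description of any particular product:

```python
# Hypothetical human-in-the-loop triage sketch.

labeled_history = []  # (alert, verdict) pairs accumulated for retraining

def machine_score(alert):
    # Stand-in for a real model: more correlated indicators -> higher score.
    return min(1.0, 0.3 * len(alert["indicators"]))

def triage(alert, ask_analyst):
    score = machine_score(alert)
    if score >= 0.9:
        verdict = True                 # confident threat: escalate automatically
    elif score <= 0.1:
        verdict = False                # confident benign: close automatically
    else:
        verdict = ask_analyst(alert)   # uncertain: the human decides
    labeled_history.append((alert, verdict))  # feedback for the next training pass
    return verdict

# Example: one ambiguous alert routed to a (simulated) analyst.
alert = {"id": 42, "indicators": ["off_hours_login", "new_geo"]}
print(triage(alert, ask_analyst=lambda a: True))
print(labeled_history)
```

The design choice that matters is the middle branch: every alert the machine cannot confidently score becomes a teaching moment, so the analyst’s workload shrinks as the model improves.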

By applying a supervised AI system to threat detection, SOC analysts would be able to bring their contextual understanding and expertise to bear, automating threat analysis while dramatically reducing false positives.

In addition, the decisions of supervised AI systems would be understandable because humans would have contributed to the decision-making. Instead of a black box spewing alerts, supervised AI systems would provide SOCs with an intelligible and programmable way of analyzing threats and triaging alerts.

With this approach, it’s possible to eliminate 90 percent of the manual investigation work required of analysts. By weeding out false positives, supervised AI enables analysts to focus on the alerts that genuinely pose a threat to the organization.

A few years ago, the idea of eliminating 90 percent of false positives would have been unimaginable. Now, thanks to supervised AI, it’s possible to imagine getting to zero false positives within the next generation of human-driven machine learning security solutions.

Even a 90 percent reduction is a great boon in helping enterprises defend against the ongoing assault of attempted data breaches.

Kumar Saurabh, CEO and co-founder, LogicHub.

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.
