Taking a battlefield approach to cybersecurity [Q&A]

You'll often hear cybersecurity discussed in military terms, as a war, or a battle, or a conflict. So should enterprises be taking a defensive approach that draws on military thinking?

Tom Gorup is VP of security operations at Fortra's Alert Logic, a managed detection and response specialist. He's also a veteran of six years in the US Army with tours of Iraq and Afghanistan. We spoke to him to discover how lessons learned on the battlefield can be applied to cybersecurity.

BN: Is it true to say that cybersecurity has become a focus for attacks and espionage by nation-states?

TG: Absolutely. Attacks from nation-state actors are very much a real threat to businesses and individuals alike. We have seen these attacks play out on center stage. Examples include Russians directly targeting Americans with psychological warfare through social media platforms like Facebook and Twitter. These attacks came in the form of memes, ads, blog posts, and comments designed to stir up social division.

SolarWinds is another example of cyber espionage conducted by nation-state actors. In this case, Russia also had direct access to source code, giving them the ability to plant a backdoor in one of the most popular network monitoring solutions on the planet. Not to mention it is heavily used by the US government, making it an attractive target.

As you can probably see, these attacks have been quite effective. To combat these intrusions effectively, businesses must have awareness and an understanding of their security posture, and it must be easy to understand, communicate, and prioritize risks throughout the business.

BN: Are businesses still too reliant on traditional endpoint security tools?

TG: Traditional security tools can be problematic. Many organizations have a myriad of security tools, and although each is probably best of breed, together they inhibit security leaders' ability to manage their security posture in a measurable way. They make metrics and reporting difficult because of a lack of integrations and inconsistent nomenclature.

As a result, security leaders are left having to manually consolidate and translate the outputs to build a cohesive story that explains the organization's security posture. At the end of the day, that makes basic questions difficult to answer: What does your visibility look like? Where are your exposures, and how are you being attacked?

Most of these tools only answer a subset of questions at a time, making it very difficult for you to effectively measure your security posture.

BN: What's the value of using a 'red team' to probe networks and uncover exposures?

TG: Red teams are used to identify vulnerabilities, weaknesses, exposures, or even active compromises within an environment. The objective of a red team is to think like an attacker and play out the scenario. Not only is it valuable to run red team exercises to identify exposures or weaknesses within your environment, but it's also extremely valuable in identifying gaps in your visibility.

This approach is like what the military does in field-training exercises. Before we deploy to any combat zone, we go through a ton of training. This training would also include going out to the field for maybe a month at a time. There, we would execute our battle drills against an opponent we call 'Op4' (Opposing Force). We would build and secure a battle position, and then Op4 would attempt to attack. Their objective is to make our defenses collapse and to identify weaknesses in our execution. In this case, Op4 is our red team, and they are testing our battlefield processes before we go into live combat.

BN: How important is it to pool intelligence?

TG: It's critical to pool intelligence. That said, data normalization is one of the biggest challenges the security industry, and really any industry, has.

By standardizing our taxonomy and leveraging consistent key-value pairs across myriad data sets, we can more effectively combine, or pool, our intelligence. This puts us in a better position to leverage machine learning effectively and allows us to level up our threat detection and response.

It can be particularly challenging when we have one data set -- perhaps firewall data -- that is completely different from an Active Directory dataset, which in turn is completely different from the data produced by an endpoint detection/response tool. They're all different, and our objective is to bring them all together to tell a cohesive story. If we can effectively normalize them, we can pool that information together, making the data that much more valuable.
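To make that idea concrete, here is a minimal Python sketch of the kind of normalization Gorup describes: three hypothetical record formats (firewall, Active Directory, endpoint detection/response) are mapped onto one shared set of key-value pairs and pooled into a single timeline. The field names and event shapes are illustrative assumptions, not any specific vendor's log format.

```python
# Illustrative sketch only -- the field names and event shapes below are
# hypothetical assumptions, not any specific vendor's log format.

def normalize_firewall(event: dict) -> dict:
    """Map a hypothetical firewall record onto a common schema."""
    return {
        "timestamp": event["ts"],
        "source": "firewall",
        "src_ip": event["srcaddr"],
        "user": None,                    # firewalls rarely know the user
        "action": event["disposition"],  # e.g. "allow" / "deny"
    }

def normalize_ad(event: dict) -> dict:
    """Map a hypothetical Active Directory logon record onto the same schema."""
    return {
        "timestamp": event["EventTime"],
        "source": "active_directory",
        "src_ip": event.get("IpAddress"),
        "user": event["TargetUserName"],
        "action": "logon_success" if event["EventID"] == 4624 else "logon_failure",
    }

def normalize_edr(event: dict) -> dict:
    """Map a hypothetical endpoint detection/response record onto the same schema."""
    return {
        "timestamp": event["detected_at"],
        "source": "edr",
        "src_ip": event.get("host_ip"),
        "user": event.get("username"),
        "action": event["verdict"],      # e.g. "malware_blocked"
    }

def pool_intelligence(events_by_source: dict) -> list:
    """Normalize every event and pool the results into one ordered timeline."""
    normalizers = {
        "firewall": normalize_firewall,
        "active_directory": normalize_ad,
        "edr": normalize_edr,
    }
    pooled = [
        normalizers[source](event)
        for source, events in events_by_source.items()
        for event in events
    ]
    return sorted(pooled, key=lambda e: e["timestamp"])
```

Once every source speaks the same taxonomy, the pooled timeline can feed reporting, correlation rules, or machine learning models without per-tool translation.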

BN: Where do you think the balance lies between technology, like AI, and human analysis?

TG: Imagine you have a plot of land with a huge forest, and you can walk along the perimeter of that forest to identify trailheads that deer or other animals might use to move in and out. You could also walk into the forest, following various trails, and plot out various food sources. This is a 100 percent manual process and may require various levels of capability and expertise to be effective and efficient. That was security 10 years ago, when we were manually poring over logs and events in text files.

Now, continuing with our forest analogy, you could use a satellite image to identify trailheads leading into that forest and even high probability feeding areas. Going further, you could strategically deploy cameras throughout the forest giving you alerts and insights into movement within the forest. This is how far our security tools have come in the last decade.

But even with these tools, you still need to go through the imagery with a fine-tooth comb to find the deer (or the compromise) -- but imagine if we applied artificial intelligence to this data. AI could pick up on nuances along the perimeter of the forest, use heat and moisture patterns to select highly probable feeding areas, and even combine terrain changes with camera data to identify likely animal trails. What you're able to do, as a hunter, is dive straight into those potential hotspots, because AI has called them out for you. But is there a deer there? That's where a human comes into play. We can improve our models by informing them when they're right and when they're wrong, ultimately increasing the likelihood of a successful hunt.

We need to start thinking this way about our relationship with AI/ML. It's not a replacement for human instinct and decision-making, but it is a powerful enabler that brings greater efficiency to our hunts. We should ask "How can I use this AI, this tool, to augment my day-to-day activity and lessen the burden of learning these new data points?"
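The division of labor Gorup describes can be sketched in a few lines of Python: a scoring model surfaces the hotspots, the analyst delivers the verdict, and each verdict is kept as labeled feedback for later tuning. Everything here is a hypothetical illustration; the `anomaly_score` field and the `TriageLoop` structure are assumptions standing in for a real model and pipeline.

```python
# Illustrative human-in-the-loop sketch -- not any particular product's
# detection pipeline; the scoring function is a stand-in assumption.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class TriageLoop:
    score_fn: Callable[[dict], float]                      # the "AI" that ranks events
    labeled: List[Tuple[dict, bool]] = field(default_factory=list)

    def hotspots(self, events: List[dict], top_n: int = 10) -> List[dict]:
        """Let the model call out the most suspicious events -- the hotspots."""
        return sorted(events, key=self.score_fn, reverse=True)[:top_n]

    def record_verdict(self, event: dict, is_threat: bool) -> None:
        """The analyst decides whether the 'deer' is really there; the verdict
        becomes labeled data for tuning or retraining the model later."""
        self.labeled.append((event, is_threat))

# Usage: thousands of events are narrowed to a handful, and the human makes the call.
loop = TriageLoop(score_fn=lambda e: e.get("anomaly_score", 0.0))
top = loop.hotspots([{"anomaly_score": 0.9}, {"anomaly_score": 0.2}], top_n=1)
for event in top:
    loop.record_verdict(event, is_threat=False)  # analyst's judgment, not the model's
```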

Image credit: Gorodenkoff/depositphotos.com

