How microsegmentation can deliver zero trust security [Q&A]


With a never-ending supply of new security threats presenting themselves every day, it can be tough for IT departments to keep up.

While perimeter security continues to be important, the sheer volume of novel attacks means that, eventually, an attack will bypass defenses and gain a foothold in the interior. To harden the network interior, best practice now calls for microsegmentation to achieve a zero trust environment, but that’s not easy to do.

To get more information on today’s security landscape, we spoke with Peter Smith, chief executive officer of Edgewise Networks, a provider of zero trust microsegmentation security solutions.

BN: What are the biggest cybersecurity challenges companies are facing today, and how are they managing them?

PS: The task is so big, many companies don't know where to start. The statistics take your breath away. In April 2019 alone, there were over 140 new security vulnerabilities identified by the Zero Day Initiative, while AV-TEST reports that it identifies over 350,000 malware programs every day. Companies’ technology ecosystems are vast, and every part of the business is accessing networked systems and data that are the lifeblood of the business’ success. If an unauthorized actor gains access to those systems and causes a data breach, that’s a real, tangible impact on operations, finance, brand reputation, and more. Cybersecurity has become a board-level concern, so the pressure on the security team to keep data and systems secure is immense. Now add to that the field's talent shortage, and it’s understandable that any security professional would feel overwhelmed.

Traditionally, security's main focus has been defending the network perimeter from external threats, mainly through firewalls, and then detecting and responding to threats and vulnerabilities. It's a reactive posture rather than a proactive one. What this means is that defenders are constantly chasing the threat, but, let's face it, it's impossible to keep up. Attackers are almost inevitably going to find a way past perimeter controls into your network, and if your internal controls are focused on detection, it's inevitable that attackers are going to reach their intended target -- the data. Because many internal networks are flat, meaning there isn’t adequate segmentation between sensitive systems or applications, attackers find exposed network pathways and exploit them to move laterally, undetected.

BN: How can organizations start to limit what attackers can do?

PS: Let's start with some good news: Even if you have hundreds or thousands of network pathways, they are still finite in number. If you use automation to find those pathways and then secure them to limit east-west communications, an attacker who bypasses perimeter defenses won’t be able to progress the attack; unauthorized actions will be prevented.

Therefore, the first step of any security program should be to create an asset inventory and gain an understanding of how an attacker could reach them, i.e., the network pathways. There are a few questions, in particular, that I think security teams need to ask at the beginning of this journey:

  • Where are my assets? On-premises, in the cloud, in container environments?
  • Which of my assets are most crucial to my business?
  • Which of these assets would damage my business most if a threat actor got control of them?
  • What are my vulnerabilities -- insecure code, unpatched systems, potential phishing attack victims -- and where are they?
  • Which pathways in my network would an attacker most likely exploit to move laterally toward my sensitive assets?
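
The questions above boil down to mapping which network pathways lead to critical assets. As a minimal sketch of that idea, the following Python snippet walks a hypothetical inventory with a breadth-first search to enumerate every route an attacker could take to a critical asset. The asset names, locations, and flows are illustrative, not from any real environment or Edgewise product:

```python
from collections import deque

# Hypothetical asset inventory (names and attributes are illustrative).
assets = {
    "web-frontend": {"location": "cloud",       "critical": False},
    "app-server":   {"location": "on-premises", "critical": False},
    "customer-db":  {"location": "on-premises", "critical": True},
    "build-runner": {"location": "container",   "critical": False},
}

# Observed east-west pathways: source -> reachable destinations.
pathways = {
    "web-frontend": ["app-server"],
    "app-server":   ["customer-db"],
    "build-runner": ["app-server", "customer-db"],
}

def paths_to_critical(start, pathways, assets):
    """Enumerate pathways from `start` that end at a critical asset (BFS)."""
    found, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        for nxt in pathways.get(path[-1], []):
            if nxt in path:  # avoid revisiting nodes on this path
                continue
            if assets[nxt]["critical"]:
                found.append(path + [nxt])
            queue.append(path + [nxt])
    return found

# Which routes could an attacker on the build runner take to the database?
print(paths_to_critical("build-runner", pathways, assets))
# [['build-runner', 'customer-db'], ['build-runner', 'app-server', 'customer-db']]
```

Note the direct `build-runner` → `customer-db` hop: exactly the kind of exposed pathway on a flat network that an assessment like this is meant to surface.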

BN: Once an organization has assessed its network environment, what actions should it take?

PS: Of course, you can't just do this assessment once and then be done. Change is inevitable in networks, especially cloud and container environments. Therefore, you need a way to continuously assess data flows, configuration changes, patching requirements, new service deployment, etc. This means you need automated scanning tools. The size and complexity of your network -- including things like public cloud and containers -- means there’s no way to do this job manually. To be sure you have an up-to-the-minute map of your assets and network paths, take advantage of the automated discovery tools on the market.
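
The continuous-assessment loop described above amounts to diffing successive scans. As a rough sketch (with made-up snapshot data, since the real scan format depends on the discovery tool in use), comparing today's scan against yesterday's flags new assets and changed addresses for review:

```python
# Hypothetical scan snapshots: asset name -> observed IP address.
yesterday = {"web-frontend": "10.0.1.5", "app-server": "10.0.2.8"}
today     = {"web-frontend": "10.0.1.9", "app-server": "10.0.2.8",
             "report-job":   "10.0.3.2"}

# Assets that appeared since the last scan.
new_assets = today.keys() - yesterday.keys()

# Assets whose address changed -- a reminder of why address-based
# policies churn in dynamic environments.
changed_assets = {a for a in today.keys() & yesterday.keys()
                  if today[a] != yesterday[a]}

print(sorted(new_assets))      # ['report-job']
print(sorted(changed_assets))  # ['web-frontend']
```

In practice this diff would run on every scan cycle and feed the up-to-the-minute map of assets and network paths.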

Once you have a real-time map of your network environment, you can remove the network pathways not required for regular business use by authorized applications and services. This will sharply limit attackers' ability to move east-west inside your network and to effect a breach.
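
Set difference captures the pruning step: anything observed on the network that isn't on the list of flows required by authorized applications is a candidate for removal. A minimal sketch, with hypothetical flow names:

```python
# Flows required by authorized applications (illustrative allowlist).
required_flows = {
    ("web-frontend", "app-server"),
    ("app-server", "customer-db"),
}

# Flows actually observed on the network.
observed_flows = {
    ("web-frontend", "app-server"),
    ("app-server", "customer-db"),
    ("build-runner", "customer-db"),  # no regular business justification
}

# Pathways to remove: observed but not required.
to_remove = observed_flows - required_flows
print(sorted(to_remove))  # [('build-runner', 'customer-db')]
```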

Limiting how an attacker can get to your critical assets reduces risk considerably, but doesn’t get you all the way there. To really be proactive in your security posture, you have to build controlled boundaries around your sensitive systems and data. The best way to do this is through workload-level microsegmentation.

BN: Past experience of implementing microsegmentation suggests it can be very labor intensive. Is this true?

PS: Yes, microsegmentation used to be pretty painful. That's largely because most microsegmentation tried to repurpose the firewall model: segments were created with VLANs, and security was layered on top based on IP addresses. In today's dynamic networks, network addresses change frequently, which means the policies based on them also need to change. This requires excessive time, effort, and resources. And that’s before you even account for all the new applications that are being deployed and updated on a continuous basis. With an address-based approach, every time a new application communicates on the network, a new firewall rule must be created for it, and that can take weeks or months. So, I agree, network- and address-based microsegmentation was too labor intensive for the security benefit received, and was understandably untenable for many organizations.

BN: Yet, here we are suggesting microsegmentation as a way to secure your network. What's changed?

PS: The modern microsegmentation we're talking about now is based on identity. Specifically, we at Edgewise believe the identity that matters is software identity -- the immutable identifiers that exist in the software itself. Software identity then becomes the segmentation control plane in a zero trust environment where only verified software is allowed to communicate on the network. If the identity doesn’t match what’s expected, the software can't communicate across boundaries and malicious actions are prevented.
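
To make the idea concrete: one simple proxy for an immutable software identity is a cryptographic fingerprint of the binary itself, checked against an allowlist before a connection is permitted. This is a hedged sketch of the general technique, not Edgewise's actual implementation; the allowlist and binary contents below are invented for illustration:

```python
import hashlib

# Identities of approved software builds (SHA-256 of the binary's bytes).
# The entry below is a hypothetical example.
allowed_identities = {
    hashlib.sha256(b"payments-service v2.4 binary bytes").hexdigest(),
}

def may_communicate(binary_bytes: bytes) -> bool:
    """Permit a connection only if the software's identity is verified."""
    identity = hashlib.sha256(binary_bytes).hexdigest()
    return identity in allowed_identities

print(may_communicate(b"payments-service v2.4 binary bytes"))  # True
print(may_communicate(b"tampered binary bytes"))               # False
```

Because the fingerprint travels with the software rather than with its address, the policy holds even when the workload moves or its IP changes.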

This method provides security teams with a number of advantages. It's simply a much more reliable way to enforce communication policies in always-changing hybrid cloud environments. Because application-centric policies are not tied to network information (unlike the IP addresses that underpin traditional microsegmentation), they automatically adapt to each environment. So, with application-centric segmentation, admins can create policies once, manage them centrally, and retain visibility regardless of the type of network their workloads are communicating in.

This modern approach to microsegmentation is surprisingly pain-free, much more reliable in ephemeral environments like container and cloud, and gives organizations a way to proactively manage their network security plan.



© 1998-2022 BetaNews, Inc. All Rights Reserved.