Why anomalies in network traffic are key to cybersecurity [Q&A]


Major cyberattacks invariably make the headlines, but it seems that rather than take a proactive approach, many CISOs wait for a new threat to emerge before protecting their business. They simply hope they won't be caught up in the first wave of a new attack.

Dave Mitchell, CTO of cybersecurity investigation specialist HYAS Infosec, believes there is a better approach, one that detects threats by monitoring the communications that form the foundations of internet architecture. We recently talked to him to learn more.

BN: Why is it so important to prevent bad actors from gaining a foothold on a network?

DM: Once an attacker has established a solid footing within your network, you have to assume that they know everything you do. This includes access to your ticket queues, internal documentation, and network and system architectures, all of which can be leveraged to figure out where and how to hide without setting off alarms. Preparing for this is like practicing preventative medicine -- the earlier you identify an issue and implement a treatment plan, the better your chances are for a positive outcome. If you don't, the problem goes undetected and only gets worse.

BN: How can you spot issues like Log4j while they're still at an early stage?

DM: Log4j provided a great use case to show the value of real-time domain name system (DNS) monitoring. Because Log4j was a remote code execution (RCE) exploit, it needed to use the system it was exploiting to connect back to its remote infrastructure in order to download subsequent payloads, which bad actors would then use to initiate a variety of attacks. Using DNS to identify such a widespread issue allowed operators to quickly zero in on the systems under attack and use that information to prioritize patching. The alternative is to search through millions of lines of centralized log data, while simultaneously keeping track of the continuously changing fingerprints of the attack. And even though the exploit is now public and companies have had time to patch their devices, we are still finding backdoors from Log4j and there are no guarantees that everything is copacetic. Had real-time DNS monitoring been in place, the suspicious outbound traffic would have been blocked, and we likely would have been able to avoid this endemic cybersecurity threat.
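The callback behavior described above is what makes this class of exploit visible at the DNS layer: the compromised host must resolve an attacker-controlled domain before it can fetch its payload. As a minimal sketch (the log format, host names, and domains here are all hypothetical, not HYAS's actual telemetry), flagging lookups to domains outside a known-good set could look like this:

```python
# Hypothetical DNS query log entries: (source_host, queried_domain).
# A real deployment would stream these from a resolver or packet capture.
dns_log = [
    ("web-01", "cdn.example.com"),
    ("web-02", "x4f2a9.attacker-callback.net"),  # random-looking callback domain
    ("web-01", "api.example.com"),
]

# Domains observed during normal operation (the known-good set).
baseline = {"cdn.example.com", "api.example.com"}

def flag_suspicious(log, baseline):
    """Return queries to domains outside the known-good set."""
    return [(host, domain) for host, domain in log if domain not in baseline]

alerts = flag_suspicious(dns_log, baseline)
for host, domain in alerts:
    print(f"ALERT: {host} queried unknown domain {domain}")
```

A single pass like this is how DNS data lets operators prioritize patching: the hosts that show up in the alerts are the ones actively being exploited.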

BN: Why is DNS key to protection at the server level?

DM: Too often, perimeter defense constitutes the entirety of a company's cybersecurity strategy. Don't get me wrong; it's important that everyone has a firewall, but it won't keep you completely safe. Unfortunately, a security perimeter is never going to be impenetrable. Better business resiliency will only come from having real-time visibility into what's going on within your network. Since every operating system and nearly every online application uses DNS to communicate, establishing a baseline for what your servers/containers are doing on a daily basis is a good way to determine when you have abnormal deviations. With network visibility, you can see indicators of threats or other anomalies in your environment before they cause any headaches or more significant issues.
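The baselining idea above can be sketched in a few lines. This is an illustrative toy, not a production approach; the host name and domains are invented for the example:

```python
# Hypothetical per-host history of domains each server queried on previous days.
history = {
    "app-server": {"db.internal", "api.example.com", "time.nist.gov"},
}

def check_today(host, todays_queries, history):
    """Compare today's lookups against the host's baseline; return new domains."""
    known = history.get(host, set())
    return sorted(set(todays_queries) - known)

# A server that suddenly resolves a never-before-seen domain is a deviation
# worth investigating.
deviations = check_today("app-server", ["db.internal", "weird-domain.example"], history)
```

The design point is that the baseline is per-host: a lookup that is routine for a mail gateway may be a red flag for a database server.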

BN: How does this go further than a traditional firewall?

DM: Traditional firewalls were developed to restrict inbound access to your network, and they do that quite well. Next-gen firewalls have added a large number of features to protect your outbound traffic as well, but many of these still rely on traditional, static allow/deny lists. DNS operates in real time and thus supports continuous monitoring, letting the operator identify and mitigate issues as they crop up. This protects you and provides you with visibility once a bad actor does inevitably get past your firewall and into your network, which will happen, if it hasn't already. You can stop attacks before they escalate, gain an understanding of the nature of the attacks against you, and learn how best to proactively adapt your defenses.
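The contrast drawn above, between a static list and a decision made at query time, can be illustrated with a small sketch. The reputation function here is a hypothetical stand-in for a live intelligence feed; the domains are invented:

```python
# A static deny list is stale the moment it ships: a freshly registered
# attacker domain is not on it, so it is allowed through.
DENY_LIST = {"known-bad.example"}

def static_check(domain):
    return "block" if domain in DENY_LIST else "allow"

def realtime_check(domain, reputation_lookup):
    """Consult a live reputation source at query time instead of a fixed list."""
    verdict = reputation_lookup(domain)  # e.g., domain age, registrar, infrastructure
    return "block" if verdict == "malicious" else "allow"

# Hypothetical reputation function standing in for a real-time intelligence feed.
def demo_reputation(domain):
    return "malicious" if domain.endswith(".attacker-callback.net") else "clean"
```

With a static list, `static_check("fresh-malware.example")` returns "allow" because the domain was registered after the list was built; the real-time check can still block it if the live verdict comes back malicious.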

BN: Is it important to take a 'back to basics' approach to understanding your overall security posture?

DM: One of the unfortunate side effects of the advent and adoption of DevOps and the ubiquity of the cloud is that people are able to build new applications without having to understand the underlying 'how it works' of the infrastructure they're using. People have unfortunately treated DNS no differently from electricity: it just works, and you only care when it breaks. While operators and buyers are blinded by promises of machine learning and artificial intelligence solving all their security problems, attackers are more than happy to utilize DNS and other techniques to bypass current investments. By utilizing DNS as the base of your security stack, every other investment you've made becomes more intelligent.

Business leaders would do well to have some grasp of how the Internet works at a fundamental level, so they can understand how bad actor communication can best be seen, detected, and stopped. Protection at the DNS layer provides the visibility and controls your company needs to keep moving forward at full speed, while simultaneously giving you the ability to defuse attacks in real time before it's too late.



© 1998-2024 BetaNews, Inc. All Rights Reserved.