Do we need a cyber NATO to address the changing threat landscape? [Q&A]
The threat landscape facing enterprises is changing constantly. In recent months, major vulnerabilities like Log4j and malware-based threats have demonstrated the need for organizations to move quickly in order to defend themselves.
Is the best way to stay on top of the most pressing threats to harness the power of the global cybersecurity community for defense in a sort of cyber NATO? We talked to SOC Prime CEO Andrii Bezverkhyi to find out.
BN: The threat landscape facing enterprises is changing constantly -- how can security teams stay ahead of global threats?
AB: Industry research shows that the majority of vulnerabilities (58 percent) are exploited as zero-days before a patch is released, while the remaining 42 percent are exploited after. Even for these non-zero-day vulnerabilities, the window between patch release and the first observed exploitation is very small -- often only hours or a few days.
In addition, last year set the record for the number of zero-day vulnerabilities identified in a 12-month period, with a 50 percent increase in the number of zero-days challenging the cyber domain as compared to 2020.
The net result is that the window for proactively detecting and mitigating emerging threats is shrinking. The talent shortage is only getting worse, exacerbated by burnout among SOC analysts struggling to keep up with a rising volume of threats. Now more than ever, security organizations need to harness every tool and technique at their disposal and tap into the power of the broader community.
BN: As the innovation economy continues to accelerate, what needs to be done to tip the scale for the good guys?
AB: One of the most effective ways to stay on top of evolving threats is to harness the power of collaboration.
The premise is simple. No single organization can marshal resources equivalent to an enabled, collaborative, global cybersecurity community. Threat hunters all over the world build detections in a matter of hours that help confirm the exploitation of vulnerabilities. This allows organizations to quickly run threat hunting queries against actively exploited vulnerabilities and immediately begin the remediation needed to stay secure. The key is in harnessing the power of collaborative cyber defense -- something akin to creating a Cyber NATO.
We’ve seen the industry move in this direction in response to the wave of major software supply chain and ransomware attacks, specifically through the framework of public-private partnerships. Two examples are CISA's Joint Cyber Defense Collaborative and NATO's Cooperative Cyber Defence Centre of Excellence. While this represents important progress, more needs to be done to harness the global community of threat hunters and security researchers to post, discuss, and create threat detection code that helps organizations address their top security concerns in real time.
BN: What is the vision for collaborative cyber defense?
AB: Many companies around the world use a lot of tools to protect themselves. NIST defines five functions for cyber defense: identify, protect, detect, respond and recover. The basic model dates back to the beginning of cyber security: anti-virus software scanned for signatures, and new signatures were created as threats evolved. That has never changed.
What has changed is that companies now run on-premises, in the cloud and everywhere in between. All of it needs to send signals to the security team so they can understand whether they are under attack, how and by whom. Understanding those signals requires two things: data and algorithms. Algorithms are key to defining any kind of existing or future threat.
This is a big change from the classic anti-virus industry because, for several decades, signatures were created by a handful of vendors who kept them proprietary. A big breakthrough came with the invention of common languages like YARA that enable signatures or rules to be created by anyone around the world. Then, in 2016, came Sigma, a language invented by Florian Roth and Thomas Patzke, which became the basis for creating threat detection algorithms for any digital trace -- starting with endpoint log files, then extending detection and response from the edge to the cloud, containers, network data, IoT and beyond. Any researcher or threat hunter can describe the behavior of malware or the latest attack tool and share that meta-language code with the community.
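As an illustration, a minimal rule in the public Sigma format might look like the following (the specific field values are hypothetical and simplified, not an official rule from the Sigma repository):

```yaml
title: Suspicious Encoded PowerShell Command Line
status: experimental
description: Detects PowerShell launched with an encoded command, a technique
  frequently used by malware droppers.
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\powershell.exe'
    CommandLine|contains: '-EncodedCommand'
  condition: selection
tags:
  - attack.execution
  - attack.t1059.001
level: medium
```

Because the rule describes behavior against a generic log source rather than a vendor query syntax, anyone can publish it, and any team can translate it for the platform they happen to run.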
This brings about strength in numbers because we can have hundreds or even thousands of people contributing to threat detection at scale. This comes in the form of researchers and threat hunters writing and sharing rules in common language proactively, often before attackers can weaponize the latest tools or zero-day exploits.
BN: What are the mechanisms, drivers and enablers for effective collaborative cyber defense?
AB: For this to be effective, the creation and sharing of threat detection rules needs to reach hyperscale, with dozens or even hundreds of rules published and updated every day, even every hour. At the same time, there needs to be an efficient way for those rules to be consumed by the SIEM and EDR/XDR platforms security operations teams use to defend their organizations. As it stands, these platforms can't handle rule consumption at that scale.
This is where the efficiency of a marketplace can make an impact.
BN: What is the role of standards in this collective effort? What is the mechanism to ensure that this collaborative content can be consumed?
AB: Standards are critically important, and this is where Sigma, which I mentioned earlier, comes into play. It is a meta-language for expressing threat detection algorithms in a tool- and platform-agnostic way, invented as a complement to YARA, a common language for malware analysis.
Before Sigma, improving threat detection capabilities meant buying next-generation tooling. That takes time and implementation effort, which creates huge overhead for security teams. Sigma allows threat detection rules to be consumed by older tools as well as next-generation platforms, lets teams share rules quickly, and makes it easy for individual researchers to contribute detections.
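To sketch the idea of platform-agnostic rules (this is a toy translator written for illustration, not the real sigma-cli or pySigma tooling, and the field modifiers shown are a small subset of what Sigma supports), a single detection can be compiled into queries for different backends:

```python
# Toy example: one platform-agnostic detection, rendered for two backends.
# The detection mirrors the Sigma "field|modifier" convention.

DETECTION = {
    "Image|endswith": "\\powershell.exe",
    "CommandLine|contains": "-EncodedCommand",
}


def to_splunk(detection: dict) -> str:
    """Render the detection as a Splunk-style search string."""
    parts = []
    for key, value in detection.items():
        field, _, modifier = key.partition("|")
        if modifier == "endswith":
            parts.append(f'{field}="*{value}"')
        elif modifier == "contains":
            parts.append(f'{field}="*{value}*"')
        else:  # exact match
            parts.append(f'{field}="{value}"')
    return " ".join(parts)


def to_elastic(detection: dict) -> str:
    """Render the same detection as an Elasticsearch query-string query."""
    parts = []
    for key, value in detection.items():
        field, _, modifier = key.partition("|")
        if modifier == "endswith":
            parts.append(f"{field}:*{value}")
        elif modifier == "contains":
            parts.append(f"{field}:*{value}*")
        else:
            parts.append(f"{field}:{value}")
    return " AND ".join(parts)
```

The rule is written once; each security team runs a converter for whatever SIEM or EDR/XDR platform they already own, old or new.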
To provide an example of the power of Sigma: in 2017, as part of the team responding to the NotPetya attack, I tagged Sigma-based rules to the MITRE ATT&CK methodology to perform threat actor attribution within five days of the attack taking place. This was the same amount of time in which Cisco Talos and ESET achieved attribution, through on-site forensics of compromised M.E.Doc servers and malware analysis, respectively. While we each achieved attribution through three different methods, the fact that I did so through Sigma rules and native SIEM content mapped to MITRE ATT&CK meant that SOC teams could analyze their own environments for the attack without having to write their own rules. Threat attribution is a hard, time-consuming task that requires the effort of a group of people, sample access, logs, network PCAPs, reverse-engineering and malware-analysis skills, and reference databases. It is slow and expensive by design.
Coupling the MITRE ATT&CK methodology with behavior-based Sigma rules, tagged with ATT&CK tactics, techniques, sub-techniques and tools (TTPs -- Tactics, Techniques and Procedures), is a quicker and naturally cheaper way to perform attribution. Everything has its cost: if we sacrifice speed and accept higher cost, we gain the precision needed for the cases where it is mandatory. If we need a quick estimate and have very limited time and resources -- no access to malware samples or a network data dump -- we can still make the best estimate available to us.
This works only if the rules were developed before the attack took place and the attackers are reusing known TTPs. I used this logic in 2017 to perform the attribution and quickly illustrate what had happened. As a result of this success, in 2018 I formally proposed the concept of tagging Sigma rules with MITRE ATT&CK at the first EU ATT&CK Community event, where it was officially accepted, and it has since become an industry best practice. While MITRE ATT&CK is not officially recommended for attribution -- mainly for the reasons stated above, which emphasize accuracy and correctness over speed -- it does work, as long as the rules for the relevant TTPs were developed proactively, before the actual attack happens.
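The attribution logic described above can be sketched as a simple overlap score between the ATT&CK technique IDs observed (for example, extracted from the tags of Sigma rules that fired) and known actor profiles. The actor-to-TTP mappings below are invented for illustration, not real threat intelligence, and real attribution weighs far more evidence than tag overlap:

```python
# Toy sketch of TTP-overlap attribution. Actor profiles map a (hypothetical)
# actor name to the set of ATT&CK technique IDs it is known to use.

ACTOR_TTPS = {
    "ActorA": {"t1059.001", "t1047", "t1021.002", "t1486"},
    "ActorB": {"t1566.001", "t1204.002", "t1055"},
}


def attribute(observed: set) -> list:
    """Rank candidate actors by Jaccard overlap between observed and known TTPs."""
    scores = []
    for actor, ttps in ACTOR_TTPS.items():
        overlap = len(observed & ttps) / len(observed | ttps)
        scores.append((actor, round(overlap, 2)))
    return sorted(scores, key=lambda s: s[1], reverse=True)


# Technique IDs seen in the environment, e.g. from triggered rule tags.
observed = {"t1059.001", "t1047", "t1486"}
print(attribute(observed))  # → [('ActorA', 0.75), ('ActorB', 0.0)]
```

A ranked estimate like this is exactly the "quick and cheap" mode of attribution: it only works when the rules (and the actor profiles they feed) exist before the attack, and when the adversary reuses TTPs.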