Vulnerability management is complex, so how can we work smarter to reduce risk?

The saying "too many cooks spoil the broth" could well apply to how we currently approach vulnerability management (VM). The process around vulnerabilities has become increasingly complex, and the pressure to get it right has never been higher.

Vulnerabilities have long been one of the most prominent attack vectors, yet so many are left unpatched by organizations of every size and across every vertical -- and that neglect is at the root of catastrophic issues. A recent Ponemon Institute study found that almost half of respondents (48 percent) reported one or more data breaches at their organizations in the past two years. In addition, discoveries of high-risk vulnerabilities rose by 65 percent in 2020 alone, suggesting that breaches could become increasingly impactful. The longer a vulnerability remains present, the higher the chance that it will be exploited by bad actors.

To solve this, it is imperative for companies to implement vulnerability management and remediation processes. But this is often easier said than done.

Vulnerability management is continuous...

To address vulnerabilities, organizations must first scan to assess and gain visibility into their status. Based on these findings, any identified vulnerabilities are then passed on to the relevant team to resolve. However, a vulnerability doesn't always equal a patch. Some can be rectified by reconfiguring the system, while others require both reconfiguration and a patch from the vendor. Each vulnerability, and its required action, is unique.
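
To make that distinction concrete, here is a minimal Python sketch of the one-to-many relationship between findings and remediation actions; the identifiers and action types are invented for illustration, not drawn from any particular scanner:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    """What a finding actually requires -- it isn't always a patch."""
    PATCH = auto()                  # apply a vendor-supplied fix
    RECONFIGURE = auto()            # change settings; no patch needed
    PATCH_AND_RECONFIGURE = auto()  # both steps are required

@dataclass
class Finding:
    vuln_id: str  # placeholder identifiers, not real CVEs
    action: Action

# Hypothetical scan results: each maps to a different remediation.
findings = [
    Finding("VULN-EXAMPLE-001", Action.PATCH),
    Finding("VULN-EXAMPLE-002", Action.RECONFIGURE),
    Finding("VULN-EXAMPLE-003", Action.PATCH_AND_RECONFIGURE),
]

for f in findings:
    print(f.vuln_id, "->", f.action.name)
```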

The process of mapping vulnerabilities against necessary actions can be resource intensive, as it involves evaluating the severity of the vulnerability, the criticality of the asset, the nature of the fix, and the operational risk it may introduce. Historically, security professionals have been forced to manually weigh all these factors before deciding how best to respond.
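
As a rough illustration of what that weighing looks like in practice, here is a minimal Python sketch of a priority score. The weights, scales, and field names are assumptions made up for the example, not any standard scoring model:

```python
def priority_score(severity: float, asset_criticality: float,
                   fix_confidence: float, operational_risk: float) -> float:
    """Combine the factors an analyst would otherwise weigh manually.

    All inputs are assumed normalized to 0..1:
      severity          -- how dangerous the vulnerability is (e.g. CVSS / 10)
      asset_criticality -- how important the affected asset is
      fix_confidence    -- how well-understood and safe the fix is
      operational_risk  -- how likely the fix is to disrupt operations

    The weights below are illustrative, not prescriptive.
    """
    urgency = 0.6 * severity + 0.4 * asset_criticality
    friction = 0.5 * operational_risk + 0.5 * (1.0 - fix_confidence)
    # High urgency raises priority; high friction means the fix needs
    # more care (testing, change windows) before it ships.
    return round(urgency * (1.0 - 0.3 * friction), 3)

# A critical vuln on a critical asset with a safe, well-known patch:
print(priority_score(severity=0.9, asset_criticality=1.0,
                     fix_confidence=0.9, operational_risk=0.1))  # 0.912
```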

With regard to proactive patch management, organizations will typically await the monthly Patch Tuesday updates and respond accordingly. But what happens next is where it gets complicated.

If the vulnerability can be remediated via a patch, and that patch has actually been issued, a specialist team responsible for the affected OS or third-party application will roll it out. Given the size of enterprise IT estates, there are likely to be multiple teams working on remediation, even for the same vulnerability across different operating systems. The problem is that each team typically has its own process, timeframe and ways of communicating.

In addition to all the to-ing and fro-ing between teams, there's a separate conversation happening in tandem: the one between IT and security teams debating the level of operational vs. security risk they're willing to accept. Finding a compromise that all parties are happy with isn't always straightforward. Naturally, the security team wants to minimize security risk at all costs and leave no stone unturned, while the IT team wants to minimize operational risk, even at the expense of unpatched vulnerabilities that could increase the organization's exposure over time.

Taking all the above into account, consider doing this hundreds of times over as more vulnerabilities are found. Scanning must take place continuously to keep up with all the updates and changes across a company's IT estate. More vulnerabilities are discovered each month, and it's vital that the exposure is eradicated before it can be exploited. In such a necessary yet convoluted process, it's easy to see how things get miscommunicated or security gaps stay open for longer than necessary.

Automation doesn’t mean relinquishing control

The word automation often brings many security professionals out in hives. Many fear that they are relinquishing all control, and some even worry that the technology will replace them. Despite this, we need to acknowledge that the security industry is struggling -- we're overwhelmed and under-resourced in the fight against cybercrime. Our systems have become so sophisticated and complex that we simply can't keep up without utilizing automation in some form.

If we can get security and IT teams to a point where they agree on and prioritize their OSes and third-party apps based on security vs. operational risk, they can start to lessen the workload. For example, the organization's end users may utilize the Chrome web browser. Chrome itself presents a low operational risk, since patches from the developer are normally effective and safe, and if a patch were to fail, the impact would be low. Rolling out patches for Chrome should therefore be automated.

Once organizations have acknowledged this, they can use intelligent automation to proactively patch those specific applications. This isn't a 'patch anything and everything and hope for the best' scenario. Using highly customizable, pre-defined rules, teams can automatically patch the apps the organization has defined as low risk based on real threat indicators -- Chrome and other third-party applications, for instance -- and 'set and forget' them. This allows overwhelmed teams to focus on testing and remediating higher-risk systems and vulnerabilities that need more care and attention, safe in the knowledge that the basics are covered.
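
A minimal sketch of what such a rule might look like, assuming a hypothetical rule structure in Python; the product names and thresholds are illustrative, not any real vendor's configuration:

```python
from dataclasses import dataclass

@dataclass
class AutoPatchRule:
    """A pre-defined rule: auto-patch only what the org deems low risk."""
    product: str
    max_operational_risk: float  # 0..1 threshold, set by the organization
    auto_approve: bool

# Illustrative ruleset: low-risk third-party apps are 'set and forget',
# while the server OS still goes through manual testing and change control.
rules = [
    AutoPatchRule(product="chrome", max_operational_risk=0.2, auto_approve=True),
    AutoPatchRule(product="zoom", max_operational_risk=0.2, auto_approve=True),
    AutoPatchRule(product="windows-server", max_operational_risk=0.0, auto_approve=False),
]

def should_auto_patch(product: str, operational_risk: float) -> bool:
    """Return True only if a matching rule allows unattended patching."""
    for rule in rules:
        if rule.product == product and rule.auto_approve:
            return operational_risk <= rule.max_operational_risk
    return False  # no auto-approve rule matched -> manual review

print(should_auto_patch("chrome", 0.1))          # True: ship it automatically
print(should_auto_patch("windows-server", 0.1))  # False: needs human review
```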

Automation is a necessary tool that helps us manage the overload, but it can't save us on its own. Combining this technology with empirical data is important if we want to get the process right. Detected vulnerabilities can be mapped against the products used in the organization's environment, helping IT and security teams make informed decisions about which OS or app is introducing the most vulnerabilities to the system. Without such data, teams are often left making prioritization and risk decisions based on subjective views or past experiences, which leaves a lot of margin for error.
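
A simple sketch of that mapping, assuming scan findings have already been normalized into (product, vulnerability) pairs; the data here is invented for illustration:

```python
from collections import Counter

# Hypothetical normalized scan output: (product, vulnerability id) pairs.
findings = [
    ("chrome", "VULN-EXAMPLE-001"),
    ("chrome", "VULN-EXAMPLE-002"),
    ("openssl", "VULN-EXAMPLE-003"),
    ("chrome", "VULN-EXAMPLE-004"),
    ("windows-server", "VULN-EXAMPLE-005"),
]

# Count vulnerabilities per product to see which OS or app is
# introducing the most exposure into the environment.
per_product = Counter(product for product, _ in findings)

for product, count in per_product.most_common():
    print(f"{product}: {count} open vulnerabilities")
# chrome: 3, openssl: 1, windows-server: 1 -- empirical input for
# prioritization, instead of gut feel or past experience.
```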

Ultimately, any technology that helps us improve the efficiency of our processes is a good thing. Strategic automation, implemented where it can add more value than cause harm and combined with empirical data, is the key to reducing the complexity of VM.

Eran Livne is Director of Product Management, Qualys
