Twenty years of software updates

In the beginning, software ran mostly on a smart-server/dumb-terminal network, or entirely on a local machine. If there was a defect, the consequence was simply that the given program wouldn’t run. Once desktops, laptops, mobile phones, and even physical devices such as refrigerators started interconnecting via the internet, a software defect could open a device to attack or shut down a life-critical system. The very real need to stay on top of software updates has been escalating ever since.

In the early 2000s, when computer malware evolved from a few innocent viruses into full-on malevolent worms, software giants such as Microsoft (though by no means limited to Microsoft) denied responsibility. There was significant pushback, with vendors saying that compromise was possible in only a limited number of scenarios -- almost as though the end user were responsible. Increasingly, though, it became clear that maybe the software itself was responsible for some of the malicious activity on the early internet. And maybe the software industry needed to take that seriously.

Trustworthy Computing

At some point, Microsoft and others realized that maybe the problem wasn’t just the rogue malware creators. Maybe the problem was within their grasp, in their own code.

On January 15, 2002, Bill Gates launched Microsoft's "Trustworthy Computing" initiative. While others had also recognized the role of software integrity, the fact that Microsoft was announcing this was significant, as it was the dominant player in operating systems and desktop software.

Next Came The Lists

In 1999, the MITRE Corporation released to the public the Common Vulnerabilities and Exposures (CVE) system, which began to assign numbers to software defects and track whether or not the vendor had created a workaround or a patch. Not every vulnerability, however, receives a CVE -- for example, if a pen tester finds a vulnerability under NDA, the hiring organization might decide not to go public with that finding.

There's also the Common Weakness Enumeration (CWE) from MITRE, which is a categorization system for software source-code patterns that might lead to a vulnerability. It’s important to note that not every defect is a vulnerability, and not every vulnerability is exploitable.

There are also lists of the most common vulnerabilities. The OWASP Top 10 and the SANS Top 25 both provide a roadmap for developers to test their software against.

The Rise of Software Testing

Over the last twenty years, the software testing industry has itself exploded. There’s static analysis of the source code, where the code is analyzed line by line and suspect patterns are flagged with the relevant CWEs.
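To make that concrete, here is a minimal sketch (in Python, with hypothetical function and table names, not drawn from any particular tool) of the kind of source-code pattern a static analyzer would flag as CWE-89, SQL injection, purely by reading the code:

    import sqlite3

    def find_user(conn: sqlite3.Connection, username: str):
        # Flagged: the query is assembled by string formatting, so
        # attacker-controlled input becomes part of the SQL itself (CWE-89).
        query = f"SELECT id, name FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Not flagged: a parameterized query keeps data separate from code.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchall()

The analyzer never runs either function; it matches the dangerous pattern in the source and reports the corresponding CWE.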

There’s also Software Composition Analysis (SCA), which deconstructs binaries and informs the organization of the software components found within, as well as any known CVEs against those components.
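A minimal sketch of that idea, using a hard-coded advisory list purely for illustration (real SCA tools pull from live vulnerability feeds such as the NVD), might look like this:

    # Hypothetical advisory data; a real tool would query a vulnerability feed.
    KNOWN_CVES = {
        ("openssl", "1.0.1f"): ["CVE-2014-0160"],  # Heartbleed
    }

    def scan(components):
        """Map each (name, version) component to any known CVEs."""
        findings = {}
        for name, version in components:
            cves = KNOWN_CVES.get((name.lower(), version))
            if cves:
                findings[(name, version)] = cves
        return findings

    # A component inventory, as an SCA tool might extract it from a binary.
    print(scan([("OpenSSL", "1.0.1f"), ("zlib", "1.3.1")]))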

Finally, there are dynamic software testing solutions that focus on the runtime. Interactive Application Security Testing (IAST) places an agent inside the running software and identifies potential sources of data leakage. Fuzz testing, meanwhile, dynamically exercises the runtime by feeding in malformed or unexpected inputs, which may or may not crash the software and thereby reveal a vulnerability. Heartbleed and Shellshock were vulnerabilities that were found via fuzz testing.
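For a sense of what that looks like in practice, here is a minimal fuzz harness sketch, assuming Google's Atheris fuzzer for Python; parse_record is a hypothetical stand-in for whatever code is under test:

    import sys
    import atheris

    def parse_record(data: bytes):
        # Hypothetical parser standing in for the code under test.
        if len(data) < 2:
            raise ValueError("record too short")
        length = data[0]
        return data[1:1 + length]

    def test_one_input(data: bytes):
        try:
            parse_record(data)
        except ValueError:
            pass  # expected rejection of bad input, not a crash

    # Atheris mutates the inputs, watching for crashes and unhandled exceptions.
    atheris.Setup(sys.argv, test_one_input)
    atheris.Fuzz()

The fuzzer generates an endless stream of byte-string variations; any input that triggers an unhandled exception or crash is saved as evidence of a potential vulnerability.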

So, now that we’re testing software, how do those updates get delivered to the end user? The last mile with software updates is always the hardest.

Patch Tuesday

Twenty years ago, in October 2003, Microsoft instituted its first Patch Tuesday. This is not to say that Microsoft was not releasing software updates before then. They were, but sometimes the patches would be released late on a Friday afternoon, leaving the B-team in IT to install them (or not). Sometimes, in the early days, these software updates had conflicts with other dependencies, creating additional headaches and overtime.

Microsoft addressed this problem by rolling out a collection of software updates on the second Tuesday of every month. It was consistent -- you knew it was coming. And it gave IT teams time to test the updates and make sure they would work before the weekend hit. Of course, out-of-band emergency patches were still possible. And sometimes, even a Patch Tuesday update had to be withdrawn.

While Microsoft did this, other prominent software vendors did not, and this created confusion for end users, who might randomly receive notifications of updates and then choose to dismiss them. The last mile with software updates is always the end user, so why not make it easier?

Automatic updates

In its Chrome browser, Google has successfully instituted automatic updates; Mozilla has done the same with Firefox. And if the user is consistently using the browser, a notification will appear to remind the user to relaunch and apply the updates.

There are problems with automatic updates, such as needing to join a Zoom meeting only to be informed that an update must be installed first. Or booting into Windows, only to find a cumulative update pending and the dreaded "do not power down your computer" message on screen for an hour or so. Still, this is much better than the many programs on your system that haven’t been updated and never even notify you that an update exists.

In general, over the last two decades, we’ve gotten much better at educating the public to download and install software updates as necessary. We may not always like it. Still, it’s better than the alternative.


Robert Vamosi is Senior Security Analyst, ForAllSecure.
