Did a single security engineer avert a DNS disaster?

Had someone with ill intent been as smart or as lucky as security engineer Dan Kaminsky, the entire Internet could have been rendered mostly inoperative. The scale of the fix he orchestrated is only now being realized.

There is an entire subculture that has developed around the notion of deconstructing information technology. And like those who prefer to fish in pre-stocked ponds, the people who populate this subculture are not, for the most part, particularly clever. They may be adept with their tools, but they don't construct exploitation strategies for themselves. Rather, they wait until someone smarter can do it for them.

In fact, that's the whole principle behind the "zero-day exploit," which is a bit like hyenas celebrating the availability of low-hanging fruit. Today, it's security engineers who discover the cleverest exploits in IT systems and software. But it is often the way those engineers alert software companies and their customers to a problem's existence that itself creates the greatest security risk. When the smarter birds of prey can spot from a high vantage point where the ripest fruit has fallen from the trees, the hyenas can easily track them on their way to dinner.

This was the problem with respect to one of the largest-scale fixes in the history of the Internet, implemented last month: Since 2002, it has been generally known among network engineers that there was probably a way to poison Domain Name System (DNS) server caches, by accurately guessing the UDP source port from which a DNS name resolution query would be sent, then beating the legitimate server's answer with a forged response to that port -- one that could redirect users to completely different Web sites without their knowledge.
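How feasible was such an attack? A rough back-of-the-envelope sketch in Python (not Kaminsky's code; the number of forged packets per race is an assumed figure) shows why a predictable source port left only the 16-bit DNS transaction ID standing in an attacker's way:

```python
# Sketch of the odds behind blind DNS cache poisoning, assuming the
# resolver's source port is predictable. Each forged response then only
# has to match the 16-bit transaction ID: a 1-in-65,536 guess, repeated
# thousands of times while racing the legitimate answer.

TXID_SPACE = 2 ** 16        # possible 16-bit transaction IDs
FORGERIES_PER_RACE = 100    # assumed forged packets sent before the real reply

def poisoning_odds(races: int) -> float:
    """Chance that at least one forged response is accepted over `races` attempts."""
    miss_one_race = (1 - 1 / TXID_SPACE) ** FORGERIES_PER_RACE
    return 1 - miss_one_race ** races

# Querying random, nonexistent subdomains lets an attacker trigger a
# fresh race on every lookup, rather than waiting for a cached record
# to expire -- which is why repeated races add up so quickly.
for races in (1, 100, 1000, 10000):
    print(f"{races:>6} races: {poisoning_odds(races):6.1%} chance of success")
```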

If the spoofed site was a bank, the spoof could ask for and receive user IDs without victims ever knowing they weren't dealing with that bank. If the spoofed site was a customer service site, users would blithely hand over their support ticket numbers and license IDs. There was no telling how far this could have gone.

Maybe, just maybe, some users would have noticed that the certificate sent by the spoofing site didn't match the legitimate one. But how many users get certificate warnings every day from legitimate sites that simply haven't updated their certificates or have deployed them incorrectly? Users may be growing accustomed to simply clicking "Allow."

A few months ago, Doxpara Research security engineer Dan Kaminsky -- who had been sounding alarms about this problem for at least six years -- decided he would help manufacturers implement a patch for the DNS deficiency, one which would not only randomize the source port but dramatically increase the size of the pool from which those port numbers are chosen. Both DNS servers and clients (i.e., any computer that uses DNS to resolve a domain name to an IP address) would need to implement this patch.
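The arithmetic behind the patch is straightforward, as this rough sketch illustrates (the port range shown is an illustrative assumption; actual ephemeral ranges vary by implementation):

```python
# Rough arithmetic of the source port randomization fix. Before the
# patch, only the 16-bit transaction ID was unpredictable; after it,
# the ID is combined with a randomized UDP source port, multiplying
# the search space a blind attacker must cover.

TXID_SPACE = 2 ** 16          # possible 16-bit transaction IDs
PORT_POOL = 65536 - 1024      # assumed ephemeral port range, 1024-65535

before = TXID_SPACE
after = TXID_SPACE * PORT_POOL

print(f"Search space before the patch: {before:,} guesses")
print(f"Search space after the patch:  {after:,} guesses")
print(f"Blind spoofing becomes roughly {after // before:,} times harder")
```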

But if Microsoft or Cisco or any single company simply reacted to his warning by issuing a patch, that could trigger what we now know as the "zero-day effect": malicious users could deduce not only the severity of the potential problem but its mechanics, simply by reverse-engineering the fix. Then they could exploit all the still-unpatched portions of the Internet, served by manufacturers that had not yet caught up.

Wolfgang Kandek is the chief technology officer of Qualys, a vulnerability management company that works with enterprises to devise security policies and implement more secure software. Kandek is personally familiar with Kaminsky's work, and understands the scale of the problem he faced down.

"There is always the potential: You have a vulnerable piece of software, you come up with a fix for it. That's great. But this gives the attackers that didn't know that this vulnerability existed, a way to analyze it," Kandek said in an interview with BetaNews, "[to ask], 'What did they exactly fix here, and how can I, if I find the machine that does not have the fix applied, exploit that machine?...If I find that software somewhere else, and that hole's still there, I might be able to exploit it this way.' So they can then work on an automatic exploit for that."

There's a cottage industry now based around malicious users who discover new security holes through the typical hunt-and-peck method. But floating on the outskirts, and probably much larger in number, are the less sophisticated, self-proclaimed "hackers" who wait for legitimate security engineers -- people like Kaminsky -- to discover security holes before anyone else does. Typically, when those engineers sound the alarm and a manufacturer like Microsoft or Cisco responds, the response itself fires the starting gun for a race to find out what was fixed.

"So I think what Dan wanted to avoid here was this situation; he wanted to enable the majority of vendors to release this patch at the same time, to make that window where it could be analyzed much smaller. And he's actually said publicly that they have spent some time on making the way they fix it difficult to re-engineer; not only fix it, but also, how can you make it difficult for somebody to look at the fix and understand what the exploit was."

Kaminsky avoided the nightmare scenario by convincing companies including Cisco and Microsoft to collaborate on a major fix, and to do something they'd never done before: withhold the details of that fix from the general public in advance, so that they could all release their respective pieces of it literally on the same day.

As Kaminsky wrote on his Web site, "After an enormous and secret effort, we've got fixes for all major platforms, all out on the same day. This has not happened before. Everything is genuinely under control."

(Or at least close to the same day: Apple's round of fixes to BIND were announced just last week.)

