Microsoft's 'trust' chief steers his company back toward Trusted model
The first time Microsoft launched a Trustworthy Computing initiative, it was met with skepticism, especially with the way Bill Gates played it up. But six years later, a key Microsoft executive suggests it may be time to revisit the subject.
In a surprisingly frank white paper released this morning, the man in charge of Microsoft's Trustworthy Computing strategy, Corporate Vice President Scott Charney, writes that his company's first two major initiatives toward providing greater security for software and Internet users fell short of their intended goals. A third initiative just now getting under way, he concedes, may still fail to completely address the problem of ensuring consumer safety and privacy.
A US Justice Dept. official before joining Microsoft, Charney writes in "Establishing End-to-end Trust" (PDF available here) that a key goal of trustworthy computing is still to reliably authenticate users and the companies they represent, especially in business transactions. But the rapidly evolving nature of social computing, coupled with consumers' curious demand not just for privacy but for anonymity, has thrown the biggest monkey wrench into the system.
"Ensuring that people can be identified raises the most complex social, political, and economic issues, with the No. 1 issue being privacy," Charney writes. "The concern is twofold: (1) If authenticated identity is required to engage in Internet activity, anonymity and the benefits that anonymity provides, will be reduced; and (2) authenticated identifiers may be aggregated and analyzed, thus facilitating profiling."
Identification is critical in the Trusted Computing model that Charney represents and promotes, because every computing transaction in this model, whether on the Internet or locally, must take place between people or components that can identify themselves, and whose stated identities can withstand a reasonable challenge. In a hypothetical world where every component does identify itself according to protocol, it can be assumed that the first task of a malicious user will be to defeat that system of identification, perhaps by spoofing someone or something else, or perhaps by bypassing the identification step altogether.
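The kind of identity challenge described above is commonly implemented as a challenge-response exchange. The following is a minimal, hypothetical sketch of that general technique, using a shared secret and HMAC; it illustrates the principle only, and is not drawn from Microsoft's design.

```python
# Hypothetical challenge-response sketch: the verifier issues a fresh random
# challenge, and the claimant proves its identity by returning an HMAC of
# that challenge keyed with a shared secret. All names here are illustrative.
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    """Verifier generates a fresh, unpredictable challenge (nonce)."""
    return os.urandom(16)

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """Claimant proves knowledge of the secret without revealing it."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier recomputes the expected response; constant-time compare."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A spoofer without the secret cannot produce a valid response, and
# replaying an old response fails because each challenge is fresh.
secret = os.urandom(32)
challenge = issue_challenge()
assert verify(secret, challenge, respond(secret, challenge))
assert not verify(secret, challenge, respond(os.urandom(32), challenge))
```

The fresh nonce is what makes spoofing-by-replay fail: an attacker who captured an earlier valid response cannot reuse it against a new challenge.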
Over the last decade, Microsoft has had to play catch-up in this department, mainly because the distributed computing model it wanted to deploy first over the network -- the Component Object Model -- failed to include any rigorous method of authentication. Since then, the company has moved in stages toward more thoughtful practices, but even the act of migration has exposed vulnerabilities that malicious users cannot resist exploiting.
Microsoft's first major initiative in this regard, Charney writes, was its "Secure by Design" principle, the current version of which is called SD3 -- shorthand for "Secure by Design, Secure by Default, Secure in Deployment." The idea was to stop shipping software whose most exploitable features were turned on by default.
Microsoft Corporate Vice President Scott Charney
"There was, in fact, nothing wrong with this strategy as a foundation, and SD3 remains important today," Charney wrote. "The problem with SD3 lies in its inherent limitations. Even if products are engineered to be 'Secure by Design' and vulnerability counts continue to drop, it is indisputable that the number of vulnerabilities in large and complex products (several of which are likely to be installed on a single system) cannot be reduced to zero in the foreseeable future. 'Secure by Default' is inherently limited because the attack surface can only be reduced, not eliminated, and features are created precisely because a broad set of users need the feature activated. Similarly, many legacy software applications require the user to run as 'admin,' thus undermining some of the intended security benefits of running as a standard user."
In addition, he noted, the practice of releasing patches in regular batches (with a nod to Dr. Seuss) actually helped spawn a cottage industry in reverse-engineering: the patches provide a road map to the problem, when a malicious user holds them up to a mirror.
So Microsoft moved on to its second initiative, "Defense-in-Depth." That had a lot to do with strengthening Windows' firewalls and turning off more features by default. But after users have seen all those warnings for the umpteenth time, Charney writes, "it remains true that users will click on malicious attachments sent to them from unknown sources."
And while it's nice to have reduced the attack surface, on the surface, by turning off volatile features by default, he notes that those features were developed in the first place so that they could be turned on. The off switch alone isn't enough.
What Charney advocates as a next course of action for Microsoft is a move back toward a bolder, more daring vision of security that it backed away from when "Secure by Design" was launched: a vision that incorporates more of the Trusted Computing principles that Chairman Bill Gates first advocated back in early 2002. Those measures were met with widespread skepticism as the whole notion of "Trusted" or "Trustworthy Computing" coupled with the Microsoft brand sparked notions of Big Brother, or of turning over control of users' hard drives to Hollywood studios. Eventually the negative publicity was so bad that its former Trusted Platform partner Intel steered clear of Microsoft's strategy in 2006.
In light of the incomplete success of Microsoft's first two public initiatives, though, a third one dedicated to the Trusted Platform may be met more positively today, Charney implies. But the biggest issue blocking that from happening now isn't the fear of Big Brother or DRM, he believes, but the Internet-using public's simultaneous insistence upon anonymity, privacy, and openness. All three may not be able to fully coexist, he suggests.
However, in a curious argument, he proposes that anonymity in the social sense would be impossible without an infrastructural means of securely identifying the anonymous party, since it is that identification infrastructure which makes a guarantee of anonymity enforceable.
As Scott Charney writes, "Clearly, this approach will not satisfy those who see the Internet's anonymity as the ultimate protector of privacy. This may particularly be true in those cases where anonymity promotes and protects unpopular speech. But the fact remains that if we hope to reduce crime and protect privacy, we need to give users the ability to know with whom they are dealing (if they so choose) and law enforcement the capability to find bad actors.
"It is also important to remember," he continues, "that there are multiple privacy interests at stake here; for example, in the e-mail context it is not just the sender of a communication who may have a privacy interest, but the recipient may wish to be left alone. Indeed, any regime should not only seek to provide greater authentication to those that want to provide it or consume it, but also provide anonymity for those who wish to engage in anonymous activities. Users should be able to choose to send anonymous communications, and users should be able to choose to receive mail only from known sources."
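The user choice Charney describes -- accepting mail only from known sources while still permitting anonymous communication for those who opt in -- amounts to a recipient-side policy check. A minimal, hypothetical sketch (names and structure are illustrative, not from the white paper):

```python
# Illustrative recipient-side policy: known, authenticated senders always
# pass; anonymous mail passes only if the recipient has opted in to it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    sender: Optional[str]  # None models an anonymous sender
    body: str

def accept(msg: Message, known_senders: set, allow_anonymous: bool) -> bool:
    """Apply the recipient's chosen policy to an incoming message."""
    if msg.sender is None:
        return allow_anonymous
    return msg.sender in known_senders

known = {"alice@example.com"}
# Known sender passes even under a strict policy; an unknown one does not.
assert accept(Message("alice@example.com", "hi"), known, allow_anonymous=False)
assert not accept(Message("mallory@example.com", "hi"), known, allow_anonymous=False)
# Anonymous mail is accepted only by recipients who chose to receive it.
assert accept(Message(None, "anonymous tip"), known, allow_anonymous=True)
```

Note that such a policy presupposes exactly the infrastructure Charney argues for: "known" senders are only meaningful if their identities can be authenticated in the first place.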