Seagate: The Hard Drive, Reconsidered

The Distrust Problem

If it weren't for George Orwell, the name "Trusted Computing Group" could very well be taken at face value, and the concepts it proposes might not be met by the general public with such instantaneous skepticism. Too much about computing and internetworking has trained its users to distrust, at least in attitude if not always in action.

While these users complain of the continued deficiencies and vulnerabilities of the software they use, often placing blame where it is due, they profess almost in the same breath that one of the hallmarks of Internet computing is the user's right to anonymity.

Yet it is this anonymity -- this need for both users and their processes not to call themselves out for what they are -- that is at the root of the distrust problem. A multi-billion-dollar industry has sprouted from the need for some part of the computer to be able to explicitly track down and identify processes that don't properly identify themselves, in order to isolate malware and stop its spread. And week after week, all eyes turn to Microsoft to see whether some newfound vulnerability will be adequately acknowledged, and whether yet another fix is forthcoming.

The cycle has become so commonplace that it's almost comfortable. "Patch Tuesday" is becoming a part of information workers' monthly itineraries, almost like a regular staff lunch. Fewer people, as time goes on, foresee an end to this cycle. Microsoft's ability to continually produce patches rather than architectural remedies seems stretched to the limit, though users are now becoming resigned to the idea that this cycle is permanent.

"MS bringing up all these methods to crackdown on piracy makes me laugh," writes one BetaNews reader, in response to our recent story on Office activation. "They should realize that whatever method they implement, pirates will always crack it. If it's makable, it's crackable."

The picture many have in their minds of their network connections probably resembles Al Gore's metaphorical superhighway, where security measures serve merely as larger and thicker blockades. It's just a matter of time, they conclude, before they succumb to the incoming barrage of artillery. Up goes another set of barricades, like a replenished wave of fortresses in "Space Invaders," and the countdown clock is merely reset.

The permanent solution to the security problem, if there is one, lies with the fundamental re-architecture of the computer itself -- so concluded a panel of security experts six years ago, during a session I chaired for COMDEX. That redesign, conceivably implemented in stages, would institute the principle of authentication, whereby both processes and users identify themselves and allow those identities to be confirmed.

The tools of that confirmation, through the wonder of cryptography, would be used to set up encrypted channels of communication between processes and devices that only the authenticated entities could make sense of. If they weren't who they said they were, all they'd see during the interaction would be garbage.
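The confirmation step can be sketched in miniature. The following is a hypothetical illustration only, using a simple challenge-response over a shared secret (Python's standard `hmac` module) rather than the certificate machinery the article goes on to describe; the names `DEVICE_KEY`, `respond`, and `authenticate` are invented for the example. A party that doesn't hold the right key produces a response that is, from the verifier's side, garbage.

```python
import hmac
import hashlib
import secrets

# Hypothetical shared secret provisioned to the legitimate party.
DEVICE_KEY = secrets.token_bytes(32)

def respond(key: bytes, challenge: bytes) -> bytes:
    """A party proves it holds the key by HMAC-ing a fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def authenticate(claimed_response: bytes, challenge: bytes) -> bool:
    """The verifier recomputes the expected response and compares
    in constant time, so mismatches leak nothing useful."""
    expected = respond(DEVICE_KEY, challenge)
    return hmac.compare_digest(claimed_response, expected)

challenge = secrets.token_bytes(16)
genuine = respond(DEVICE_KEY, challenge)
impostor = respond(secrets.token_bytes(32), challenge)  # wrong key

print(authenticate(genuine, challenge))   # True
print(authenticate(impostor, challenge))  # False
```

In practice the same confirmed secret would then seed the encrypted channel itself; the point of the sketch is only that verification fails closed for anyone who isn't who they say they are.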

But identity -- as any alcohol abuser under the age of 21 knows -- can be falsified. It shouldn't really matter, then, whether the identity of a process is established with 16 digits or 16 million; if a process matches the pattern, it should theoretically pass muster.

This is where the principle of trust enters the picture, and the word starts to lose its Orwellian ambiance. In an authentication system using certificates, the default state is distrust. For a process's or a user's identity claim to be verified, the presented certificate is checked with a third party. If that third party's own authenticity can be challenged, the claim is passed up the chain to a more trustworthy party. This chain of trust is the key component of new architectures for both hardware and software.

A certificate authority (CA), responsible for validating an identity claim, might not be trusted because it could be spoofed, hacked, or otherwise compromised into vouching for an identity without a proper check. The way to avoid this problem without creating an infinite chain of distrust is to plant the root of trust -- the CA that cannot be spoofed, even theoretically -- in a location that is completely secure and impenetrable from the network.
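The chain-walking logic itself is simple, and can be sketched as a toy model. Everything here is hypothetical -- the subject names, the `certs` table standing in for issuer links, and the `chain_to_root` helper -- and real validation would of course check signatures cryptographically rather than by table lookup. The structure, though, is the one described above: follow each issuer upward until you reach a designated anchor, and distrust anything that never gets there.

```python
# Toy issuer table: each subject maps to the party that vouches for it.
# All names are invented for illustration.
certs = {
    "app-process":     "intermediate-ca",
    "intermediate-ca": "root-ca",
    "root-ca":         "root-ca",  # self-signed root
}

# The unspoofable root of trust; only chains ending here are accepted.
TRUST_ANCHORS = {"root-ca"}

def chain_to_root(subject, max_depth=10):
    """Walk issuer links upward; return the chain if it reaches a
    trust anchor, or None if it dead-ends (default state: distrust)."""
    chain = []
    for _ in range(max_depth):
        chain.append(subject)
        if subject in TRUST_ANCHORS:
            return chain
        issuer = certs.get(subject)
        if issuer is None or issuer == subject:
            return None  # unknown party, or self-signed but untrusted
        subject = issuer
    return None  # chain too long; refuse rather than loop forever

print(chain_to_root("app-process"))    # ['app-process', 'intermediate-ca', 'root-ca']
print(chain_to_root("rogue-process"))  # None
```

Note that the anchor set is the one piece the algorithm cannot derive for itself -- it must be planted somewhere an attacker cannot rewrite it, which is exactly the problem the article turns to next.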

Perhaps the only secure location in a computer system that is impenetrable from the network is within a component that isn't even logically connected to the network. Almost every PC has one. It is, from the network's vantage point, a sarcophagus; yet inside of it is a complete, self-contained computer system, independent of the CPU and the system bus, with its own processor, its own memory, and its own self-contained operating environment.

Welcome inside the hard drive.


