Engineer Demonstrates iPhone Hack After Acknowledging Apple's Patch
Though security consultant Charlie Miller went ahead late yesterday afternoon with a scheduled demonstration of how he and his colleagues discovered a way to hack the Apple iPhone and obtain personally identifiable information from it, both he and his company's Web site acknowledged that Apple had already patched the vulnerability.
The demonstration took place at the Black Hat conference in Las Vegas, where Miller apparently used an unpatched iPhone. Though the specific hole Miller discovered has evidently been sealed, the techniques he demonstrated for uncovering it, and the information he learned along the way, indicate that this may not be the only hole to emerge from what could be characterized as a design flaw.
The problem is an ironic one, according to Miller's slides: WebKit, the open source library upon which the iPhone's Safari browser is based, itself uses unpatched code - specifically, a much earlier version of the Perl Compatible Regular Expressions library (PCRE 6.2) than the current one (7.2). Since the change logs and bug fixes for open source projects are...well, open, it's a simple matter for someone to visit a Web site and read up on the fixes that a library's own developers made to its software in the process of updating it.
From there, Miller showed, someone out to hack a device that still uses the older, unfixed version of the library can simply employ "fuzzing" - a trial-and-error method of feeding sequences of characters to a vulnerable portion of the code - to find out which sequences trip it up.
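To illustrate the idea in miniature - this is a hypothetical sketch, not Miller's actual harness, and it uses Python's built-in re module as a stand-in for PCRE, which in practice would be fuzzed through its C interface - a fuzzer simply generates random pattern strings and watches for anything other than a clean rejection:

    import random
    import re
    import string

    # Characters likely to exercise a regex engine's parser: metacharacters,
    # quantifiers, class brackets, plus ordinary text.
    ALPHABET = string.ascii_letters + string.digits + r"[](){}*+?|^$\.-,:"

    def random_pattern(max_len=40):
        """Build a random, frequently malformed, regex pattern string."""
        length = random.randint(1, max_len)
        return "".join(random.choice(ALPHABET) for _ in range(length))

    def fuzz(iterations=100_000):
        for i in range(iterations):
            pattern = random_pattern()
            try:
                re.compile(pattern)
            except re.error:
                continue  # cleanly rejected as malformed: the expected path
            except Exception as exc:
                # Any other failure marks an input that "annoys" the engine.
                print(f"[{i}] {type(exc).__name__} on pattern {pattern!r}")

    if __name__ == "__main__":
        fuzz()

A real harness would run the target library in a separate process and watch for crashes, since memory corruption in C code typically kills the process outright rather than raising a catchable error.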
In the case of PCRE, the change log entry describing the discrepancy that developers addressed in the fix was dated July 2006. Miller pointed out that this could lead to vulnerabilities in Mac OS X versions of Safari as well.
But the iPhone's problems in this case were exacerbated by two fundamental design decisions, he went on, which cannot be changed with mere patches. One is that code is executable from the stack - the region of memory that's supposed to hold data, not executable instructions.
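That property is easy to probe on systems that expose their memory map. The sketch below assumes a Linux host, where /proc/self/maps lists the permissions of each mapping (the iPhone exposes no such interface; this only illustrates the check):

    def stack_is_executable():
        """Report whether the process's stack mapping has the execute bit."""
        with open("/proc/self/maps") as maps:
            for line in maps:
                # Format: address-range perms offset dev inode [pathname]
                fields = line.split()
                if fields and fields[-1] == "[stack]":
                    return "x" in fields[1]  # perms look like "rw-p" or "rwxp"
        return False

    if __name__ == "__main__":
        print("stack executable:", stack_is_executable())

On a desktop hardened with a non-executable (NX) stack this prints False; Miller's point was that the 2007 iPhone had no such protection, so data an attacker plants on the stack can be run directly as code.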
The other is that internal code addresses are not randomized at boot time, so it's possible to execute system code by jumping to fixed memory addresses - addresses that randomization could have concealed.
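The effect of randomization - or its absence - can be seen with another small sketch, again assuming a Linux host with glibc (the library name and function here are illustrative):

    import ctypes

    # Load the C library and resolve the runtime address of one routine.
    libc = ctypes.CDLL("libc.so.6")
    addr = ctypes.cast(libc.system, ctypes.c_void_p).value

    print(f"system() loaded at {addr:#x}")

With address randomization enabled, each run prints a different address; without it, as on the iPhone Miller examined, the address never changes, so an exploit can simply hard-code the locations of the system routines it wants to call.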
Last May, Miller raised some eyebrows with the release of a white paper for a Carnegie Mellon University security workshop entitled "The Legitimate Vulnerability Market: Inside the Secretive World of 0-day Exploit Sales" (PDF available here).
As it turned out, he demonstrated through experiments in buying and selling exploits on the black market that such a market has yet to become "legitimate," because there is no commoditization process for establishing the relative value of one exploit for sale with respect to another.