IE9 Tech Preview beats latest Firefox alpha, as Chrome 5 clobbers King Opera
How long ago would you have thought it absolutely impossible for the slowest Windows Web browser currently under development to be coming...from Mozilla? Granted, the Internet Explorer 9 Tech Preview isn't a real browser (typically, these things need their own address bars and Back buttons). But unless Mozilla gets its JaegerMonkeys in a row in time for Microsoft to debut IE9 with real features like buttons, the number two reason users cite for switching from Internet Explorer...will be wiped off the map.
Last week's latest daily preview build of Firefox 3.7 Alpha 4, meanwhile, scored a 10.76 using the same tests on the same machine. The new round of Alpha 4 previews represents Mozilla's fastest browser to date, well ahead of the stable Firefox release's score of 9.08.
That's not the only headline to emerge from the latest tests: Although it had appeared to be holding back in recent weeks, Google's Chrome 5 team managed to find another gear (what would you call it? Eighth gear? Ninth?). On both our old tests and the new, Chrome snatched back a handful of speed points, posting a total score of 23.32. The latest stable Opera 10.51, released just today -- including bug fixes to the hurriedly released Opera 10.5 -- scored a 23.17 on our new tests, still a big improvement over the previous stable version's score of 21.85. Stable Chrome 4's score of 20.05, by comparison, trails Opera's.
Why a new test suite, again?
As Web "browsers" evolve into Web application platforms, and as Web "pages" evolve into applications, it becomes more and more critical for us to understand the differences between the browsers as though they were machines. Readers have told me recently that it might be unfair to keep comparing IE to Chrome, for instance, because (in their words) folks tend to use IE just to browse pages, whereas they may be using Chrome to run Google Apps. For those readers, continuing to declare Google twenty times greater, or so, than IE is like saying over and over again that a tractor is more powerful than a lawnmower. Sure it is. We get it already. But that's not to say lawnmowers don't have their place.
The average BlackBerry has a far slower processor than the common iPhone. That fact is obvious whenever you zoom in and out of a page (the word "zoom" doesn't really apply to most BlackBerrys). And up to now, the standard browser on a BlackBerry has been, in my totally unbiased opinion, terrible. (If you're like me, you've replaced it with Opera Mini.) But that doesn't mean the BlackBerry is useless or even inefficient for what it is capable of doing, when it does it well.
Efficiency, for me, is the capability to find another gear and crank out greater work product when the workload increases. You hear marketing folks misuse the word scalability; to me, it's the capability to get more efficient as work gets tougher. Theoretically, if a processor takes time x to do a job 100 times, you can expect time consumed to grow to 10x when the workload increases to 1,000. If it becomes 15x instead, that's bad.
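That arithmetic can be captured in a tiny helper. This is just an illustrative sketch, not code from the Betanews suite; the function name is mine:

```javascript
// scalingFactor: how much faster (or slower) time grew than the workload did.
// A value of 1.0 means perfectly linear scaling; above 1.0 means the
// browser got *less* efficient as the work got tougher.
function scalingFactor(time1, workload1, time2, workload2) {
  return (time2 / time1) / (workload2 / workload1);
}

// The example from the text: 100 jobs take time x; 1,000 jobs take 15x.
// The workload grew 10x but time grew 15x -- a scaling factor of 1.5. Bad.
scalingFactor(1, 100, 15, 1000); // 1.5
```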
When Opera Software last month told me its developers' opinion of the relative efficiency of one of the tests I had been using in our Relative Performance Index suite, I decided to investigate whether they were right. They were. Months earlier, I had resurrected an old test battery used by magazines in the Netscape days, which spun a single instruction a few thousand times and measured time elapsed. Well, these days, when a single instruction does nothing, and a thousand or a million repetitions of that instruction do nothing, just-in-time compilers see that it does nothing and, quite efficiently, "compile" that instruction to...nothing. So when it takes no time at all to do nothing, I frankly shouldn't be all that amazed.
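The trap Opera's developers pointed out can be sketched like this -- the function names are mine, and this is not code from the old magazine battery:

```javascript
// A loop whose result is never used: a JIT can prove it has no observable
// effect and eliminate it entirely, so timing it measures nothing at all.
function deadLoop(n) {
  for (let i = 0; i < n; i++) {
    const x = i * 2; // computed, then discarded
  }
}

// A loop whose accumulated result escapes the function cannot be removed;
// the engine must actually account for the work before returning the value.
function liveLoop(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    sum += i * 2;
  }
  return sum; // returning the result keeps the loop "alive"
}
```

A benchmark that times deadLoop at a thousand iterations and again at a million may report nearly identical elapsed times -- not because the browser is infinitely scalable, but because the work was optimized out of existence.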
That's when I became more curious about the way JIT compilers work. If the "digestibility" of a sequence of instructions depends on its sameness, then a more appropriate test of a JIT's efficiency would be to throw different algorithms at it -- algorithms whose relative efficiency sometimes depends on being distinguishable rather than uniform -- then throw varying workloads of the same test at it (100, 1,000, 10,000, up to ten million iterations) and see how browsers perform under the stress. Will they scale up to meet new demands? Will they opt for easier breakdowns, or for faster run time?
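A harness for that kind of test might look roughly like the following. This is a hypothetical sketch of the approach, not the actual suite; all names are mine:

```javascript
// Run one algorithm at several workloads and report how elapsed time grew
// relative to the workload, so sub- or super-linear scaling stands out.
function measureScaling(fn, workloads) {
  const results = [];
  let baseline = null;
  for (const n of workloads) {
    const start = Date.now();
    fn(n);
    const elapsed = Date.now() - start;
    if (baseline === null) baseline = { n, elapsed: Math.max(elapsed, 1) };
    results.push({
      workload: n,
      elapsedMs: elapsed,
      // ratio of time growth to workload growth; ~1.0 means linear scaling
      scaling: (Math.max(elapsed, 1) / baseline.elapsed) / (n / baseline.n),
    });
  }
  return results;
}

// Example payload: the result is returned and used, so the JIT cannot
// simply optimize the loop away as dead code.
function sumOfSquares(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i * i;
  return total;
}

const report = measureScaling(sumOfSquares, [100, 1000, 10000, 10000000]);
```

A browser whose `scaling` column stays near 1.0 as the workload climbs toward ten million iterations is holding its gear; one whose column climbs well above 1.0 is buckling under the stress.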
That's the inspiration behind the latest battery of tests in the Betanews suite: a way to see not only how fast a browser runs and how fast it can become, but why. Back in the 1980s and '90s when I used to test BASIC compilers, I used some common iterative algorithms and math tests, and I published the results under my old pseudonym, "D. F. Scott." So in honor of my past life, I've christened this new battery "DFScale," a test of speed and scalability under varying conditions.
Next: What scalability teaches us about browsers we didn't know before...