Betanews Comprehensive Relative Performance Index 2.2: How it works and why

The other five elements of CRPI 2.2

  • Nontroppo table rendering test. As the field has long since demonstrated, CSS is the better platform for rendering complex, magazine-style page layouts. Still, a great many of the world's Web pages continue to use HTML's old <TABLE> element (created to render data in formal tables) for dividing pages into grids. We heard from you that as long as IE7 matters (it is our index browser, after all), old-style table rendering should still be tested. And we concur.

    The creator of our CSS rendering test built a similar platform that measures not only how long a browser takes to render a huge table, but how soon that table's individual cells (<TD> elements) become available for manipulation. From a single starting mark, the test times how long the browser takes to begin rendering the table and how long it takes to finish that rendering, producing two index scores. It also times the loading of the page itself, for a third score. Then it re-renders the contents of the table five times and averages the elapsed times, for a fourth. The four items are then averaged together for a cumulative score. (A bare-bones sketch of this style of timing harness appears after this list.)

  • Nontroppo standard browser load test. (That Nontroppo gets around, eh?) This may very well be the most generally boring test of the suite: It's an extremely ordinary page with ordinary illustrations, followed by a block full of nested <DIV> elements. But it allows us to take away all the variable elements and concentrate on straight rendering and relative load times, especially when we launch the page locally. It produces a document load time, a document-plus-images load time, a DOM load time, and a first access time, each of which is compared to IE7 and then averaged into the battery's score.
  • Canvas rendering test. The HTML canvas element gives JavaScript a local drawing surface where client-side instructions can plot complex geometry or even render detailed, animated text, all without communicating with the server. The Web page contains all the instructions the object needs; the browser downloads them, and the contents are plotted locally. We discovered on the blog of Web developer Ernest Delgado a personal test originally meant to demonstrate how much faster the Canvas object was than Vector Markup Language in Internet Explorer, or Scalable Vector Graphics in Firefox. We'd make use of the VML and SVG tests ourselves if Apple's Safari -- in the interest of making things faster -- hadn't implemented a system that replaces them with Canvas by default.

    The Canvas rendering test by Ernest Delgado, appropriated by Betanews for CRPI 2.2.

    So we use Delgado's rendering test to grab two sets of plot points from Yahoo's database -- one outlining the 48 contiguous United States, the other outlining Alaska complete with all its islands. Those plot points are rendered on top of Google Maps projections of the mainland US and Alaska at equal scale, and the two renderings are timed separately. Those times are compared against IE7, and the two results are averaged with one another for a final score. (A stripped-down sketch of timing a canvas plot this way appears after this list.)

  • Testnet.World JS performance test. A decade ago now, someone tried to build a respectable JavaScript benchmark suite by estimating how long the engine took to execute common math instructions. To make the estimate measurable, each test ran a thousand or so iterations; on today's computers, they have to run 1,000,000. At the time the test was released, it was criticized by many who argued that timing single instructions over and over didn't represent overall performance. It doesn't, but it does represent the efficiency of small parts of the engine, and that's precisely the part we needed to fill in after Sean P. Kane changed his benchmark suite. So this battery -- a rewritten version of the code at this address -- times how long it takes the engine to process 15 common keywords and regular functions, from if branches to for loops to string concatenations, one million times each (the basic pattern is sketched after this list). Results are rendered in milliseconds, and scores in each heat are compared to those from IE7 in Vista SP2. The result is 15 per-heat relative index scores, which we then average to produce a final score for the battery.
  • Acid3 standards compliance test. The function of the Acid3 test has changed dramatically as most of our browsers have become fully compliant. IE7 scored only 12% on Acid3, and IE8 scored 20%; but today, most of the alternative browsers are at 100% compliance, with Firefox at 93% and Firefox 3.7 Alpha 1 at 96%. So it means less now than it did in earlier months for Acid3 to yield an index score of 8.33 -- the relative score any fully compliant browser earns simply because IE7, our index browser, manages only 12% (100 ÷ 12 ≈ 8.33). Now that cumulative index scores are closer to 20, having an eight-and-a-third in the mix has become a deadweight rather than a reward.

    So now we're making Acid3 count in a different way: for the other batteries that have to do with rendering (all three Nontroppos and TestCube 3D), plus the native JavaScript library portion of the SlickSpeed test, we multiply the index score by the Acid3 percentage. As a result, any non-compliance with the Web Standards Project's assessment is applied as a penalty against those rendering scores; the arithmetic is sketched after this list. Today, only the Mozilla and Microsoft browsers are affected by this penalty, and Firefox only slightly -- all the others are unaffected.
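
For readers who want to picture how the Nontroppo-style timing batteries work, here is a bare-bones sketch of the general technique -- our own illustration, not the actual Nontroppo code. It marks one starting point, builds a big table with script, notes when the table's cells become reachable, re-renders the table five times for an average, and waits for the full page load. The container ID, the table dimensions, and the Date-based stopwatch are all assumptions made purely for illustration.

    // Minimal sketch of a table-rendering timer (not the actual Nontroppo code).
    // Assumes the page contains an empty container: <div id="target"></div>.
    var startMark = new Date().getTime();   // the single mark every interval measures from

    function buildTable(rows, cols) {
      var html = '<table>';
      for (var r = 0; r < rows; r++) {
        html += '<tr>';
        for (var c = 0; c < cols; c++) {
          html += '<td>' + (r * cols + c) + '</td>';
        }
        html += '</tr>';
      }
      html += '</table>';
      document.getElementById('target').innerHTML = html;
    }

    buildTable(500, 20);

    // One score: how soon the table's cells are available for manipulation.
    var firstCell = document.getElementsByTagName('td')[0];
    firstCell.firstChild.nodeValue = 'ready';
    var cellsReadyMs = new Date().getTime() - startMark;

    // Another score: re-render the table five times and average the elapsed time.
    var total = 0;
    for (var i = 0; i < 5; i++) {
      var t0 = new Date().getTime();
      buildTable(500, 20);
      total += new Date().getTime() - t0;
    }
    var rerenderAverageMs = total / 5;

    // A further score: time until the whole page, images included, has loaded.
    window.onload = function () {
      var pageLoadMs = new Date().getTime() - startMark;
      document.title = cellsReadyMs + ' / ' + rerenderAverageMs + ' / ' + pageLoadMs + ' ms';
    };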
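
Delgado's canvas test runs on the same stopwatch principle. The rough stand-in below plots a tiny placeholder outline rather than the thousands of points the real test pulls from Yahoo's data, and the canvas ID is invented; it is only meant to show where the timer starts and stops.

    // Rough stand-in for timing a canvas plot; the outline data is a placeholder.
    // Assumes the page contains <canvas id="map" width="400" height="300"></canvas>.
    var canvas = document.getElementById('map');
    var ctx = canvas.getContext('2d');

    function plotOutline(points) {           // points: array of [x, y] pairs
      ctx.beginPath();
      ctx.moveTo(points[0][0], points[0][1]);
      for (var i = 1; i < points.length; i++) {
        ctx.lineTo(points[i][0], points[i][1]);
      }
      ctx.closePath();
      ctx.stroke();
    }

    var outline = [[10, 10], [320, 40], [280, 250], [30, 200]];  // placeholder shape

    var start = new Date().getTime();
    plotOutline(outline);                    // the real test does this for the lower 48, then Alaska
    var elapsedMs = new Date().getTime() - start;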
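
The million-iteration battery follows the classic microbenchmark pattern: wrap one tiny operation in a loop, time the whole heat, and report milliseconds. The generic sketch below shows that pattern with two sample heats of our own devising -- the helper name timeHeat is made up, and the real battery covers 15 different keywords and functions.

    // Generic microbenchmark pattern: time one small operation a million times.
    function timeHeat(label, fn) {
      var start = new Date().getTime();
      for (var i = 0; i < 1000000; i++) {
        fn(i);
      }
      var elapsed = new Date().getTime() - start;
      return label + ': ' + elapsed + ' ms';
    }

    // Two sample heats: string concatenation and an if branch.
    var results = [
      timeHeat('concat', function (i) { var s = 'a' + i; }),
      timeHeat('if', function (i) { if (i % 2) { return i; } return 0; })
    ];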
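
And here is how the relative scoring and the new Acid3 penalty fit together, using made-up numbers purely for illustration. We are assuming -- consistent with the 8.33 example above -- that a timing battery's relative index is the IE7 time divided by the test browser's time, so that IE7 itself always lands at 1.0.

    // Illustration only: the timings and the 93% Acid3 figure are hypothetical.
    function relativeIndex(ie7TimeMs, browserTimeMs) {
      // Faster than IE7 => score above 1.0; slower => below 1.0.
      return ie7TimeMs / browserTimeMs;
    }

    var rawScore = relativeIndex(900, 150);   // hypothetical heat: 6.0x faster than IE7
    var acid3Share = 0.93;                    // a browser passing 93% of Acid3
    var penalized = rawScore * acid3Share;    // rendering batteries are multiplied by this share
    // penalized ≈ 5.58: non-compliance trims the rendering score proportionally.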

Next: Our physical test platform, and why it doesn't matter...
