In the Nehalem era, it'll be AMD's HyperTransport vs. Intel's QuickPath

Just when Intel is preparing to score one of the biggest "ticks" in its "tick-tock" cadence, a rhythm designed in part to keep AMD off balance, its competition may do more than merely level the playing field, thanks to a faster point-to-point interconnect scheme.

A day before the kickoff of Intel's biggest Developer Forum since the debut of the Core microarchitecture two years ago, a consortium led by Intel's biggest competitor announced it's ready to help AMD catch up. The HyperTransport interconnect used in AMD processors -- at one time, AMD's ace in the hole -- could be significantly accelerated for future AMD motherboards, matching a new goalpost set earlier this year by Intel's upcoming Nehalem architecture.

This morning, the HyperTransport Consortium -- of which AMD is perhaps the most important member -- announced it's readying version 3.1 of its specification, which it says could enable 6.4 gigatransfers per second (GT/sec) -- that is, 6.4 billion transfers per second between the CPU and memory. Piling theory on top of theory, that could produce a motherboard whose CPU and memory share an aggregate peak transfer bandwidth of 51.2 gigabytes per second (GB/sec).
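The arithmetic behind those headline figures is straightforward to check. A minimal sketch, assuming the link parameters the consortium's peak numbers imply (a 32-bit-wide link, two transfers per clock, both directions counted):

```python
def ht_aggregate_gbs(clock_ghz, link_bits=32, directions=2):
    """Peak aggregate bandwidth in GB/s for a double-pumped link."""
    transfers_per_sec = clock_ghz * 2       # two transfers per clock tick (DDR)
    bytes_per_transfer = link_bits / 8      # 32 bits = 4 bytes
    return transfers_per_sec * bytes_per_transfer * directions

print(ht_aggregate_gbs(3.2))  # 3.2 GHz -> 6.4 GT/s -> 51.2 GB/s aggregate
```

By the same formula, HyperTransport 3.0's 2.6 GHz ceiling works out to 5.2 GT/sec and 41.6 GB/sec aggregate, which is the gap version 3.1 is meant to close.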


To make this feasible, the clock speed for HyperTransport 3.1 will be hiked up 23% from the current maximum, from 2.6 to 3.2 GHz.

It was HyperTransport that enabled AMD to remove its processors' dependence upon the front-side bus (FSB), the connection between the CPU and the rest of the system that runs through a separate chip housing the memory controller. Without the FSB, not only could memory control be moved onto the CPU directly, but the path between the CPU and memory could be made direct and very, very fast.

Tomorrow, we'll learn more about how Intel's Nehalem architecture, due to be introduced first on a desktop platform called Core i7, will use a similar concept -- albeit eight years after AMD -- which Intel calls QuickPath Interconnect. It, too, will provide a point-to-point interconnection between the CPU and memory. But as that company hinted back in March, QuickPath could (maybe) provide memory bus bandwidth of "up to" 25 GB/sec -- a figure that appears to presume a 16-bit data payload per transfer in each direction, with both directions counted -- at 6.4 GT/sec, exactly the HyperTransport Consortium's number this morning.

The key phrase in Intel's literature is "up to" -- that could mean 25 GB/sec is a maximum or peak rate, rather than a sustained one.
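Working backward from Intel's figure shows why the two announcements land on the same transfer rate. A rough sketch, assuming a 16-bit data payload per direction (Intel's literature at this point does not spell out the link width, so treat these parameters as an inference):

```python
def qpi_gbs(gt_per_s=6.4, data_bits=16, directions=2):
    """Aggregate bandwidth in GB/s for an assumed QuickPath-style link."""
    return gt_per_s * data_bits / 8 * directions

print(qpi_gbs())  # 6.4 GT/s x 2 bytes x 2 directions = 25.6 GB/s
```

Which is to say: the "up to 25 GB/sec" claim and the consortium's 6.4 GT/sec are the same speed expressed through different link widths, not evidence that one bus is inherently faster.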

For comparison: PCI Express is the most common expansion bus used in PCs today. Version 1.x signals at 2.5 GT/sec per lane, which after the bus's 8b/10b encoding overhead works out to about 250 MB/sec of usable bandwidth per lane in each direction; a full 16-lane link, counting both directions, should theoretically enable as much as 8 GB/sec. The PCI Express 2.0 specification doubles the signaling rate to, shall we say, "up to" 5.0 GT/sec per lane.
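The per-lane and per-link figures reconcile once encoding overhead is accounted for. A minimal sketch of that arithmetic (the 8b/10b scheme spends 10 line bits to carry each data byte):

```python
def pcie_lane_mbs(signal_gtps=2.5):
    """Usable MB/s per lane per direction, after 8b/10b encoding.

    8b/10b sends 10 line bits per 8 data bits, so each data byte
    costs 10 line bits: usable bytes/s = line rate / 10.
    """
    return signal_gtps * 1000 / 10

per_lane = pcie_lane_mbs()               # 250 MB/s per lane, one direction
x16_both_dirs = per_lane * 16 * 2 / 1000 # 16 lanes, both directions, in GB/s
print(per_lane, x16_both_dirs)
```

So the modest-sounding 250 MB/sec and the impressive-sounding 8 GB/sec describe the same PCI Express 1.x silicon, just counted differently.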

Now, on a front-side bus system, DDR2 memory starts from a base clock -- 200 MHz for DDR2-800, 266 MHz for DDR2-1066 -- which a multiplier raises to the I/O bus rate; data then moves on both edges of that I/O clock, boosting the effective transfer rate further. While DDR2-1066 memory does exist and is feasible, motherboards tend not to support that high a multiplier, so DDR2-800 is more common. Meanwhile, DDR3 memory pushes the achievable clock rates higher still, giving the multipliers that much more headroom. Rambus, which reluctantly embraced the DDR technique after the failure of its own RDRAM, claims it's designing its own point-to-point memory bus that enables a 32x multiplier for 16 gigabit per second (Gbps) signaling.
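The DDR naming convention encodes this arithmetic directly. A minimal sketch, assuming the usual convention that a module's number is its transfer rate in megatransfers per second:

```python
def ddr_mt_per_s(base_clock_mhz, io_multiplier=2):
    """Effective transfer rate (MT/s) for a DDR2-style module."""
    io_clock = base_clock_mhz * io_multiplier  # I/O bus runs faster than base
    return io_clock * 2                        # transfers on both clock edges

print(ddr_mt_per_s(200))  # DDR2-800: 200 MHz base clock
print(ddr_mt_per_s(266))  # DDR2-1066's base clock is really 266.67 MHz,
                          # so 266 here yields 1064; the name rounds up
```

The "multiplier" debate among motherboard makers, then, is really a question of how far the I/O clock can be pushed above the base clock before signal integrity gives out.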

So to make it a possible three-way race in the interconnect division, the HyperTransport Consortium today also announced it is publishing a new specification for its slot interconnect, called HTX3, that could conceivably replace the PCI Express expansion bus on PC motherboards entirely. As a result, graphics cards could find their GPUs linked to the CPU using the same point-to-point interconnect that links the CPU to memory.

