AMD-ATI to Offer CPU/GPU Combo

If there was any doubt that CPU maker AMD’s principal reason for acquiring graphics chip powerhouse ATI was to build a mobile computing platform that would rival Intel’s Centrino, it was erased this morning with AMD’s announcement of a platform project it’s currently calling “Fusion.”

Ostensibly, the purpose of this morning’s AMD statement was to announce that it had completed its merger with ATI. But as a visit to ATI's Web site makes plain, AMD is firmly in the driver’s seat, although the ATI brand will apparently continue to adorn Radeon graphics cards for the near future.

But this morning’s comment from AMD’s CTO, Phil Hester, fires a broadside squarely at Intel’s design philosophy of multicore schemes in multiples of two:

“In this increasingly diverse x86 computing environment, simply adding more CPU cores to a baseline architecture will not be enough,” said Hester. “As x86 scales from palmtops to petaFLOPS, modular processor designs leveraging both CPU and GPU compute capabilities will be essential in meeting the requirements of computing in 2008 and beyond.”

In other words, AMD’s future “Fusion” platform, which the company says could be ready by late 2008 or early 2009, won’t just be an Athlon and a Radeon sharing the same silicon. The merged company will be looking in earnest at the potential benefits of pairing multiple pipelines with multiple cores, as a new approach to parallelism in computing tasks.

When the two companies first announced their merger intentions last July, both sides were using strange new hybrid almost-terms like “GP-CPU” to describe research they had read about, if not actually participated in; even then, their willingness to head down this new architectural road seemed more tentative than definitive.

Three months later, with analysts having spent much of that time discussing the real possibility that Intel has recaptured market share momentum, the new AMD seems more willing to embrace a development path that would clearly distinguish it from its core competitor.

Almost from the day pipeline processing was added to the first hardware-assisted rendering systems for PCs, researchers (mostly at universities, not manufacturers) have been exploring the notion of co-opting graphics processor-style parallelism for everyday tasks. Both multicore CPUs and modern GPUs use parallelism to divide and conquer complex tasks, although their approaches are radically different from one another. That difference, researchers say, could make them complementary rather than contradictory.

Multicore parallelism delegates discrete computing tasks among differentiated logic units. Imagine the CPU performing triage when it first examines a passage of object code: it delegates tasks into separate operating rooms where individual doctors, if you will, can concentrate on each one exclusively.
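
For illustration only, here is a minimal sketch of that task-style delegation, using POSIX threads in C; the two task functions are hypothetical stand-ins, not anything from AMD’s announcement:

    #include <pthread.h>
    #include <stdio.h>

    /* Two unrelated tasks -- each gets its own "operating room." */
    static void *decode_audio(void *arg) {
        (void)arg;  /* unused */
        puts("decoding audio on one core");
        return NULL;
    }

    static void *update_physics(void *arg) {
        (void)arg;  /* unused */
        puts("updating physics on another core");
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        /* Delegate each discrete task to its own thread; the OS
           scheduler spreads the threads across available cores. */
        pthread_create(&t1, NULL, decode_audio, NULL);
        pthread_create(&t2, NULL, update_physics, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }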

The approach the CPU takes depends on both its own architecture and that of the program. In discrete parallelism, programs are compiled by software that already “knows” how tasks should be delegated, so the compiler “toe-tags” tasks in advance. The processor reads those tags and assigns tasks to the appropriate logic units, without much decision involved on the CPU’s part.

This is the kind of “discrete parallelism” approach that Intel’s Itanium and Itanium 2 series take; both are built on the IA-64 architecture, whose EPIC design (Explicitly Parallel Instruction Computing) has the compiler mark in advance which instructions may execute in parallel. As you probably already know, the fact that Itaniums have not been hardware-compatible with the x86 architecture has been one of Intel’s historical detriments.

The other approach CPUs take is implied parallelism, in which the processor analyzes a program that would otherwise run perfectly well in a single-core system, and divides its instruction stream into tasks as it sees fit. This is the spirit in which Intel’s hyperthreading architecture worked during its short-lived, pre-multicore era; and it’s how unmodified x86 programs can run in multicore x86 systems, with Core 2 Duo or Athlon processors, today.

But there are techniques programmers can use to help the implied parallelism process along, usually involving compiler switches, including on Intel’s own compilers. The changes to the program are small: possibly a few extra pragmas and some tighter restrictions on types. Still, as Intel’s own developers have noted, programmers aren’t often willing to make these quick, subtle changes to their code, even when the opportunity is right in front of them.
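
As a rough sketch of just how small those changes can be (a generic C example, not code from Intel or AMD), one pragma and a C99 restrict qualifier are often all a compiler needs before it will split a loop across cores, once the right switch is thrown:

    #include <stddef.h>

    /* 'restrict' promises the compiler the arrays don't overlap;
       the pragma invites it to parallelize the loop when the code
       is built with an OpenMP switch (e.g., gcc -fopenmp). */
    void scale_add(float *restrict dst, const float *restrict src,
                   float k, size_t n)
    {
        #pragma omp parallel for
        for (long i = 0; i < (long)n; i++)
            dst[i] = dst[i] + k * src[i];
    }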

Keep that in mind for a moment as we compare these to GPU-style parallelism. In a GPU, parallelism doesn’t come from delegating different tasks to differentiated cores or logic units. Instead, since a GPU’s key purpose is often to render vast scenes using the same math over and over, often literally millions of times, GPUs parallelize their data structures. They don’t delegate multiple tasks; they make the same repetitive task more efficient by replicating it over multiple pipes simultaneously. It’s as though the same doctor could perform the same operation, through a monitor, on multiple patients with the same affliction at once.
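
The CPU world has a small-scale cousin of this idea in its SIMD extensions. Purely as an illustration (a sketch using x86 SSE intrinsics, not anything tied to “Fusion”), here the same multiplication is applied to four floats with a single instruction, the one-doctor, four-patients model:

    #include <xmmintrin.h>  /* x86 SSE intrinsics */

    /* One _mm_mul_ps instruction multiplies four floats at once. */
    void scale4(float *dst, const float *src, float k, int n)
    {
        __m128 factor = _mm_set1_ps(k);
        int i;
        for (i = 0; i + 4 <= n; i += 4) {
            __m128 v = _mm_loadu_ps(src + i);
            _mm_storeu_ps(dst + i, _mm_mul_ps(v, factor));
        }
        for (; i < n; i++)   /* scalar tail for leftover elements */
            dst[i] = src[i] * k;
    }

A GPU applies the same principle on a far grander scale, across dozens of pipes rather than four lanes.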

Although CPU manufacturers arguably pioneered the Single Instruction/Multiple Data (SIMD) concept for use with the first graphics acceleration hardware in the late 1980s, it’s GPU manufacturers such as ATI and Nvidia who took that ball and ran with it, to some extent clear out of the CPU companies’ ballpark.

As university researchers have learned, co-opting the GPU to process everyday, complex data structures yields unprecedented processing speed, on a scale that would have been considered supercomputer-class just a few years ago.

But unlike the case with implied parallelism in multicore CPUs, a shift to SIMD-style programming architecture would, by most analyses, mean a monumental transformation in how programs are written. We’re not talking about compiler switches any more.

Several new questions and concerns emerge: How does the new AMD evangelize programmers into making the changes necessary to embrace this new platform, in only two years’ time, while simultaneously (to borrow a phrase from parallelism) trumpeting “Fusion’s” compatibility with x86 architecture? When Itanium weighed down Intel’s momentum, AMD didn’t pull any punches in pointing that out to everyone. It has had astonishing success touting its fully compatible, x86-oriented approach to 64-bit computing, at Intel’s expense.

AMD cannot afford to make a U-turn in its philosophy now, not with Intel looking for every weakness it could possibly exploit. And if AMD succeeds in hybridizing an x86/Radeon platform while staying compatible and holding true to its word, yet the upgrade brings no automatic performance gains (as multicore clearly does), will customers see the effort as a benefit to them?
