nVidia DirectX 10 Graphics Cards Do CPU Work

While laboratories and hobbyists have been actively experimenting with putting graphics processors’ multiple pipelines to work on more general computing tasks, such as number crunching and statistical analysis, nVidia today became the first GPU maker to officially offer general-purpose GPU computing as a product feature.

With the release this afternoon of its first DirectX 10-oriented graphics card, the GeForce 8800 GTX, comes the announcement of a software development environment called CUDA, dedicated exclusively to building C/C++ programs specially compiled to take advantage of GPU parallelism.

The type of parallelism that takes place with a graphics processor is intrinsically different from what’s necessary for a multicore environment, as BetaNews reported two weeks ago in this story on nVidia competitor ATI.

A GPU is designed to execute a single instruction across a wide array of data simultaneously, which is different from running multiple independent threads. This is how a GPU accomplishes such beautiful shading, the depth and richness of which appear to have increased yet again with today’s 8800 GTX introduction.

But a typical C++ compiler, such as Intel’s or the one that’s part of Microsoft Visual Studio, is designed to generate code that executes on the CPU. Today’s announcement deals with the creation of programs, which will probably work more like math libraries, that execute on the GPU.

They may, or may not, actually produce graphics; but if you’re just running Excel anyway, it might be worthwhile to put all that processing power you’d normally devote to playing Age of Empires III toward larger problems, such as simulating stress variables in complex structures.

For several months, nVidia has been actively working on Compute Unified Device Architecture (CUDA), which is a more intriguing concept than it is an acronym. The idea is to leverage the fact that graphics processors contain a large number of pipelines (the 8800 GTX has 128 stream processors) that can be coordinated to run threads.

This way, rather than simply applying a single instruction repeatedly across a large data array, the workload can be balanced by opening that same array to multiple simultaneous instruction streams. Helping them out is a parallel data cache, which serves as a kind of collective inbox/outbox through which threads pass information to one another.

This type of coordination requires a dedicated programming model, which is where nVidia’s CUDA C/C++ compiler enters the picture. Compiled CUDA threads, the company explains, will be addressed in CPU programs through a new set of nVidia drivers, which, arguably, would not be graphics drivers at all. As a result, you could be seeing for the first time add-ins for statistical packages and for Excel making reference to nVidia library functions, for tasks that have nothing to do with graphics whatsoever.

The company’s new architecture puts forth the now-conceivable notion of assembling a real supercomputer, not from thousands of cobbled-together CPUs, but from a mere four CPU cores or fewer. The tasks that supercomputers would be expected to perform to prove themselves would, in this instance, be handed off to up to 128 graphics cards, all strung together in SLI mode. nVidia’s chief scientist, David Kirk, may be working with the University of Illinois to advance that very possibility.

The announcements of the CUDA architecture, the 8800 GTX graphics card, and nVidia’s new nForce 680i SLI chipset (for use with Intel’s quad-core Core 2 processors) were made at a gala rollout event in San Jose today by nVidia’s new, very own virtual spokesperson: a digital rendition of actress and model Adrianne Curry.

