Everyone talk at once: .NET 4.0 will include Parallel Extensions

Parallel programming has largely been confined to research laboratories. But with the next version of the .NET Framework, developers everywhere will be able to experiment with what could become a monumental change in programming languages.

In perhaps the most significant development in the brief history of implicit parallelism in computing, one of Microsoft's development teams announced last Friday that .NET Framework 4.0 -- the first glimpses of which we'll see later this month at PDC in Los Angeles -- will include the so-called Parallel Extensions as a standard feature. This comes after the Extensions were first introduced in a Community Technology Preview last November.

The significance of these extensions is that they enable existing .NET languages (today, most prominently C#) to incorporate implicit parallelism directly in programs. In other words, rather than simply writing ordinary procedural code and relying on compiler switches to determine whether code can be forked into parallel threads, a developer can use entirely new syntax to invoke methods that execute multiple threads concurrently.
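A minimal sketch of what that syntax looks like in practice, using `Parallel.For` from the Extensions' Task Parallel Library (the loop body and array here are invented for illustration):

```csharp
using System;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        // Parallel.For partitions the iteration range across worker
        // threads managed by the runtime -- the developer states *what*
        // to do per index, not how to fork and join the threads.
        var squares = new int[10];
        Parallel.For(0, squares.Length, i =>
        {
            squares[i] = i * i; // each index is handled by exactly one thread
        });
        Console.WriteLine(string.Join(",", squares));
    }
}
```

The loop body runs concurrently, but because each iteration writes to a distinct array slot, no locking is needed.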

In conjunction with Language Integrated Query (LINQ), which Microsoft formally introduced earlier this year, the possibilities for parallel applications that run on multicore servers or data clusters are astounding. To explain: in the old procedural model of algorithmic programming, any function that changes a set of records in a table based on conditions has to include instructions that explicitly examine each record, test it against the current criteria, and apply changes to every record that passes. With LINQ, a more SQL-like structure is used instead, where a single instruction can refer automatically to all records that match the criteria, and the change is stated once and once only.
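The contrast reads roughly like this in C# (a sketch -- the `Customer` type and the data are invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Customer
{
    public string Name;
    public decimal Balance;
}

class LinqDemo
{
    static void Main()
    {
        var customers = new List<Customer>
        {
            new Customer { Name = "Ann", Balance = 120m },
            new Customer { Name = "Bob", Balance = 40m },
        };

        // Old procedural model: walk every record, test it, act on it.
        var overdueProcedural = new List<Customer>();
        foreach (var c in customers)
            if (c.Balance > 100m)
                overdueProcedural.Add(c);

        // LINQ: the criteria are stated once, declaratively,
        // in a SQL-like clause.
        var overdueLinq = from c in customers
                          where c.Balance > 100m
                          select c;

        // Both approaches select the same records.
        Console.WriteLine(overdueLinq.Count());
    }
}
```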

Now, pair that with the Parallel Extensions: Using an up-and-coming syntax borrowed from the lambda calculus -- lambda expressions -- a C# developer can write an instruction in which the criteria are expressed inline, much like an anonymous delegate. It becomes a way of saying in a single expression, "For all x where x meets these criteria, make a change according to the following..."
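Pairing the two looks roughly like the sketch below, using PLINQ's `AsParallel()` operator from the Extensions (the numeric workload is invented for illustration):

```csharp
using System;
using System.Linq;

class PlinqDemo
{
    static void Main()
    {
        int[] numbers = Enumerable.Range(1, 1000000).ToArray();

        // The lambda n => n % 2 == 0 states the criteria inline;
        // AsParallel() asks the runtime to evaluate the query
        // across however many cores the machine has.
        var evenSquares = numbers
            .AsParallel()
            .Where(n => n % 2 == 0)
            .Select(n => (long)n * n)
            .ToArray();

        Console.WriteLine(evenSquares.Length); // 500000
    }
}
```

Nothing in the query says how to partition the data or schedule the threads; that is left to the runtime.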

One difference, as Microsoft corporate vice president Scott Guthrie explained in a recent blog post, has to do with explicit typing. Unlike C#'s traditional anonymous delegates (a feature added in version 2.0), types in such inline functions do not need to be explicitly declared. "Unlike anonymous methods, which require parameter type declarations to be explicitly stated, Lambda expressions permit parameter types to be omitted and instead allow them to be inferred based on the usage," Guthrie wrote.
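Side by side, the difference Guthrie describes looks like this (a small illustrative sketch):

```csharp
using System;

class InferenceDemo
{
    static void Main()
    {
        // Anonymous method (C# 2.0): the parameter type must be
        // written out explicitly.
        Predicate<int> isPositiveOld = delegate(int n) { return n > 0; };

        // Lambda expression: the compiler infers that n is an int
        // from the Predicate<int> the lambda is assigned to.
        Predicate<int> isPositiveNew = n => n > 0;

        Console.WriteLine(isPositiveOld(5) && isPositiveNew(5)); // True
    }
}
```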

With myriad parallel threads operating on data concurrently -- most importantly, on the same data -- how will the runtime keep track of which changes should be applied when? For a database management system, that problem has already been explored and largely solved, using something called the transactional model.

Recently, a team of Microsoft researchers working on the Parallel Extensions has been investigating whether a similar transactional model can be applied at a much lower level.

"Transactional memory is not about 'removing locks' but is about abstracting away the requirement to specify a particular lock," wrote Microsoft researcher Dana Groff in a blog post last week. "Instead, you can structure your code in well defined sequential blocks of code, what in the database world we call 'units of work,' and then let the underlying runtime system, compiler, or hardware provide you the guarantees you desire. Further, you want this work to scale. To do that, the underlying system provides concurrency control optimistically. Instead of always locking a resource, the transactional memory system assumes that there is no contention. Instead, it detects when these assumptions are incorrect and rolls back changes that were made in the block. Depending on the implementation, the transactional memory system may then re-execute your block of code."

A transactional memory model would drastically reduce, if not completely eliminate, contention between multiple threads acting upon differing views of the same data in memory. The implementation of that model would most likely take place using extensions to programming languages made possible by Microsoft's Task Parallel Library, which will be one part of Parallel Extensions in .NET Framework 4.0.
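The transactional memory work Groff describes has no public API yet, but the optimistic read-validate-retry cycle she outlines can be sketched with the interlocked primitives .NET already ships (a hypothetical illustration, not the researchers' actual system):

```csharp
using System;
using System.Threading;

class OptimisticDemo
{
    static int balance = 100;

    // Optimistically apply an update: read the value, compute the
    // result, then commit only if no other thread changed the value
    // in the meantime. On contention, discard the result and
    // re-execute -- the detect-and-roll-back loop Groff describes.
    static void AddOptimistic(int amount)
    {
        while (true)
        {
            int observed = balance;
            int desired = observed + amount;
            // CompareExchange commits 'desired' only if balance still
            // equals 'observed'; it returns the value it actually saw.
            if (Interlocked.CompareExchange(ref balance, desired, observed) == observed)
                return; // no contention detected: the "transaction" commits
            // another thread got there first: retry with a fresh read
        }
    }

    static void Main()
    {
        AddOptimistic(25);
        Console.WriteLine(balance); // 125
    }
}
```

No lock is ever named; the loop simply assumes no contention and pays a retry cost only when that assumption proves wrong.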

© 1998-2014 BetaNews, Inc. All Rights Reserved.