Windows 7 multi-touch SDK being readied for PDC in October
As details continue to emerge about Microsoft's evidently well-laid plans for its next operating system, we learn that full documentation of how multi-touch capabilities will work in Windows will be ready for demonstration by this fall.
At Microsoft's next Professional Developers Conference, currently scheduled for late October in Los Angeles, the company plans to demonstrate the use of a software development kit for producing multi-touch applications for Windows 7. Such applications would follow the model unveiled yesterday by executives Bill Gates and Steve Ballmer at a Wall Street Journal technology conference in Carlsbad, California.
For the session tentatively titled "Windows 7: Touch Computing," the PDC Web site -- which went live just this morning -- offers this description: "In Windows 7, innovative touch and gesture support will enable more direct and natural interaction in your applications. This session will highlight the new multi-touch gesture APIs and explain how you can leverage them in your applications."
We were surprised to find the PDC site reads better when viewed in Internet Explorer.
The early suggestion from Microsoft's developers -- some of whom have been openly hinting since last December that multi-touch was coming to Windows 7 -- is that the next version of Windows will be endowed with technology that emerged from the company's Surface project, its first to implement such controls. Surface is actually an extension of the Windows Vista platform -- specifically, it's the Windows Presentation Foundation extended so that it sees a surface display device as essentially just another container control, with an expanded list of supported graphics devices.
What is not known at this stage is how much today's Windows Vista will have to be extended to enable multi-touch in Windows 7, especially for the sake of backward compatibility with existing and earlier applications.
Prior to the advent of Windows XP, when applications were largely built using the Microsoft Foundation Classes (MFC), application windows were very generic containers with standardized window gadgets and menu bars. A developer who used the standard MFC library could be assured that scroll bars would respond to mouse events and that content spilling off the edge of the visible area would not, as a result, descend into some invisible twilight zone.
Holding that MFC fabric together was the concept that graphic elements responded to individual events, often called "mouse events." The basic premise of a mouse event was that it had to do with a single element positioned at one spot, or one set of coordinates, on the screen. A keyboard event could alternatively trigger the same response (pressing Enter while the highlight was over "OK," for example), but the developer only had to write one event handler to manage what happened after OK was clicked.
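To illustrate that point, here is a minimal Win32 sketch of our own (not anything Microsoft has published for Windows 7): a single WM_COMMAND handler services the "OK" button, and the BN_CLICKED notification arrives the same way whether the button was clicked with the mouse or, in a dialog, activated from the keyboard. The control ID IDC_OK_BUTTON and the window class name are hypothetical, chosen only for this example.

```cpp
#include <windows.h>

#define IDC_OK_BUTTON 1001   // hypothetical control ID for this sketch

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_COMMAND:
        // LOWORD(wParam) carries the control ID, HIWORD(wParam) the notification code.
        if (LOWORD(wParam) == IDC_OK_BUTTON && HIWORD(wParam) == BN_CLICKED)
        {
            // The one and only "OK" handler: it runs regardless of which
            // input device actually activated the button.
            MessageBox(hwnd, TEXT("OK was activated."), TEXT("Demo"), MB_OK);
            return 0;
        }
        break;

    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nCmdShow)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = TEXT("EventDemoWindow");
    RegisterClass(&wc);

    HWND hwnd = CreateWindow(TEXT("EventDemoWindow"), TEXT("One handler, any device"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             320, 200, NULL, NULL, hInst, NULL);

    // A plain push button; a mouse click (or, in a dialog, a keyboard
    // activation) reaches WndProc as the same BN_CLICKED notification.
    CreateWindow(TEXT("BUTTON"), TEXT("OK"),
                 WS_CHILD | WS_VISIBLE | WS_TABSTOP | BS_DEFPUSHBUTTON,
                 110, 70, 90, 30, hwnd, (HMENU)(INT_PTR)IDC_OK_BUTTON, hInst, NULL);

    ShowWindow(hwnd, nCmdShow);

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}
```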
The first touch sensitivity in Windows came by way of Tablet PC, which was a platform extension to Windows coupled with a series of drivers. Adding a stylus as a new input device could indeed change the way applications worked internally; they could add all kinds of new gadgets that would have been pointless under mouse-only control.
In addition, Microsoft opened up a wide array of so-called semantic gestures: a library of simple things one could do with a stylus that could potentially mean something within an application. For example, scratching on top of a word could be taken to mean, "Delete this word." Drawing a long arrow beside a graphic object could mean, "Please move this object over here." It all depended on how the application developer wanted the user to see things; there were certainly some good suggestions, but not the kind or level of standardization prescribed by IBM's Common User Access model (PDF available here) of the early 1990s.
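As a purely hypothetical illustration -- this is not the Tablet PC SDK's actual gesture API, and every name below is invented for the example -- an application's interpretation of semantic gestures amounts to a dispatch of its own choosing:

```cpp
#include <iostream>

// Assumed set of recognized gestures, named only for this sketch.
enum class StylusGesture { Scratchout, ArrowRight, Circle, Unknown };

// Each application decides for itself what a recognized gesture means.
void OnGesture(StylusGesture g)
{
    switch (g)
    {
    case StylusGesture::Scratchout:
        std::cout << "Delete the word under the gesture\n";
        break;
    case StylusGesture::ArrowRight:
        std::cout << "Move the selected object in the arrow's direction\n";
        break;
    case StylusGesture::Circle:
        std::cout << "Select the objects the circle encloses\n";
        break;
    default:
        std::cout << "Fall back to treating the strokes as plain ink\n";
        break;
    }
}

int main()
{
    OnGesture(StylusGesture::Scratchout);  // e.g. the user scratched out a word
}
```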
However, outside of the application's native context, whatever a stylus could do in the Windows workspace was relegated to substituting for mouse events. In other words, the Windows desktop was not supposed to know or care whether the user was operating a mouse, a keyboard, or a stylus, as long as the same events were triggered.
For instance, a tap of the stylus on the surface sends an event whose message constant is WM_LBUTTONDOWN, followed immediately by WM_LBUTTONUP, as though the user had pressed and released the left mouse button (the "L" in these constants). By comparison, holding the pen down on the surface triggers the WM_RBUTTONDOWN event shortly after the pen touches the surface, followed by WM_RBUTTONUP when the user lifts it. However Windows would normally respond to a left- or right-button click, respectively, is how a Tablet PC developer would expect it to respond to a stylus tap or a press-and-hold.
Here, because standard Windows functions must be capable of working reasonably within a Tablet PC environment, the interface between the general functions and the outside world is standardized.
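A window procedure fragment makes that substitution concrete. This is again our own sketch, with the window registration and message loop omitted (they would look just like the earlier example); the point is that the same message cases run whether the input device was a mouse or a pen.

```cpp
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_LBUTTONDOWN:   // left mouse button pressed -- or the stylus touched down for a tap
        // Whatever a left-press means goes here; the code cannot tell pen from mouse.
        return 0;

    case WM_LBUTTONUP:     // left mouse button released -- or the stylus lifted after the tap
        return 0;

    case WM_RBUTTONDOWN:   // right mouse button pressed -- or a stylus press-and-hold
        return 0;

    case WM_RBUTTONUP:     // right mouse button released -- or the stylus lifted after the hold
        // A context menu would typically appear in response to the right click.
        return 0;

    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```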
Since that time, we've seen the advent of Windows Presentation Foundation, a little piece of which is distributed with every copy of Silverlight. An application built to support WPF operates under a new set of rules.
Next: What Windows 7 will learn from Surface