Keynote #4 – Rashid

PDC 2008, Day #3, Keynote #4, 1.5 hrs

Rick Rashid

Rick Rashid, Senior Vice President of Microsoft Research, delivered the final PDC 2008 keynote.

Rick described how Microsoft Research is organized and talked about their mission statement:

  • Expand the state of the art in each area in which we do research
  • Rapidly transfer innovative technologies into Microsoft products
  • Ensure that Microsoft products have a future

The Microsoft Research behemoth is truly impressive.  Here are a couple of tidbits:

  • 10-30% of papers presented at most computer science and software engineering academic conferences are by Microsoft Research
  • Microsoft Research employs about 850 Ph.D. researchers, a staff about as large as that of most research-oriented universities

Mr. Rashid made a good case for why Microsoft does basic research.  It’s not so much for the immediate applications.  Instead, he argued, the goal is to enable a company to respond quickly to change, drawing on an existing reservoir of people and technologies that can be brought to bear on new problems.

I for one was expecting some cool demos at the Research keynote.  There were several demos, but they started out fairly mundane.

Feng Zhao – Energy Sensing

The first demo was by Feng Zhao, a Principal Researcher.  He talked about a little climate sensor that Microsoft developed and has been using to gather climate data.

The first example was indoors.  Microsoft had actually hung a large number of these sensors from the ceiling of the keynote auditorium.  They’d then been acquiring basic temperature data for several days and transmitting that data back to a server.  Feng was able to show all kinds of graphs of the temperature map of the room, including how it warmed up a little when people came in.

Feng also explained how they are using similar sensors in outdoor climatic research projects.  For example, they collect Alpine climate data in Switzerland.

World Wide Telescope

The next demo was definitely a notch above the energy sensing.

The WorldWide Telescope is a Microsoft Research project that went public earlier this year.  It’s a web site that ties together a huge database of space images from many different sources.  The end result is a 3D virtual universe that you can fly around in, navigating to and viewing various objects.  As you zip around the universe, you automatically see stitched-together images of whatever objects are in your view.

Microsoft announced in this keynote a new version of WorldWide Telescope, released today.  It includes lots of new images, as well as improved views of our solar system.

The demo of the new WorldWide Telescope site was truly awe-inspiring.

Boku

The energy level went up a little bit more as Matt MacLaurin came out to demo Boku, an animated world used to teach kids how to program.

In the Boku world, kids create “programs” visually by selecting objects and then icons indicating what those objects should do.  Actions can include things like moving towards other objects, eating objects, or shooting at objects.
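To give a feel for the model, here’s a toy sketch of the kind of “when <condition> do <action>” rule pairs kids assemble.  This is purely my own illustration in Python; Boku itself is a visual 3D environment, and the names and structure here are guesses at the model described on stage, not Boku’s actual code.

    world = {"apple": (5, 0), "boku": (0, 0)}

    def see(thing):
        # Condition icon: is the thing present in the world?
        return thing in world

    def move_toward(thing):
        # Action icon: take one step toward the thing.
        bx, by = world["boku"]
        tx, ty = world[thing]
        world["boku"] = (bx + (tx > bx) - (tx < bx),
                         by + (ty > by) - (ty < by))

    # One "program": a list of (when, do) rule pairs evaluated each frame.
    rules = [(lambda: see("apple"), lambda: move_toward("apple"))]

    for frame in range(5):
        for when, do in rules:
            if when():
                do()
                break

    print(world["boku"])  # Boku has walked to the apple: (5, 0)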

The demo was pretty impressive.  The little Boku world was rendered beautifully in 3D, and Matt was able to create and animate objects very quickly.  More importantly, he said that kids find the resulting “programming” environment very intuitive to use.  This allows them to learn basic logic and programming skills at a very early age.

Boku was great, but nothing compared to what came next.

SecondLight

Most of us have seen the online videos and demos of Microsoft Surface.  The basic idea is that a PC projects a user interface up onto a flat table that you can interact with by touching.  It’s sort of a combination coffee table and touchable PC screen.

The big thing about Surface is that it supports something called “multi-touch”.  So not only can you move things around on the surface by touching and dragging with one finger, you can also initiate more complicated gestures by using two fingers at the same time.  For example, you might use your thumb and forefinger to zoom into a photo by putting both fingers down and then spreading them apart.
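As a rough illustration of the math behind that gesture (my own Python sketch, not Surface’s actual SDK), the zoom factor is simply the ratio of the current finger separation to the starting separation:

    import math

    def pinch_scale(p1_start, p2_start, p1_now, p2_now):
        # Each argument is an (x, y) touch point.  Fingers spreading
        # apart give a factor > 1 (zoom in); pinching together gives
        # a factor < 1 (zoom out).
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        return dist(p1_now, p2_now) / dist(p1_start, p2_start)

    # Fingers start 100 px apart and spread to 150 px: a 1.5x zoom.
    print(pinch_scale((100, 200), (200, 200), (75, 200), (225, 200)))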

Surface also includes an infrared camera that allows it to “see” things placed on the surface itself.  This allows user interaction with simple objects, or even something like a user’s arm.  (Think about a Poker game that would flip your cards over when it saw you put your arm down to block them from another player’s view).

That’s the basic idea of Surface.  It’s available today, for something like $15,000.

But at today’s keynote, several of the guys from the Surface team demoed the next big extension to Surface, which they called SecondLight.

The basic idea of SecondLight is to extend both the projection area and the infrared detection mechanism out into the space above the surface.  So if I held up a piece of tracing paper 8-10” above the surface, I’d see an image on it as well as on the surface below.  OK, no big deal, right?  Well, the big deal is that the image on the tracing paper can be different from the image on the surface below it.

The guys’ demo showed how this might work.  Let’s say you’re looking at a Virtual Earth map that shows an aerial view of some location.  The Surface displays the standard aerial view of the place.  Now let’s say that you hold a little sheet of tracing paper above the Surface, over one particular area of the map.  Surface might project a street view, rather than the aerial view, onto your paper, while the main surface of the table continues to show the original aerial view.  Another application might be to show a photo on the Surface and a description of that photo on the paper that you hold over it.

How the hell do they do that?  Well, they explained that the projector down under the table can actually interleave two completely separate images: one that it projects up to the surface and one that it projects beyond the surface.  This happens at such a high frequency that you don’t see any flicker; you simply see both images simultaneously.
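Here’s a toy model of that time-multiplexing idea.  The 120 Hz figure and everything else in this Python sketch are my own illustrative assumptions; the keynote gave no implementation details beyond the high-frequency interleaving itself.

    PROJECTOR_HZ = 120  # assumed refresh rate; the keynote gave no numbers

    def frame_for_tick(tick, surface_frames, overlay_frames):
        # Even ticks are aimed at the tabletop; odd ticks pass through
        # to the space above it.  Each stream therefore refreshes at
        # PROJECTOR_HZ / 2, fast enough that viewers perceive two
        # steady, independent images.
        stream = surface_frames if tick % 2 == 0 else overlay_frames
        return stream[(tick // 2) % len(stream)]

    # Ticks 0, 1, 2, 3 emit: aerial-0, street-0, aerial-1, street-1.
    surface = ["aerial-0", "aerial-1"]
    overlay = ["street-0", "street-1"]
    print([frame_for_tick(t, surface, overlay) for t in range(4)])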

But wait, there’s more!  Now the guys demonstrated how Surface can also detect objects above the table with its infrared camera.  They took a little picture frame with a plastic see-through surface—something about 4-5” across.  To start with, there was a silhouette of a man on the main Surface.  Now when they held the see-through frame above the Surface, the man moved from the Surface up to the surface of their handheld picture frame.  Truly a “holy shit” moment.  Even more incredible, the presenter started slowly tilting the picture frame from horizontal to vertical.  As he did this, the aspect ratio of the man on the frame stayed the same—as opposed to the projection becoming narrower.  The effect was as if you’d grabbed the silhouette off the flat surface and stood it up.  Unbelievable.

How did they do this last part?  Well, simple: the infrared camera can see the outline of the handheld frame, so the system knows the frame’s position and dimensions.  It then pre-foreshortens the image of the man, so that the entire image is mapped onto the full face of the tilted frame.  The result is that the man appears to stand up.
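The keynote didn’t spell out the math, but this kind of pre-foreshortening is essentially a homography (a perspective warp): given the four corners of the frame as seen by the infrared camera, you solve for the 3x3 transform that maps the source image onto that quadrilateral.  A rough NumPy sketch, my own illustration rather than the Surface team’s code:

    import numpy as np

    def homography(src, dst):
        # Direct linear transform: solve for the 3x3 matrix H that maps
        # each src corner (x, y) to its dst corner (u, v) under
        # perspective projection.
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        # The singular vector for the smallest singular value of A
        # holds the nine entries of H.
        _, _, vt = np.linalg.svd(np.array(A, dtype=float))
        return vt[-1].reshape(3, 3)

    # Corners of the upright source image of the man...
    src = [(0, 0), (100, 0), (100, 100), (0, 100)]
    # ...and the quadrilateral where the IR camera sees the tilted frame
    # (coordinates made up for illustration).
    dst = [(20, 10), (80, 15), (90, 70), (10, 65)]
    H = homography(src, dst)  # warp the image through H before projecting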

This final demo was truly the highlight of the keynote and it made up for all of the boring opening acts.  There are many ways that you can imagine using GUI technology like this.  Just think of a computer that can recognize your face and see where you are.  We’re in for some truly amazing advances in user interaction over the next few years.

You can find a video of the demo at: http://www.youtube.com/watch?v=XfzplPIrzjY